
Life’s been really hectic lately, but I’ve been getting (slowly) back into working on my Haskell packages. In particular, since the switch from darcs to github I’ve started getting more comments and feature requests, which is nice. Over the next half-year or so, here’s what I’ll be up to in my free time between work on the dissertation and work on Hakaru:

containers — I’ve been appointed one of the new co-maintainers of our favorite venerable library. I prolly won’t be doing any major work until autumn (as mentioned when I was appointed), but I’ve had a number of conversations with David Feuer about where to take things in terms of cleaning up some old maintenance cruft.

bytestring-trie — A few years back I started reimplementing my tries to use Bagwell’s Array Mapped Tries in lieu of Okasaki’s Big-Endian Patricia Tries, but then got stalled because life. I’ve started up on it again, and it’s just about ready to be released after a few more tweaks. Also, now that I’m working on it again I can finally clear out the backlog of API requests (sorry folks!).

exact-combinatorics — A user recently pointed me towards a fast new implementation of factorial that’s been making waves. It’s not clear just yet whether it’ll be faster than the current implementation, but it should be easy enough to get it going and run some benchmarks.

unification-fd — This one isn’t hacking so much as dissemination. I have a backlog of queries about why things are the way they are, which I need to address; and I’ve been meaning to continue the tutorial about how to use this library for your unification needs.

logfloat — We’ve been using this a lot in Hakaru, and there are a few performance tweaks I think I can add. The main optimization area is trying to minimize the conditionals for detecting edge cases. The biggest issue has just been coming up with some decent benchmarks. The problem, of course, is that most programs making use of logfloats do a lot of other work too so it can be tricky to detect the actual effect of changes. I think this is something Hakaru can help a lot with since it makes it easy to construct all sorts of new models.


Edit 2015.10.29: Be sure to also read the followup post on the benefits and limitations of this approach compared to the usual untyped ABTs.

Earlier this year Neel Krishnaswami talked about abstract binding trees (ABTs) [part 1] [part 2]. IMO, the best way to think about ABTs is as a generalization of abstract syntax trees (ASTs), though this is not a perspective sanctioned by the CMUers I’ve talked to. CMUers oppose this way of phrasing things, in part, because the ABT libraries they’re familiar with make crucial use of the design pattern of two-level types; but I think the essential insights of ABTs and two-level types are quite different, and we ought to keep the benefits of these two techniques distinct.

Over the past year I’ve been working on the inferential language[1] Hakaru, and in the new version of the compiler we’re using ABTs for our syntax trees. However, contrary to Neel’s stance against using strongly-typed internal representations for syntax, we extend the ABT approach to make use of GADTs to guarantee local well-typedness, since this in turn can be used to guarantee that program transformations are also well-typed. (If you don’t want those guarantees, then take a look at Jon Sterling’s abt library on Hackage[2].) In this post I’m going to present a simplified version of our architecture, and then talk about some of the extra stuff that brings it closer to our production architecture.

First things first

Since we want everything to be well-typed, we first must introduce some universe, U, of all the types in our language. (In Haskell we can implement such a universe by using the -XDataKinds extension, so I’ll equivocate between calling U a “universe” vs a “kind”.) For the rest of this post it doesn’t actually matter what lives in that universe[3], just so long as things match up when they need to. Since the choice of universe is irrelevant, we could abstract over U by turning on the -XPolyKinds extension; but I avoid doing so below, just to help keep things more concrete.

Implementing ASTs

The simplest way of thinking about well-typed ASTs is that they capture the set of terms generated by a (typed) signature; that is, the fixed point of some Σ ∷ [U] → U → ★. Unpacking the type for Σ, we have that every syntactic constructor s ∈ Σ is associated with some arity (the length of the list), each argument to s has some type in U (the elements of the list), and applying s to the right number of ASTs of the right types will generate a new AST with some type in U (the second argument to Σ).

To implement this fixed point we define an AST type which is parameterized by its signature. To ensure well-aritiedness (and well-typedness) of our ASTs with respect to that signature, we’ll need to introduce a helper type SArgs[4]. And to ensure that we obtain the least fixed-point of the signature, we’ll make everything strict.

infix  4 :$
infixr 5 :*

data SArgs ∷ (U → ★) → [U] → ★ where
    End  ∷ SArgs ast []

    (:*) ∷ !(ast u)
         → !(SArgs ast us)
         → SArgs ast (u : us)

data AST ∷ ([U] → U → ★) → U → ★ where
    (:$) ∷ !(σ us u)
         → !(SArgs (AST σ) us)
         → AST σ u

Implementing ABTs

The problem with ASTs is that they have no notion of variables, and thus have no notion of variable binding. Naively we could implement binders like lambda-abstraction by having something like λ ∷ Σ [u, v] (u :→ v), but then we’d need to do a post-hoc check to ensure that the first argument to λ is in fact a variable. To build that check into the datatype itself we’d have to move λ into the definition of AST (since the first argument is of type Variable u rather than AST Σ u). If lambda-abstraction were the only binder we had, that might not be so bad; but any real-world language has a plethora of binders, and this approach doesn’t scale.

The essential idea behind ABTs is to abstract over the notion of binding itself. Given a single uniform definition of what it means to be a binding form, we don’t have to worry about adding a bunch of ad-hoc constructors to our AST datatype. Moreover, we can then provide single uniform definitions for things which mess with variables and are homomorphic over the signature. Things like capture-avoiding substitution and providing a HOAS API for our first-order representation.

The crucial step is to adjust our notion of what a signature contains. The basic signatures used above only contained applicative forms; i.e., things we can apply to locally-closed terms; i.e., what are called “functors” in the logic programming community. For ABTs we’ll want to allow our signatures to include any generalized quantifier. That is, our signatures will now be of type Σ ∷ [[U] × U] → U → ★. Previously, the arguments were indexed by U; now, they’re indexed by [U] × U. The length of the list gives the number of variables being bound, the types in the list give the types of those variables, and the second component of the pair gives the type of the whole locally-open expression.

To implement this we need to extend our syntax tree to include variable bindings and variable uses:

data SArgs ∷ ([U] → U → ★) → [[U] × U] → ★ where
    End  ∷ SArgs abt []

    (:*) ∷ !(abt vs u)
         → !(SArgs abt vus)
         → SArgs abt ((vs,u) : vus)

data ABT ∷ ([[U] × U] → U → ★) → [U] → U → ★ where
    (:$)  ∷ !(σ vus u)
          → !(SArgs (ABT σ) vus)
          → ABT σ [] u

    Var   ∷ !(Variable v)
          → ABT σ [] v

    Bind  ∷ !(Variable v)
          → !(ABT σ vs u)
          → ABT σ (v : vs) u

Time for an example of how this all fits together. To add lambda-abstraction to our language we’d have λ ∷ Σ [([u],v)] (u :→ v): that is, the λ constructor takes a single argument which is a locally-open term, binding a single variable of type u, and whose body has type v. So given some x ∷ Variable u and e ∷ ABT Σ [] v we’d have the AST (λ :$ Bind x e :* End) ∷ ABT Σ [] (u :→ v).
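To make that a bit more concrete, here is a tiny hypothetical signature in the same loose notation as the definitions above. It is only an illustration (not Hakaru’s actual signature): Lam binds one variable of type u in its single argument, while App takes two locally-closed arguments.

data Sig ∷ [[U] × U] → U → ★ where
    Lam ∷ Sig [([u], v)] (u :→ v)
    App ∷ Sig [([], u :→ v), ([], u)] v

-- The identity function: bind x, then immediately use it.
identity ∷ Variable u → ABT Sig [] (u :→ u)
identity x = Lam :$ Bind x (Var x) :* End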

“Local” vs “global” well-typedness

With the ABT definition above, every term of type ABT Σ vs u must be locally well-typed according to the signature Σ. I keep saying “locally” well-typed because we only actually keep track of local binding information. This is an intentional design decision. But only tracking local well-typedness does have some downsides.

So what are the downsides? Where could things go wrong? Given a locally-closed term (i.e., either Var x or f :$ e) any free variables that occur inside will not have their U-types tracked by Haskell’s type system. This introduces some room for the compiler writer to break the connection between the types of a variable’s binder and its use. That is, under the hood, every variable is represented by some unique identifier like an integer or a string. Integers and strings aren’t U-indexed Haskell types, thus it’s possible to construct a Variable u and a Variable v with the same unique identifier, even though u and v differ. We could then Bind the Variable u but Var the Variable v. In order to ensure global well-typedness we need to ensure this can’t happen.

One way is to keep track of global binding information, as we do in the paper presentation of languages. Unfortunately, to do this we’d need to teach Haskell’s typechecker about the structural rules of our language. Without a type-level implementation of sets/maps which respects all the axioms that sets/maps should, we’d be forced to do things like traverse our ASTs and rebuild them identically, but at different type indices. This is simply too hairy to stomach. Implementing the axioms ourselves is doubly so.

Or we could fake it, using unsafeCoerce to avoid the extraneous traversals or the complicated pattern matching on axioms. But doing this we’d erase all guarantees that adding global binding information has to offer.

A third approach, and the one we take in Hakaru, is to compartmentalize the places where variables can be constructed. The variable generation code must be part of our trusted code base, but unlike the unsafeCoerce approach we can keep all the TCB code together in one spot rather than spread out across the whole compiler.
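For a sense of what that looks like, here is a minimal sketch of such a trusted kernel. This is illustrative rather than Hakaru’s actual code: the placeholder universe, the Supply monad, and representing identifiers as plain Ints are all assumptions made just to keep the example self-contained.

{-# LANGUAGE DataKinds, KindSignatures #-}
module Variable.TCB (Variable, varId, Supply, runSupply, freshVariable) where

import Control.Monad.State (State, evalState, state)

-- Placeholder universe so the sketch stands alone; the real U is richer.
data U = ExampleType

-- The Variable constructor is NOT exported, so the rest of the compiler can
-- only obtain variables from freshVariable, and can never manufacture two
-- variables that share an identifier but disagree on their U-index.
newtype Variable (u :: U) = Variable Int

varId :: Variable u -> Int
varId (Variable i) = i

-- A fresh-identifier supply.
type Supply = State Int

runSupply :: Supply a -> a
runSupply m = evalState m 0

freshVariable :: Supply (Variable u)
freshVariable = state (\i -> (Variable i, i + 1))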

Stratifying our data types

The above definition of ABTs is a simplified version of what we actually use in Hakaru. For example, Hakaru has user-defined algebraic data types, so we also need case analysis on those data types. Alas, generic case analysis is not a generalized quantifier, thus we cannot implement it with (:$). We could consider just adding case analysis to the ABT definition, but then we’d start running into extensibility issues again. Instead, we can break the ABT type apart into two types: one for capturing variable uses and bindings, and the other for whatever syntax we can come up with. Thus,

data Syntax ∷ ([[U] × U] → U → ★) → ([U] → U → ★) → U → ★ where
    (:$) ∷ !(σ vus u)
         → !(SArgs abt vus)
         → Syntax σ abt u

data ABT ∷ ([[U] × U] → U → ★) → [U] → U → ★ where
    Syn  ∷ !(Syntax σ (ABT σ) u)
         → ABT σ [] u

    Var  ∷ !(Variable v)
         → ABT σ [] v

    Bind ∷ !(Variable v)
         → !(ABT σ vs u)
         → ABT σ (v : vs) u

Of course, since we’re going to be extending Syntax with all our language-specific details, there’s not a whole lot of benefit to parameterizing over σ. Thus, we can simplify the types considerably by just picking some concrete Σ to plug in for σ.

By breaking Syntax apart from ABT we can now extend our notion of syntax without worrying about the details of variable binding (which can be defined once and for all on ABT). But we could still run into extensibility issues. In particular, often we want to separate the fixed-point portion of recursive types from their generating functor so that we can do things like add annotations at every node in the recursive data type. A prime example of such annotations is keeping track of free variables, as in Neel’s original post. To allow this form of extensibility we need to break up the ABT type into two parts: the recursion, and the Syn/Var/Bind view of the ABT.

data ABT ∷ ([[U] × U] → U → ★) → [U] → U → ★ where
    Unview ∷ !(View σ (ABT σ) vs u) → ABT σ vs u

view ∷ ABT σ vs u → View σ (ABT σ) vs u
view (Unview e) = e

data View ∷ ([[U] × U] → U → ★) → ([U] → U → ★) → [U] → U → ★ where
    Syn  ∷ !(Syntax σ abt u)
         → View σ abt [] u

    Var  ∷ !(Variable v)
         → View σ abt [] v

    Bind ∷ !(Variable v)
         → !(View σ abt vs u)
         → View σ abt (v : vs) u

Now, to allow arbitrary annotations we’ll replace the data type ABT with an equivalent type class. Each instance of the ABT class defines some sort of annotations, and we can use the view and unview methods to move between the instance and the concrete View type.
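For a rough idea of the shape (the names here are illustrative, not necessarily Hakaru’s actual API), with Σ being the concrete signature we settled on above, and with Set, SomeVariable, and freeVarsOfView assumed as helpers:

class ABT (abt ∷ [U] → U → ★) where
    unview ∷ View Σ abt vs u → abt vs u
    view   ∷ abt vs u → View Σ abt vs u

-- An instance which caches the free variables at every node.
data FreeVarsABT (vs ∷ [U]) (u ∷ U)
    = FreeVarsABT !(Set SomeVariable) !(View Σ FreeVarsABT vs u)

instance ABT FreeVarsABT where
    view   (FreeVarsABT _ v) = v
    unview v                 = FreeVarsABT (freeVarsOfView v) v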

There’s one last form of extensibility we may want to add. Using fixed-point combinators gives us a way of describing complete trees. A different way of introducing recursion is with free monads. The free-monad combinator is just like the fixed-point combinator, except that we have an additional type parameter for metavariables and a data constructor for using those metavariables, instead of requiring the recursion to ground out with a complete syntax tree. The reasons why this might be nice to do are beyond the scope of this post; the point is that if we want it, then we need to split the ABT class into two parts: one for the recursion itself, and another for the annotations.

In the end, we have a four-level type: the Syntax, the View, the annotations, and the recursion.


[1] In the accepted/current parlance, Hakaru is a “probabilistic programming language”; but I and a number of other folks working on such languages have become disaffected with that term of late, since it’s not entirely clear what should and should not count as a “probabilistic” PL. Over the course of a number of discussions on the topic, I’ve settled on “inferential” PL as describing what is (or, at least what I find) interesting about “probabilistic” PL. I’ve been meaning to write a post about the subject, and hopefully this footnote will remind me to do so.

[2] N.B., the indexing used in that package is what we get if we erase/compactify the universe U. That is: the erasure of U is a singleton set; the erasure of [U] is isomorphic to the Peano numbers; the erasure of [[U] × U] is isomorphic to a list of Peano numbers; etc.

[3] Though at one point I assume we have functions, (:→), just for the sake of an example.

[4] Ideally we’d be able to flatten this type to avoid all the overhead of the linked list implementation. In fact, the entire AST node of (:$) together with its SArgs should be flattened. These nodes have the same general layout as the heap objects in the STG machine: a record with a pointer to the data constructor (i.e., element of the signature) followed by an appropriate number of arguments; and so in principle we ought to be able to implement them directly with a single STG heap object.


Katie Miller is giving a talk about FP outreach and diversity at next month's Erlang User Conference. She sent a questionnaire to the Lambda Ladies mailing list about our experiences, and I thought I'd share my responses here.

What led you to pursue functional programming?

Curiosity. I was in grad school, working towards a masters in computer science. And I was particularly interested in programming languages, though I was only familiar with imperative and OO languages at that point. I saw a course on functional programming, so I signed up. Best CS decision I ever made.

What sort of perception did you have of functional programming before you learnt more about it? Was this a barrier? If so, how did you overcome this?

All I knew at the time was that it was some sort of paradigm completely different from imperative and OO. That's it really. This was before FP started becoming popular; so, long before Clojure or Scala were invented, and long before C++ considered adding lambdas/closures to the language. Even within the FP community, Haskell was still considered the new kid on the block (despite having been around for quite some time).

What were the challenges for becoming part of the FP community?

The main challenge was just in figuring out where the community was and how to take part. As I said, this was long before FP became popular. My first FP language was Haskell, but I'd learned it in that course on functional programming so I didn't really know what the community was like. It was a year or two after taking the class that I decided to start really using Haskell for projects. At the time I was taking part in the Perl community, so I thought I'd go searching for some Haskell mailing lists to join. That's when I found the firehose that is Haskell Cafe.

Why do you think women are underrepresented in FP, more so than in programming generally?

I think there are a number of reasons. One of the big ones is how academic the community is. I don't mean that in the way people usually do. I'm an academic, and I love it here! No, the problem is that this creates a huge selection bias. I only really found FP by stumbling into it, and I only stumbled into it because I had a number of supportive advisors who helped foster my interest in programming languages. By the point I found FP, many women would have already been filtered out. Just getting into and affording college is a huge thing, especially for women of color. Let alone making it through undergrad and then getting into a masters program in CS without a bachelor's in CS. Let alone ending up at a school that can offer good FP classes, and finding those supportive advisors to help you along and guide you in the right direction.

If my story is anything to go by, it takes a lot of privilege (and luck) just to get to the point where you discover FP. After that, then you add on all the issues about maintaining community involvement. Because the community is so academic, this heightens issues of impostor syndrome. (Even men are driven out of FP due to impostor syndrome!) And since FP tends to be sold in a hyper-intellectualized manner, this evokes the "math is hard" brand of anti-intellectualism. While this drives a lot of people away, I think it has a differentially powerful impact on women due to the way we gender the sciences. That is, FP propaganda has a habit of taking the things which cause women to be underrepresented in STEM generally, and then cranking them up to eleven.

Another issue, and one I haven't seen discussed very often, is the fact that many FP communities are also FOSS communities. Women are more underrepresented in FOSS than in other CS communities, so the fact that FP tends to be FOSS means that women will tend to be more underrepresented in FP than other CS communities.

What strategies do you think the community could employ to address this problem and improve the (gender, and other types of) diversity in FP?

Setting up communities which aren't so hyper-intellectualized is a big step. Getting rid of all that propaganda and just treating FP like any other paradigm will do a lot to mitigate the impact of impostor syndrome and "math is hard" anti-intellectualism. It's no panacea, but it's probably the easiest thing we can tackle. Addressing the systemic issues is a lot harder.

Do you think it helps to have a women's group like Lambda Ladies? How has it been helpful for you?

I do think it helps. Over the years I've seen a lot of women come and go (mostly go) on Haskell Cafe. Overall I feel like the Cafe is one of the safer and more welcoming communities, but we've still had our misogynistic flareups. And after each one, I've watched the subsequent evacuation as women no longer feel quite so safe or welcome. By offering a safe space, women's groups are an important form of resistance against this sort of problem. It's a space where you don't always have to be on your guard against harassment. It's a space where you don't have to worry about how you present yourself, don't have to worry that femininity will undermine your credibility, don't have to worry about how asking "stupid" questions will affect the way people think of women as a whole.

Also —we don't really do this on LL, but— women's groups can provide a safe environment for venting about the sorts of problems we encounter online, in the work force, etc. Venting is always a tricky topic, but I think the importance of venting is grossly underrated. Whatever community you're a part of, bad things are going to come up sooner or later. When that happens, having a side community where you can let off steam or discuss why the particular thing is problematic is an important way to deal with the emotional fallout of these bad things. Once you've dealt with it, you can return to the main community; but if you have nowhere to deal with it, then things build up and up until you're just done and you quit the community.

In addition to providing a safe space, women's groups also serve an important role regarding announcements for jobs, conferences, etc. The announcements we get are tailored for women and so include important details like how welcoming they are of women, whether they can offer travel expenses, whether they offer child care, and so on.

For me, LL has been helpful mainly as a place to witness women in FP. Just seeing other women is energizing, and keeps me interested in being out there as part of the general FP community. The bit about announcements has also been helpful.


So, I just encountered a most delicious type the other day:

class Finite a where
    assemble :: Applicative f => (a -> f b) -> f (a -> b)

What's so nice about it is that the only way you can implement it is if the type a is in fact finite. (But see the notes.) So the questions are:

  • Can you see why?
  • Can you figure out how to implement it for some chosen finite type?
  • Can you figure out how to implement it in general, given a list of all the values? (you may assume Eq a for this one)
  • Can you figure out how to get a list of all the values, given some arbitrary implementation of assemble?
A trivial note ) A big note, also a hint perhaps )


I've been working on a tagging library (and executable) for a bit over a year now. When the project started I had the advantage of being able to choose the language to do it in. Naturally I chose Haskell. There are numerous reasons for this decision, some of which have been derided as "philosophical concerns". Certainly some of the reasons why Haskell is superior to other languages do border on the philosophical. Y'know, silly little things like the belief that type systems should prevent errors rather than encouraging them to proliferate. I'm sure you've heard the arguments before. They're good arguments, and maybe they'll convince you to try out Haskell in your basement. But in many so-called "enterprise" settings, anything that even smells like it might have a basis in theoretical fact is automatically wrong or irrelevant; whatever you do in the privacy of your basement is your own business, but heaven forbid it have any influence on how decisions are made in the workplace! So, here is a short list of entirely pragmatic, practical, and non-theoretical reasons why Haskell is superior to Java for implementing enterprise programs. More specifically, these are reasons why Haskell is superior for my project. Perhaps they don't matter for your project, or perhaps they'll be enough to convince your boss to let you give Haskell a try. Because design decisions are often project-specific, each point explains why it matters for Posta in particular.

  • Haskell has powerful frameworks for defining modular, high-performance, non-trivial parsers (e.g., Attoparsec). In natural language processing (NLP), just like system administration, over half of the work you do involves dealing with a handful of different ad-hoc, poorly defined file formats. Reading them; generating them; converting from one format to another; etc. Because every one of these formats grew out of a slow accretion of features for one particular project, they're riddled with inconsistencies, idiosyncratic magic values, corner cases, and non-context-free bits that require special handling. In Java the premier tool (so far as I know) for defining parsers is JavaCC. (Like the C tools lex and yacc, JavaCC uses its own special syntax and requires a preprocessor, whereas Attoparsec and the like don't. However, this may be a "philosophical" issue.) And as of the last time I used it, JavaCC is designed for the nice clean grammars used by programming languages and doesn't handle inconsistent and irregular grammars very well.
  • Posta uses a system of coroutines (called "iteratees") in order to lazily stream data from disk, through the parsers, and into the core algorithms, all while maintaining guarantees about how long resources (e.g., file handles, memory) are held for. This allows handling large files, because we don't need to keep the whole file in memory at once, either in its raw form or in the AST generated by parsing it. For modern enterprise-scale NLP, dealing with gigabyte-sized files is a requirement; because many NLP projects are not enterprise-scale, you get to spend extra time chopping up and reformatting files to fit their limitations. Last time I used JavaCC it did not support incremental parsing, and according to the advertised features it still doesn't. In addition, implementing coroutines is problematic because Java's security model precludes simple things like tail-call optimization, meaning that you can only support this kind of streaming when the control flow is simple enough to avoid stack overflows.
  • Haskell has awesome support for parallelism. One version, called STM, provides composable atomic blocks (which matches the way we naturally think about parallelism) combined with lightweight threads (which make it cheap and easy). Java has no support for STM. I am unaware of any support for lightweight threads in Java. The only parallelism I'm aware of in Java is the monitor-style lock-based system with OS threads. As with all lock-based systems, it is non-composable and difficult to get right; and as with using OS threads anywhere else, there is high overhead which removes the benefits of parallelizing many programs. (A minimal STM sketch follows this list.)
  • Posta makes extensive use of partial evaluation for improving performance; e.g., lifting computations out of loops. When doing NLP you are often dealing with triply-nested loops, so loop-invariant code motion is essential for performance. In my benchmarks, partial evaluation reduces the total running time by 10%. If raw numbers don't convince you: using partial evaluation allows us to keep the code legible, concise, modular, and maintainable. The primary use of partial evaluation is in a combinator library defining numerous smoothing methods for probability distributions; the results of which are called from within those triply-nested loops. Without partial evaluation, the only way to get performant code is to write a specialized version of the triply-nested loop for every different smoothing method you want to support. That means duplicating the core algorithm and a lot of tricky math, many times over. There's no way to implement this use of partial evaluation in anything resembling idiomatic Java.
  • Posta uses an implementation of persistent asymptotically optimal priority queues which come with proofs of correctness. A persistent PQ is necessary for one of the tagger's core algorithms. Since the PQ methods are called from within nested loops, performance is important. Since we're dealing with giga-scale data, asymptotics are important. A log factor here or there means more than a 10% increase in total running time. In Java there's java.util.PriorityQueue but it has inferior asymptotic performance guarantees and is neither persistent nor synchronized. I'm sure there are other PQ libraries out there, but I doubt anyone's implemented the exact version we need and shown their implementation to be correct.
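To illustrate the composability point from the parallelism bullet above, here's a minimal STM sketch (not Posta's actual code). Each transfer is an atomic block, and two transfers compose into a single indivisible transaction, which is exactly the thing lock-based code can't do safely.

import Control.Concurrent.STM
    (STM, TVar, atomically, modifyTVar', newTVarIO, readTVarIO)

-- Move n units from one shared counter to another, atomically.
transfer :: Int -> TVar Int -> TVar Int -> STM ()
transfer n from to = do
    modifyTVar' from (subtract n)
    modifyTVar' to   (+ n)

-- Composition: both transfers happen in one indivisible transaction.
relay :: Int -> TVar Int -> TVar Int -> TVar Int -> IO ()
relay n a b c = atomically (transfer n a b >> transfer n b c)

main :: IO ()
main = do
    x <- newTVarIO (100 :: Int)
    y <- newTVarIO 0
    z <- newTVarIO 0
    relay 10 x y z
    mapM_ (\v -> readTVarIO v >>= print) [x, y, z]   -- prints 90, 0, 10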

I'll admit I'm not up to date on state-of-the-art Java, and I'd love to be proven wrong about these things being unavailable. But a couple years ago when I returned to Java after a long time away, I learned that all the hype I'd heard about Java improving over the preceding decade was just that: hype. I have been disappointed every time I hoped Java had some trivial thing. The most recent one I've run into is Java's complete refusal to believe in the existence of IPC (no, not RPC), but that's hardly the tip of the iceberg.


Immutability and Blocks, Lambdas and Closures (André Pang): Mr. Python, please call the office. Your abstractions are leaking.


Last week I gave a debugging problem. Well, now it's time for the big reveal. If you'd like to try your hand at it, you should follow that link and stop reading this post. It took me a couple weeks to figure this one out. Given the nature of the symptoms (failure only on messages of lengths 255 and 383) it was pretty clear that this was some obscure and annoying bug, like an off-by-one error. I read through the relevant code for the whole communication stack (Haskell protobufs, Haskell FIFO IO, Java FIFO IO, Java protobufs) with an inkling of what the problem might be, but I still missed it the first time through.

In the hints I suggested that length 511 should also be problematic, and that we were using the length-delimited version of protobufs. For those who aren't familiar with protobufs there are only two relevant details of the encoding. The first is that length-delimited protobufs write the length before the payload (as mentioned in the hint), and the second is how that length is encoded. In order to reduce the message size, protobufs use a variable length encoding for integers. The encoding says that we read a byte at a time (little-endian order), keeping the low-order 7 bits as part of the number. If the 8th bit is unset, then we're done; if it's set, then we continue reading the next byte. Thus, the length 255 is encoded as 0xFF 0x01 ((0xFF .&. 0x7F) .|. (0x01 `shiftL` 7) == 255), 383 is encoded as 0xFF 0x02, and 511 is encoded as 0xFF 0x03. Conversely, the non-problematic 254 is encoded as 0xFE 0x01, and 256 is encoded as 0x80 0x02.
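If you want to poke at the encoding yourself, here's a small standalone sketch of the varint scheme. It's purely illustrative (it is not the protobuf library's actual code), but it reproduces the encodings above.

import Data.Bits ((.&.), (.|.), shiftL, shiftR, testBit)
import Data.Word (Word8)

-- Low 7 bits per byte, little-endian; high bit set on every byte but the last.
encodeVarint :: Int -> [Word8]
encodeVarint n
    | n < 0x80  = [fromIntegral n]
    | otherwise = (fromIntegral (n .&. 0x7F) .|. 0x80) : encodeVarint (n `shiftR` 7)

-- Decode a varint from the front of a byte stream, returning the value and
-- the remaining bytes.
decodeVarint :: [Word8] -> (Int, [Word8])
decodeVarint = go 0 0
  where
    go shift acc (b:bs)
        | testBit b 7 = go (shift + 7) (acc .|. (lo7 `shiftL` shift)) bs
        | otherwise   = (acc .|. (lo7 `shiftL` shift), bs)
      where lo7 = fromIntegral (b .&. 0x7F) :: Int
    go _ acc [] = (acc, [])

-- encodeVarint 255 == [0xFF, 0x01]
-- encodeVarint 383 == [0xFF, 0x02]
-- encodeVarint 511 == [0xFF, 0x03]
-- encodeVarint 256 == [0x80, 0x02]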

The crucial knowledge, however, is knowing that for some unfathomable reason Java decided to define their byte type as a signed numeric type! On the one hand, this is perverse because bytes are always used when dealing with binary formats and almost never used for actual numerical computation. On the other hand, it is sadistic because every other language in common use (and most of the ones not in common use) have unsigned bytes, so Java's choice to use signed bytes is exquisitely designed to confound any programmer who has ever used another language. More particularly, this means that the implicit conversions between bytes and other numeric types will preserve the signed value, not the representation. Thus, since 0xFF has the signed byte value -1, whenever you use this byte in, say, an int context, it will be silently and implicitly converted into the int -1 as opposed to the int 255 any sane person would expect. Further, since Java comes from the C tradition of languages, it likes to use magical values to indicate errors (e.g., using -1 when only positive values are valid). I'm sure you can see where this is headed.

The InputStream class is one of the plethora of classes for reading from a file. In particular, it's the one used by the CLIPC library for reading from Posix FIFOs. Another awful thing about Java is that it refuses to believe in anything that systems programmers use, because by definition these things aren't portable to every operating system. I believe I've alluded to this before. Among other things it means that Java steadfastly refuses to acknowledge the fact that separate processes can communicate without going through the network stack. If you want RPCs, then there are numerous options available to you. But heaven help you if you want to use IPC in order to avoid that overhead. And don't get me started on signal handling. Luckily for me, CLIPC does the nasty JNI coding necessary to convince Java to believe in FIFOs (and it even works for Windows FIFOs).

Subclasses of InputStream provide a method which allows you to read one byte at a time from wherever it reads from. However, this method has the type int read() where the return value -1 indicates end of file and values 0 through 255 indicate the byte that was read. This is just begging for bugs. You can't just return a byte, since all the bytes from 0x80 to 0xFE (-128 to -2) will be implicitly converted into invalid return values, and the byte 0xFF (-1) will be implicitly converted to EOF.

Using the standard trick for amortizing IO costs, the CLIPC library reads a buffer's worth of data whenever it needs to, and then returns that data as requested. That buffer has the exceptionally reasonable type byte[]. Unfortunately, when reading a single byte it does this innocuous little thing: int returnValue = buffer[i];. Very easy to miss when scanning through hundreds of lines of Java, even when you have an inkling that signed bytes may be at fault. This is why implicit conversion between numeric types is Evil. Haskell does the right thing in requiring all coercions to be explicit. To fix the bug we must do: int returnValue = ((int) buffer[i]) & 0xFF;. Unfortunately, the buffer field is private in FIFOInputStream so I can't make a subclass to fix the bug.

Surprisingly, except for the interpretation of 0xFF as EOF when reading the length header, Google's protobuf code works perfectly fine with all the other errorful byte values, since it uses bytewise operations instead of arithmetic; and it reads whole chunks at a time elsewhere, thus avoiding the bug. Good job Google :)

I've sent a bug report to the author of CLIPC, but I've yet to hear back, and cursory googling indicates he may have dropped off the internet a couple years ago. If I haven't heard back by the time I make the first public release of Posta (estimated mid-to-late May 2011), then I'll have to fork CLIPC since I don't want Posta's license to be dictated by this externality. LGPL 2.1+ is great and all, but this bug is a silly thing to base a license around.


I offer you a thought experiment. For your current project you've set up some inter-process communication. Nothing tricky involved, just your standard client–server deal. You've even outsourced the protocol design and are using Google's code to handle all the grody details of serializing and deserializing. Well, okay, on the server side you're using someone else's code but it implements the same protocol, right? Now you run into a bug.

The vast majority of the time everything works smoothly— even verified by taking SHA1 hashes of the messages on both sides and comparing them. But every so often the Java client crashes. In particular, it crashes whenever reading a result message (from the server) of length 255 or 383 and maybe some larger sizes. It does, however, work perfectly fine for intervening message lengths (including 254 and 256). So what's wrong?

A hint ) Another hint ) One last hint )

Solution posted here.


This weekend I've been doing a solo hackathon to try to get Posta integrated with our variant of the Mink parser. All the core algorithms have already been implemented, so it's just been a whole lot of yak shaving. Namely, I have to define an IPC protocol for Haskell to talk to Java, implement the (Haskell) server executable and (Java) client stubs, and then try to shake the thing out to find bugs and performance holes. Ideally, by tuesday morning.

Unfortunately, Java doesn't believe in inter-process communication, so all the libraries out there are for doing RPCs. Since the parser and tagger will be operating interactively, and on the same machine, it's quite silly to go through the network stack just to pass a few bytes back and forth. Thankfully I found CLIPC which should do the heavy lifting of getting Java to believe in POSIX named pipes. In order to handle the "on the wire" de/encoding, I've decided to go with Google's protocol buffers since there's already a protobuf compiler for Haskell. I was considering using MessagePack (which also has Haskell bindings), but protobuf seemed a bit easier to install and work with.

For all the plumbing code I decided to try working with iteratees, which have lots of nice performance guarantees. The protobuf libraries don't have integrated support for iteratees, but the internal model is a variant of iteratees so I was able to write some conversion functions. Attoparsec also uses an iteratee-like model internally, and there's integration code available. For my uses I actually need an enumeratee instead of an iteratee, so I had to roll one of my own.
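For a flavor of what those conversion functions look like, here's a stripped-down sketch. The Step type below is a toy stand-in for the iteratee package's real types (which are rather more involved); the point is just that attoparsec's incremental Result is already a resumable consumer, so the conversion is mostly a matter of renaming constructors.

import qualified Data.Attoparsec.ByteString as A
import qualified Data.ByteString as B

-- A toy incremental consumer, standing in for a real iteratee.
data Step a
    = Continue (B.ByteString -> Step a)   -- needs another chunk of input
    | Yield a B.ByteString                -- finished, with leftover input
    | Failed String

-- Convert an attoparsec parser into that consumer.
parserToStep :: A.Parser a -> Step a
parserToStep p = Continue (go . A.parse p)
  where
    go (A.Done leftover x) = Yield x leftover
    go (A.Partial k)       = Continue (go . k)
    go (A.Fail _ _ err)    = Failed err

-- Drive the consumer by hand, e.g. with chunks read from a FIFO.
runChunks :: Step a -> [B.ByteString] -> Either String (a, B.ByteString)
runChunks (Yield x rest) _        = Right (x, rest)
runChunks (Failed err)   _        = Left err
runChunks (Continue k)   (c : cs) = runChunks (k c) cs
runChunks (Continue _)   []       = Left "unexpected end of input"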

  • TODO: (easy) move the CG2 training module into the library
  • TODO: (low priority) write a CG3 training module
  • DONE: write an unparser for TnT lexicon and trigram files
  • TODO: (easy) write function to feed trained models into the unparser
  • TODO: (postponed) write wrapper executable to train models and print them to files
  • TODO: (postponed) write function to read TnT parser output into models directly (the TnT file parsers were done previously)
  • DONE: choose a library for commandline argument munging (chose cmdargs)
  • TODO: add commandline arguments for passing models to the server
  • DONE: write protocol buffer spec for IPC protocol
  • DONE: write Java client handlers for the IPCs
  • TODO: (low priority) write Haskell client handlers for debugging/verification of Java
  • TODO: write Haskell code for dumping intern tables to files, and reading them back in
  • TODO: write Java code for reading intern table files, so the client can dereference the ints
  • DONE: write functions for converting the protobuf Get monad into an iteratee or enumeratee
  • TODO: write Haskell server handlers for the IPCs
  • TODO: write STM code for parallelizing the tagging and IPC handling
  • DONE: write function for converting attoparsec parsers into enumeratees
  • TODO: (low priority) integrate attoparsec enumeratees into model training, etc, to replace top-level calls to many
  • DONE: write lots of other auxiliary functions for bytestring, attoparsec, and iteratee

Classes have started up again, whence my month of absence. So I figure it's time to mention what I've been up to.

Over the summer I was working on developing an HMM-based part of speech tagger in Haskell. Most NLP folks consider POS tagging to be a "solved problem", and despite growing long in the teeth, TnT (which uses second-order HMMs) is still very close to state-of-the-art; so why bother? Two reasons. Contrary to public opinion, POS tagging is not a solved problem. We can get good accuracy for English, which has fixed word order and impoverished morphology, but we still don't really know how to handle morphologically rich languages with free word order. Moreover, the taggers we do have have all been tested extensively on English and similar languages, but we don't really know how well different approaches apply to, say, Turkish, Hungarian, Japanese, Korean, Tzotzil, Quechua,...

The second reason is that my real goal is to handle supertagging for CCG, and in particular to do this for exploring online and interactive algorithms for tagging. Most of the current technology is focused on batch processing and off-line algorithms, which means that it isn't terribly useful for developing, say, an online system for real-time human--robot interaction, nor for exploring questions re the cognitive plausibility of something like supertagging serving a role in human processing of language. For doing this sort of research, TnT is too old and crotchety to work with, and the standard CCG supertaggers (OpenCCG, C&C Tools) are too integrated into their CCG parsing projects to be very amenable either. So, a new tagger writes I.

It is well-known that many common NLP algorithms for HMMs and chart parsing can be generalized to operate over arbitrary semirings. Before this was realized, many of these algorithms were invented over and over, specialized to different semirings. While it's common in the lore, I've yet to see any codebase that actually takes advantage of this to provide implementations that are generic over different semirings. So one of my secondary goals has been to make this parameterization explicit, and to make sure to do so in a way that doesn't diminish the performance of the tagger. By making the code modular in this way, it should also help when implementing variations on HMMs like higher-order HMMs, autoregressive HMMs, etc. And for doing this sort of thing right, you really need a type system you can trust, which means Haskell (or Agda or Coq). Also, most of the current work has been done in imperative languages only, so using a functional language provides a whole new arena of research on optimizations and the like.
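To make the parameterization concrete, here's a minimal sketch of the kind of thing I mean (it is not the tagger's actual code): a Semiring class, two instances, and one step of a forward/Viterbi-style recurrence written once, generically. Running the same step with Prob gives the forward algorithm; running it with MaxProb gives Viterbi scores.

class Semiring r where
    zero, one    :: r
    (<+>), (<.>) :: r -> r -> r

-- Sum-product semiring: the forward algorithm (total probability).
newtype Prob = Prob Double deriving Show
instance Semiring Prob where
    zero = Prob 0
    one  = Prob 1
    Prob x <+> Prob y = Prob (x + y)
    Prob x <.> Prob y = Prob (x * y)

-- Max-product ("Viterbi") semiring: the best path score.
newtype MaxProb = MaxProb Double deriving Show
instance Semiring MaxProb where
    zero = MaxProb 0
    one  = MaxProb 1
    MaxProb x <+> MaxProb y = MaxProb (max x y)
    MaxProb x <.> MaxProb y = MaxProb (x * y)

-- One step of the generic recurrence: from the previous column of scores,
-- the transition scores, and the emission score for the current observation,
-- compute the next column of scores.
step :: Semiring r
     => [s]             -- hidden states
     -> (s -> r)        -- previous column
     -> (s -> s -> r)   -- transition score
     -> (s -> r)        -- emission score for the current observation
     -> (s -> r)        -- next column
step states prev trans emit s' =
    emit s' <.> foldr (<+>) zero [ prev s <.> trans s s' | s <- states ]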

So, that was the summer. Towards the end of the summer I did a writeup for it, though it's not entirely finished yet (i.e., ready for publicity/publication). I've continued developing it for my research assistantship this year, which means integrating it with a variant of the Malt parser and seeing how well we can do online interactive semantic parsing of military data (which also presents an LM problem due to the huge number of OOVs, acronyms, and the use of terms like "green 42" as names).


So, I'm working on a Python project again. And, like usual, I'm finding a dearth of helpful tutorials for advanced hacking. The current problem? I have a module that I want to break up into a package in order to make it easier to mess with, but I want foo/__init__.py to export the same API as the old foo.py did— that is, not breaking old code by requiring extra namespace qualifiers.

I've managed to hack something together that mostly works, but I'm far from pleased. Client code can import the package like usual, and that seems to work. But the library's doctest tests don't work anymore because of an ImportError from python not being able to find submodules, despite having found them just fine for the client programs.

And for trying to debug this the internet is useless. All the searches I've come up with give a pasted copy of the official module tutorial, or are specific projects' FAQ pages giving such helpful explanations as telling me Python can't find the module (duh) or offering such helpful advice as adjusting the Python path so it can find it ((a) duh, (b) that's not the problem). I've yet to find any resources on how the module/package system actually works, so I can't tell where things are going wrong.


I like Smalltalk. Of any of the OO options it's by far my favorite. And yet, this most powerful language of the '70s has been relegated to oblivion. Robert Martin of Object Mentor Inc. gives a talk at Rails Conf 2009, "What Killed Smalltalk Could Kill Ruby, Too", which is well worth watching. I've since abandoned the whole OO paradigm in favor of functionalism, but I think this talk also has a good deal to say to the Haskell community (in fact, hat tip to Nick Mudge on Planet Haskell).

In particular, around 37:00 to 41:00, Martin talks about one of the three major things that killed Smalltalk. This one is the greatest danger for the Haskell community: arrogance and parochialism as a result of an emphasis on purity. The complaint is a common one, though I think the mention of purity is something which should be considered in more depth. (Certainly purity is one of the highest horses we Haskellers will climb upon.) In an interesting addition to the usual dialogue, Martin posits professionalism as the countervailing force we need to maintain in the face of the growth of the community.

I highly recommend the video. The actual talk starts about six minutes in, and after the ending at 50:00 there's a Q&A session with a couple good questions.


Joel on Software says somewhere that there are two things every programmer must understand to call themselves a computer scientist. The first: pointers, which can only be understood —in all their subtle horror— by learning C (or assembly). The second is recursion, which can only really be learned from pure functional languages (or mathematical topology). Many imperative programmers think they understand recursion, but they don't. Lists are pretty. Trees are cute. Binary is pretty cute. But you don't understand recursion until you've been throwing bananas at bushes and convinced people it makes sense.

Today I present a job problem a coworker thought was impossible. It's a run of the mill problem, says the Haskeller, but it highlights the extent to which Java and imperative thinking cloud the simple mathematical problem and the equally simple functional solution. Consider for the moment that you're writing a program to do parsing-based machine translation. You have a bunch of CFG-like rules. Somewhere in your program you have a function that takes, for each non-terminal position, a list of all parses which can produce the given non-terminal, and you want to try all possible ways of completing each rule. For grammars restricted to only having at most two nonterminals per rule, your function looks something like this:

public void completeCell(a bunch of arguments,
                         ArrayList<Rule> rules,
                         ArrayList<ArrayList<Parse>> allParses) {
    for (Rule r : rules) {
        if (r.arity == 1) {
            for(Parse p : allParses.get(0)) {
                ArrayList<Parse> antecedents = new ArrayList<Parse>();
                antecedents.add(p);
                doCrazyStuff(a bunch of arguments, antecedents);
            }
        } else if (r.arity == 2) {
            for(Parse p0 : allParses.get(0)) {
            for(Parse p1 : allParses.get(1)) {
                ArrayList<Parse> antecedents = new ArrayList<Parse>();
                antecedents.add(p0);
                antecedents.add(p1);
                doCrazyStuff(a bunch of arguments, antecedents);
            }
            }
        } else {
            System.crash("ohnoes, we can only do two!");
        }
    }
}

Did you read all that? Neither did I. But now it's your job to generalize this function so that it will work for rules with an arbitrary number of nonterminals. Obviously, adding extra conditionals with additional loops can only ever get so far. I bring up that the problem domain involves formal language theory, not just because it does, but because it gives us a much cleaner way to think about the problem without code bloat obscuring our vision. Consider the following class of formal languages: someone hands you a sequence of sets of letters, and you must enumerate all sequences of letters consistent with it (that is, the letter at position X is an element of the set at position X, for all X). In particular, your algorithm for enumerating these sequences must work no matter how long the (finite) sequence of (finite) sets is. These are exactly the same problem. All that fluff about what you do with the sequences after you've created them is just that. Fluff. Ditto for what the letters look like inside and why you'd want to do this in the first place. But given this simple problem enumerate :: Seq (Set a) -> Set (Seq a), do you see what the answer is?

See if you can work it out. )

fun

5 Apr 2008 09:57 pm

It's been a long time since I've had fun with computers. It hasn't been bad, mind, but over the last few years it's always been work; it's been a while since I just sat down to tinker around with things. Over the last couple weeks I've started playing around with Haskell (again) and with Darcs (for the first time) and it's been fun.

That is all.


School's started, or rather, school's gotten under way. I rather enjoy the classes I'm taking (which can be seen on my updated schedule). I've fallen far behind on LJ, so if there's anything you think I should read in the last fortnight~month, let me know.

Since classes've started and as I've been fading out the cat and psu and fading jhu in, I've been doing some reconfiguring of my environment. I've also been learning LaTeX (both for linguistics and for cs), which can be a strange beast to get into the guts of, since all the online info seems to be aimed at newcomers only. In any case, it struck me that I've never made that geekmost step and posted my various profiles and macros for bash, vim, mutt, latex,... . Should anyone be interested, at some point in the nearish future (which longtime readers will know to mean a month from now when I get my next spare moment) I think I'll finally do that. It's about time I got back into generating content for this blog.

When I was working at the cat I wrote a lot of helpful little scripts for my sysadminning (and another bunch for ldap) and from classes I have a third batch for some relatively routine machine learning metaprocessing (separating a corpus into pieces for 10-fold cross-validation, etc). While all the scripts and configs are pretty small in themselves, all together they make for an enjoyable suite and others might find a few treasures in there.

Also, since I've been fading jhu in and getting set up for my classes, a PSA. As y'all know, I use OSX as my os of choice. Of all the oses out there it strikes the best balance between usability and configurability, imo; but this is not a sales pitch for mac. Y'see, osx is a posix system but unlike linuces the package management system it comes with is rather feeble and oriented towards gui apps and not basic tools; it does not use Aptitude (of Debian fame), nor RPM (of RedHat fame), nor even Portage (of Gentoo fame). Yes, there are various implementations of these venerable systems for osx, but none of them ship natively.

Thence, the PSA: if you are a developer who is distributing code, you should be distributing the source code itself. Yes, it's very nice of you to offer debian packages or rpms (the corollary PSA is that you should be offering these as well), but you should not require your users to have these installed. If you're evangelizing to mac users, you should not require that they install fink in order to be able to install your code; they should be able to use cvs or subversion and to run the makefiles manually (however painful such an approach may be). Just because you're writing research software does not make you exempt from standard good practices. It's not just mac; this also holds for Solaris and many other posix systems still alive and well in the wild. Open source means open source; it does not mean "available through conventional package managers".

Well then, see you later!
