
Katie Miller is giving a talk about FP outreach and diversity at next month's Erlang User Conference. She sent a questionnaire to the Lambda Ladies mailing list about our experiences, and I thought I'd share my responses here.

What led you to pursue functional programming?

Curiosity. I was in grad school, working towards a masters in computer science. And I was particularly interested in programming languages, though I was only familiar with imperative and OO languages at that point. I saw a course on functional programming, so I signed up. Best CS decision I ever made.

What sort of perception did you have of functional programming before you learnt more about it? Was this a barrier? If so, how did you overcome this?

All I knew at the time was that it was some sort of paradigm completely different from imperative and OO. That's it really. This was before FP started becoming popular; so, long before Clojure or Scala were invented, and long before C++ considered adding lambdas/closures to the language. Even within the FP community, Haskell was still considered the new kid on the block (despite having been around for quite some time).

What were the challenges for becoming part of the FP community?

The main challenge was just in figuring out where the community was and how to take part. As I said, this was long before FP became popular. My first FP language was Haskell, but I'd learned it in that course on functional programming so I didn't really know what the community was like. It was a year or two after taking the class that I decided to start really using Haskell for projects. At the time I was taking part in the Perl community, so I thought I'd go searching for some Haskell mailing lists to join. That's when I found the firehose that is Haskell Cafe.

Why do you think women are underrepresented in FP, more so than in programming generally?

I think there are a number of reasons. One of the big ones is how academic the community is. I don't mean that in the way people usually do. I'm an academic, and I love it here! No, the problem is that this creates a huge selection bias. I only really found FP by stumbling into it, and I only stumbled into it because I had a number of supportive advisors who helped foster my interest in programming languages. By the time I found FP, many women would have already been filtered out. Just getting into and affording college is a huge thing, especially for women of color. Let alone making it through undergrad and then getting into a master's program in CS without a bachelor's in CS. Let alone ending up at a school that can offer good FP classes, and finding those supportive advisors to help you along and guide you in the right direction.

If my story is anything to go by, it takes a lot of privilege (and luck) just to get to the point where you discover FP. After that, then you add on all the issues about maintaining community involvement. Because the community is so academic, this heightens issues of impostor syndrome. (Even men are driven out of FP due to impostor syndrome!) And since FP tends to be sold in a hyper-intellectualized manner, this evokes the "math is hard" brand of anti-intellectualism. While this drives a lot of people away, I think it has a differentially powerful impact on women due to the way we gender the sciences. That is, FP propaganda has a habit of taking the things which cause women to be underrepresented in STEM generally, and then cranking them up to eleven.

Another issue, and one I haven't seen discussed very often, is the fact that many FP communities are also FOSS communities. Women are more underrepresented in FOSS than in other CS communities, so the fact that FP tends to be FOSS means that women will tend to be more underrepresented in FP than other CS communities.

What strategies do you think the community could employ to address this problem and improve the (gender, and other types of) diversity in FP?

Setting up communities which aren't so hyper-intellectualized is a big step. Getting rid of all that propaganda and just treating FP like any other paradigm will do a lot to mitigate the impact of impostor syndrome and "math is hard" anti-intellectualism. It's no panacea, but it's probably the easiest thing we can tackle. Addressing the systemic issues is a lot harder.

Do you think it helps to have a women's group like Lambda Ladies? How has it been helpful for you?

I do think it helps. Over the years I've seen a lot of women come and go (mostly go) on Haskell Cafe. Overall I feel like the Cafe is one of the safer and more welcoming communities, but we've still had our misogynistic flareups. And after each one, I've watched the subsequent evacuation as women no longer feel quite so safe or welcome. By offering a safe space, women's groups are an important form of resistance against this sort of problem. It's a space where you don't always have to be on your guard against harassment. It's a space where you don't have to worry about how you present yourself, don't have to worry that femininity will undermine your credibility, don't have to worry about how asking "stupid" questions will affect the way people think of women as a whole.

Also —we don't really do this on LL, but— women's groups can provide a safe environment for venting about the sorts of problems we encounter online, in the work force, etc. Venting is always a tricky topic, but I think the importance of venting is grossly underrated. Whatever community you're a part of, bad things are going to come up sooner or later. When that happens, having a side community where you can let off steam or discuss why the particular thing is problematic is an important way to deal with the emotional fallout of these bad things. Once you've dealt with it, you can return to the main community; but if you have nowhere to deal with it, then things build up and up until you're just done and you quit the community.

In addition to providing a safe space, women's groups also serve an important role regarding announcements for jobs, conferences, etc. The announcements we get are tailored for women and so include important details like how welcoming they are of women, whether they can offer travel expenses, whether they offer child care, and so on.

For me, LL has been helpful mainly as a place to witness women in FP. Just seeing other women is energizing, and keeps me interested in being out there as part of the general FP community. The bit about announcements has also been helpful.


So, I just encountered a most delicious type the other day:

class Finite a where
    assemble :: Applicative f => (a -> f b) -> f (a -> b)

What's so nice about it is that the only way you can implement it is if the type a is in fact finite. (But see the notes.) So the questions are:

  • Can you see why?
  • Can you figure out how to implement it for some chosen finite type? (One possibility is sketched after these questions.)
  • Can you figure out how to implement it in general, given a list of all the values? (you may assume Eq a for this one)
  • Can you figure out how to get a list of all the values, given some arbitrary implementation of assemble?
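To make the second question concrete, here's what an instance might look like for one small finite type (a sketch, and only a spoiler for that one question):

instance Finite Bool where
    -- Query g at every inhabitant of the type, then tabulate the results
    -- back into a function.
    assemble g = (\vt vf b -> if b then vt else vf) <$> g True <*> g False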
( A trivial note ) ( A big note, also a hint perhaps )

I've been working on a tagging library (and executable) for a bit over a year now. When the project started I had the advantage of being able to choose the language to do it in. Naturally I chose Haskell. There are numerous reasons for this decision, some of which have been derided as "philosophical concerns". Certainly some of the reasons why Haskell is superior to other languages do border on the philosophical. Y'know, silly little things like the belief that type systems should prevent errors rather than encouraging them to proliferate. I'm sure you've heard the arguments before. They're good arguments, and maybe they'll convince you to try out Haskell in your basement. But in many so-called "enterprise" settings, anything that even smells like it might have a basis in theoretical fact is automatically wrong or irrelevant; whatever you do in the privacy of your basement is your own business, but heaven forbid it have any influence on how decisions are made in the workplace! So, here is a short list of entirely pragmatic, practical, and non-theoretical reasons why Haskell is superior to Java for implementing enterprise programs. More specifically, these are reasons why Haskell is superior for my project: perhaps they don't matter for yours, or perhaps they'll be enough to convince your boss to let you give Haskell a try. Because design decisions are often project-specific, each point explains why it matters for Posta in particular.

  • Haskell has powerful frameworks for defining modular, high-performance, non-trivial parsers (e.g., Attoparsec). In natural language processing (NLP), just like system administration, over half of the work you do involves dealing with a handful of different ad-hoc, poorly defined file formats: reading them, generating them, converting from one format to another, etc. Because every one of these formats grew out of a slow accretion of features for one particular project, they're riddled with inconsistencies, idiosyncratic magic values, corner cases, and non-context-free bits that require special handling. In Java the premier tool (so far as I know) for defining parsers is JavaCC. (Like the C tools lex and yacc, JavaCC uses its own special syntax and requires a preprocessor, whereas Attoparsec and the like don't. However, this may be a "philosophical" issue.) More to the point, as of the last time I used it, JavaCC was designed for the nice clean grammars used by programming languages, and it doesn't handle inconsistent and irregular grammars very well. (A small Attoparsec sketch follows this list.)
  • Posta uses a system of coroutines (called "iteratees") in order to lazily stream data from disk, through the parsers, and into the core algorithms, all while maintaining guarantees about how long resources (e.g., file handles, memory) are held for. This allows handling large files, because we don't need to keep the whole file in memory at once, either in its raw form or in the AST generated by parsing it. For modern enterprise-scale NLP, dealing with gigabyte-sized files is a requirement; because many NLP projects are not enterprise-scale, you get to spend extra time chopping up and reformatting files to fit their limitations. Last time I used JavaCC it did not support incremental parsing, and according to the advertised features it still doesn't. In addition, implementing coroutines is problematic because Java's security model precludes simple things like tail-call optimization, meaning that you can only support this kind of streaming when the control flow is simple enough to avoid stack overflows.
  • Haskell has awesome support for parallelism. One version, called STM, provides composable atomic blocks (which match the way we naturally think about parallelism) combined with lightweight threads (which make it cheap and easy); see the STM sketch after this list. Java has no support for STM. I am unaware of any support for lightweight threads in Java. The only parallelism I'm aware of in Java is the monitor-style lock-based system with OS threads. As with all lock-based systems, it is non-composable and difficult to get right; and as with using OS threads anywhere else, there is high overhead which removes the benefits of parallelizing many programs.
  • Posta makes extensive use of partial evaluation for improving performance; e.g., lifting computations out of loops. When doing NLP you are often dealing with triply-nested loops, so loop-invariant code motion is essential for performance. In my benchmarks, partial evaluation reduces the total running time by 10%. If raw numbers don't convince you: using partial evaluation allows us to keep the code legible, concise, modular, and maintainable. The primary use of partial evaluation is in a combinator library defining numerous smoothing methods for probability distributions; the results of which are called from within those triply-nested loops. Without partial evaluation, the only way to get performant code is to write a specialized version of the triply-nested loop for every different smoothing method you want to support. That means duplicating the core algorithm and a lot of tricky math, many times over. There's no way to implement this use of partial evaluation in anything resembling idiomatic Java.
  • Posta uses an implementation of persistent asymptotically optimal priority queues which come with proofs of correctness. A persistent PQ is necessary for one of the tagger's core algorithms. Since the PQ methods are called from within nested loops, performance is important. Since we're dealing with giga-scale data, asymptotics are important. A log factor here or there means more than a 10% increase in total running time. In Java there's java.util.PriorityQueue but it has inferior asymptotic performance guarantees and is neither persistent nor synchronized. I'm sure there are other PQ libraries out there, but I doubt anyone's implemented the exact version we need and shown their implementation to be correct.
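To illustrate the parsing point above, here is a small Attoparsec sketch for a hypothetical ad-hoc format (one "word<TAB>tag" token per line, sentences terminated by a blank line, '#' lines as comments). It's illustrative only, not Posta's actual parsers:

import Data.Attoparsec.ByteString.Char8
import qualified Data.ByteString.Char8 as BS

-- One "word<TAB>tag" token per line.
tokenP :: Parser (BS.ByteString, BS.ByteString)
tokenP = do
  w <- takeWhile1 (\c -> c /= '\t' && c /= '\n')
  _ <- char '\t'
  t <- takeWhile1 (\c -> c /= '\n' && c /= '\r')
  endOfLine
  return (w, t)

-- Comment lines start with '#'.
commentP :: Parser ()
commentP = char '#' *> skipWhile (/= '\n') *> endOfLine

-- A sentence is a run of tokens terminated by a blank line.
sentenceP :: Parser [(BS.ByteString, BS.ByteString)]
sentenceP = skipMany commentP *> many1 tokenP <* endOfLine

corpusP :: Parser [[(BS.ByteString, BS.ByteString)]]
corpusP = many1 sentenceP <* endOfInput

Irregularities and magic values get handled by adding ordinary Haskell code right where they occur, rather than by fighting a parser generator.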
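And a tiny sketch of the STM point (hypothetical counters, not Posta code): two updates compose into a single atomic transaction, and the worker runs in a lightweight thread.

import Control.Concurrent (forkIO)
import Control.Concurrent.STM

-- Moving n between two counters is one transaction: no other thread can
-- observe the intermediate state, and no lock-ordering discipline is needed.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  modifyTVar' from (subtract n)
  modifyTVar' to   (+ n)

main :: IO ()
main = do
  a    <- newTVarIO (100 :: Int)
  b    <- newTVarIO 0
  done <- newTVarIO False
  _ <- forkIO $ do                             -- a lightweight thread
         atomically (transfer a b 10)
         atomically (writeTVar done True)
  atomically (readTVar done >>= check)         -- block until the worker is done
  mapM_ (\v -> readTVarIO v >>= print) [a, b]  -- prints 90 and 10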

I'll admit I'm not up to date on state-of-the-art Java, and I'd love to be proven wrong about these things being unavailable. But a couple years ago when I returned to Java after a long time away, I learned that all the hype I'd heard about Java improving over the preceding decade was just that: hype. Every time I've hoped Java would have some trivial thing, I've been disappointed. The most recent example is Java's complete refusal to believe in the existence of IPC (no, not RPC), and that's barely the tip of the iceberg.


Immutability and Blocks, Lambdas and Closures (André Pang): Mr. Python, please call the office. Your abstractions are leaking.


Last week I gave a debugging problem. Well, now it's time for the big reveal. If you'd like to try your hand at it, you should follow that link and stop reading this post. It took me a couple weeks to figure this one out. Given the nature of the symptoms (failure only on messages of lengths 255 and 383), it was pretty clear that this was some obscure and annoying bug, like an off-by-one error. I read through the relevant code for the whole communication stack (Haskell protobufs, Haskell FIFO IO, Java FIFO IO, Java protobufs) with an inkling of what the problem might be, but I still missed it the first time through.

In the hints I suggested that length 511 should also be problematic, and that we were using the length-delimited version of protobufs. For those who aren't familiar with protobufs there are only two relevant details of the encoding. The first is that length-delimited protobufs write the length before the payload (as mentioned in the hint), and the second is how that length is encoded. In order to reduce the message size, protobufs use a variable length encoding for integers. The encoding says that we read a byte at a time (little-endian order), keeping the low-order 7 bits as part of the number. If the 8th bit is unset, then we're done; if it's set, then we continue reading the next byte. Thus, the length 255 is encoded as 0xFF 0x01 ((0xFF .&. 0x7F) .|. (0x01 `shiftL` 7) == 255), 383 is encoded as 0xFF 0x02, and 511 is encoded as 0xFF 0x03. Conversely, the non-problematic 254 is encoded as 0xFE 0x01, and 256 is encoded as 0x80 0x02.
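For concreteness, here is a minimal sketch of such a varint decoder in Haskell, over a plain list of bytes (the real protobuf libraries work on ByteStrings, but the logic is the same):

import Data.Bits ((.&.), (.|.), shiftL, testBit)
import Data.Word (Word8)

-- Little-endian base-128 varint: 7 payload bits per byte; a set high bit
-- means "more bytes follow". Returns the value and the leftover bytes.
decodeVarint :: [Word8] -> Maybe (Int, [Word8])
decodeVarint = go 0 0
  where
    go _     _   []       = Nothing                    -- ran out of input
    go shift acc (b : bs)
      | testBit b 7 = go (shift + 7) acc' bs           -- keep reading
      | otherwise   = Just (acc', bs)                  -- last byte
      where acc' = acc .|. (fromIntegral (b .&. 0x7F) `shiftL` shift)

-- decodeVarint [0xFF, 0x01] == Just (255, [])
-- decodeVarint [0xFF, 0x02] == Just (383, [])
-- decodeVarint [0x80, 0x02] == Just (256, [])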

The crucial piece of knowledge, however, is that for some unfathomable reason Java decided to define its byte type as a signed numeric type! On the one hand, this is perverse because bytes are always used when dealing with binary formats and almost never used for actual numerical computation. On the other hand, it is sadistic because every other language in common use (and most of the ones not in common use) has unsigned bytes, so Java's choice to use signed bytes is exquisitely designed to confound any programmer who has ever used another language. More particularly, this means that the implicit conversions between bytes and other numeric types preserve the signed value, not the representation. Thus, since 0xFF has the signed byte value -1, whenever you use this byte in, say, an int context, it will be silently and implicitly converted into the int -1 rather than the int 255 any sane person would expect. Further, since Java comes from the C tradition of languages, it likes to use magical values to indicate errors (e.g., using -1 when only positive values are valid). I'm sure you can see where this is headed.
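The same distinction is easy to see with Haskell's fixed-width types, where at least the conversion has to be written out explicitly (a small sketch):

import Data.Int  (Int8)
import Data.Word (Word8)

main :: IO ()
main = do
  let raw      = 0xFF :: Word8              -- the byte on the wire
      asSigned = fromIntegral raw :: Int8   -- the bit pattern Java's byte stores
  print (fromIntegral asSigned :: Int)      -- -1: value-preserving, like Java's widening
  print (fromIntegral raw      :: Int)      -- 255: what binary-format code actually wants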

The InputStream class is one of the plethora of classes for reading from a file. In particular, it's the one used by the CLIPC library for reading from Posix FIFOs. Another awful thing about Java is that it refuses to believe in anything that systems programmers use, because by definition these things aren't portable to every operating system. I believe I've alluded to this before. Among other things it means that Java steadfastly refuses to acknowledge the fact that separate processes can communicate without going through the network stack. If you want RPCs, then there are numerous options available to you. But heaven help you if you want to use IPC in order to avoid that overhead. And don't get me started on signal handling. Luckily for me, CLIPC does the nasty JNI coding necessary to convince Java to believe in FIFOs (and it even works for Windows FIFOs).

Subclasses of InputStream provide a method which allows you to read one byte at a time from wherever it reads from. However, this method has the type int read() where the return value -1 indicates end of file and values 0 through 255 indicate the byte that was read. This is just begging for bugs. You can't just return a byte, since all the bytes from 0x80 to 0xFE (-128 to -2) will be implicitly converted into invalid return values, and the byte 0xFF (-1) will be implicitly converted to EOF.

Using the standard trick for amortizing IO costs, the CLIPC library reads a buffer's worth of data whenever it needs to, and then returns that data as requested. That buffer has the exceptionally reasonable type byte[]. Unfortunately, when reading a single byte it does this innocuous little thing: int returnValue = buffer[i];. Very easy to miss when scanning through hundreds of lines of Java, even when you have an inkling that signed bytes may be at fault. This is why implicit conversion between numeric types is Evil. Haskell does the right thing in requiring all coercions to be explicit. To fix the bug we must do: int returnValue = ((int) buffer[i]) & 0xFF;. Unfortunately, the buffer field is private in FIFOInputStream so I can't make a subclass to fix the bug.

Surprisingly, except for the interpretation of 0xFF as EOF when reading the length header, Google's protobuf code works perfectly fine with all the other errorful byte values, since it uses bytewise operations instead of arithmetic and reads chunks at a time elsewhere, thus avoiding the bug. Good job Google :)

I've sent a bug report to the author of CLIPC, but I've yet to hear back, and cursory googling indicates he may have dropped off the internet a couple years ago. If I haven't heard back by the time I make the first public release of Posta (estimated mid-to-late May 2011), then I'll have to fork CLIPC since I don't want Posta's license to be dictated by this externality. LGPL 2.1+ is great and all, but this bug is a silly thing to base a license around.


I offer you a thought experiment. For your current project you've set up some inter-process communication. Nothing tricky involved, just your standard client–server deal. You've even outsourced the protocol design and are using Google's code to handle all the grody details of serializing and deserializing. Well, okay, on the server side you're using someone else's code but it implements the same protocol, right? Now you run into a bug.

The vast majority of the time everything works smoothly— even verified by taking SHA1 hashes of the messages on both sides and comparing them. But every so often the Java client crashes. In particular, it crashes whenever reading a result message (from the server) of length 255 or 383 and maybe some larger sizes. It does, however, work perfectly fine for intervening message lengths (including 254 and 256). So what's wrong?

( A hint ) ( Another hint ) ( One last hint )

Solution posted here.


This weekend I've been doing a solo hackathon to try to get Posta integrated with our variant of the Mink parser. All the core algorithms have already been implemented, so it's just been a whole lot of yak shaving. Namely, I have to define an IPC protocol for Haskell to talk to Java, implement the (Haskell) server executable and (Java) client stubs, and then try to shake the thing out to find bugs and performance holes. Ideally, by tuesday morning.

Unfortunately, Java doesn't believe in inter-process communication, so all the libraries out there are for doing RPCs. Since the parser and tagger will be operating interactively, and on the same machine, it's quite silly to go through the network stack just to pass a few bytes back and forth. Thankfully I found CLIPC which should do the heavy lifting of getting Java to believe in POSIX named pipes. In order to handle the "on the wire" de/encoding, I've decided to go with Google's protocol buffers since there's already a protobuf compiler for Haskell. I was considering using MessagePack (which also has Haskell bindings), but protobuf seemed a bit easier to install and work with.

For all the plumbing code I decided to try working with iteratees, which have lots of nice performance guarantees. The protobuf libraries don't have integrated support for iteratees, but the internal model is a variant of iteratees so I was able to write some conversion functions. Attoparsec also uses an iteratee-like model internally, and there's integration code available. For my uses I actually need an enumeratee instead of an iteratee, so I had to roll one of my own.
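For a feel of why the conversion is straightforward: attoparsec exposes a continuation-based Result that can be driven chunk by chunk, which is essentially the iteratee pattern. A sketch of the idea (not the actual conversion functions in Posta):

import qualified Data.Attoparsec.ByteString as A
import qualified Data.ByteString as BS

-- Feed a stream of chunks to an attoparsec parser, stopping as soon as it
-- produces a value or fails; leftover input is simply dropped here.
runChunked :: A.Parser a -> [BS.ByteString] -> Either String a
runChunked p = go (A.parse p)
  where
    go k []       = finish (k BS.empty)         -- an empty chunk signals EOF
    go k (c : cs) = case k c of
                      A.Partial k' -> go k' cs  -- parser wants more input
                      r            -> finish r
    finish (A.Done _ x)     = Right x
    finish (A.Fail _ _ err) = Left err
    finish (A.Partial _)    = Left "unexpected end of input"

A real enumeratee additionally hands the unconsumed input on to the next consumer instead of dropping it.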

  • TODO: (easy) move the CG2 training module into the library
  • TODO: (low priority) write a CG3 training module
  • DONE: write an unparser for TnT lexicon and trigram files
  • TODO: (easy) write function to feed trained models into the unparser
  • TODO: (postponed) write wrapper executable to train models and print them to files
  • TODO: (postponed) write function to read TnT parser output into models directly (the TnT file parsers were done previously)
  • DONE: choose a library for commandline argument munging (cmdargs)
  • TODO: add commandline arguments for passing models to the server
  • DONE: write protocol buffer spec for IPC protocol
  • DONE: write Java client handlers for the IPCs
  • TODO: (low priority) write Haskell client handlers for debugging/verification of Java
  • TODO: write Haskell code for dumping intern tables to files, and reading them back in
  • TODO: write Java code for reading intern table files, so the client can dereference the ints
  • DONE: write functions for converting the protobuf Get monad into an iteratee or enumeratee
  • TODO: write Haskell server handlers for the IPCs
  • TODO: write STM code for parallelizing the tagging and IPC handling
  • DONE: write function for converting attoparsec parsers into enumeratees
  • TODO: (low priority) integrate attoparsec enumeratees into model training, etc, to replace top-level calls to many
  • DONE: write lots of other auxiliary functions for bytestring, attoparsec, and iteratee

Classes have started up again, whence my month of absence. So I figure it's time to mention what I've been up to.

Over the summer I was working on developing an HMM-based part-of-speech tagger in Haskell. Most NLP folks consider POS tagging to be a "solved problem", and despite growing long in the tooth, TnT (which uses second-order HMMs) is still very close to state-of-the-art; so why bother? Two reasons. First, contrary to popular opinion, POS tagging is not a solved problem. We can get good accuracy for English, which has fixed word order and impoverished morphology, but we still don't really know how to handle morphologically rich languages with free word order. Moreover, the taggers we have have all been tested extensively on English and similar languages, but we don't really know how well different approaches apply to, say, Turkish, Hungarian, Japanese, Korean, Tzotzil, Quechua,...

The second reason is that my real goal is to handle supertagging for CCG, and in particular to do this for exploring online and interactive algorithms for tagging. Most of the current technology is focused on batch processing and off-line algorithms, which means that it isn't terribly useful for developing, say, an online system for real-time human–robot interaction, nor for exploring questions re the cognitive plausibility of something like supertagging serving a role in human processing of language. For doing this sort of research, TnT is too old and crotchety to work with, and the standard CCG supertaggers (OpenCCG, C&C Tools) are too integrated into their CCG parsing projects to be very amenable either. So, a new tagger writes I.

It is well-known that many common NLP algorithms for HMMs and chart parsing can be generalized to operate over arbitrary semirings. Before this was realized, the same algorithms were invented over and over, specialized to different semirings. While it's common in the lore, I've yet to see any codebase that actually takes advantage of this to provide implementations that are generic over different semirings. So one of my secondary goals has been to make this parameterization explicit, and to make sure to do so in a way that doesn't diminish the performance of the tagger. By making the code modular in this way, it should also help when implementing variations on HMMs like higher-order HMMs, autoregressive HMMs, etc. And for doing this sort of thing right, you really need a type system you can trust, which means Haskell (or Agda or Coq). Also, most of the current work has been done in imperative languages only, so using a functional language provides a whole new arena of research on optimizations and the like.
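Here's a sketch of what that parameterization looks like (hypothetical names; the real interface is more elaborate): the same generic recursion computes forward probabilities or Viterbi scores depending only on which semiring instance you pick.

-- A minimal semiring class.
class Semiring r where
  zero, one    :: r
  (<+>), (<.>) :: r -> r -> r

-- Sum/product of probabilities: the forward algorithm.
newtype Prob = Prob Double deriving (Show)
instance Semiring Prob where
  zero = Prob 0
  one  = Prob 1
  Prob x <+> Prob y = Prob (x + y)
  Prob x <.> Prob y = Prob (x * y)

-- Max/product: the same generic code computes Viterbi scores.
newtype MaxProb = MaxProb Double deriving (Show)
instance Semiring MaxProb where
  zero = MaxProb 0
  one  = MaxProb 1
  MaxProb x <+> MaxProb y = MaxProb (max x y)
  MaxProb x <.> MaxProb y = MaxProb (x * y)

-- One column of a generic HMM recursion, written once for any semiring:
-- new[j] = sum_i old[i] <.> trans i j <.> emit j
stepHMM :: Semiring r => (Int -> Int -> r) -> (Int -> r) -> [r] -> [r]
stepHMM trans emit old =
  [ foldr (<+>) zero [ x <.> trans i j <.> emit j | (i, x) <- zip [0 ..] old ]
  | j <- [0 .. length old - 1] ]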

So, that was the summer. Towards the end of the summer I did a writeup for it, though it's not entirely finished yet (i.e., ready for publicity/publication). I've continued developing it for my research assistantship this year, which means integrating it with a variant of the Malt parser and seeing how well we can do online interactive semantic parsing of military data (which also presents an LM problem due to the huge number of OOVs, acronyms, and the use of terms like "green 42" as names).


So, I'm working on a Python project again. And, like usual, I'm finding a dearth of helpful tutorials for advanced hacking. The current problem? I have a module that I want to break up into a package in order to make it easier to mess with, but I want foo/__init__.py to export the same API as the old foo.py did— that is, not breaking old code by requiring extra namespace qualifiers.

I've managed to hack something together that mostly works, but I'm far from pleased. Client code can import the package like usual, and that seems to work. But the library's doctest tests don't work anymore because of an ImportError from python not being able to find submodules, despite having found them just fine for the client programs.

And for trying to debug this the internet is useless. All the searches I've come up with give a pasted copy of the official module tutorial, or are specific projects' FAQ pages giving such helpful explanations as telling me Python can't find the module (duh) or offering such helpful advice as adjusting the Python path so it can ((a) duh, (b) that's not the problem). I've yet to find any resources on how the module/package system actually works to know where things are going wrong.


I like Smalltalk. Of any of the OO options it's by far my favorite. And yet, this most powerful language of the '70s has been relegated to oblivion. Robert Martin of Object Mentor Inc. gives a talk at Rails Conf 2009, "What Killed Smalltalk Could Kill Ruby, Too", which is well worth watching. I've since abandoned the whole OO paradigm in favor of functionalism, but I think this talk also has a good deal to say to the Haskell community (in fact, hat tip to Nick Mudge on Planet Haskell).

In particular, around 37:00 to 41:00, Martin talks about one of the three major things that killed Smalltalk. This one is the greatest danger for the Haskell community: arrogance and parochialism as a result of an emphasis on purity. The complaint is a common one, though I think the point about purity deserves deeper consideration. (Certainly purity is one of the highest horses we Haskellers will climb upon.) In an interesting addition to the usual dialogue, Martin posits professionalism as the countervailing force we need to maintain in the face of the growth of the community.

I highly recommend the video. The actual talk starts about six minutes in, and after the ending at 50:00 there's a Q&A session with a couple good questions.


Joel on Software says somewhere that there are two things every programmer must understand to call themselves a computer scientist. The first: pointers, which can only be understood —in all their subtle horror— by learning C (or assembly). The second is recursion which can only really be learned from pure functional languages (or mathematical topology). Many imperative programmers think they understand recursion, but they don't. Lists are pretty. Trees are cute. Binary is pretty cute. But you don't understand recursion until you've been throwing bananas at bushes and convinced people it makes sense.

Today I present a problem from work that a coworker thought was impossible. It's a run-of-the-mill problem, says the Haskeller, but it highlights the extent to which Java and imperative thinking cloud the simple mathematical problem and the equally simple functional solution. Consider for the moment that you're writing a program to do parsing-based machine translation. You have a bunch of CFG-like rules. Somewhere in your program you have a function that takes, for each non-terminal position, a list of all parses which can produce the given non-terminal, and you want to try all possible ways of completing each rule. For grammars restricted to having at most two nonterminals per rule, your function looks something like this:

public void completeCell(a bunch of arguments,
                         ArrayList<Rule> rules,
                         ArrayList<ArrayList<Parse>> allParses) {
    for (Rule r : rules) {
        if (r.arity == 1) {
            for(Parse p : allParses.get(0)) {
                ArrayList<Parse> antecedents = new ArrayList<Parse>();
                antecedents.add(p);
                doCrazyStuff(a bunch of arguments, antecedents);
            }
        } else if (r.arity == 2) {
            for(Parse p0 : allParses.get(0)) {
            for(Parse p1 : allParses.get(1)) {
                ArrayList<Parse> antecedents = new ArrayList<Parse>();
                antecedents.add(p0);
                antecedents.add(p1);
                doCrazyStuff(a bunch of arguments, antecedents);
            }
            }
        } else {
            System.crash("ohnoes, we can only do two!");
        }
    }
}

Did you read all that? Neither did I. But now it's your job to generalize this function so that it will work for rules with an arbitrary number of nonterminals. Obviously, adding extra conditionals with additional loops can only ever get so far.

I bring up that the problem domain involves formal language theory, not just because it does, but because it gives us a much cleaner way to think about the problem without code bloat obscuring our vision. Consider the following class of formal languages: someone hands you a sequence of sets of letters, and you must enumerate all sequences of letters consistent with it (that is, the letter at position X is an element of the set at position X, for all X). In particular, your algorithm for enumerating these sequences must work no matter how long the (finite) sequence of (finite) sets is. These are exactly the same problem. All that fluff about what you do with the sequences after you've created them is just that. Fluff. Ditto for what the letters look like inside and why you'd want to do this in the first place. But given this simple problem enumerate :: Seq (Set a) -> Set (Seq a), do you see what the answer is?

( See if you can work it out. )
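For what it's worth, here is a sketch of the shape of the functional answer, using plain lists in place of Seq and Set (skip it if you'd rather work it out yourself):

-- Pick one element from each set, in order, enumerating every consistent sequence.
enumerate :: [[a]] -> [[a]]
enumerate = foldr (\letters rest -> [ x : xs | x <- letters, xs <- rest ]) [[]]

-- enumerate ["ab", "cd"] == ["ac", "ad", "bc", "bd"]
-- which is just the standard library's sequence at the list monad.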

fun

5/4/08 21:57

It's been a long time since I've had fun with computers. It hasn't been bad, mind, but over the last few years it's always been work, it's been a while since I just sat down to tinker around with things. Over the last couple weeks I've started playing around with Haskell (again) and with Darcs (for the first time) and it's been fun.

That is all.


School's started, or rather, school's gotten under way. I rather enjoy the classes I'm taking (which can be seen on my updated schedule). I've fallen far behind on LJ, so if there's anything you think I should read in the last fortnight~month, let me know.

Since classes've started and as I've been fading out the cat and psu and fading jhu in, I've been doing some reconfiguring of my environment. I've also been learning LaTeX (both for linguistics and for cs), which can be a strange beast to get into the guts of, since all the online info seems to be aimed at newcomers only. In any case, it struck me that I've never made that geekmost step and posted my various profiles and macros for bash, vim, mutt, latex,... . Should anyone be interested, at some point in the nearish future (which longtime readers will know to mean a month from now when I get my next spare moment) I think I'll finally do that. It's about time I got back into generating content for this blog.

When I was working at the cat I wrote a lot of helpful little scripts for my sysadminning (and another bunch for ldap) and from classes I have a third batch for some relatively routine machine learning metaprocessing (separating a corpus into pieces for 10-fold cross-validation, etc). While all the scripts and configs are pretty small in themselves, all together they make for an enjoyable suite and others might find a few treasures in there.

Also, since I've been fading jhu in and getting set up for my classes, a PSA. As y'all know, I use OSX as my os of choice. Of all the oses out there it strikes the best balance between usability and configurability, imo; but this is not a sales pitch for mac. Y'see, osx is a posix system, but unlike linuces, the package management system it comes with is rather feeble and oriented towards gui apps rather than basic tools; it does not use Aptitude (of Debian fame), nor RPM (of RedHat fame), nor even Portage (of Gentoo fame). Yes, there are various implementations of these venerable systems for osx, but none of them ship natively.

Thence, the PSA: if you are a developer who is distributing code, you should be distributing the source code itself. Yes, it's very nice of you to offer debian packages or rpms (the corollary PSA is that you should be offering these as well), but you should not require your users to have these installed. If you're evangelizing to mac users, you should not require that they install fink in order to be able to install your code; they should be able to use cvs or subversion and to run the makefiles manually (however painful such an approach may be). Just because you're writing research software does not make you exempt from standard good practices. It's not just mac; this also holds for Solaris and many other posix systems still alive and well in the wild. Open source means open source; it does not mean "available through conventional package managers."

Well then, see you later.
