
Life has been good overall. I’ve been wanting to write about various happenings of late (e.g., my trip to Nara for ICFP), but I’ve been terribly busy. Sure, sure, everyone’s busy. But no, my problem is that I have a terrible habit of overcommitting. For the longest time I’d always chalked it up to a personal failing; but lately I’m thinking that’s not quite right. Our society has a way of making us think whatever problems we face must be due to “personal failings”. One of the classic examples here is the Norman Door. But another classic example is the way we blame people with chronic conditions for the ableism they face.

So, what is a “Norman Door”? They’re called that because the problem was first (or most famously) highlighted by Don Norman. Have you ever had a door where you always pull when you’re supposed to push, or always push when you’re supposed to pull? Everyone has, and yet whenever we encounter them we always resort to blaming ourselves. Doors are such simple devices, surely any problems we have must be on us, right? But the problem isn’t us; the problem is the door. We discover how to operate the world around us by making use of affordances: flat/horizontal surfaces afford sitting and putting things on; handles afford grabbing and pulling; vertical surfaces afford pushing and leaning. So when public buildings don’t want people to sit on their ledges, they add bumps so they’re not smooth. When airplanes have surfaces they don’t want you to sit or put your feet on, they make them slanted so things don’t stay put. And a well-designed door is transparent about whether it should be pushed (plates or crashbars) or pulled (handles, especially vertical ones), and transparent about which side of the door needs operating (rather than, say, putting a doorknob in the center of the door). Those doors you can never get right are so difficult to deal with, not because you’re an idiot, but because they are poorly designed: the door’s affordances say to do one thing, when in fact you must do the opposite.

Poor design is ubiquitous, and I could go on all day about it. But the problem of Norman Doors isn’t just a problem of poor design, it’s a problem of social expectations. The problem isn’t just that these doors are annoying. It’s also that we blame ourselves for the failings of their designers. It’s also that this continuous low-grade annoyance exerts a continuous low-grade cost— in time, in flow, in emotional reserves. We get disrupted, frustrated, exhausted, and then we feel bad for not “measuring up” to society’s standards; and we reinforce that guilt by shaming others whenever they fall into the same traps. This is the same trick we play on minoritized people and people with chronic conditions. These people have to pay constant low-grade costs to overcome the iniquities of a society designed against them, but then we train them to blame themselves for encountering those injustices at all, let alone for not having the reserves to go on to lead “a productive life” after being exhausted by microaggressions.

Yes, I have problems overcommitting. But is it a personal failing? I’m not so sure. The problem is less one of not having enough time in an absolute sense, but rather a problem of not having enough spoons. I long ago got used to the constant low-grade costs of sexism, homophobia, transphobia, and saneism. But try as I might, I’ve not been able to get used to the costs of ableism. The sexism et al. were far worse in Bloomington than they are here in Mountain View. But from what I’ve seen so far, academia is far more amenable to folks with my sort of physical disabilities than industry is. It’s not even that Google is bad, per se; and I wouldn’t be surprised if it’s a lot better than most of industry. But even if the grass is browner on the other side, that doesn’t make it green here. The goog is great about providing ergonomic support, and that helps a ton. But the food/wellness program is grossly pro-orthorexic, which means it’s terrible for my dietary needs: I have issues with low electrolytes, and the whole “salt is bad m’kay” propaganda causes health problems. (Hint: salt is crucial for the proper functioning of neurons. Also for silly things like maintaining blood volume, and hence adequate blood pressure.) I can, of course, bring salt from home or make sure to have extra at breakfast and dinner; but there’s an ongoing cost to keeping extra vigilant about it.

One of the bigger and harder-to-address forms of ableism in industry is the requirement to be ever present. Office life is exhausting. The triggering of sensory hypersensitivity, and accusations of “antisociality” for wearing sensory-dep headphones. The expectation to sit still at your desk all day, and judgment for getting up to move around every pomodoro. Being interrogated if you use your cane irregularly, or being invisibilized if you use it regularly. To say nothing of the extreme, ubiquitous fat-shaming of California. Many days, I’d be fine to get stuff done if I could work from home, but it takes all my spoons just to be “present”. And after a whole day or a whole week of being “present” I don’t have any energy left to pursue my passions and ambitions. Is it my fault for “over”committing to these passions? For daring to have ambitions outside of surviving capitalism? Or is it a systemic problem that forces disabled people like myself to absorb the costs of society’s failure to design for the whole variety of human bodies?


Last time I talked a bit about ABTs; in particular, I introduced the notion of strongly-typed ABTs (or "GABTs" if you prefer) and showed how we can extend the basic idea of ABTs to guarantee well-typedness in addition to well-aritiedness. However, I also made a note that ensuring this sort of well-typedness runs counter to what Neel and other CMUers often do. One of my colleagues here at IU noticed the reason, so I thought I'd write a bit more about it.

The issue at stake here is how general we can make our ABT library, to minimize the amount of boilerplate needed whenever inventing a new language. By encoding object-language type systems into the kinding of the ABT, we restrict the possible object languages we can use the ABT implementation for (namely those object languages with type systems that can be embedded into whatever kinding the ABT has). To put a finer point on it, using the kinds presented in the previous post you cannot have binders in your type system. (Edit 2016.02.29: actually the details are more complicated.) This means no System F, and no dependent types. This is unfortunate, as the whole point of ABTs is to capture binding structure once and for all!

However, I'd like to reiterate that, for our purposes in Hakaru, this limitation poses no problem. Hakaru is simply-typed, so there are no type-level binders in sight. Moreover, we do a lot of program transformations in Hakaru. By using GABTs we can have GHC verify that our program transformations will never produce ill-typed Hakaru code, and that they will always produce Hakaru code of an appropriate type (e.g., the same type as the input term, for things like partial evaluation; but we have a number of type-changing transformations too). Thus, even though our GABT library cannot be reused for implementing languages with type-level binders, it still provides a substantial benefit for those languages without them.

Although our GABTs cannot handle type-level binders, that does not mean we're restricted to only working with simply typed languages. For example, intersection types are not usually thought of as "simple types"; but they do not require binders and so they're fine. More generally, Larry Moss is engaged in a research project where he asks, "given infinite time, how far could Aristotle have gotten in logic?" By which he means, given the Aristotelian restriction to syllogistic logics (i.e., ones without the quantifiers introduced by Frege), what are the limits in what we can cover? It turns out that we can cover quite a lot. Some syllogistic logics go beyond the power of the "Peano–Frege" boundary: they can handle comparing cardinality of sets! A good pictorial summary of this line of research is on slide 2 of this talk; and a bit more about the complexity results is given in this talk (the requisite picture is on slide 39).


Edit 2016.02.29: In actuality, there's nothing inherent in type theory that prohibits having type-level binders for our object language; it's a limitation in GHC. In particular, GHC doesn't allow lifting GADTs into data kinds. If we could lift GADTs, then we could simply use ABTs to define the syntax of object-language type expressions, and lift those to serve as the type indices for using ABTs to define the syntax of object-language term expressions. This stratified approach is sufficient to handle System F and any other non-dependent quantifiers. To go further and handle dependent quantifiers as well, we'd also need to be able to define the object-language's terms and types in a mutually inductive way.
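To make the stratified idea concrete, here is a hypothetical sketch in the idealized notation of the original post below (Ty, Term, and their constructors are invented for illustration, not Hakaru code). Because these object-language types have no binders, a plain ADT suffices and -XDataKinds can promote it:

-- Object-language types as a plain ADT; -XDataKinds promotes this to a kind.
data Ty = TNat | Ty :→ Ty

-- Object-language terms, indexed by the promoted Ty.
data Term ∷ Ty → ★ where
    Lit ∷ Int → Term TNat
    Add ∷ Term TNat → Term TNat → Term TNat
    App ∷ Term (a :→ b) → Term a → Term b

-- For System F we would want Ty itself to contain type variables and
-- ∀-binders, i.e. to be defined as an ABT/GADT in its own right; that is
-- precisely the GADT which GHC cannot lift to the kind level.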


Edit 2015.10.29: Be sure to also read the followup post on the benefits and limitations of this approach compared to the usual untyped ABTs.

Earlier this year Neel Krishnaswami talked about abstract binding trees (ABTs) [part 1] [part 2]. IMO, the best way to think about ABTs is as a generalization of abstract syntax trees (ASTs), though this is not a perspective sanctioned by the CMUers I’ve talked to. CMUers oppose this way of phrasing things, in part, because the ABT libraries they’re familiar with make crucial use of the design pattern of two-level types; but I think the essential insights of ABTs and two-level types are quite different, and we ought to keep the benefits of these two techniques distinct.

Over the past year I’ve been working on the inferential language[1] Hakaru, and in the new version of the compiler we’re using ABTs for our syntax trees. However, contrary to Neel’s stance against using strongly-typed internal representations for syntax, we extend the ABT approach to make use of GADTs to guarantee local well-typedness— since this in turn can be used to guarantee that program transformations are also well-typed. (If you don’t want those guarantees, then take a look at Jon Sterling’s abt library on Hackage[2].) In this post I’m going to present a simplified version of our architecture, and then talk about some of the extra stuff bringing it closer to our production architecture.

First things first

Since we want everything to be well-typed, we first must introduce some universe, U, of all the types in our language. (In Haskell we can implement such a universe by using the -XDataKinds extension, so I’ll equivocate between calling U a “universe” vs a “kind”.) For the rest of this post it doesn’t actually matter what lives in that universe[3], just so long as things match up when they need to. Since the choice of universe is irrelevant, we could abstract over U by turning on the -XPolyKinds extension; but I avoid doing so below, just to help keep things more concrete.
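For concreteness, here is one hypothetical universe we could pick (the names are invented for illustration; footnote [3] only assumes we have some function type (:→)):

-- A toy universe of object-language types. With -XDataKinds, U becomes a kind
-- and Nat and (:→) become types of that kind.
data U = Nat | U :→ U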

Implementing ASTs

The simplest way of thinking about well-typed ASTs is that they capture the set of terms generated by a (typed) signature; that is, the fixed point of some Σ ∷ [U] → U → ★. Unpacking the type for Σ, we have that every syntactic constructor s ∈ Σ is associated with some arity (the length of the list), each argument to s has some type in U (the elements of the list), and applying s to the right number of ASTs of the right types will generate a new AST with some type in U (the second argument to Σ).

To implement this fixed point we define an AST type which is parameterized by its signature. To ensure well-aritiedness (and well-typedness) of our ASTs with respect to that signature, we’ll need to introduce a helper type SArgs[4]. And to ensure that we obtain the least fixed-point of the signature, we’ll make everything strict.

infix  4 :$
infixr 5 :*

data SArgs ∷ (U → ★) → [U] → ★ where
    End  ∷ SArgs ast []

    (:*) ∷ !(ast u)
         → !(SArgs ast us)
         → SArgs ast (u : us)

data AST ∷ ([U] → U → ★) → U → ★ where
    (:$) ∷ !(σ us u)
         → !(SArgs (AST σ) us)
         → AST σ u
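As a quick sanity check of these definitions, here is a hypothetical signature over the toy universe sketched above (Lit, Add, and onePlusTwo are invented for illustration):

-- A tiny signature: literals take no subterms; addition takes two Nat subterms.
data Σ ∷ [U] → U → ★ where
    Lit ∷ Int → Σ [] Nat
    Add ∷ Σ [Nat, Nat] Nat

-- The AST for 1 + 2; the index guarantees Add gets exactly two Nat arguments.
onePlusTwo ∷ AST Σ Nat
onePlusTwo = Add :$ (Lit 1 :$ End) :* (Lit 2 :$ End) :* End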

Implementing ABTs

The problem with ASTs is that they have no notion of variables, and thus have no notion of variable binding. Naively we could implement binders like lambda-abstraction by having something like λ ∷ Σ [u, v] (u :→ v), but then we’d need to do a post-hoc check to ensure that the first argument to λ is in fact a variable. To build that check into the datatype itself we’d have to move λ into the definition of AST (since the first argument is of type Variable u rather than AST Σ u). If lambda-abstraction were the only binder we had, that might not be so bad; but any real-world language has a plethora of binders, and this approach doesn’t scale.

The essential idea behind ABTs is to abstract over the notion of binding itself. Given a single uniform definition of what it means to be a binding form, we don’t have to worry about adding a bunch of ad-hoc constructors to our AST datatype. Moreover, we can then provide single uniform definitions for things which mess with variables and are homomorphic over the signature. Things like capture-avoiding substitution and providing a HOAS API for our first-order representation.

The crucial step is to adjust our notion of what a signature contains. The basic signatures used above only contained applicative forms; i.e., things we can apply to locally-closed terms; i.e., what are called “functors” in the logic programming community. For ABTs we’ll want to allow our signatures to include any generalized quantifier. That is, our signatures will now be of type Σ ∷ [[U] × U] → U → ★. Previously, the arguments were indexed by U; now, they’re indexed by [U] × U. The length of the list gives the number of variables being bound, the types in the list give the types of those variables, and the second component of the pair gives the type of the whole locally-open expression.

To implement this we need to extend our syntax tree to include variable bindings and variable uses:

data SArgs ∷ ([U] → U → ★) → [[U] × U] → ★ where
    End  ∷ SArgs abt []

    (:*) ∷ !(abt vs u)
         → !(SArgs abt vus)
         → SArgs abt ((vs,u) : vus)

data ABT ∷ ([[U] × U] → U → ★) → [U] → U → ★ where
    (:$) ∷ !(σ vus u)
         → !(SArgs (ABT σ) vus)
         → ABT σ [] u

    Var  ∷ !(Variable v)
         → ABT σ [] v

    Bind ∷ !(Variable v)
         → !(ABT σ vs u)
         → ABT σ (v : vs) u

Time for an example of how this all fits together. To add lambda-abstraction to our language we’d have λ ∷ Σ [([u],v)] (u :→ v): that is, the λ constructor takes a single argument which is a locally-open term, binding a single variable of type u, and whose body has type v. So given some x ∷ Variable u and e ∷ ABT Σ [] v we’d have the AST (λ :$ Bind x e :* End) ∷ ABT Σ [] (u :→ v).
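Spelling that out as a hypothetical signature (Lam, App, and the helper lam are invented names):

-- Lam's single argument binds one variable of type u and has body type v;
-- App's two arguments are locally-closed (they bind nothing).
data Σ ∷ [[U] × U] → U → ★ where
    Lam ∷ Σ [([u], v)]              (u :→ v)
    App ∷ Σ [([], u :→ v), ([], u)] v

-- A smart constructor for λx.e, mirroring the example above.
lam ∷ Variable u → ABT Σ [] v → ABT Σ [] (u :→ v)
lam x e = Lam :$ Bind x e :* End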

“Local” vs “global” well-typedness

With the ABT definition above, every term of type ABT Σ vs u must be locally well-typed according to the signature Σ. I keep saying “locally” well-typed because we only actually keep track of local binding information. This is an intentional design decision. But only tracking local well-typedness does have some downsides.

So what are the downsides? Where could things go wrong? Given a locally-closed term (i.e., either Var x or f :$ e) any free variables that occur inside will not have their U-types tracked by Haskell’s type system. This introduces some room for the compiler writer to break the connection between the types of a variable’s binder and its use. That is, under the hood, every variable is represented by some unique identifier like an integer or a string. Integers and strings aren’t U-indexed Haskell types, thus it’s possible to construct a Variable u and a Variable v with the same unique identifier, even though u and v differ. We could then Bind the Variable u but Var the Variable v. In order to ensure global well-typedness we need to ensure this can’t happen.

One way is to keep track of global binding information, as we do in the paper presentation of languages. Unfortunately, to do this we’d need to teach Haskell’s typechecker about the structural rules of our language. Without a type-level implementation of sets/maps which respects all the axioms that sets/maps should, we’d be forced to do things like traverse our ASTs and rebuild them identically, but at different type indices. This is simply too hairy to stomach. Implementing the axioms ourselves is doubly so.

Or we could fake it, using unsafeCoerce to avoid the extraneous traversals or the complicated pattern matching on axioms. But doing this we’d erase all guarantees that adding global binding information has to offer.

A third approach, and the one we take in Hakaru, is to compartmentalize the places where variables can be constructed. The variable-generation code must be part of our trusted code base, but unlike the unsafeCoerce approach we can keep all the TCB code together in one spot rather than spread out across the whole compiler.
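For example, a minimal sketch of such a trusted module might look like the following (the representation of Variable and the name freshVariable are hypothetical, not Hakaru's actual definitions); everything outside the module imports freshVariable but never the Variable constructor itself:

import Control.Monad.State (State, get, put)

-- A name hint for readability, plus a unique identifier; the U-type is only
-- a phantom index, which is exactly why this constructor must stay hidden.
data Variable (u ∷ U) = Variable !String !Int

-- Fresh variables come from a monotonically increasing counter, so no two
-- variables (at any types) can ever share an identifier.
freshVariable ∷ String → State Int (Variable u)
freshVariable hint = do
    n ← get
    put (n + 1)
    return (Variable hint n)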

Stratifying our data types

The above definition of ABTs is a simplified version of what we actually use in Hakaru. For example, Hakaru has user-defined algebraic data types, so we also need case analysis on those data types. Alas, generic case analysis is not a generalized quantifier, thus we cannot implement it with (:$). We could consider just adding case analysis to the ABT definition, but then we’d start running into extensibility issues again. Instead, we can break the ABT type apart into two types: one for capturing variable uses and bindings, and the other for whatever syntax we can come up with. Thus,

data Syntax ∷ ([[U] × U] → U → ★) → ([U] → U → ★) → U → ★ where
    (:$) ∷ !(σ vus u)
         → !(SArgs abt vus)
         → Syntax σ abt u

data ABT ∷ ([[U] × U] → U → ★) → [U] → U → ★ where
    Syn  ∷ !(Syntax σ (ABT σ) u)
         → ABT σ [] u

    Var  ∷ !(Variable v)
         → ABT σ [] v

    Bind ∷ !(Variable v)
         → !(ABT σ vs u)
         → ABT σ (v : vs) u

Of course, since we’re going to be extending Syntax with all our language-specific details, there’s not a whole lot of benefit to parameterizing over σ. Thus, we can simplify the types considerably by just picking some concrete Σ to plug in for σ.

By breaking Syntax apart from ABT we can now extend our notion of syntax without worrying about the details of variable binding (which can be defined once and for all on ABT). But we could still run into extensibility issues. In particular, often we want to separate the fixed-point portion of recursive types from their generating functor so that we can do things like add annotations at every node in the recursive data type. A prime example of such annotations is keeping track of free variables, as in Neel’s original post. To allow this form of extensibility we need to break up the ABT type into two parts: the recursion, and the Syn/Var/Bind view of the ABT.

data ABT ∷ ([[U] × U] → U → ★) → [U] → U → ★ where
    Unview ∷ !(View σ (ABT σ) vs u) → ABT σ vs u

view ∷ ABT σ vs u → View σ (ABT σ) vs u
view (Unview e) = e

data View ∷ ([[U] × U] → U → ★) → ([U] → U → ★) → [U] → U → ★ where
    Syn  ∷ !(Syntax σ abt u)
         → View σ abt [] u

    Var  ∷ !(Variable v)
         → View σ abt [] v

    Bind ∷ !(Variable v)
         → !(View σ abt vs u)
         → View σ abt (v : vs) u

Now, to allow arbitrary annotations we’ll replace the data type ABT with an equivalent type class. Each instance of the ABT class defines some sort of annotations, and we can use the view and unview methods to move between the instance and the concrete View type.
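A minimal sketch of that class might look like the following, assuming (per the simplification above) some concrete signature Σ; PlainABT is an invented annotation-free instance, not Hakaru's actual one:

class ABT (abt ∷ [U] → U → ★) where
    unview ∷ View Σ abt vs u → abt vs u
    view   ∷ abt vs u → View Σ abt vs u

-- The simplest possible instance: no annotations, just the view itself.
newtype PlainABT (vs ∷ [U]) (u ∷ U) = PlainABT (View Σ PlainABT vs u)

instance ABT PlainABT where
    unview            = PlainABT
    view (PlainABT e) = e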

There’s one last form of extensibility we may want to add. Using fixed-point combinators gives us a way of describing complete trees. A different way of introducing recursion is with free monads. The free-monad combinator is just like the fixed-point combinator, except that we have an additional type parameter for metavariables and we have a data constructor for using those metavariables instead of requiring the recursion to ground out with a complete syntax tree. The reasons why this might be nice to do are beyond the scope of this post; the point is that if we want the option, we need to split the ABT class into two parts: one for the recursion itself, and another for the annotations.
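Ignoring the type indices, the contrast between the two recursion schemes is the familiar one (a standard sketch, not Hakaru's actual definitions):

-- The fixed-point combinator: only complete syntax trees.
newtype Fix f = Fix (f (Fix f))

-- The free-monad combinator: an extra parameter for metavariables, and a
-- constructor (Pure) for using one instead of grounding out in a complete tree.
data Free f a = Pure a | Free (f (Free f a))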

In the end, we have a four-level type: the Syntax, the View, the annotations, and the recursion.


[1] In the accepted/current parlance, Hakaru is a “probabilistic programming language”; but I and a number of other folks working on such languages have become disaffected with that term of late, since it’s not entirely clear what should and should not count as a “probabilistic” PL. Over the course of a number of discussions on the topic, I’ve settled on “inferential” PL as describing what is (or, at least what I find) interesting about “probabilistic” PL. I’ve been meaning to write a post about the subject, and hopefully this footnote will remind me to do so.

[2] N.B., the indexing used in that package is what we get if we erase/compactify the universe U. That is: the erasure of U is a singleton set; the erasure of [U] is isomorphic to the Peano numbers; the erasure of [[U] × U] is isomorphic to a list of Peano numbers; etc.

[3] Though at one point I assume we have functions, (:→), just for the sake of an example.

[4] Ideally we’d be able to flatten this type to avoid all the overhead of the linked list implementation. In fact, the entire AST node of (:$) together with its SArgs should be flattened. These nodes have the same general layout as the heap objects in the STG machine: a record with a pointer to the data constructor (i.e., element of the signature) followed by an appropriate number of arguments; and so in principle we ought to be able to implement them directly with a single STG heap object.


I finally got around to posting the slides for a talk I gave twice this summer: Probability Smoothing for NLP: A case study for functional programming and little languages. The first version of the talk was presented at the McMaster Workshop on Domain Specific Languages (and Ed Kmett has posted a video of that version on YouTube), with the presentation focused on EDSLs, using smoothing as the example. The second version was presented at the AMMCS minisymposium on Progress and Prospects in Model-Based Scientific Software Development, where the focus was more on the domain itself and on how using a DSL helps ensure the correctness, modularity, and maintainability of code for developing probability models. The slides are essentially the same for both talks, with the benchmarks updated a bit in the latter.

As you may have surmised, this is but a small facet of the Posta project I was working on last year. I had meant to submit it as a functional pearl for ICFP, but the timing didn't work out for that. After giving the McMaster version of the talk, Ed convinced me that I should publish the code for the smoothing DSL separately from the rest of Posta. So he's the one to blame for my being so slow in releasing the Posta code I promised this summer. Though seriously, I'd been considering breaking up and reorganizing the code anyways. Now that I'm back from ICFP and all my traveling over the summer, I hope to get that code pushed out soon. Sorry for the delay, y'all.


I've been working on a tagging library (and executable) for a bit over a year now. When the project started I had the advantage of being able to choose the language to do it in. Naturally I chose Haskell. There are numerous reasons for this decision, some of which have been derided as "philosophical concerns". Certainly some of the reasons why Haskell is superior to other languages do border on the philosophical. Y'know, silly little things like the belief that type systems should prevent errors rather than encourage them to proliferate. I'm sure you've heard the arguments before. They're good arguments, and maybe they'll convince you to try out Haskell in your basement. But in many so-called "enterprise" settings, anything that even smells like it might have a basis in theoretical fact is automatically wrong or irrelevant; whatever you do in the privacy of your basement is your own business, but heaven forbid it have any influence on how decisions are made in the workplace! So, here is a short list of entirely pragmatic, practical, and non-theoretical reasons why Haskell is superior to Java for implementing enterprise programs. More specifically, these are reasons why Haskell is superior for my project; perhaps they don't matter for yours, or perhaps they'll be enough to convince your boss to let you give Haskell a try. Because design decisions are often project-specific, each point explains why it matters for Posta in particular.

  • Haskell has powerful frameworks for defining modular, high-performance, non-trivial parsers (e.g., Attoparsec). In natural language processing (NLP), just like in system administration, over half of the work you do involves dealing with a handful of ad-hoc, poorly defined file formats: reading them, generating them, converting from one format to another, etc. Because every one of these formats grew out of a slow accretion of features for one particular project, they're riddled with inconsistencies, idiosyncratic magic values, corner cases, and non-context-free bits that require special handling. In Java the premier tool (so far as I know) for defining parsers is JavaCC. (Like the C tools lex and yacc, JavaCC uses its own special syntax and requires a preprocessor, whereas Attoparsec and the like don't; however, this may be a "philosophical" issue.) As of the last time I used it, JavaCC was designed for the nice clean grammars used by programming languages, and it didn't handle inconsistent and irregular grammars very well.
  • Posta uses a system of coroutines (called "iteratees") in order to lazily stream data from disk, through the parsers, and into the core algorithms, all while maintaining guarantees about how long resources (e.g., file handles, memory) are held for. This allows handling large files, because we don't need to keep the whole file in memory at once, either in its raw form or in the AST generated by parsing it. For modern enterprise-scale NLP, dealing with gigabyte-sized files is a requirement; and because many NLP projects aren't built for that scale, you get to spend extra time chopping up and reformatting files to fit their limitations. The last time I used JavaCC it did not support incremental parsing, and according to the advertised features it still doesn't. In addition, implementing coroutines in Java is problematic because Java's security model precludes simple things like tail-call optimization, meaning that you can only support this kind of streaming when the control flow is simple enough to avoid stack overflows.
  • Haskell has awesome support for parallelism. One version, called STM, provides composable atomic blocks (which match the way we naturally think about parallelism) combined with lightweight threads (which make parallelism cheap and easy); see the sketch after this list. Java has no support for STM. I am unaware of any support for lightweight threads in Java. The only parallelism I'm aware of in Java is the monitor-style lock-based system with OS threads. As with all lock-based systems, it is non-composable and difficult to get right; and as with using OS threads anywhere else, the high overhead erases the benefits of parallelizing many programs.
  • Posta makes extensive use of partial evaluation for improving performance; e.g., lifting computations out of loops. When doing NLP you are often dealing with triply-nested loops, so loop-invariant code motion is essential for performance. In my benchmarks, partial evaluation reduces the total running time by 10%. If raw numbers don't convince you: using partial evaluation allows us to keep the code legible, concise, modular, and maintainable. The primary use of partial evaluation is in a combinator library defining numerous smoothing methods for probability distributions; the results of which are called from within those triply-nested loops. Without partial evaluation, the only way to get performant code is to write a specialized version of the triply-nested loop for every different smoothing method you want to support. That means duplicating the core algorithm and a lot of tricky math, many times over. There's no way to implement this use of partial evaluation in anything resembling idiomatic Java.
  • Posta uses an implementation of persistent asymptotically optimal priority queues which come with proofs of correctness. A persistent PQ is necessary for one of the tagger's core algorithms. Since the PQ methods are called from within nested loops, performance is important. Since we're dealing with giga-scale data, asymptotics are important. A log factor here or there means more than a 10% increase in total running time. In Java there's java.util.PriorityQueue but it has inferior asymptotic performance guarantees and is neither persistent nor synchronized. I'm sure there are other PQ libraries out there, but I doubt anyone's implemented the exact version we need and shown their implementation to be correct.
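For instance, here is a minimal sketch (not Posta code; the accounts and transfer function are invented for illustration) of what composable atomic blocks buy you: two transfers compose into a single transaction, with no locks in sight.

import Control.Concurrent.STM

type Account = TVar Int

-- One atomic action: debit one account, credit another.
transfer :: Int -> Account -> Account -> STM ()
transfer amount from to = do
    x <- readTVar from
    writeTVar from (x - amount)
    y <- readTVar to
    writeTVar to (y + amount)

main :: IO ()
main = do
    a <- newTVarIO 100
    b <- newTVarIO 0
    c <- newTVarIO 0
    -- Composition for free: both transfers commit together, or neither does.
    atomically (transfer 10 a b >> transfer 5 b c)
    mapM_ (\v -> atomically (readTVar v) >>= print) [a, b, c]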

I'll admit I'm not up to date on state-of-the-art Java, and I'd love to be proven wrong about these things being unavailable. But a couple years ago when I returned to Java after a long time away, I learned that all the hype I'd heard about Java improving over the preceding decade was just that: hype. I have been disappointed every time I hoped Java had some trivial thing. The most recent one I've run into is Java's complete refusal to believe in the existence of IPC (no, not RPC), and that's hardly the worst of it.


Immutability and Blocks, Lambdas and Closures (André Pang): Mr. Python, please call the office. Your abstractions are leaking.

NIH

12 Mar 2011 11:55 pm

I find it terribly unfortunate how susceptible academics are to Not Invented Here syndrome. Especially in disciplines like computer science, where one of the primary acts of research is the creation of artifacts, a great amount of time and money is wasted replicating free, publicly available programs. Worse than the effort wasted constructing the initial artifact is the continuous supply of effort it takes to maintain and debug these copies of the original. It's no wonder that so much academic software is unreliable, unmaintained, and usable only by the developing team.

It's for reasons like this that I support the free/open-source development model, demonstrated in academic projects like Joshua and GHC. The strong infusion of real-world software engineering methodology that comes from designing reliable software in F/OSS and industry seems to be the only way to save academia from itself.


One of the classes I'm taking this term alternates between Haskell and Smalltalk in trying to teach a bunch of seniors and graduate students "extreme programming" and how coding in the real world is different from in school. In one of the exercises we were working with an attempt to formulate Haskell-like tuples and lists in Smalltalk, in particular trying to debug the implementation we were given. We found numerous issues with the implementation, but one in particular has been nagging me. It indicates inherent limitations in the syntax of Smalltalk, but mulling it over, it seems to be an even deeper issue than that.

Part of the implementation for tuples[1] was overloading the comma operator (normally string concatenation) to create pairs like (a, b) or triples like (a, b, c), etc. The problem was this: how do we do tuples of tuples? Using code like ((a, b), (c, d)) does not give us a pair of pairs, but rather is equivalent to (a, b, (c, d)). At first I thought it was a problem of associativity: when the parser sees the second comma, the one after the b, it takes the preceding object and combines it with what follows; in effect the comma acts like an operator for constructing lists. Reversing the associativity of the operator just gives us the same problem in the other direction, yielding ((a, b), c, d). This is not an issue for Haskell because the parentheses are required, and so they let us know for certain when we're done making a tuple. But in Smalltalk, as with most languages, parentheses are only there as suggestions on how to create a parse tree.
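For concreteness, here is a hypothetical Haskell model of the overloaded comma (Obj and the (#) operator are invented; (#) stands in for Smalltalk's comma). Because the operator only ever sees "the tuple so far" and "the next element", a finished pair on the left just keeps getting extended:

data Obj = Atom String | Tuple [Obj] deriving Show

infixl 5 #
(#) :: Obj -> Obj -> Obj
Tuple xs # y = Tuple (xs ++ [y])   -- left operand is already a tuple: extend it
x        # y = Tuple [x, y]        -- otherwise: start a new pair

-- (Atom "a" # Atom "b") # (Atom "c" # Atom "d")
--   ==> Tuple [Atom "a", Atom "b", Tuple [Atom "c", Atom "d"]]
-- i.e. (a, b, (c, d)) rather than ((a, b), (c, d)).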

All this diagnosis I did for the exercise, but I've just struck on something. ( There is a deep-seated difference between "destructive" and "constructive" operations in any language. )


So here's another crazy idea I have for Eng. When writing code, particularly at a low level, there's frequently call for doing basic sanity checks. That is, you often see code like if ($foo == nil) { omg die! } elsif ($foo->bar()) { blah blah; } scattered liberally everywhere. And especially with method calls, when you got your object from some potentially unreliable source, you see if ($foo and $foo->bar()) all over. Wouldn't it be nice if we could abstract that away?

I've had a few different ideas in the general area of different semantics of "return values" — for example you have different senses of "return value" when you're talking about the evaluation of a function/method/operator, the error code for failed evaluation, and returning "this" object so you can chain method calls together — but I recently had an idea that might be easier to implement, in a similar vein: we could do sanity checking with a special postfix operator ?. You would use it like $foo?->bar() and it would have short-circuit semantics, so that if $foo passes the sanity check the ? behaves like a no-op, but if $foo fails the sanity check then we skip evaluating bar(), because trying to evaluate it would probably explode, what with $foo not being sane and all. In case there are different types of unsanity, it would also be helpful to have some special variable ($? perhaps) that stores the type of unsanity, or perhaps some other way to handle unsanity.
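As a rough model of that behavior, here is a hypothetical Haskell sketch (not actual Eng or Perl syntax; (?->) and the accessors in the comments are invented): an insane value is Nothing, and the call on its right simply gets skipped.

infixl 1 ?->

(?->) :: Maybe a -> (a -> b) -> Maybe b
Nothing ?-> _ = Nothing      -- not sane: skip the call entirely
Just x  ?-> f = Just (f x)   -- sane: behaves like an ordinary call

-- Chains keep skipping once anything has failed:
--   Just account ?-> owner ?-> name  applies both accessors,
--   Nothing      ?-> owner ?-> name  yields Nothing without calling either.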

The challenge is where exactly to short-circuit to. If we have a statement that's a chain of method or field calls like $foo->bar()->baz->bif()->zorch()->quux; then it makes sense, wherever the ? is, to skip everything from there to the end of the chain/statement. But in other circumstances things get more complicated.

For instance, what do we do if it's in a conditional statement? If it's just a basic conditional statement like if ($foo?) or if ($foo?->bar) then it would make sense to have the whole statement return false. But if it's in a compound conditional statement like if ($foo?->bar() or $zorch->bazzle()) then it would make sense to skip the call to bar(), fail the first subexpression, and so evaluate the second. We could say that in boolean contexts the expression returns false, but that is contingent on what we mean by "expression" here.

Another example of complexity is how it interacts with non-method calls, such as arithmetic expressions. If we have $foo?->numeric() + 5 and $foo isn't sane, then what should the expression return? Well, maybe the whole greater expression should fail (i.e. be skipped); that sounds sensible. Now what happens in non-algebraic expressions, like assignment? Should $foo = $bar?->baz() skip the assignment too, or should it set $foo to a known-bad value like nil? In case that question seems too straightforward, compare it against foo($bar?->baz(), $bif); should we call foo() with a known-bad value, or should we short-circuit out of ever calling foo()? Also, since ? is intended to simplify code, we must expect that callers won't necessarily test $? unless they really care about why ? failed.

A brief digression into implementation details. When calling ? we obviously need to pass it the thing we're testing. But at an assembly level, in order to know where to short-circuit to we also need to pass a label to jump back to. If during the course of evaluating sanity we determine the object's not sane, we can't just do a normal return because that wouldn't give us the short-circuit semantics we desire. The easiest way to get our short-circuiting would be to overwrite the memory location that stores the return address with the label we wish to jump to, and then when we return we'll return to somewhere else. In the event that some architectures disallow overwriting the return address, we'll have to manually pop a stack frame and do an unconditional jump, or use an inline function instead of a real function call, or devise some other method. If we allow operator overloading, overloading ? will have to be treated specially since we're not using a normal return.

Back to semantics. So far, I can identify six contexts that need to be specified: method chains, boolean expressions, arithmetic expressions, string expressions, function calls, and assignment. And two general approaches: returning nil/false/zero/lambda, or skipping evaluation. Since the whole point of this sanity-checking operator is to avoid Doing Bad Things(tm), I'm thinking that we should err on the side of skipping evaluation, but there are certain places where just jumping to the next statement is jumping further than we may like. Inside conditional expressions — particularly disjunctions — is the big one, but also boolean expressions used as shorthand for control flow (like Perl's famous open($fh, ">", $file) or die "omg!";). Perhaps when the ? is buried in a boolean context it will skip the rest of evaluation for that whole subexpression and return false to the boolean operator, but in all other situations it just skips to the next statement. That sounds like a sensible enough place to start for Doing The Right Thing.


So I've been messing around with my profile the last couple days. For the most part I tend to let it get rather stale, but every so often I go in to shuffle the interests around a bit, curse the 150 limit, see who's friended me, etc. And lately I've started taking part in an aspect of lj I've largely ignored until now.

One of the interesting things about lj is the social aspect to it. Not just the comments and the webwork of friending but also the communities, the interest searching, the schools, and so on. Lately I've been joining a number of communities and meeting new folks, mostly other Portlanders. Much as I prefer a number of the features of my own blogging software, it was never my intention to add that social aspect. I designed it to ease writing posts, in particular posts like mine which tend to have footnotes, references, and the like. I do have plans to add in comments, but that's about it.

If I were to write something for socializing, I'd probably write an engine for sophisticated interests tagging; remove the blogging entirely or have it just be a hook into your blog site of choice. The more I think about how I'd design such an interests machine, the more it starts to resemble certain other projects of mine. The first is a tagging engine for keeping track of large quantities of media files (this is anime, that's a photo from [livejournal.com profile] urban_decay, this is a Don Hertzfeldt short, that's pr0n,...). The second is to deal with one of my biggest gripes about iTunes: namely to provide a sophisticated system for categorizing genres, e.g. sometimes you want to be specific (ebm, darkwave, japanese swing, koto,...) and sometimes you just want to be general (electronica, japanese,... or heck: music).

( on semantic-space tagging problems )

So, as you know, I recently got a cell phone, my first ever to be exact. And so, of late I've been exploring the wonders of awkward interfaces, bizarre limitations, and vendor-crippled hardware. And the first thing I thought was, y'know? We need linux on this thing.

If we could only get a well-understood, free, open operating system onto one of these things, we would finally have the universal communicators we've always dreamed about. If we had such a CellOS we could use cells to communicate over typical instant messaging protocols, to communicate over irc, to check email, to freely copy our program preferences between our cells and our pcs.

( in which the future unfolds ) ( and what it takes to fold it )

There's a conceptual problem that's been plaguing me with Eng. For a first writing on the language, this isn't the area I would normally choose. But the most logical starting points — e.g. lexical aliasing, code metastructure — I've a pretty good handle on. Those are topics I will delve into in the future, but for now, this is my problem.

One of the design considerations for Eng is to be able to provide cross-platform, natively parallel code. [1] When I say this, one should bear in mind that this goal alone is as large as or larger than all the other goals for the language combined. There's a specific facet of a specific part of my attempt to do this which is causing me troubles, but before that, some history.

Back in the 1980s parallelism was very much in vogue. Then, come the late-1980s/early-1990s when microprocessor speeds started bounding through the roof, everyone decided parallelism was stupid and abandoned most research on it. (Though of course, there are always a few diehards for any technology.) These days, as we're starting to reach the limits of current fabrication technologies, and with the shape of the curve between speed and cost and how it's evolved over time, the field of parallelism is starting to simmer again with a passion fit to boil over the coming decade.

To see this, just take a look at the current market: basic home computers are shipping with 2–4 processors, many of those dual-core or (soon to be) quad-core, or with hyperthreading. And if that home market doesn't convince you, then take a look at console game platforms like the xbox360 or the ps3 (or their previous counterparts). And this doesn't even get into the server market, which is growing ever faster as computers become ever more ubiquitous tools for every endeavor and every business. Nor does it get into distributed systems like the microchip in your keyboard and mouse. Let alone the scientific markets for weather modelling, VLSI, and AI — i.e. those diehards who never left.

...

( Now that you're convinced that a natively-parallel language is essential, the first part of my problem. )

Okay, so here's another link roundup, sue me. I should get back to real posts soon enough. I've determined that I spend wa~y too much time colocating links from all my various sources (yeah yeah, hush you), and so I'm planning on easing off. In my last post I mentioned that I'm officially pruning TechEBlog from the things I'll post here about, and a while back I decided that most links found through livejournal will make their way around soon enough without me. But that still leaves me with a number of sources before even getting into the random stuff I actually found myself cruising around.

For those who follow my weblogs via the RSS feeds, heads up. I'm planning on separating out all the link roundups (and similar links-only posts) to their own blog, including retroactively. That should help folks who only care about my more essayic posts, or only about the links. It may be a while yet until I do that since I've still yet to get this site switched over to Titania from its immediate predecessor; there'll be an announcement on the website updates blog when it does happen, maybe also here. For those who're following from livejournal, I'll still mirror the links posts (I finally got my mirroring script set up :). I'll also start mirroring the f/oss blog since folks seemed interested in hearing about Eng.

( Less talking, more linking )

Eng?

17 Mar 2006 06:30 am

As some of you may know, I've been considering the design of a new programming language lately. (And for those who haven't yet heard me talking about it, I will... at great length... whether you want me to or not.) This has been one of the things which has kept me busy and away from blogging of late. I'm not sure how much I should ramble on about it here though.

In working through my thoughts on the language, I'm writing an essay in three parts which mainly aims to justify why I'm doing this. (Before the abundance of circumlocutions becomes overwhelming, let us call this new language "Eng".) The first part is basically a laundry list of little features or attributes that Eng — or any language — would need to thrive against the current competitors. The second covers what I deem the fundamental Laws of Language Design, and these perhaps are the crux of the piece; certainly they are the part of the essay which is most liable to be useful outside of organizing my thoughts on Eng. And the third section of the essay is intended to introduce some ideas I have which would make this new language special. But there's a problem.

( ... )

So I was reading an article on Pyramid recently, and the attendant discussion on the Pyramid discussion boards, which got me thinking. I often don't realize how much the internet has evolved in my lifetime, or how immersed I've been in that change. True, I'm no Great Old One who had the honor of dealing with punch-cards or actual terminals. But if those venerable souls were the ones who wrought the net from the unliving stone, then I am of the first progeny that genesis has spawned. Not even an early-adopter, for that would presume something to be adopted, but rather an embodiment of the time from which sprang the devices for adoption.

Looking back over the net, even just over my meagre life, I have seen the invention and death of countless technologies. Technologies which have at their core the one simple truth about the internet: the net, as with the whole of human endeavor, is concerned solely with the production and consumption of meaning, with the conveyance of thought and information.

To that end we have invented countless ways to communicate that meaning, from the very fundament of language itself, to books and radio and television, to bulletin boards, email, usenet, IRC, webpages, instant messaging, weblogs, newsfeeds, podcasts, voice-over-IP, wikis, and countless others. And over time we've seen the revival of a number of these forgotten souls in the new era from bulletin boards like Craigslist, to the reinvention of terminals with lessdisks, to the purported renaissance of IRC.

And yet, when e'er these ideas return they are thought of as novel and all too often they fear to look at their predecessors to learn from the faults of times past. An interesting thing is that the difficulties with each of these technologies are remarkable in their similarity despite their disparate implementations. The problem of spam originated on usenet if not before. And since then it has spread to email, IM, and even wikis. And so it is with the myriad of other difficulties.

Upon reflection, one thing which I find lacking is a unified system which categorizes these different technologies, a single language with which to discuss and compare them. A language which could be almost deceptive in its simplicity. There are a small number of axes on which these forms of communication can be rated.

( One possible ontology follows )

For those who haven't heard (i.e. don't read /.) Linus Torvalds has recently blasted Gnome in favor of KDE. While the /. note is overly concise, the actual thread is pretty interesting (Linus' comments: [1], [2]; Gnome's responses: [1], [2], [3]). Part of the big rivalry between KDE and Gnome has always been centered around questions of design which invariably are tied to questions of user audience.

KDE says users must be able to configure everything (a side effect of targeting hackers, who like to have absolute control no matter what the cost), whereas Gnome says things should just work (with the side effect of removing functionality/configurability). The designer in me says those aren't mutually exclusive ideals, but it does raise a set of interesting questions, and some interesting problems for open source, mostly centering on the fact that hackers are rarely good interface designers.

Which, given their history, makes sense: if you're writing a program or library to automate some set of tasks (where "you" can be a specific individual or some cadre of individuals), naturally you're going to have it abstract away the things you personally want abstracted away (if you abstract nothing away, then what are you doing?), and leave in the widgets you want to play with (if you abstract everything away, then how can you adapt to novel situations?).

( friends page friendly cut )