
One of the classes I'm taking this term alternates between Haskell and Smalltalk in trying to teach a bunch of seniors and graduate students "extreme programming" and how coding in the real world is different from in school. In one of the exercises we were working with an attempt to formulate Haskell-like tuples and lists in Smalltalk, in particular trying to debug the implementation we were given. We found numerous issues with the implementation, but one in particular has been nagging me. It indicates inherent limitations in the syntax of Smalltalk, but mulling it over, it seems to be an even deeper issue than that.

Part of the implementation for tuples[1] was overloading the comma operator (normally string concatenation) to create pairs like (a, b), triples like (a, b, c), etc. The problem was this: how do we make tuples of tuples? Code like ((a, b), (c, d)) does not give us a pair of pairs, but rather is equivalent to (a, b, (c, d)). I thought at first it was a problem of associativity: when the parser sees the second comma, the one after the b, it takes the preceding object and combines it with what follows; in effect the comma acts like an operator for constructing lists. Reversing the associativity of the operator just gives us the same problem in the other direction, yielding ((a, b), c, d). This is not an issue for Haskell because the parentheses are required, so they tell us for certain when we're done making a tuple. But in Smalltalk, as in most languages, the parentheses are only there as suggestions on how to create a parse tree.
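For contrast, here's a minimal Haskell sketch (names are my own, for illustration) showing why this can't go wrong there: the parentheses are part of the tuple syntax itself, so a pair of pairs and a flat 4-tuple are distinct values with distinct types.

```haskell
-- Parentheses are required tuple syntax in Haskell, so nesting is
-- unambiguous: a pair of pairs is not a 4-tuple.
pairOfPairs :: ((Int, Int), (Int, Int))
pairOfPairs = ((1, 2), (3, 4))

flat4 :: (Int, Int, Int, Int)
flat4 = (1, 2, 3, 4)

-- fst pairOfPairs is itself a pair, namely (1, 2); there is no way
-- for the second comma to "leak" past the inner parentheses.
```

Attempting to use `pairOfPairs` where `flat4` is expected is a type error, which is exactly the boundary-marking behavior the Smalltalk version lacks.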

All this diagnosis I did for the exercise, but I've just struck on something. There is a deep-seated difference between "destructive" and "constructive" operations in any language. Operators like addition, subtraction, and so on are destructive operations: the results they come up with are different from the arguments they start with. When we say 3 + 5 there is nothing of the three or the five in our answer of eight; certainly the answer is derived from the inputs, but the 8 is a unique entity. We could have made it by adding two and six, or subtracting four from twelve, or any number of other operations. Destructive operations are a form of compression: we had two objects, now we have one; something was lost.

But there are other operations too, such as the comma we were trying to make, and such as the colon, period, and dollar in Haskell. These constructive operators create a new value which is a composition of its arguments, or a container for them. The list constructed by 1:2:3:[] is not merely derived from the arguments given to the operators; those arguments are themselves part of the list they are in, pieces of the composite whole. We can't make that list without those values (even if the exact sequence of operations used to arrive at those values can vary).
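A small sketch of the point about cons: the arguments to (:) aren't consumed, they survive intact inside the result and can be retrieved unchanged.

```haskell
-- (:) constructs a list; its arguments become parts of the result
-- rather than being destroyed in producing it.
builtUp :: [Int]
builtUp = 1 : 2 : 3 : []

-- The 1 we put in is still there, retrievable unchanged.
firstElem :: Int
firstElem = head builtUp
```

Compare with 3 + 5: there is no `head` for addition that hands you back the 3.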

This idea of constructive operations should not be new to any computer scientist. It is merely a rephrasing of the idea of abstract algebras. But the distinction between constructive and destructive operations should give light to the very real differences between abstract algebras and real ones. Generally, real and abstract algebras do not interact in the rarified world of mathematics, or to the extent they do they're considered isomorphic.[2] But in the real world, in computers, the two do coexist and they behave very differently.

The problem then is this: how do we define the boundary of a composition? For tuples in Haskell the answer is to make the parentheses required on the periphery and forbidden within, in effect creating a series of unique operators: the binary (,), the ternary (,,), the 4-ary (,,,), etc. But the problem is a systemic one, and it can be rephrased like this: when do we treat a composition as a single whole, and when do we treat it as a collection of parts? Is a list within a list just a longer list, or is it a list with a composite value? The answers, of course, depend on the language[3], but it's something which always needs specifying and which is always fraught with tradeoffs. In imperative and object-oriented languages this is also known as the problem of when to use a pointer or reference to a composition vs the composition itself. In languages with pass-by-reference as well as pass-by-value the whole issue becomes even hairier, because we can offer the illusion of treating compositions as wholes while retaining the syntax of treating them by parts, forcing one to "just know" which functions have which behaviors.[4]
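That "series of unique operators" isn't just a figure of speech: in Haskell the tuple sections (,), (,,), etc. are real, distinct constructors that can be used as ordinary prefix functions. A quick sketch:

```haskell
-- Haskell's tuple syntax amounts to a family of unrelated
-- constructors: (,) for pairs, (,,) for triples, and so on.
asPair :: (Int, Char)
asPair = (,) 1 'a'

asTriple :: (Int, Char, Bool)
asTriple = (,,) 1 'a' True
```

Because each arity is its own constructor, there's no way to "keep appending" with commas the way the overloaded Smalltalk comma does.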

Aside from the question of when to treat a composition as a whole vs a collection of parts, there's another question which is striking. In the real algebras of destructive operations, parentheses affect the order of operations; they are a serialized representation of a tree structure. This means parentheses that don't change the order from its parenthesis-free default can be discarded, and since intermediate values are destroyed, all of them can be discarded once a parse tree is constructed (just as they're unneeded in prefix-only or postfix-only languages). With the abstract algebras of constructive operations, however, parentheses mean something entirely different: they represent the boundaries of the composition. They declare the semantic shift from treating a composition as a collection of parts (that is, as being in the process of construction) to treating it as a unified whole (that is, as a single value to be used in other operations). Because they indicate a shift in meaning, they can (almost) never be discarded.

And yet, no language I'm familiar with actually makes a distinction between constructive and destructive operations nor makes a distinction between the very different kinds of parentheses.

[1] For those unfamiliar with the term/datastructure, tuples are objects that contain a fixed number (and order) of values which may be of different types, somewhat like a vector. The fixed size is important and distinguishes them from lists, which have variable and mutable size, and arrays, which have variable (though generally immutable) size; both of those are typically limited to being collections of a single type of value. That is, a 2-tuple or pair is an entirely different type than a 3-tuple or triple, unlike lists or arrays of different lengths, which can generally be treated the same. For those familiar with C-like languages, tuples are a lot like records or structs, though their components don't have names, just positions.
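To illustrate the footnote's point that a pair and a triple are entirely different types, here's a sketch (function name is my own) of a function defined only over pairs:

```haskell
-- swapPair works on any pair, but applying it to a triple such as
-- (1, 2, 3) would be rejected by the typechecker: (,,) is a
-- different type constructor from (,).
swapPair :: (a, b) -> (b, a)
swapPair (x, y) = (y, x)
```

There is no way to write one polymorphic function covering all tuple arities; each arity must be handled by its own definition.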

[2] So for example, the natural numbers can be thought of as the value 0 together with the values obtained by applying a successor function to other natural numbers. Hence succ(0) is analogous to "1", succ(succ(0)) is analogous to "2", etc. Mathematicians consider the two representations isomorphic, since we can find a mapping between the inductive/constructive/abstract representation and the normal/destructive representation. But you can start to see the difference if you think about how we would go about writing that successor function. For the abstract structure to hold, there are no bit patterns representing the natural numbers as we're used to seeing them which the function could return; instead the successor function (like the zero function) is just a data constructor, like calling new to create an instance of an object.
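This footnote is straightforward to transcribe into Haskell, where the point becomes literal: Zero and Succ are data constructors, pure structure, and no machine arithmetic happens until we deliberately destroy that structure.

```haskell
-- Peano naturals: Zero and Succ build structure; nothing is computed.
data Nat = Zero | Succ Nat deriving (Eq, Show)

-- Rendering the structure into an ordinary Int is the destructive
-- step: the tower of Succs is consumed to produce a bit pattern.
toInt :: Nat -> Int
toInt Zero     = 0
toInt (Succ n) = 1 + toInt n
```

Note that Succ here costs nothing to "define"; all the work lives in the eliminator, toInt.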

[3] In Perl, it's just a longer list. (Yes, I mean "list" (or perhaps "sequence") and not "array". Perl has both, though only the latter can be stored as a datatype. But if we're talking about Perl's list-like "arrays", then the answer becomes an array with composite values (stored as references).) In Haskell it's always a list of compositions, and all the scalar values the thing ultimately contains must be of the same monomorphic type. In our Smalltalk assignment above, it's both, depending on where it is in the list.

[4] Which is not just an idle question in, for example again, Perl. Normally arrays are expanded into lists of their components when passed to functions, so given a call like foo(@a, @b) the function cannot know where the first array ends and the second begins; it just gets one list of all the values together. But by using prototypes you can get pass-by-reference behavior, and then the function can tell the arrays apart, though the arguments must be real arrays or hashes, not references or (such as they exist) literals. There isn't the same problem in C/C++/Java because in those languages the name of an array is effectively a pointer/reference, and you never really have the array as a collection of individual components.

Date: 2007-04-25 12:00 pm (UTC)
Iirc, Common Lisp makes a distinction between constructive and destructive operations (e.g. append vs. nconc, respectively), as does Scheme, where destructive operations are marked by an '!' (e.g. append vs. append!).

Date: 2007-04-26 02:04 am (UTC)
Re Scheme (and Ruby, which has the same convention): I'm using the term "destructive" somewhat differently. They're referring to operations which mutate the object/argument you give them, as opposed to the non-destructive versions which essentially copy the object and mutate the new copy before returning it.

Admittedly, the distinction is a bit different in functional languages since they, by and large, don't have side effects or assignment. The difference I'm highlighting, though, doesn't have to do with the values that get stored in (or bound to) variables (and hence with the variables or instances of a value themselves); rather, it has to do with the values themselves in their own world. If that makes any sense.

Date: 2007-04-26 02:29 am (UTC)
Also, that's part of the reason I was talking mostly about operators rather than functions. Getting into functions/methods/subroutines muddies the waters since that starts getting into other issues like memory management, side effects, etc.

In functional languages, the distinction is a soft one (just like the distinction between control flow vs functions in languages with some measure of lazy evaluation like Haskell and Smalltalk), but there's still generally some notion of primitive operations which just do one thing vs more complicated functions which do more.

Date: 2007-04-25 05:01 pm (UTC)
is it funny to anyone else that a class on "coding in the real world" is using Haskell and Smalltalk?

I suspect that the LISPs have it right: they always preserve structure.

Date: 2007-04-26 02:17 am (UTC)

To be fair, the class is called "advanced programming", though it does focus a lot more on the real world (maintenance, testing, legibility over performance per se) than any of the other classes do.

Well, the downside to always preserving structure is two-fold. First is the issue of storage and performance: keeping all that structure around takes up space, which is still limited albeit large. Second is that structure (in the pure sense, vis-à-vis math and programming) has no meaning. There's nothing in the structure that says add(succ(succ(0)), succ(succ(succ(0)))) semantically equals minus(succ(succ(succ(succ(succ(0))))), 0). The only way we can get that semantic meaning is by rendering the structure into some canonical form, which means changing the structure. Math doesn't mean anything. But programs, and the numbers we use in daily life, do.
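The add/minus example above can be sketched directly in Haskell; the two terms are structurally different, and only by evaluating each down to a canonical tower of Succs can we observe that they denote the same number.

```haskell
-- Two structurally different terms denote the same value; equality
-- is only visible after evaluation to canonical form.
data Nat = Zero | Succ Nat deriving (Eq, Show)

add :: Nat -> Nat -> Nat
add Zero     m = m
add (Succ n) m = Succ (add n m)

-- Truncated subtraction, defined only down to Zero.
minus :: Nat -> Nat -> Nat
minus n        Zero     = n
minus (Succ n) (Succ m) = minus n m
minus Zero     _        = Zero

two, three, five :: Nat
two   = Succ (Succ Zero)
three = Succ two
five  = Succ (Succ three)
```

Here `add two three` and `minus five Zero` are different expressions over different constructors, yet both evaluate to the same canonical value, five.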
