winterkoninkje: shadowcrane (clean) (Default)

It's official, I'm heading to Google at the end of July to work with Mark Larson on the Chrome OS security team (of all things!). Seems an unlikely match, but Mark was pretty keen on my background (including the gender stuff!) and wasn't put off by my fusion of linguistics, type-theory, constructive logic/maths, et al. I guess the security team is more concerned with semantic models and constructive correctness than the other teams I talked with. Makes sense, I suppose, but it's sad that these are still thought of as "security" concerns rather than "everything in programming" concerns.

I'm totally looking forward to it, but I am still thinking of it as a bit of an experiment. I'm an academic at heart, so I could see myself heading back to academia in a few years. (Wanting to be a professor is, after all, what motivated me to start grad school in the first place.) Then again, I've never really thought of industry as being interested in the sorts of theory I work on. In any case, I'll still be around at the various conferences I've been frequenting; I won't just disappear into industry.

I know a bunch of y'all are in the Bay Area. I'd love to hear any pointers you have on apartment hunting or on what neighborhoods (nearish Mt View) are nice— i.e., places an artistic disabled radical queer woman might like. Feel free to comment below or get in touch by email, twitter, etc.


One of the things that intrigues me most about the rise of the internet is the ways in which it fundamentally alters social structures. Virtual and physical reality afford different forms of interaction, differences people are only beginning to acknowledge. These affordances don't make one aspect of reality "better" than the other, just different. That difference is unavoidable, but my real interest lies in the ways these aspects of reality co-define one another. Events like GamerGate expose how violence can fulminate in virtual spaces until it spills over into physical spaces. But I believe that an understanding of how virtuality and physicality reinforce each other can also be used to construct a safer and more just world.

Over the past week I had a number of conversations with folks about these differences between physicality and virtuality. On tuesday I talked with Rob for a few hours about how economies of information and economies of material are incommensurable (tangled up with a discussion of the systems- and game-theoretic roots of why capitalism is fundamentally flawed)— an interesting topic, but one for another post. But I also talked with S, and later briefly with Lindsey, about an anthropological concern I've been meaning to write about for some while.

There is an unavoidably accidental nature to physical reality. Physicality has an immediacy that cannot be ignored, and the inability to freely reroute through other pathways means we are constantly bumping into one another. Asking a stranger for directions or if this seat is taken; overhearing conversations on the bus; running into that little-known would-be-friend on the street; and yes also the ill-met accidents. In contrast, virtuality is —for now— inescapably intentional. You can't just stumble upon new friendships, but must go looking for them. You can't foster relationships through the familiarity of a quiet presence, but must speak out to maintain them.

To reiterate, neither intentionality nor accidentiality is inherently superior— especially for those of us on the margins. Most of my relationships have started out accidentally. To pick a few: I met K when I happened to audit a class she was in, and she recognized me from the comic store she works at; I met S when she caught me checking her out in a busy hallway; I met L when her boyfriend at the time was ignoring her at a mutual friend's birthday party. None of these interactions are the sort that avail themselves online. We may recognize handles seen elsewhere, but that seldom leads to a "where do I know you from? let's have dinner" moment. While we can flirt online, it's much harder to feel out the other party for whether they're receptive. And we cannot readily see those sitting silent in dejected corners. But despite whatever accidental beginnings, my deepest relationships have always been grown online. Relationships are always forged in a shared vulnerability, but the experiences of us on the margins are often too vulnerable to speak aloud. Exposing the details of a life of violence and minoritization requires the emotional safety of intentional spaces. The abilities to edit, to wait, to breathe, to digest, to scroll through history; for those of us who have never had safety in the physical world, these abilities provide a structure that permits us to be vulnerable without risking our health.

But while intentionality can be used to construct spaces in which to open ourselves, it also constructs spaces which trap us into ourselves. To find safety in online spaces it is necessary to be able to block out those who would cause us harm; but the intentionality of this blocking out makes it easy to block out too much, and so to lock us into our ways of being. A key example here is the possibility for reconciliation. In any relationship there is always the threat of breakage. When a relationship breaks we block the other out, but in physical spaces these blocks seldom last forever. At first we may avoid places the other frequents, but in time this fades from memory. When living in physical proximity there is always the possibility for an accidental encounter, and in that accident the possibility for reconciliation. Walking down the street we can bump into ex-friends and ex-lovers, and depending on the circumstances of the break, these bumps can provide a means for renewal— to wit: a chance for growth and change. However, in virtual spaces we have no mechanism for such accidents. Once we block or mute others, there is never an incentive to revisit these choices. Ironically, the less we recall the slight —and so the greater the chance for reconciliation—, the less likely we are to revisit the choice, since doing so requires an explicit intention to reconnect, and that intention risks renewing the rupture since its explicitness calls to mind the reason for the separation.

I think the possibility of reconciliation is necessary for healthy communities. (E.g., the inability to reconcile is, imo, part of why politics in the US have grown ever more polarized.) But it is unclear how to develop a virtual society which affords reconciliation. Simply having blocks expire in some timely fashion is unacceptable; most blocks stem not from broken relationships, but rather from the need to defend oneself from violence. The locality of physical space provides a strong defense against certain forms of violence: you can (at least in principle) move away from bigots and abusers. Whereas the non-locality of virtual space precludes this defense: there is no "elsewhere" to go. Of course, this non-locality is also one of the greatest strengths of virtual spaces, as it enables marginalized peoples to connect over long physical distances.

I don't yet have a conclusion. It's just something I've been thinking about off and on for a few years, and I was reminded of it recently after bumping into an ex-friend; though, alas, we didn't get the chance to try and reconnect.


Many of us with disabilities have more than one. This multiplicity is invisiblized by the abled community. When our lives are already assumed to be defined by disability, admitting multiplicity risks the hazard of letting the complexities of disabled lives enter one's mind. But even among the disabled, there's a tendency to focus on the one or two things which most obviously impact our lives. This is a coping mechanism. To cope with lacking spoons, we are always prioritizing our energies, and there is never enough to solve all the things. But also, ableism being what it is, we must be careful never to "complain too much" lest we lose whatever ears we've gained; so we consign our suffering to silence, that we might grasp at crumbs of compassion for hope that when things worsen there may still be someone who'll listen.

I have my "one or two things": depression and cPTSD. And I've mentioned my migraines on occasion, though they're seldom of bloggable interest. But there's one I've never talked about, one I've still not come to terms with myself. That's the thing about chronic pain. No one ever teaches us about all the things that shouldn't hurt, about all the pains most people don't have. And consequently we come to normalize them, to unsee the ways they make us choose —in small ways at first— to restrict our lives. Last week I met a fabulous girl and we got to talking about disability. And with one sentence she cut through me like a thunderbolt, cut through a silence I hadn't even realized I'd been spinning for years. Her words, so simple:

I have a connective tissue disease

I've suspected it for a couple decades, known it for nearly a decade, but it's never been something I've been allowed to talk about. When a teen complains about joint pain, it is dismissed as an insignificance. When a twentysomething does, everyone older jests and jeers; "just wait till you're my age," they say. Sit down. Shut up. Respect your elders. If you're resilient enough to keep at it, to endure the shame and go to a doctor... well, doctors have ways of silencing things they can't cure. When I first saw a doctor for my knees, he acted like it was nothing, like I was a stupid kid bitching about nothing— despite saying, with surprise in his voice, how my x-rays looked like someone 15–20 years older. When I pressed, when I refused to be cowed, he told me there was nothing modern science could do: I could use a splint, but that'd weaken the muscles and exacerbate the problem; I could try working out to strengthen the muscles —at least, for as long as I could stand the pain— but that'd only slow the inevitable by a couple years at best; it wasn't bad enough for surgery, besides that'd just cause even more damage. "You're young," he said in flat monotone, like words rehearsed without meaning. Like pointing out something broken or left behind, when you really don't care if they hear you. Your coffee. Your wallet. Your tail light. You're young.

The thing about genetic issues is that they pervade everything. It's never a singular problem, it's a cascade of them, a death by ten-thousand papercuts. In my childhood, my mother always had issues with her knees. It was almost a joke how often she went in for surgeries on them; the kind of joke people only mumble and no one laughs at but they tell it anyways because they don't know what else to do. During my early college years, her shoulders started going out. A few years back my sister died spontaneously, and within a few months a cousin joined her. Aortic ruptures. In the last year or so, my mother had an aortic dissection. She survived, but more from luck than anything. I happened to be in Maryland when she was in the hospital, and I visited. She'd also been having catastrophic spinal problems. My parents didn't even bother mentioning it until she went in for the first surgery. It didn't go well. Three followup surgeries later and who knows if any of it did any good. Sitting next to her as she lay in that bed, her hands all locked up in pain, held in mine, I could barely look on her. Because I know I'll live to be crippled and die twisted in pain. She's had enough in-patient PT to be released, and is back home now on out-patient PT. No one talks about it. But at least no one jokes anymore.

I can't say if it was her heart or her back that somehow managed to convince some doctor to take a closer look. He'd thought she had Marfan syndrome and ordered a genetic screening. Tests came back negative. Followups found it's actually Loeys-Dietz, something that wasn't even discovered until ten years ago, and the docs only knew of it because she'd been admitted to the hospital where they discovered it. There's no point in testing the dead, but there's little doubt about what did my sister and cousin in. I've been checked for aortic problems, and show no symptoms as yet. I'll have to get checked again every couple years.

(One of the funniest things about transitioning is how it's been the healthiest decision I've ever made. If I'd've known all the minor health issues it'd cure, I would've fought harder to do it when I was 18. Among the things it helped was my back. While uncommon, HRT can cause corrections in one's hips and lower ribs. Thanks to the changes in my hips and my center of gravity, I no longer have chronic back pain. Growing up I could never attain correct posture: it caused pain and felt unnatural; whereas now it comes freely and without thinking.)

But the litany of little pains isn't what hurts the most. I used to draw. It used to be my life. The fire in my heart, as maths is the breath in my chest. I'd do it when I wasn't thinking. I'd do it to focus my thinking. I'd come home and spend hours at it. I'd ignore eating to finish a piece. I won awards. I thought I'd make a vocation of it. By halfway through undergrad I could barely finish a small sketch in the margins of my notes. Many of my friends are artists (e.g.), and while I love their work, a hateful demon grows in me every time I see their successes or hear them praised. These days I can barely hold a pencil. My script becomes an ever more illegible shorthand as I try to eke out a few more pages before I resign to sitting behind a computer. (The most creative parts of doing math, for me, need to be written by hand. It is only once I have the sketch of a thing that I can put it to pixels.) Just bringing up my art, acknowledging it as something lost rather than as something I lost time for, crushes me.

That girl, that blessed fabulous girl. A few days after we'd met I asked her about her ring, a beautiful curious thing, like two rings locked together at an angle. Turns out it's a surgical splint for preventing hyperextension. She told me where to get one, and on the bus yesterday I decided to check out their website. Reading through the descriptions of the rings they offer —I don't even... How do you name that emotion when a pain you've had so long you've forgotten it exists is suddenly eased, that lift, that release, that letting go. Like when you find someone who shares your very same marginalization, that feeling where you can just talk, can let words free without censor knowing they have already been understood before they are spoken. That sudden finding oneself not alone. That slow creeping into existence of a future worth looking toward. I had to turn off my browser. Can't be crying on busses. Can't be weak in public.


The next week+ I'll be in St. Petersburg, Florida for PEPM, PADL, POPL, PPS, and PPAML PI (also CPP and OBT). Would've mentioned it sooner, but it's a bit of a last minute thing. I love reconnecting with old friends and meeting new folks, so feel free to come say hi. If you want to meet up for dinner or such, leave a comment with when/where to find you, or just look for the tall gal with the blue streak in her hair.


It's a new year, and the new semester starts next week. To be honest, the last semester ended on a down note: the week of Thanksgiving, depression hit again. It'd been about five years since the last major episode, so better than usual if by just a bit. The last week or so things've started looking up again, but who knows how long that'll last or how long till it really clears up.

A bunch of friends, fellow academics with mental illness, have given new-year's posts on how they're dealing with their MI; posts I've found somehow more uplifting than I would've expected. And so, thus inspired, I'm hoping to return to blogging regularly. The plan is to post regularly on tuesdays and (prolly irregularly) thursdays. (Edit 2016.01.13) Let's make that regularly on thursdays and irregularly on tuesdays.

So, with that said, I'll see y'all again on tuesday.


Last time I talked a bit about ABTs; in particular, I introduced the notion of strongly-typed ABTs (or "GABTs" if you prefer) and showed how we can extend the basic idea of ABTs to guarantee well-typedness in addition to well-aritiedness. However, I also made a note that ensuring this sort of well-typedness runs counter to what Neel and other CMUers often do. One of my colleagues here at IU noticed the reason, so I thought I'd write a bit more about it.

The issue at stake here is how general we can make our ABT library, to minimize the amount of boilerplate needed whenever inventing a new language. By encoding object-language type systems into the kinding of the ABT, we restrict the possible object languages we can use the ABT implementation for (namely those object languages with type systems that can be embedded into whatever kinding the ABT has). To put a finer point on it, using the kinds presented in the previous post you cannot have binders in your type system. This means no System F, and no dependent types. This is unfortunate as the whole point of ABTs is to capture binding structure once and for all!

However, I'd like to reiterate that, for our purposes in Hakaru this limitation is no restriction. Hakaru is simply-typed, so there are no type-level binders in sight. Moreover, we do a lot of program transformations in Hakaru. By using GABTs we can have GHC verify that our program transformations will never produce Hakaru code which is ill-typed, and that our program transformations will always produce Hakaru code of an appropriate type (e.g., the same type as the input term, for things like partial evaluation; but we have a number of type-changing transformations too). Thus, even though our GABT library could not be reused for implementing languages with type-level binders, it still provides a substantial benefit for those languages without type-level binders.

Although our GABTs cannot handle type-level binders, that does not mean we're restricted to only working with simply typed languages. For example, intersection types are not usually thought of as "simple types"; but they do not require binders and so they're fine. More generally, Larry Moss is engaged in a research project where he asks, "given infinite time, how far could Aristotle have gotten in logic?" By which he means, given the Aristotelian restriction to syllogistic logics (i.e., ones without the quantifiers introduced by Frege), what are the limits in what we can cover? It turns out that we can cover quite a lot. Some syllogistic logics go beyond the power of the "Peano–Frege" boundary: they can handle comparing cardinality of sets! A good pictorial summary of this line of research is on slide 2 of this talk; and a bit more about the complexity results is given in this talk (the requisite picture is on slide 39).


I'm sick. I shouldn't be online. But just wanted to prattle on about a thing that'd take too long on twitter. A day or two ago I came across a linguist being cited somewhere in some article about celebrity couple name blends. In it they noted how certain syllables like "klol" and "prar" are forbidden in English. They phrased the restriction as forbidding CRVR (where C means a consonant, R means a liquid/sonorant —I forget how they phrased it—, and V a vowel).

There's something of merit going on here, but the specifics are far more complicated. Note that "slur" is perfectly acceptable. So, well maybe it's just that the C has to be a plosive. But then, "blur" is perfectly fine too. So, well maybe it's something special about how "-ur" forms a stand-alone rhotic vowel. But then "trill" and "drill" are just fine. So, well maybe...


Edit 2015.10.29: Be sure to also read the followup post on the benefits and limitations of this approach compared to the usual untyped ABTs.

Earlier this year Neel Krishnaswami talked about abstract binding trees (ABTs) [part 1] [part 2]. IMO, the best way to think about ABTs is as a generalization of abstract syntax trees (ASTs), though this is not a perspective sanctioned by the CMUers I’ve talked to. CMUers oppose this way of phrasing things, in part, because the ABT libraries they’re familiar with make crucial use of the design pattern of two-level types; but I think the essential insights of ABTs and two-level types are quite different, and we ought to keep the benefits of these two techniques distinct.

Over the past year I’ve been working on the inferential language[1] Hakaru, and in the new version of the compiler we’re using ABTs for our syntax trees. However, contrary to Neel’s stance against using strongly-typed internal representations for syntax, we extend the ABT approach to make use of GADTs to guarantee local well-typedness— since this in turn can be used to guarantee that program transformations are also well-typed. (If you don’t want those guarantees, then take a look at Jon Sterling’s abt library on Hackage[2].) In this post I’m going to present a simplified version of our architecture, and then talk about some of the extra stuff bringing it closer to our production architecture.

First things first

Since we want everything to be well-typed, we first must introduce some universe, U, of all the types in our language. (In Haskell we can implement such a universe by using the -XDataKinds extension, so I’ll equivocate between calling U a “universe” vs a “kind”.) For the rest of this post it doesn’t actually matter what lives in that universe[3], just so long as things match up when they need to. Since the choice of universe is irrelevant, we could abstract over U by turning on the -XPolyKinds extension; but I avoid doing so below, just to help keep things more concrete.
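For concreteness, here's one minimal universe we might pick (the constructors here are just for illustration; the Nat base type is an assumption, and (:→) is the function type assumed in footnote 3):

```haskell
-- A hypothetical universe: one base type, plus function types.
-- With -XDataKinds, this declaration also gives us U as a kind,
-- and Nat and (:→) as types of that kind.
data U = Nat
       | U :→ U
infixr 0 :→
```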

Implementing ASTs

The simplest way of thinking about well-typed ASTs is that they capture the set of terms generated by a (typed) signature; that is, the fixed point of some Σ ∷ [U] → U → ★. Unpacking the type for Σ, we have that every syntactic constructor s ∈ Σ is associated with some arity (the length of the list), each argument to s has some type in U (the elements of the list), and applying s to the right number of ASTs of the right types will generate a new AST with some type in U (the second argument to Σ).

To implement this fixed point we define an AST type which is parameterized by its signature. To ensure well-aritiedness (and well-typedness) of our ASTs with respect to that signature, we’ll need to introduce a helper type SArgs[4]. And to ensure that we obtain the least fixed-point of the signature, we’ll make everything strict.

infix  4 :$
infixr 5 :*

data SArgs ∷ (U → ★) → [U] → ★ where
    End  ∷ SArgs ast []

    (:*) ∷ !(ast u)
         → !(SArgs ast us)
         → SArgs ast (u : us)

data AST ∷ ([U] → U → ★) → U → ★ where
    (:$) ∷ !(σ us u)
         → !(SArgs (AST σ) us)
         → AST σ u
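To see how this plays out, here's a hypothetical signature for Peano naturals (assuming the universe contains a Nat type), together with a term built from it:

```haskell
-- A hypothetical signature: Zero takes no arguments;
-- Succ takes one argument of type Nat, yielding a Nat.
data Sig ∷ [U] → U → ★ where
    Zero ∷ Sig []    Nat
    Succ ∷ Sig [Nat] Nat

-- The term "succ (succ zero)", i.e. the number two. Supplying Succ
-- with the wrong number or types of arguments is a kind error.
two ∷ AST Sig Nat
two = Succ :$ (Succ :$ (Zero :$ End) :* End) :* End
```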

Implementing ABTs

The problem with ASTs is that they have no notion of variables, and thus have no notion of variable binding. Naively we could implement binders like lambda-abstraction by having something like λ ∈ Σ [u, v] (u :→ v) but then we’d need to do a post-hoc check to ensure that the first argument to λ is in fact a variable. To build that check into the datatype itself we’d have to move λ into the definition of AST (since the first argument is of type Variable u rather than AST Σ u). If lambda-abstraction were the only binder we had, that might not be so bad; but any real-world language has a plethora of binders, and this approach doesn’t scale.

The essential idea behind ABTs is to abstract over the notion of binding itself. Given a single uniform definition of what it means to be a binding form, we don’t have to worry about adding a bunch of ad-hoc constructors to our AST datatype. Moreover, we can then provide single uniform definitions for things which mess with variables and are homomorphic over the signature. Things like capture-avoiding substitution and providing a HOAS API for our first-order representation.
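For instance, capture-avoiding substitution gets a single definition that works for every signature σ. The following is only a shape, not an implementation (the clause bodies are elided; what matters is that the type mentions an arbitrary σ):

```haskell
-- Capture-avoiding substitution, written once and for all:
-- replace free uses of x with e throughout the term t.
subst ∷ Variable u → ABT σ [] u → ABT σ vs v → ABT σ vs v
subst x e t =
    case t of
    Var y     → ...  -- if y is x (so u ~ v), return e; otherwise Var y
    Bind y t' → ...  -- α-rename y if it occurs free in e, then recurse
    f :$ args → ...  -- recurse homomorphically into each argument
```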

The crucial step is to adjust our notion of what a signature contains. The basic signatures used above only contained applicative forms; i.e., things we can apply to locally-closed terms; i.e., what are called “functors” in the logic programming community. For ABTs we’ll want to allow our signatures to include any generalized quantifier. That is, our signatures will now be of type Σ ∷ [[U] × U] → U → ★. Previously, the arguments were indexed by U; now, they’re indexed by [U] × U. The length of the list gives the number of variables being bound, the types in the list give the types of those variables, and the second component of the pair gives the type of the whole locally-open expression.

To implement this we need to extend our syntax tree to include variable bindings and variable uses:

data SArgs ∷ ([U] → U → ★) → [[U] × U] → ★ where
    End  ∷ SArgs abt []

    (:*) ∷ !(abt vs u)
         → !(SArgs abt vus)
         → SArgs abt ((vs,u) : vus)

data ABT ∷ ([[U] × U] → U → ★) → [U] → U → ★ where
    (:$)  ∷ !(σ vus u)
          → !(SArgs (ABT σ) vus)
          → ABT σ [] u

    Var   ∷ !(Variable v)
          → ABT σ [] v

    Bind  ∷ !(Variable v)
          → !(ABT σ vs u)
          → ABT σ (v : vs) u

Time for an example of how this all fits together. To add lambda-abstraction to our language we’d have λ ∈ Σ [([u],v)] (u :→ v): that is, the λ constructor takes a single argument which is a locally-open term, binding a single variable of type u, and whose body has type v. So given some x ∷ Variable u and e ∷ ABT Σ [] v we’d have the AST (λ :$ Bind x e :* End) ∷ ABT Σ [] (u :→ v).
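In code, that example looks like the following (the signature Sig and the smart constructor lam are hypothetical names, assuming the ABT/SArgs definitions above and the (:→) of footnote 3):

```haskell
-- A hypothetical signature containing just lambda-abstraction:
-- one argument, binding one variable of type u, with a body of type v.
data Sig ∷ [[U] × U] → U → ★ where
    Lam ∷ Sig [([u], v)] (u :→ v)

-- A smart constructor hiding the Bind/(:*) plumbing.
lam ∷ Variable u → ABT Sig [] v → ABT Sig [] (u :→ v)
lam x e = Lam :$ Bind x e :* End
```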

“Local” vs “global” well-typedness

With the ABT definition above, every term of type ABT Σ vs u must be locally well-typed according to the signature Σ. I keep saying “locally” well-typed because we only actually keep track of local binding information. This is an intentional design decision. But only tracking local well-typedness does have some downsides.

So what are the downsides? Where could things go wrong? Given a locally-closed term (i.e., either Var x or f :$ e) any free variables that occur inside will not have their U-types tracked by Haskell’s type system. This introduces some room for the compiler writer to break the connection between the types of a variable’s binder and its use. That is, under the hood, every variable is represented by some unique identifier like an integer or a string. Integers and strings aren’t U-indexed Haskell types, thus it’s possible to construct a Variable u and a Variable v with the same unique identifier, even though u and v differ. We could then Bind the Variable u but Var the Variable v. In order to ensure global well-typedness we need to ensure this can’t happen.
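Concretely, the hole looks like this (a sketch; Hakaru's actual Variable type differs, and Nat and (:→) are assumed example types):

```haskell
-- The U-index on Variable is phantom: nothing in the representation
-- ties the runtime identifier to the type index u.
data Variable (u ∷ U) = Variable !String !Int

-- So nothing stops us from making two differently-indexed variables
-- that agree on their identifier:
xNat ∷ Variable Nat
xNat = Variable "x" 0

xFun ∷ Variable (Nat :→ Nat)
xFun = Variable "x" 0

-- A term that Binds xNat but Vars xFun is locally well-typed
-- as far as Haskell can see, yet globally ill-typed.
```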

One way is to keep track of global binding information, as we do in the paper presentation of languages. Unfortunately, to do this we’d need to teach Haskell’s typechecker about the structural rules of our language. Without a type-level implementation of sets/maps which respects all the axioms that sets/maps should, we’d be forced to do things like traverse our ASTs and rebuild them identically, but at different type indices. This is simply too hairy to stomach. Implementing the axioms ourselves is doubly so.

Or we could fake it, using unsafeCoerce to avoid the extraneous traversals or the complicated pattern matching on axioms. But doing this we’d erase all guarantees that adding global binding information has to offer.

A third approach, and the one we take in Hakaru, is to compartmentalize the places where variables can be constructed. The variable generation code must be part of our trusted code base, but unlike the unsafeCoerce approach we can keep all the TCB code together in one spot rather than spread out across the whole compiler.
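A minimal sketch of such a kernel (the module name and details are hypothetical): the Variable constructor is not exported, so the only way to obtain a variable is through a fresh-name supply, which guarantees that no two variables —at any two type indices— ever share an identifier.

```haskell
-- module Variable.Kernel (Variable, freshVariable) where
-- N.B., the Variable data constructor is deliberately NOT exported.

-- Hand out strictly increasing identifiers, so distinct calls can
-- never produce colliding Variables, regardless of their U-indices.
freshVariable ∷ State Int (Variable u)
freshVariable = do
    i ← get
    put (i + 1)
    return (Variable i)
```

Everything outside this module can still use and compare variables, but can no longer forge the Variable u / Variable v collision described above.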

Stratifying our data types

The above definition of ABTs is a simplified version of what we actually use in Hakaru. For example, Hakaru has user-defined algebraic data types, so we also need case analysis on those data types. Alas, generic case analysis is not a generalized quantifier, thus we cannot implement it with (:$). We could consider just adding case analysis to the ABT definition, but then we’d start running into extensibility issues again. Instead, we can break the ABT type apart into two types: one for capturing variable uses and bindings, and the other for whatever syntax we can come up with. Thus,

data Syntax ∷ ([[U] × U] → U → ★) → ([U] → U → ★) → U → ★ where
    (:$) ∷ !(σ vus u)
         → !(SArgs abt vus)
         → Syntax σ abt u

data ABT ∷ ([[U] × U] → U → ★) → [U] → U → ★ where
    Syn   ∷ !(Syntax σ (ABT σ) u)
          → ABT σ [] u

    Var   ∷ !(Variable v)
          → ABT σ [] v

    Bind  ∷ !(Variable v)
          → !(ABT σ vs u)
          → ABT σ (v : vs) u

Of course, since we’re going to be extending Syntax with all our language-specific details, there’s not a whole lot of benefit to parameterizing over σ. Thus, we can simplify the types considerably by just picking some concrete Σ to plug in for σ.

By breaking Syntax apart from ABT we can now extend our notion of syntax without worrying about the details of variable binding (which can be defined once and for all on ABT). But we could still run into extensibility issues. In particular, often we want to separate the fixed-point portion of recursive types from their generating functor so that we can do things like add annotations at every node in the recursive data type. A prime example of such annotations is keeping track of free variables, as in Neel’s original post. To allow this form of extensibility we need to break up the ABT type into two parts: the recursion, and the Syn/Var/Bind view of the ABT.

data ABT ∷ ([[U] × U] → U → ★) → [U] → U → ★ where
    Unview ∷ !(View σ (ABT σ) vs u) → ABT σ vs u

view ∷ ABT σ vs u → View σ (ABT σ) vs u
view (Unview e) = e

data View ∷ ([[U] × U] → U → ★) → ([U] → U → ★) → [U] → U → ★ where
    Syn   ∷ !(Syntax σ abt u)
          → View σ abt [] u

    Var   ∷ !(Variable v)
          → View σ abt [] v

    Bind  ∷ !(Variable v)
          → !(View σ abt vs u)
          → View σ abt (v : vs) u

Now, to allow arbitrary annotations we’ll replace the data type ABT with an equivalent type class. Each instance of the ABT class defines some sort of annotations, and we can use the view and unview methods to move between the instance and the concrete View type.
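Roughly, that class looks like this (a sketch only; Hakaru's actual class has more methods, and TrivialABT is a hypothetical name for the annotation-free instance):

```haskell
-- The abstract interface: anything that can be converted to and from
-- the one-layer View is an ABT. Instances differ in what annotations
-- they cache at each node.
class ABT (σ ∷ [[U] × U] → U → ★) (abt ∷ [U] → U → ★) where
    unview ∷ View σ abt vs u → abt vs u
    view   ∷ abt vs u → View σ abt vs u

-- The instance carrying no annotations recovers the plain datatype.
newtype TrivialABT σ vs u = TrivialABT (View σ (TrivialABT σ) vs u)

instance ABT σ (TrivialABT σ) where
    unview             = TrivialABT
    view (TrivialABT v) = v
```

An instance caching free-variable sets, as in Neel's original post, would pair each View layer with its set of free variables and maintain that set in unview.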

There’s one last form of extensibility we may want to add. Using fixed point combinators gives us a way of describing complete trees. A different way of introducing recursion is with free monads. The free-monad combinator is just like the fixed-point combinator, except that we have an additional type parameter for metavariables and we have a data constructor for using those metavariables instead of requiring the recursion to ground out with a complete syntax tree. The reasons why this might be nice to do are beyond the scope of this post, but the point is that we might want to do it; so we need to split the ABT class into two parts: one for the recursion itself, and another for the annotations.
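A sketch of what the free-monad variant of the view looks like (OpenView and its Meta constructor are hypothetical names for illustration): the view gains a parameter for metavariables and a constructor for using them.

```haskell
-- Like View, but "open": a term may ground out in a metavariable
-- of type (meta u) rather than in a complete syntax tree.
data OpenView ∷ ([[U] × U] → U → ★) → (U → ★) → ([U] → U → ★) → [U] → U → ★ where
    Meta ∷ !(meta u)         → OpenView σ meta abt [] u
    Syn  ∷ !(Syntax σ abt u) → OpenView σ meta abt [] u
    Var  ∷ !(Variable v)     → OpenView σ meta abt [] v
    Bind ∷ !(Variable v)
         → !(OpenView σ meta abt vs u)
         → OpenView σ meta abt (v : vs) u
```

Instantiating meta to the empty type recovers the closed View above; the monadic bind is substitution for metavariables.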

In the end, we have a four-level type: the Syntax, the View, the annotations, and the recursion.


[1] In the accepted/current parlance, Hakaru is a “probabilistic programming language”; but I and a number of other folks working on such languages have become disaffected with that term of late, since it’s not entirely clear what should and should not count as a “probabilistic” PL. Over the course of a number of discussions on the topic, I’ve settled on “inferential” PL as describing what is (or, at least what I find) interesting about “probabilistic” PL. I’ve been meaning to write a post about the subject, and hopefully this footnote will remind me to do so.

[2] N.B., the indexing used in that package is what we get if we erase/compactify the universe U. That is: the erasure of U is a singleton set; the erasure of [U] is isomorphic to the Peano numbers; the erasure of [[U] × U] is isomorphic to a list of Peano numbers; etc.

[3] Though at one point I assume we have functions, (:→), just for the sake of an example.

[4] Ideally we’d be able to flatten this type to avoid all the overhead of the linked list implementation. In fact, the entire AST node of (:$) together with its SArgs should be flattened. These nodes have the same general layout as the heap objects in the STG machine: a record with a pointer to the data constructor (i.e., element of the signature) followed by an appropriate number of arguments; and so in principle we ought to be able to implement them directly with a single STG heap object.

winterkoninkje: shadowcrane (clean) (Default)

Hi NCTE,

I took the US Trans Survey a while back and had some feedback, though I forgot to include it in the survey itself. In particular, my comment is about the gender/identity portion of the survey (question 2.2). It's great that you include a variety of nonbinary identities (fa'afafine, mahu, genderqueer, genderfluid, two-spirit, third gender,...) but I noticed a severe bias in terms of the more binary options. You offer a variety of options for masculine-of-center folks (AG, butch, stud,...) but you offer zero options for feminine-of-center folks. This contributes to the ongoing problem of femme/feminine erasure in feminist and LGBT communities. I identify very strongly as femme, in fact it's one of the few labels I "identify" with— far moreso than I "identify" with trans, fwiw.

While I did write in under the "other" option, I worry that this only serves to contribute to the ongoing erasure and underrepresentation of feminine identities. Having worked on a number of these sorts of surveys, here's a short list of reasons why:

  • Feminine-of-center folks will be undercounted because many will not feel strongly enough to write in.
  • The presentation of specific choices for masculine-of-center identities helps to funnel people towards particular terms, improving their statistical significance; whereas the lack of explicit options for feminine-of-center folks will result in a larger diversity of terms, thus decreasing each term's likelihood of reaching significance.
  • Attempting to correct for the previous point by aggregating all feminine-of-center identities serves to erase the diversity of femme/feminine identities: "femme" forms no more a singular identity than AG/butch/stud/etc forms a singular identity. The distinctions between high femme, hard femme, tomboy femme, etc are just as important as the distinctions between varieties of nonbinary or masculine-of-center identities.

While it is too late now to correct the 2015 survey, I do hope you take this into consideration when choosing how to report your results and when designing future surveys.

winterkoninkje: shadowcrane (clean) (Default)

One of the folks I've chatted with a bunch online and finally got to meet in-person this year was Gershom. Towards the end of the symposium, he mentioned the post I linked to last time about my pulling away from Haskell communities. But the way he phrased it really struck me. And I wanted to comment on that.

I guess the crux of it comes down to, what do we mean when we say "the Haskell community"? What I meant when writing that post over a year ago was the online community of Haskell learners/practitioners (i.e., the folks who dominate spaces like reddit). But what Gershom's comment suggested was the academic community of FP researchers who work on/with Haskell (i.e., the folks who dominate spaces like ICFP). These two communities have always been intertwined, but as Haskell gains popularity they feel more and more distinct.

FWIW, I do still feel welcome as part of the academic community. There are some things I could comment on, sure, but those have more to do (I think) with the general issue of being a woman in tech than they do with Haskell per se. Of course, this only underscores the concern of my original post. The free exchange between academics and practitioners was always a big part of what I've loved about the (general) Haskell community. Were it not for that exchange, the spark planted by Mark Jones would never have fanned into the flame of my current research. As that free exchange unravels, this pathway from interested novice to researcher is cut off. And that pathway is crucial for the vibrancy of the community, as well as for addressing those general issues of being a minority in tech.

winterkoninkje: shadowcrane (clean) (Default)

Got back from Vancouver a couple days ago. The return flight was hell, but other than that the trip was pretty good. Got to meet a whole bunch of folks who read my blog (hi!), which was super cool. It always gives me warm fuzzies to know people are actually interested in my stuff. (You'd think I'd get used to it after a while, but no, evidently childhood scars still remain years after developing a solid sense of self-esteem. Go fig.) Of course, now I feel bad about not having written much of late. I have a few posts I've been meaning to write —mostly technical ones—, and I'm thinking I might should try to kick those out as a way to get back into the daily writing habit needed for finishing up my dissertation.

Since handing out my card at ICFP, I've been hacking around on my website. I'm still working on getting my publications/presentations to be properly formatted, and the timestamps in the footers on a couple pages are busted, but other than that I think it's in pretty good shape. I also took part in a keysigning party for the purpose of building a WoT as part of the developing story for how we securely deliver packages in Haskell. My key is up on Keybase as well as on my website (along with all my other contact info).

After a week of great talks, too much drinking, and too little sleep, I met up with a dear friend from undergrad. It's been far far too long since we've had a chance to hang out (alas, he couldn't make the wedding last year), but he's one of those friends you can just fall right back into. It's funny how much can change and yet stay the same. In one of our ambling walks I dragged him into a clothing store —something I never would've done pre-transition. He didn't seem to mind. He just went along like it's the sort of thing we've always done. And it felt to me like the sort of thing we could've always done. Recently he's been ref'ing for roller derby up in Victoria, and after talking about it I'm thinking I might try my hand at it. Sounds like a lot of fun. We have a team in Bloomington, and they're doing a training/recruiting thing, though alas I'll miss it since I'll be in Mountain View on the 14th. I'll see if I can't find some other way to get introduced.

winterkoninkje: shadowcrane (clean) (Default)

In the spirit of Brent's post, I figure I'll make a public announcement that I'm in Vancouver all week attending HOPE, ICFP, and the Haskell Symposium. I love reconnecting with old friends, as well as meeting new folks. Even if I've met you at past ICFPs, feel free to re-introduce yourself, as ... things have changed over the past few years. I know I've been pretty quiet of late on Haskell cafe as well as here, but word on the street is folks still recognize my name around places. So if you want to meet up, leave a comment with when/where to find you, or just look for the tall gal with the blue streak in her hair.

Unlike Brent, and unlike in years past, I might should note that I am looking to "advance my career". As of this fall, I am officially on the market for research and professorship positions. So if you're interested in having a linguistically-minded constructive mathematician help work on your problems, come say hi. For those not attending ICFP, you can check out my professional site; I need to fix my script for generating the publications page, but you can find a brief background/research statement there along with the other usuals like CV, recent projects, and classes taught.

winterkoninkje: shadowcrane (clean) (Default)

These past few weeks I've been hacking up a storm. It's been fun, but I need to take a break. One of the big things I learned last fall is that I really need to take breaks. I was good about it all spring, but this summer I've been slipping into my old ways. Those workaholic ways. And when it gets that way, too often coding begins to feel like ripping out a piece of my soul. Or, not ripping. Purging. Whatever. In any case, not good. I much prefer the euphoric sorts of hacking sprees, where spilling my art out into the world feels more like a blessing, like gift giving, like spilling my heart does. So yeah. If it's not like that, then I need to take a break.

The last couple days I've been watching Sense8, which is amazing. It's been a long time since I've given anything five stars on Netflix. I've been watching some cool stuff, been giving lots of fours, but not fives. So yeah, total five star work. The Wachowskis are amazing. I never got all super into the Matrix like everyone else —I mean the first movie was fun and all— but yeah, between Jupiter Ascending and Sense8, they're just awesome. Should prolly see their rendition of Cloud Atlas; but I dunno if it could really live up to the book.

Speaking of breaks, and furies of activity: I'm flying out to Portland on Saturday, will be there for a week. Then, when I get back I'm going to WOLLIC/CoCoNat for a week. And then, towards the end of the following week I start the interview process at Google. Then there's a couple weeks of calm before flying off to ICFP. And after that, I need to dig into the dissertation again. Really, I should be working on it now, but like I said: other coding has been eating at my brain. Really need to get it just squared away and out the door. Can't wait to fly off to Mountain View or wherever else is next in life.

winterkoninkje: shadowcrane (clean) (Default)

bytestring-lexing 0.5.0

The bytestring-lexing package offers extremely efficient bytestring parsers for some common lexemes: namely integral and fractional numbers. In addition, it provides efficient serializers for (some of) the formats it parses.

As of version 0.3.0, bytestring-lexing offers the best-in-show parsers for integral values. (According to the Warp web server's benchmark of parsing the Content-Length field of HTTP headers.) And as of this version (0.5.0) it offers (to my knowledge) the best-in-show parser for fractional/floating numbers.

Changes since 0.4.3 (2013-03-21)

I've completely overhauled the parsers for fractional numbers.

The old Data.ByteString.Lex.Double and Data.ByteString.Lex.Lazy.Double modules have been removed, as has their reliance on Alex as a build tool. I know some users were reluctant to use bytestring-lexing because of that dependency, and forked their own version of bytestring-lexing-0.3.0's integral parsers. This is no longer an issue, and those users are requested to switch over to using bytestring-lexing.

The old modules are replaced by the new Data.ByteString.Lex.Fractional module. This module provides two variants of the primary parsers. The readDecimal and readExponential functions are very simple and should suffice for most users' needs. The readDecimalLimited and readExponentialLimited functions are variants which take an argument specifying the desired precision limit (in decimal digits). With care, the limited-precision parsers can perform far more efficiently than the unlimited-precision parsers. Performance aside, they can also be used to intentionally restrict the precision of your program's inputs.
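The limited-precision idea can be sketched in a few lines (a toy over String with the hypothetical name readDecimalToy, not the package's ByteString implementation): once the requested number of digits has been kept, any further digits can only contribute a power of ten, so we stop building the mantissa.

```haskell
import Data.Char (digitToInt, isDigit)

-- Toy sketch of limited-precision decimal parsing. 'limit' is how many
-- leading digits to keep; digits beyond it only contribute a scale factor,
-- so we never build an ever-larger integer out of them.
readDecimalToy :: Fractional a => Int -> String -> Maybe (a, String)
readDecimalToy limit s0 =
    case span isDigit s0 of
        ([], _)     -> Nothing
        (ipart, s1) ->
            let (fpart, rest) = case s1 of
                    ('.':t) | not (null (takeWhile isDigit t)) ->
                        (takeWhile isDigit t, dropWhile isDigit t)
                    _ -> ("", s1)
                digits   = ipart ++ fpart
                kept     = take limit digits
                dropped  = length digits - length kept
                mantissa = foldl (\acc d -> 10*acc + toInteger (digitToInt d)) 0 kept
            in Just (fromInteger mantissa * 10 ^^ (dropped - length fpart), rest)
```

With a generous limit this parses everything (readDecimalToy 9 "3.14159xyz" yields 3.14159 with "xyz" left over); with a small limit it truncates to that many significant digits, which is where the speedup comes from for types like Double whose precision is inherently bounded.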

Benchmarks

The Criterion output of the benchmarks discussed below can be seen here. The main competitors we compare against are the previous version of bytestring-lexing (which already surpassed text and attoparsec/scientific) and bytestring-read, which was the previous best-in-show.

The unlimited-precision parsers provide 3.3× to 3.9× speedup over the readDouble function from bytestring-lexing-0.4.3.3, as well as being polymorphic over all Fractional values. For Float/Double: these functions have essentially the same performance as bytestring-read on reasonable inputs (1.07× to 0.89×), but for inputs which have far more precision than Float/Double can handle these functions are much slower than bytestring-read (0.30× 'speedup'). However, for Rational: these functions provide 1.26× to 1.96× speedup compared to bytestring-read.

The limited-precision parsers do even better, but require some care to use properly. For types with infinite precision (e.g., Rational) we can pass in an 'infinite' limit by passing the length of the input string plus one. For Rational: doing so provides 1.5× speedup over the unlimited-precision parsers (and 1.9× to 3× speedup over bytestring-read), because we can avoid intermediate renormalizations. Whether other unlimited precision types would see the same benefit remains an open question.

For types with inherently limited precision (e.g., Float/Double), we could either pass in an 'infinite' limit or we could pass in the actual inherent limit. For types with inherently limited precision, passing in an 'infinite' limit degrades performance compared to the unlimited-precision parsers (0.51× to 0.8× 'speedup'). Whereas, passing in the actual inherent limit gives 1.3× to 4.5× speedup over the unlimited-precision parsers. They also provide 1.2× to 1.4× speedup over bytestring-read; for a total of 5.1× to 14.4× speedup over bytestring-lexing-0.4.3.3!

Links

winterkoninkje: shadowcrane (clean) (Default)

I've been working on culling a bunch of my old email addresses and unifying my username across the internet. I've mentioned a number of these changes before, but every few months I run into folks who've missed the memo. So, for those who keep up with my blog or follow Haskell Planet, here's the latest rundown:

Emails
  • wren@freegeek.org —is dead
  • wrnthorn@indiana.edu —is dead
  • phreelance@yahoo.com —will soon be dead
  • My Perl forwarding address lives— wren@cpan.org
  • My Haskell forwarding address lives— wren@community.haskell.org
  • My current work address is— wrengr@indiana.edu
  • My personal email at gmail.com also lives
Code Hosting & Other User Names
  • My BitBucket username has changed from winterkoninkje to wrengr
  • My GitHub username has changed from winterkoninkje to wrengr
  • My IRC username has changed from koninkje to wrengr
winterkoninkje: shadowcrane (clean) (Default)

I have a new academic/professional website: http://cl.indiana.edu/~wren/!

There are still a few unfinished areas (e.g., my publications page), but hopefully I'll be finished with them shortly. The new site is built with Hakyll instead of my old Google-Summer-of-Code static website generator. I'm still learning how to implement my old workflow in Hakyll, and if I get the chance I'll write a few posts on how I've set things up. Reading through other folks' posts on how they use Hakyll has been very helpful, and I'd like to give back to the community. I've already issued a pull request for adding two new combinators for defining template fields.

In the meantime, if you notice any broken links from my old blog posts, please let me know.

winterkoninkje: shadowcrane (clean) (Default)

I recently got a copy of the Mass Effect trilogy. And playing through it, I've been thinking a lot lately about relationships in video games. Not just romances; but yes, the romances. Not too long ago Mattie Brice wrote a piece on romance in roleplaying games, which also fuels this whole train of thought.

My deep desire is to see games take relationships seriously.

Taking relationships seriously means taking queer relationships seriously. And I don't mean having that token same-sex option which falls into all the same tropes as the hetero options. And yet some games can't even work themselves up to this barest nodding glance in the vague direction of where queer relationships might lie (ME2 I'm side-eyeing you). I always play the same-sex options. Because I want to, sure, but in truth because they're typically less painful than the alternatives. And though I'm disappointed every time, I still secretly nourish that yearning hope that maybe this time they might actually be, even if only in faint moments, queer.

Taking queer relationships seriously means taking sexuality seriously. Means acknowledging that not everyone falls squarely under labels like "lesbian", "bi", or "straight". Where are those who fall between? The heteroflexible people who can't treat their non-straight relationships with the same committed seriousness, can't treat them as "real"? Where are the protagonists who fall for those cads only to get their hearts broken? Where are the people who "know" they're straight or gay, but nevertheless find themselves falling for someone outside those narrow confines? The straight man suddenly unabashed to throw his heart before another man, when he's the right man? The dykes suddenly questioning entire lives, worrying about losing cred with the circle of friends they've built lives up with? And where is the closet? Where are the queers too scared to express themselves, hiding behind heteronormative masques? Where are the lovers able to see through their façades, lovers safe enough or dangerous enough to break down the walls of silence? Heroes who open closet doors to expose a whole new life free from that secret fear? Villains who expose the lie, who beckon and seduce but leave vulnerable in the streets? And where are the queers who live in open closets, the kind who live mostly-straight lives but don't talk about what goes on beneath the sheets, the kind who will only flirt in solitary spaces and only then once they've discerned both your proclivities and your discretion?

The call for queer relationships is not a call for moar gayseks. Relationships are queer. The call for queer relationships is the call for real relationships. The kind of relationships that change you. People must learn to bend before they can learn to embrace. And sometimes bending, means breaking. Queerness is not about sex; queerness is about certain complexities in what it is to be human, complexities that emerge once we become willing to take human lives and their stories seriously. Where do we find these stories of growth and loss and adaptation? We're willing to tell them and see them in books and in movies. The fact that they are so suspiciously lacking in games tells us we don't take games seriously. Games aren't a medium for telling stories. Games can't make us cry and scream, can't make us feel joy or loss. Games can't expose us to truth, can't change the ways we see the world, can't make us question long-held beliefs. Games can't break open our hearts and minds, can't be carried around within us, can't get under our skin, worm their way into thoughts, leaving painful burrows begging to be filled, by a way of living, by a faith, by more stories. Games can't, because we won't let them.

Taking relationships seriously means taking human lives seriously. Relationships are not built by accumulating points. It's not a matter of giving enough gifts, or the right gifts, or parroting the right lines at regular intervals. Connections are not made between people who agree entirely in all things. Connections are made through conflict: through shared suffering at the hands of external conflict, and through the delicate interpersonal conflict of establishing and relinquishing personal boundaries. Conflict exposes vulnerability, and when that exposure is intentional, when that vulnerability is respected and reciprocated, emotional connections are formed. Which is why we so often hate those who see us break down. Which is why the wrong gaffe or an unintentional insult can destroy a relationship. Which is why miscommunications matter so much.

Which is why human relationships are never binary. Jealousy is not a problem for couples, there is always a third, an intrusive third whose feet disrupt the lines we spent so long to draw. There can be no lingering looks nor wandering eyes without someone, something, to be held in those eyes. We love and hate those who remind us of who we've loved, of who we've been. Serial monogamy is not monogamous. Every lover we've ever taken is still within us; without them we would not be who we have become. Our new relationships are formed as much around the scars of past relationships as they are around the current participants. Who we allow ourselves to love depends as much on who we allow society to see us love as it does on whom we actually cherish. To elide the community within which a relationship forms is to elide the content and significance of that relationship. To say nothing of our elision of the plurality of relationships. Even in our monogamist culture we admit that romances exist alongside other intimacies: deep friendships formed around a shared survival of violence, abuse, or loss; familial bonds in cultures where family still means something; camaraderie in work or revolution. But the stories our games are allowed to tell elide these other intimacies as surely as they elide the possibility of being in faithful committed sexual/romantic relationships with more than one partner.

Taking relationships seriously means seeing them as a state of being, not as a destination. Game romances are (almost[1]) invariably fixated on consummation— either having sex for the first time, just before the last mission; or giving birth after having gotten married. This focus on consummation is profoundly sexist. It reiterates the fuck-'em-and-leave-'em ideology which prevents women and forbids men from forming healthy lasting relationships. It reinscribes the toxic ideology that intimacy can only mean sex, and that the only purpose of sex is procreation. Not only is this focus on consummation sexist, it does a profound disservice to the characters involved. The sexism not only objectifies real-world humans, denying them a rich inner life in which relationships can actually bear meaning, it also objectifies the characters, rendering them into caricatures, denying them the possibility of being well-rounded well-textured images capable of signifying the rich complexity of real-world relationships.

Imagine. In ME3 when Shepard starts having nightmares, imagine if rather than sitting on the edge of the bed trying to comfort herself in a dark and empty room, imagine if instead her lover were there to comfort her in those moments. Imagine how much more meaningful it would be to see the impact of those nightmares on their relationship, to see how those moments of intimacy help Shepard keep it together. Or if she doesn't have a lover at that point, then imagine how much more meaning there would be in that dark and empty room, in the isolation of being a hero, in the hollow silences of all the dead she's left behind. From a programming and scriptwriting perspective, adding the lover into these scenes would be trivial; and yet, this moment for enriching the story is overlooked because relationships are only about consummation, not about shared lives.

This is not simply a call for a less sexist society and for healthy attitudes towards relationships, this is a call for game-writers to stand up and embrace games as a valid literary medium. Just think of all the stories games deny themselves by refusing to address the full spectrum of sexual identities, by refusing to address relationships that change the participants, by refusing to address the many different forms of intimacy or the plurality of our relationships, by refusing to consider the communities in which these relationships are formed, by refusing to explore everything that happens after the first time they have sex. Even in our sexist society we already tell these stories in other media, so why not in games?


[1] With a few notable exceptions like the Persona series and Catherine.

winterkoninkje: shadowcrane (clean) (Default)

The past couple weeks I've been teaching about recursive types and their encodings in B522. Here's a short annotated bibliography for followup reading:

  • For a basic intro to recursive types, and for the set-theoretic metatheory: see section IV, chapters 20 and 21.
    • Benjamin C. Pierce (2002) Types and Programming Languages. MIT Press.
  • The proof of logical inconsistency and non-termination is "well-known". For every type τ we can define a fixed-point combinator and use it to exhibit an inhabitant of that type:
    • fix_τ = λf:(τ→τ). let e = λx:(μα.α→τ). f (unroll x x) in e (roll_{μα.α→τ} e)
    • ⊥_τ = fix_τ (λx:τ. x)
  • A category-theoretic proof that having fixed-points causes inconsistency
  • The proof of Turing-completeness is "well-known". Here's a translation from the untyped λ-calculus to STLC with fixed-points:
    • (x)* = x
    • (λx. e)* = roll_{μα.α→α} (λx:(μα.α→α). e*)
    • (f e)* = unroll (f*) (e*)
  • Knaster–Tarski (1955): For any monotone function, f, (a) the least fixed-point of f is the intersection of all f-closed sets, and (b) the greatest fixed-point of f is the union of all f-consistent sets.
  • For a quick introduction to category theory, a good place to start is:
    • Benjamin C. Pierce (1991) Basic Category Theory for Computer Scientists. MIT Press.
  • For a more thorough introduction to category theory, consider:
  • The Boehm–Berarducci encoding
  • Church/Boehm–Berarducci encodings are not canonical
    • [citation needed]
  • Surjective pairing cannot be encoded in STLC (i.e., the implicational fragment of intuitionistic propositional logic): see p.155
    • Morten H. Sørensen and Paweł Urzyczyn (2006) Lectures on the Curry–Howard isomorphism. Studies in Logic and the Foundations of Mathematics, v.149.
  • However, adding it is a conservative extension
  • Compiling data types with Scott encodings
  • For more on the difference between Scott and Mogensen–Scott encodings:
  • Parigot encodings
    • M. Parigot (1988) Programming with proofs: A second-order type theory. ESOP, LNCS 300, pp.145–159. Springer.
  • For more on catamorphisms, anamorphisms, paramorphisms, and apomorphisms
  • build/foldr list fusion
    • Andy Gill, John Launchbury, and Simon Peyton Jones (1993) A short cut to deforestation. In Proc. Functional Programming Languages and Computer Architecture, pp.223–232.
    • Many more links at the bottom of this page
  • For another general introduction along the lines of what we covered in class
  • "Iterators" vs "recursors" in Heyting arithmetic and Gödel's System T: see ch.10:
    • Morten H. Sørensen and Paweł Urzyczyn (2006) Lectures on the Curry–Howard isomorphism. Studies in Logic and the Foundations of Mathematics, v.149.
  • There are a great many more papers by Tarmo Uustalu, Varmo Vene, Ralf Hinze, and Jeremy Gibbons on all this stuff; just google for it.
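The first two "well-known" facts above can be checked directly in Haskell, whose newtype gives exactly the iso-recursive roll/unroll coercions (a sketch; fix' here is the typed rendering of the Y combinator from the notes, not a library function):

```haskell
-- μα. α → τ, with Roll/unroll as the iso-recursive coercions.
newtype Mu t = Roll { unroll :: Mu t -> t }

-- The fixed-point combinator from the notes: nothing in this definition
-- is recursive except the *type* Mu. Applied to the identity it inhabits
-- any τ (inconsistency); applied to a functional it gives general
-- recursion (Turing-completeness).
fix' :: (t -> t) -> t
fix' f = e (Roll e)
  where e x = f (unroll x x)
```

For example, fix' (\rec n -> if n == 0 then 1 else n * rec (n - 1)) 5 evaluates to 120, even though no recursive definition appears anywhere.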
winterkoninkje: shadowcrane (clean) (Default)

(Originally published as a comment to a friend, desiring to change the world and despairing to complain)

The feeling of not being allowed to complain is one of the primary ways that abusive settings perpetuate themselves. Complaints are a symptom that things are wrong, and implicitly a demand for things to change, so toxic communities do their best to indoctrinate people into the belief that "complainers" are bad people, that complaint is "a waste of time", that complaint is a sign of "weakness", that one's complaints are "frivolous", that one does not "deserve" to complain, that one does not have the "right" or the "authority" to complain, and so forth. It is a self-reinforcing process since to call it into question is to subject oneself to the accusations themselves.

To eschew complaint and wait for things to improve is to buy into the very system that estranges you. There is no glory in posing as the stoic hero; you cannot help others when you are reeling from your own wounds. Our glorification of solitary heroes is but another way of keeping people silent. There are no solitary heroes, there never have been; there are only communities of support. History chooses its heroes out of those communities in order to erase the community, to undermine the very support it gives. There are no "great men" in history, there are only great communities and great movements and the handful of names that get remembered.

Know this: you are always allowed to complain. You are always allowed to exhibit your humanity, your pain, your unwillingness to suffer. That others may be worse off does not negate your own suffering. That others stay quiet —or have their voices silenced— does not negate the pain.

winterkoninkje: shadowcrane (clean) (Default)

I don't think I ever mentioned it here but, last semester I took a much-needed sabbatical. The main thing was to take a break from all the pressures of work and grad school and get back into a healthy headspace. Along the way I ended up pretty much dropping off the internet entirely. So if you've been missing me from various mailing lists and online communities, that's why. I'm back now. If you've tried getting in touch by email, irc, etc, and don't hear from me in the next few weeks, feel free to ping me again.

This semester I'm teaching foundations of programming language theory with Jeremy Siek, and work on the dissertation continues apace. Over the break I had a few breakthrough moments thinking about the type system for chiastic lambda-calculi— which should help to clean up the normal forms for terms, as well as making room for extending the theory to include eta-conversion. Once the dust settles a bit I'll write some posts about it, as well as continuing the unification-fd tutorial I started last month.

Profile

winterkoninkje: shadowcrane (clean) (Default)
wren gayle romano

February 2016

