
I'm sick. I shouldn't be online. But just wanted to prattle on about a thing that'd take too long on twitter. A day or two ago I came across a linguist being cited somewhere in some article about celebrity couple name blends. In it they noted how certain syllables like "klol" and "prar" are forbidden in English. They phrased the restriction as forbidding CRVR (where C means a consonant, R means a liquid/sonorant —I forget how they phrased it—, and V a vowel).

There's something of merit going on here, but the specifics are far more complicated. Note that "slur" is perfectly acceptable. So, well maybe it's just that the C has to be a plosive. But then, "blur" is perfectly fine too. So, well maybe it's something special about how "-ur" forms a stand-alone rhotic vowel. But then "trill" and "drill" are just fine. So, well maybe...


Earlier this year Neel Krishnaswami talked about abstract binding trees (ABTs) [part 1] [part 2]. IMO, the best way to think about ABTs is as a generalization of abstract syntax trees (ASTs), though this is not a perspective sanctioned by the CMUers I’ve talked to. CMUers oppose this way of phrasing things, in part, because the ABT libraries they’re familiar with make crucial use of the design pattern of two-level types; but I think the essential insights of ABTs and two-level types are quite different, and we ought to keep the benefits of these two techniques distinct.

Over the past year I’ve been working on the inferential language[1] Hakaru, and in the new version of the compiler we’re using ABTs for our syntax trees. However, contrary to Neel’s stance against using strongly-typed internal representations for syntax, we extend the ABT approach to make use of GADTs to guarantee local well-typedness— since this in turn can be used to guarantee that program transformations are also well-typed. (If you don’t want those guarantees, then take a look at Jon Sterling’s abt library on Hackage[2].) In this post I’m going to present a simplified version of our architecture, and then talk about some of the extra stuff bringing it closer to our production architecture.

First things first

Since we want everything to be well-typed, we first must introduce some universe, U, of all the types in our language. (In Haskell we can implement such a universe by using the -XDataKinds extension, so I’ll equivocate between calling U a “universe” vs a “kind”.) For the rest of this post it doesn’t actually matter what lives in that universe[3], just so long as things match up when they need to. Since the choice of universe is irrelevant, we could abstract over U by turning on the -XPolyKinds extension; but I avoid doing so below, just to help keep things more concrete.
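To make the examples below concrete, here's one hypothetical choice of universe in plain ASCII Haskell (the names Nat and (:→) are mine; footnote [3] assumes we have (:→) anyway, and the pragmas are what the definitions in this post need):

{-# LANGUAGE DataKinds, KindSignatures, TypeOperators, GADTs #-}

-- A toy universe: natural numbers and function types.
data U = Nat | U :→ U
infixr 0 :→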

Implementing ASTs

The simplest way of thinking about well-typed ASTs is that they capture the set of terms generated by a (typed) signature; that is, the fixed point of some Σ ∷ [U] → U → ★. Unpacking the type for Σ, we have that every syntactic constructor s ∈ Σ is associated with some arity (the length of the list), each argument to s has some type in U (the elements of the list), and applying s to the right number of ASTs of the right types will generate a new AST with some type in U (the second argument to Σ).

To implement this fixed point we define an AST type which is parameterized by its signature. To ensure well-aritiedness (and well-typedness) of our ASTs with respect to that signature, we’ll need to introduce a helper type SArgs[4]. And to ensure that we obtain the least fixed-point of the signature, we’ll make everything strict.

infix  4 :$
infixr 5 :*

data SArgs ∷ (U → ★) → [U] → ★ where
    End   ∷ SArgs ast []

    (:*)  ∷ !(ast u)
          → !(SArgs ast us)
          → SArgs ast (u : us)

data AST ∷ ([U] → U → ★) → U → ★ where
    (:$)  ∷ !(σ us u)
          → !(SArgs (AST σ) us)
          → AST σ u
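For concreteness, here's a tiny hypothetical signature over the toy universe above, together with a term built from it; Lit, Add, and ex are my names, not part of the architecture:

-- A toy signature: numeric literals and addition.
data Sig :: [U] -> U -> * where
    Lit :: Int -> Sig '[] 'Nat
    Add :: Sig '[ 'Nat, 'Nat ] 'Nat

-- The AST for "1 + 2", well-typed by construction.
ex :: AST Sig 'Nat
ex = Add :$ (Lit 1 :$ End) :* (Lit 2 :$ End) :* End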

Implementing ABTs

The problem with ASTs is that they have no notion of variables, and thus have no notion of variable binding. Naively we could implement binders like lambda-abstraction by having something like λ ∷ Σ [u, v] (u :→ v) but then we’d need to do a post-hoc check to ensure that the first argument to λ is in fact a variable. To build that check into the datatype itself we’d have to move λ into the definition of AST (since the first argument is of type Variable u rather than AST Σ u). If lambda-abstraction were the only binder we had, that might not be so bad; but any real-world language has a plethora of binders, and this approach doesn’t scale.

The essential idea behind ABTs is to abstract over the notion of binding itself. Given a single uniform definition of what it means to be a binding form, we don’t have to worry about adding a bunch of ad-hoc constructors to our AST datatype. Moreover, we can then provide single uniform definitions for things which mess with variables and are homomorphic over the signature. Things like capture-avoiding substitution and providing a HOAS API for our first-order representation.

The crucial step is to adjust our notion of what a signature contains. The basic signatures used above only contained applicative forms; i.e., things we can apply to locally-closed terms; i.e., what are called “functors” in the logic programming community. For ABTs we’ll want to allow our signatures to include any generalized quantifier. That is, our signatures will now be of type Σ ∷ [[U] × U] → U → ★. Previously, the arguments were indexed by U; now, they’re indexed by [U] × U. The length of the list gives the number of variables being bound, the types in the list give the types of those variables, and the second component of the pair gives the type of the whole locally-open expression.

To implement this we need to extend our syntax tree to include variable bindings and variable uses:

data SArgs ∷ ([U] → U → ★) → [[U] × U] → ★ where
    End   ∷ SArgs abt []

    (:*)  ∷ !(abt vs u)
          → !(SArgs abt vus)
          → SArgs abt ((vs,u) : vus)

data ABT ∷ ([[U] × U] → U → ★) → [U] → U → ★ where
    (:$)  ∷ !(σ vus u)
          → !(SArgs (ABT σ) vus)
          → ABT σ [] u

    Var   ∷ !(Variable v)
          → ABT σ [] v

    Bind  ∷ !(Variable v)
          → !(ABT σ vs u)
          → ABT σ (v : vs) u

Time for an example of how this all fits together. To add lambda-abstraction to our language we’d have λ ∷ Σ [([u],v)] (u :→ v): that is, the λ constructor takes a single argument which is a locally-open term, binding a single variable of type u, and whose body has type v. So given some x ∷ Variable u and e ∷ ABT Σ [] v we’d have the AST (λ :$ Bind x e :* End) ∷ ABT Σ [] (u :→ v).
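Rendered in ASCII Haskell (writing [([U], U)] for [[U] × U]), that example looks something like the following sketch; Lang and lam are hypothetical names for illustration:

-- A signature containing just lambda-abstraction.
data Lang :: [([U], U)] -> U -> * where
    Lam :: Lang '[ '( '[u], v ) ] (u ':→ v)

-- A smart constructor for abstraction.
lam :: Variable u -> ABT Lang '[] v -> ABT Lang '[] (u ':→ v)
lam x e = Lam :$ Bind x e :* End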

“Local” vs “global” well-typedness

With the ABT definition above, every term of type ABT Σ vs u must be locally well-typed according to the signature Σ. I keep saying “locally” well-typed because we only actually keep track of local binding information. This is an intentional design decision. But only tracking local well-typedness does have some downsides.

So what are the downsides? Where could things go wrong? Given a locally-closed term (i.e., either Var x or f :$ e) any free variables that occur inside will not have their U-types tracked by Haskell’s type system. This introduces some room for the compiler writer to break the connection between the types of a variable’s binder and its use. That is, under the hood, every variable is represented by some unique identifier like an integer or a string. Integers and strings aren’t U-indexed Haskell types, thus it’s possible to construct a Variable u and a Variable v with the same unique identifier, even though u and v differ. We could then Bind the Variable u but Var the Variable v. In order to ensure global well-typedness we need to ensure this can’t happen.

One way is to keep track of global binding information, as we do in the paper presentation of languages. Unfortunately, to do this we’d need to teach Haskell’s typechecker about the structural rules of our language. Without a type-level implementation of sets/maps which respects all the axioms that sets/maps should, we’d be forced to do things like traverse our ASTs and rebuild them identically, but at different type indices. This is simply too hairy to stomach. Implementing the axioms ourselves is doubly so.

Or we could fake it, using unsafeCoerce to avoid the extraneous traversals or the complicated pattern matching on axioms. But doing this we’d erase all guarantees that adding global binding information has to offer.

A third approach, and the one we take in Hakaru, is to compartmentalize the places where variables can be constructed. The variable generation code must be part of our trusted code base, but unlike the unsafeCoerce approach we can keep all the TCB code together in one spot rather than spread out across the whole compiler.
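Here's a sketch of what that compartmentalization can look like; the module and names are mine, not Hakaru's actual code. The Variable constructor stays hidden behind the module boundary, so freshVar is the only way to mint a variable:

module Variable (Variable(), varID, freshVar) where

import Control.Monad.State (State, get, put)

-- The U-index is a phantom; the Int is the unique identifier.
newtype Variable (u :: U) = Variable Int

varID :: Variable u -> Int
varID (Variable i) = i

-- The only exported constructor: draw from a monotonically
-- increasing supply, so identifiers can never collide.
freshVar :: State Int (Variable u)
freshVar = do
    i <- get
    put (i + 1)
    return (Variable i)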

Stratifying our data types

The above definition of ABTs is a simplified version of what we actually use in Hakaru. For example, Hakaru has user-defined algebraic data types, so we also need case analysis on those data types. Alas, generic case analysis is not a generalized quantifier, thus we cannot implement it with (:$). We could consider just adding case analysis to the ABT definition, but then we’d start running into extensibility issues again. Instead, we can break the ABT type apart into two types: one for capturing variable uses and bindings, and the other for whatever syntax we can come up with. Thus,

data Syntax ∷ ([[U] × U] → U → ★) → ([U] → U → ★) → U → ★ where
    (:$)  ∷ !(σ vus u)
          → !(SArgs abt vus)
          → Syntax σ abt u

data ABT ∷ ([[U] × U] → U → ★) → [U] → U → ★ where
    Syn   ∷ !(Syntax σ (ABT σ) u)
          → ABT σ [] u

    Var   ∷ !(Variable v)
          → ABT σ [] v

    Bind  ∷ !(Variable v)
          → !(ABT σ vs u)
          → ABT σ (v : vs) u

Of course, since we’re going to be extending Syntax with all our language-specific details, there’s not a whole lot of benefit to parameterizing over σ. Thus, we can simplify the types considerably by just picking some concrete Σ to plug in for σ.

By breaking Syntax apart from ABT we can now extend our notion of syntax without worrying about the details of variable binding (which can be defined once and for all on ABT). But we could still run into extensibility issues. In particular, often we want to separate the fixed-point portion of recursive types from their generating functor so that we can do things like add annotations at every node in the recursive data type. A prime example of such annotations is keeping track of free variables, as in Neel’s original post. To allow this form of extensibility we need to break up the ABT type into two parts: the recursion, and the Syn/Var/Bind view of the ABT.

data ABT ∷ ([[U] × U] → U → ★) → [U] → U → ★ where
    Unview ∷ !(View σ (ABT σ) vs u) → ABT σ vs u

view ∷ ABT σ vs u → View σ (ABT σ) vs u
view (Unview e) = e

data View ∷ ([[U] × U] → U → ★) → ([U] → U → ★) → [U] → U → ★ where
    Syn   ∷ !(Syntax σ abt u)
          → View σ abt [] u

    Var   ∷ !(Variable v)
          → View σ abt [] v

    Bind  ∷ !(Variable v)
          → !(View σ abt vs u)
          → View σ abt (v : vs) u

Now, to allow arbitrary annotations we’ll replace the data type ABT with an equivalent type class. Each instance of the ABT class defines some sort of annotations, and we can use the view and unview methods to move between the instance and the concrete View type.
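A minimal sketch of that interface, assuming some concrete signature Sig has been plugged in for σ as suggested above (TrivialABT is the annotation-free instance; the names are mine):

class ABT (abt :: [U] -> U -> *) where
    unview :: View Sig abt vs u -> abt vs u
    view   :: abt vs u -> View Sig abt vs u

-- The simplest instance: no annotations, just the fixed point.
newtype TrivialABT vs u = TrivialABT (View Sig TrivialABT vs u)

instance ABT TrivialABT where
    unview              = TrivialABT
    view (TrivialABT e) = e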

There’s one last form of extensibility we may want to add. Using fixed point combinators gives us a way of describing complete trees. A different way of introducing recursion is with free monads. The free-monad combinator is just like the fixed-point combinator, except that we have an additional type parameter for metavariables and we have a data constructor for using those metavariables instead of requiring the recursion to ground out with a complete syntax tree. The reasons why this might be nice to do are beyond the scope of this post, but the point is that we might want to; so we need to split the ABT class into two parts: one for the recursion itself, and another for the annotations.
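For a feel of the free-monad variant, here's a sketch (all names mine, and again assuming a concrete Sig): View grows a parameter for metavariables and a constructor for using them, so the recursion can ground out in a metavariable rather than a complete tree.

data ViewM (meta :: [U] -> U -> *) (abt :: [U] -> U -> *)
        :: [U] -> U -> * where
    Meta  :: !(meta vs u)        -> ViewM meta abt vs u
    Syn   :: !(Syntax Sig abt u) -> ViewM meta abt '[] u
    Var   :: !(Variable v)       -> ViewM meta abt '[] v
    Bind  :: !(Variable v)
          -> !(ViewM meta abt vs u)
          -> ViewM meta abt (v ': vs) u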

In the end, we have a four-level type: the Syntax, the View, the annotations, and the recursion.

[1] In the accepted/current parlance, Hakaru is a “probabilistic programming language”; but I and a number of other folks working on such languages have become disaffected with that term of late, since it’s not entirely clear what should and should not count as a “probabilistic” PL. Over the course of a number of discussions on the topic, I’ve settled on “inferential” PL as describing what is (or, at least what I find) interesting about “probabilistic” PL. I’ve been meaning to write a post about the subject, and hopefully this footnote will remind me to do so.

[2] N.B., the indexing used in that package is what we get if we erase/compactify the universe U. That is: the erasure of U is a singleton set; the erasure of [U] is isomorphic to the Peano numbers; the erasure of [[U] × U] is isomorphic to a list of Peano numbers; etc.

[3] Though at one point I assume we have functions, (:→), just for the sake of an example.

[4] Ideally we’d be able to flatten this type to avoid all the overhead of the linked list implementation. In fact, the entire AST node of (:$) together with its SArgs should be flattened. These nodes have the same general layout as the heap objects in the STG machine: a record with a pointer to the data constructor (i.e., element of the signature) followed by an appropriate number of arguments; and so in principle we ought to be able to implement them directly with a single STG heap object.



I took the US Trans Survey a while back and had some feedback, though I forgot to include it in the survey itself. In particular, my comment is about the gender/identity portion of the survey (question 2.2). It's great that you include a variety of nonbinary identities (fa'afafine, mahu, genderqueer, genderfluid, two-spirit, third gender,...) but I noticed a severe bias in terms of the more binary options. You offer a variety of options for masculine-of-center folks (AG, butch, stud,...) but you offer zero options for feminine-of-center folks. This contributes to the ongoing problem of femme/feminine erasure in feminist and LGBT communities. I identify very strongly as femme, in fact it's one of the few labels I "identify" with— far moreso than I "identify" with trans, fwiw.

While I did write in under the "other" option, I worry that this only serves to contribute to the ongoing erasure and underrepresentation of feminine identities. Having worked on a number of these sorts of surveys, here's a short list of reasons why:

  • Feminine-of-center folks will be undercounted because many will not feel strongly enough to write in.
  • The presentation of specific choices for masculine-of-center identities helps to funnel people towards particular terms, improving their statistical significance; whereas the lack of explicit options for feminine-of-center folks will result in a larger diversity of terms, thus decreasing each term's likelihood of reaching significance.
  • Attempting to correct for the previous point by aggregating all feminine-of-center identities serves to erase the diversity of femme/feminine identities: "femme" forms no more a singular identity than AG/butch/stud/etc forms a singular identity. The distinctions between high femme, hard femme, tomboy femme, etc are just as important as the distinctions between varieties of nonbinary or masculine-of-center identities.

While it is too late now to correct the 2015 survey, I do hope you take this into consideration when choosing how to report your results and when designing future surveys.


One of the folks I've chatted with a bunch online and finally got to meet in-person this year was Gershom. Towards the end of the symposium, he mentioned the post I linked to last time about my pulling away from Haskell communities. But the way he phrased it really struck me. And I wanted to comment on that.

I guess the crux of it comes down to, what do we mean when we say "the Haskell community"? What I meant when writing that post over a year ago was the online community of Haskell learners/practitioners (i.e., the folks who dominate spaces like reddit). But what Gershom's comment suggested was the academic community of FP researchers who work on/with Haskell (i.e., the folks who dominate spaces like ICFP). These two communities have always been intertwined, but as Haskell gains popularity they feel more and more distinct.

FWIW, I do still feel welcome as part of the academic community. There are some things I could comment on, sure, but those have more to do (I think) with the general issue of being a woman in tech than they do with Haskell per se. Of course, this only underscores the concern of my original post. The free exchange between academics and practitioners was always a big part of what I've loved about the (general) Haskell community. Were it not for that exchange, the spark planted by Mark Jones would never have fanned into the flame of my current research. As that free exchange unravels, this pathway from interested novice to researcher is cut off. And that pathway is crucial for the vibrancy of the community, as well as for addressing those general issues of being a minority in tech.


Got back from Vancouver a couple days ago. The return flight was hell, but other than that the trip was pretty good. Got to meet a whole bunch of folks who read my blog (hi!), which was super cool. It always gives me warm fuzzies to know people are actually interested in my stuff. (You'd think I'd get used to it after a while, but no, evidently childhood scars still remain years after developing a solid sense of self-esteem. Go fig.) Of course, now I feel bad about not having written much of late. I have a few posts I've been meaning to write —mostly technical ones—, and I'm thinking I might should try to kick those out as a way to get back into the daily writing habit needed for finishing up my dissertation.

Since handing out my card at ICFP, I've been hacking around on my website. I'm still working on getting my publications/presentations to be properly formatted, and the timestamps in the footers on a couple pages are busted, but other than that I think it's in pretty good shape. I also took part in a keysigning party for the purpose of building a WoT as part of the developing story for how we securely deliver packages in Haskell. My key is up on Keybase as well as on my website (along with all my other contact info).

After a week of great talks, too much drinking, and too little sleep, I met up with a dear friend from undergrad. It's been far far too long since we've had a chance to hang out (alas, he couldn't make the wedding last year), but he's one of those friends you can just fall right back into. It's funny how much can change and yet stay the same. In one of our ambling walks I dragged him into a clothing store —something I never would've done pre-transition. He didn't seem to mind. He just went along like it's the sort of thing we've always done. And it felt to me like the sort of thing we could've always done. Recently he's been ref'ing for roller derby up in Victoria, and after talking about it I'm thinking I might try my hand at it. Sounds like a lot of fun. We have a team in Bloomington, and they're doing a training/recruiting thing, though alas I'll miss it since I'll be in Mountain View on the 14th. I'll see if I can't find some other way to get introduced.


In the spirit of Brent's post, I figure I'll make a public announcement that I'm in Vancouver all week attending HOPE, ICFP, and the Haskell Symposium. I love reconnecting with old friends, as well as meeting new folks. Even if I've met you at past ICFPs, feel free to re-introduce yourself as ...things have changed over the past few years. I know I've been pretty quiet of late on Haskell cafe as well as here, but word on the street is folks still recognize my name around places. So if you want to meet up, leave a comment with when/where to find you, or just look for the tall gal with the blue streak in her hair.

Unlike Brent, and unlike in years past, I might should note that I am looking to "advance my career". As of this fall, I am officially on the market for research and professorship positions. So if you're interested in having a linguistically-minded constructive mathematician help work on your problems, come say hi. For those not attending ICFP, you can check out my professional site; I need to fix my script for generating the publications page, but you can find a brief background/research statement there along with the other usuals like CV, recent projects, and classes taught.


These past few weeks I've been hacking up a storm. It's been fun, but I need to take a break. One of the big things I learned last fall is that I really need to take breaks. I was good about it all spring, but this summer I've been slipping into my old ways. Those workaholic ways. And when it gets that way, too often coding begins to feel like ripping out a piece of my soul. Or, not ripping. Purging. Whatever. In any case, not good. I much prefer the euphoric sorts of hacking sprees, where spilling my art out into the world feels more like a blessing, like gift giving, like spilling my heart does. So yeah. If it's not like that, then I need to take a break.

The last couple days I've been watching Sense8, which is amazing. It's been a long time since I've given anything five stars on Netflix. I've been watching some cool stuff, been giving lots of fours, but not fives. So yeah, total five star work. The Wachowskis are amazing. I never got all super into the Matrix like everyone else —I mean the first movie was fun and all— but yeah, between Jupiter Ascending and Sense8, they're just awesome. Should prolly see their rendition of Cloud Atlas; but I dunno if it could really live up to the book.

Speaking of breaks, and furies of activity: I'm flying out to Portland on Saturday, will be there for a week. Then, when I get back I'm going to WOLLIC/CoCoNat for a week. And then, towards the end of the following week I start the interview process at Google. Then there's a couple weeks of calm before flying off to ICFP. And after that, I need to dig into the dissertation again. Really, I should be working on it now, but like I said: other coding has been eating at my brain. Really need to get it just squared away and out the door. Can't wait to fly off to Mountain View or wherever else is next in life.


bytestring-lexing 0.5.0

The bytestring-lexing package offers extremely efficient bytestring parsers for some common lexemes: namely integral and fractional numbers. In addition, it provides efficient serializers for (some of) the formats it parses.

As of version 0.3.0, bytestring-lexing offers the best-in-show parsers for integral values. (According to the Warp web server's benchmark of parsing the Content-Length field of HTTP headers.) And as of this version (0.5.0) it offers (to my knowledge) the best-in-show parser for fractional/floating numbers.

Changes since 0.4.3 (2013-03-21)

I've completely overhauled the parsers for fractional numbers.

The old Data.ByteString.Lex.Double and Data.ByteString.Lex.Lazy.Double modules have been removed, as has their reliance on Alex as a build tool. I know some users were reluctant to use bytestring-lexing because of that dependency, and forked their own version of bytestring-lexing-0.3.0's integral parsers. This is no longer an issue, and those users are requested to switch over to using bytestring-lexing.

The old modules are replaced by the new Data.ByteString.Lex.Fractional module. This module provides two variants of the primary parsers. The readDecimal and readExponential functions are very simple and should suffice for most users' needs. The readDecimalLimited and readExponentialLimited functions are variants which take an argument specifying the desired precision limit (in decimal digits). With care, the limited-precision parsers can perform far more efficiently than the unlimited-precision parsers. Performance aside, they can also be used to intentionally restrict the precision of your program's inputs.
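For example, here's a quick usage sketch of the unlimited-precision parser; parseDouble is my own helper, not part of the package:

import qualified Data.ByteString.Char8          as BS8
import           Data.ByteString.Lex.Fractional (readExponential)

-- Parse an entire ByteString as a Double, rejecting trailing junk.
parseDouble :: BS8.ByteString -> Maybe Double
parseDouble bs = case readExponential bs of
    Just (x, rest) | BS8.null rest -> Just x
    _                              -> Nothing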


The Criterion output of the benchmark discussed below can be seen here. The main competitors we compare against are the previous version of bytestring-lexing (which already surpassed text and attoparsec/scientific) and bytestring-read, which was the previous best-in-show.

The unlimited-precision parsers provide 3.3× to 3.9× speedup over the readDouble function from bytestring-lexing-0.4.3, as well as being polymorphic over all Fractional values. For Float/Double: these functions have essentially the same performance as bytestring-read on reasonable inputs (1.07× to 0.89×), but for inputs which have far more precision than Float/Double can handle these functions are much slower than bytestring-read (0.30× 'speedup'). However, for Rational: these functions provide 1.26× to 1.96× speedup compared to bytestring-read.

The limited-precision parsers do even better, but require some care to use properly. For types with infinite precision (e.g., Rational) we can pass in an 'infinite' limit by passing the length of the input string plus one. For Rational: doing so provides 1.5× speedup over the unlimited-precision parsers (and 1.9× to 3× speedup over bytestring-read), because we can avoid intermediate renormalizations. Whether other unlimited precision types would see the same benefit remains an open question.

For types with inherently limited precision (e.g., Float/Double), we could either pass in an 'infinite' limit or we could pass in the actual inherent limit. For types with inherently limited precision, passing in an 'infinite' limit degrades performance compared to the unlimited-precision parsers (0.51× to 0.8× 'speedup'). Whereas, passing in the actual inherent limit gives 1.3× to 4.5× speedup over the unlimited-precision parsers. They also provide 1.2× to 1.4× speedup over bytestring-read; for a total of 5.1× to 14.4× speedup over bytestring-lexing-0.4.3!



I've been working on culling a bunch of my old email addresses and unifying my username across the internet. I've mentioned a number of these changes before, but every few months I run into folks who've missed the memo. So, for those who keep up with my blog or follow Haskell Planet, here's the latest rundown:

  • —is dead
  • —is dead
  • —will soon be dead
  • My Perl forwarding address lives—
  • My Haskell forwarding address lives—
  • My current work address is—
  • My personal email at — also lives
Code Hosting & Other User Names
  • My BitBucket username has changed from winterkoninkje to wrengr
  • My GitHub username has changed from winterkoninkje to wrengr
  • My IRC username has changed from koninkje to wrengr

I have a new academic/professional website:!

There are still a few unfinished areas (e.g., my publications page), but hopefully I'll be finished with them shortly. The new site is built with Hakyll instead of my old Google-Summer-of-Code static website generator. I'm still learning how to implement my old workflow in Hakyll, and if I get the chance I'll write a few posts on how I've set things up. Reading through other folks' posts on how they use Hakyll has been very helpful, and I'd like to give back to the community. I've already issued a pull request for adding two new combinators for defining template fields.

In the meantime, if you notice any broken links from my old blog posts, please let me know.


I recently got a copy of the Mass Effect trilogy. And playing through it, I've been thinking a lot lately about relationships in video games. Not just romances; but yes, the romances. Not too long ago Mattie Brice wrote a piece on romance in roleplaying games, which also fuels this whole train of thought.

My deep desire is to see games take relationships seriously.

Taking relationships seriously means taking queer relationships seriously. And I don't mean having that token same-sex option which falls into all the same tropes as the hetero options. And yet some games can't even work themselves up to this barest nodding glance in the vague direction of where queer relationships might lie (ME2 I'm side-eyeing you). I always play the same-sex options. Because I want to, sure, but in truth because they're typically less painful than the alternatives. And though I'm disappointed every time, I still secretly nourish that yearning hope that maybe this time they might actually be, even if only in faint moments, queer.

Taking queer relationships seriously means taking sexuality seriously. Means acknowledging that not everyone falls squarely under labels like "lesbian", "bi", or "straight". Where are those who fall between? The heteroflexible people who can't treat their non-straight relationships with the same committed seriousness, can't treat them as "real"? Where are the protagonists who fall for those cads only to get their hearts broken? Where are the people who "know" they're straight or gay, but nevertheless find themselves falling for someone outside those narrow confines? The straight man suddenly unabashed to throw his heart before another man, when he's the right man? The dykes suddenly questioning their entire lives, worrying about losing cred with the circle of friends they've built their lives up with? And where is the closet? Where are the queers too scared to express themselves, hiding behind heteronormative masques? Where are the lovers able to see through their façades, lovers safe enough or dangerous enough to break down the walls of silence? Heroes who open closet doors to expose a whole new life free from that secret fear? Villains who expose the lie, who beckon and seduce but leave vulnerable in the streets? And where are the queers who live in open closets, the kind who live mostly-straight lives but don't talk about what goes on beneath the sheets, the kind who will only flirt in solitary spaces and only then once they've discerned both your proclivities and your discretion?

The call for queer relationships is not a call for moar gayseks. Relationships are queer. The call for queer relationships is the call for real relationships. The kind of relationships that change you. People must learn to bend before they can learn to embrace. And sometimes bending means breaking. Queerness is not about sex; queerness is about certain complexities in what it is to be human, complexities that emerge once we become willing to take human lives and their stories seriously. Where do we find these stories of growth and loss and adaptation? We're willing to tell them and see them in books and in movies. The fact that they are so suspiciously lacking in games tells us we don't take games seriously. Games aren't a medium for telling stories. Games can't make us cry and scream, can't make us feel joy or loss. Games can't expose us to truth, can't change the ways we see the world, can't make us question long-held beliefs. Games can't break open our hearts and minds, can't be carried around within us, can't get under our skin, worm their way into thoughts, leaving painful burrows begging to be filled, by a way of living, by a faith, by more stories. Games can't, because we won't let them.

Taking relationships seriously means taking human lives seriously. Relationships are not built by accumulating points. It's not a matter of giving enough gifts, or the right gifts, or parroting the right lines at regular intervals. Connections are not made between people who agree entirely in all things. Connections are made through conflict: through shared suffering at the hands of external conflict, and through the delicate interpersonal conflict of establishing and relinquishing personal boundaries. Conflict exposes vulnerability, and when that exposure is intentional, when that vulnerability is respected and reciprocated, emotional connections are formed. Which is why we so often hate those who see us break down. Which is why the wrong gaffe or an unintentional insult can destroy a relationship. Which is why miscommunications matter so much.

Which is why human relationships are never binary. Jealousy is not a problem for couples, there is always a third, an intrusive third whose feet disrupt the lines we spent so long to draw. There can be no lingering looks nor wandering eyes without someone, something, to be held in those eyes. We love and hate those who remind us of who we've loved, of who we've been. Serial monogamy is not monogamous. Every lover we've ever taken is still within us; without them we would not be who we have become. Our new relationships are formed as much around the scars of past relationships as they are around the current participants. Who we allow ourselves to love depends as much on who we allow society to see us love as it does on whom we actually cherish. To elide the community within which a relationship forms is to elide the content and significance of that relationship. To say nothing of our elision of the plurality of relationships. Even in our monogamist culture we admit that romances exist alongside other intimacies: deep friendships formed around a shared survival of violence, abuse, or loss; familial bonds in cultures where family still means something; camaraderie in work or revolution. But the stories our games are allowed to tell elide these other intimacies as surely as they elide the possibility of being in faithful committed sexual/romantic relationships with more than one partner.

Taking relationships seriously means seeing them as a state of being, not as a destination. Game romances are (almost[1]) invariably fixated on consummation— either having sex for the first time, just before the last mission; or giving birth after having gotten married. This focus on consummation is profoundly sexist. It reiterates the fuck-'em-and-leave-'em ideology which prevents women and forbids men from forming healthy lasting relationships. It reinscribes the toxic ideology that intimacy can only mean sex, and that the only purpose of sex is procreation. Not only is this focus on consummation sexist, it does a profound disservice to the characters involved. The sexism not only objectifies real-world humans, denying them a rich inner life in which relationships can actually bear meaning, it also objectifies the characters, rendering them into caricatures, denying them the possibility of being well-rounded well-textured images capable of signifying the rich complexity of real-world relationships.

Imagine. In ME3 when Shepard starts having nightmares, imagine if rather than sitting on the edge of the bed trying to comfort herself in a dark and empty room, imagine if instead her lover were there to comfort her in those moments. Imagine how much more meaningful it would be to see the impact of those nightmares on their relationship, to see how those moments of intimacy help Shepard keep it together. Or if she doesn't have a lover at that point, then imagine how much more meaning there would be in that dark and empty room, in the isolation of being a hero, in the hollow silences of all the dead she's left behind. From a programming and scriptwriting perspective, adding the lover into these scenes would be trivial; and yet, this moment for enriching the story is overlooked because relationships are only about consummation, not about shared lives.

This is not simply a call for a less sexist society and for healthy attitudes towards relationships, this is a call for game-writers to stand up and embrace games as a valid literary medium. Just think of all the stories games deny themselves by refusing to address the full spectrum of sexual identities, by refusing to address relationships that change the participants, by refusing to address the many different forms of intimacy or the plurality of our relationships, by refusing to consider the communities in which these relationships are formed, by refusing to explore everything that happens after the first time they have sex. Even in our sexist society we already tell these stories in other media, so why not in games?

[1] With a few notable exceptions like the Persona series and Catherine.


The past couple weeks I've been teaching about recursive types and their encodings in B522. Here's a short annotated bibliography for followup reading:

  • For a basic intro to recursive types, and for the set-theoretic metatheory: see section IV, chapters 20 and 21.
    • Benjamin C. Pierce (2002) Types and Programming Languages. MIT Press.
  • The proof of logical inconsistency and non-termination is "well-known". For every type τ we can define a fixed-point combinator and use that to exhibit an inhabitant of the type (see the Haskell sketch after this list):
    • fixτ = λf:(τ→τ). let e = λx:(μα.α→τ). f (unroll x x) in e (roll(μα.α→τ) e)
    • ⊥τ = fixτ (λx:τ. x)
  • A category-theoretic proof that having fixed-points causes inconsistency
  • The proof of Turing-completeness is "well-known". Here's a translation from the untyped λ-calculus to STLC with fixed-points:
    • (x)* = x
    • (λx. e)* = roll(μα.α→α) (λx:(μα.α→α). e*)
    • (f e)* = unroll (f*) (e*)
  • Knaster–Tarski (1955): For any monotone function, f, (a) the least fixed-point of f is the intersection of all f-closed sets, and (b) the greatest fixed-point of f is the union of all f-consistent sets.
  • For a quick introduction to category theory, a good place to start is:
    • Benjamin C. Pierce (1991) Basic Category Theory for Computer Scientists. MIT Press.
  • For a more thorough introduction to category theory, consider:
  • The Boehm–Berarducci encoding
  • Church/Boehm–Berarducci encodings are not canonical
    • [citation needed]
  • Surjective pairing cannot be encoded in STLC (i.e., the implicational fragment of intuitionistic propositional logic): see p.155
    • Morten H. Sørensen and Paweł Urzyczyn (2006) Lectures on the Curry–Howard isomorphism. Studies in Logic and the Foundations of Mathematics, v.149.
  • However, adding it is a conservative extension
  • Compiling data types with Scott encodings
  • For more on the difference between Scott and Mogensen–Scott encodings:
  • Parigot encodings
    • M. Parigot (1988) Programming with proofs: A second-order type theory. ESOP, LNCS 300, pp.145–159. Springer.
  • For more on catamorphisms, anamorphisms, paramorphisms, and apomorphisms
  • build/foldr list fusion
    • Andy Gill, John Launchbury, and Simon Peyton Jones (1993) A short cut to deforestation. In Proc. Functional Programming Languages and Computer Architecture, pp.223–232.
    • Many more links at the bottom of this page
  • For another general introduction along the lines of what we covered in class
  • "Iterators" vs "recursors" in Heyting arithmetic and Gödel's System T: see ch.10:
    • Morten H. Sørensen and Paweł Urzyczyn (2006) Lectures on the Curry–Howard isomorphism. Studies in Logic and the Foundations of Mathematics, v.149.
  • There are a great many more papers by Tarmo Uustalu, Varmo Vene, Ralf Hinze, and Jeremy Gibbons on all this stuff; just google for it.
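As promised above, here's the fixed-point combinator rendered as (runnable, if divergent) Haskell; a sketch assuming a Roll/unroll newtype to play the role of μα.α→τ:

newtype Mu t = Roll { unroll :: Mu t -> t }

-- fix_τ = λf. let e = λx. f (unroll x x) in e (roll e)
fix' :: (t -> t) -> t
fix' f = let e x = f (unroll x x) in e (Roll e)

-- ⊥_τ = fix_τ (λx. x); forcing this loops forever.
bottom :: t
bottom = fix' id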

(Originally published as a comment to a friend, desiring to change the world and despairing to complain)

The feeling of not being allowed to complain is one of the primary ways that abusive settings perpetuate themselves. Complaints are a symptom that things are wrong, and implicitly a demand for things to change, so toxic communities do their best to indoctrinate people into the belief that "complainers" are bad people, that complaint is "a waste of time", that complaint is a sign of "weakness", that one's complaints are "frivolous", that one does not "deserve" to complain, that one does not have the "right" or the "authority" to complain, and so forth. It is a self-reinforcing process since to call it into question is to subject oneself to the accusations themselves.

To eschew complaint and wait for things to improve is to buy into the very system that estranges you. There is no glory in posing as the stoic hero; you cannot help others when you are reeling from your own wounds. Our glorification of solitary heroes is but another way of keeping people silent. There are no solitary heroes, there never have been; there are only communities of support. History chooses its heroes out of those communities in order to erase the community, to undermine the very support it gives. There are no "great men" in history, there are only great communities and great movements and the handful of names that get remembered.

Know this: you are always allowed to complain. You are always allowed to exhibit your humanity, your pain, your unwillingness to suffer. That others may be worse off does not negate your own suffering. That others stay quiet —or have their voices silenced— does not negate the pain.


I don't think I ever mentioned it here but, last semester I took a much-needed sabbatical. The main thing was to take a break from all the pressures of work and grad school and get back into a healthy headspace. Along the way I ended up pretty much dropping off the internet entirely. So if you've been missing me from various mailing lists and online communities, that's why. I'm back now. If you've tried getting in touch by email, irc, etc, and don't hear from me in the next few weeks, feel free to ping me again.

This semester I'm teaching foundations of programming language theory with Jeremy Siek, and work on the dissertation continues apace. Over the break I had a few breakthrough moments thinking about the type system for chiastic lambda-calculi— which should help to clean up the normal forms for terms, as well as making room for extending the theory to include eta-conversion. Once the dust settles a bit I'll write some posts about it, as well as continuing the unification-fd tutorial I started last month.


A while back I released the unification-fd library, which gives a generic implementation of first-order unification of non-cyclic terms. I've given a few talks on how the library is implemented and what optimizations it performs, but that's not the topic for today. Today, I'm going to talk about how to use it.

Unification is a widely useful operation and, consequently, comes in many different flavors. The version currently supported by the library is the sort used by logic programming languages like Prolog, Curry, Dyna, and MiniKanren; which is the same sort that's used for unification-based type inference algorithms like Hindley–Damas–Milner. Of these two examples, the logic programming example is the simpler one to discuss— at least for folks who've used a language like Prolog before. So let's start from there.

Caveat Emptor: This post is something of a stream of consciousness. I've gotten a few requests for tutorials on how to use the library, but the requests weren't terribly specific about what problems people've had or what's been difficult to figure out. So I'm shooting in the dark as far as what folks need and how much background they have. I'm going to assume you're familiar with Prolog and the basics of what unification is and does.

Preemptive apology: I started writing this post months and months (and months) ago, but unintentionally dropped it after running into a certain issue and then getting distracted and moving onto other things. Actually, this happened at least twice. I'm terribly sorry about that. So, apologies for not tackling the disjunction issue in this post. I'll come back to it later, but figured this post really needs to get out the door already.

Logic Terms

A term, in Prolog, is just a fancy name for a value of some algebraic data type. In most variants of Prolog there's no explicit definition of the ADT, no restriction on what the constructors are, and no type checking to ensure that subterms have a particular shape. That is, Prolog is what's called a single-sorted logic; in other words, Prolog is an untyped/unityped language. With unification-fd we can implement multi-sorted (aka typed) logics, but for this tutorial we're going to stick with Prolog's single-sorted approach.

Opening up Control.Unification we'll see a handful of types and type classes, followed by a bunch of operators. The UTerm data type captures the recursive structure of logic terms. (UTerm is the free monad, if you're familiar with that terminology.) That is, given some functor t which describes the constructors of our logic terms, and some type v which describes our logic variables, the type UTerm t v is the type of logic terms: trees with multiple layers of t structure and leaves of type v. For our single-sorted logic, here's an implementation of t:

data T a = T String [a]

The String gives the name of the term constructor, and the list gives the ordered sequence of subterms. Thus, the Prolog term foo(bar,baz(X)) would be implemented as UTerm$T "foo" [UTerm$T "bar" [], UTerm$T "baz" [UVar x]]. If we're going to be building these terms directly, then we probably want to define some smart constructors to reduce the syntactic noise:

foo x y = UTerm$T "foo" [x,y]
bar     = UTerm$T "bar" []
baz x   = UTerm$T "baz" [x]

Now, we can implement the Prolog term as foo bar (baz x). If you prefer a more Prolog-like syntax, you can use uncurried definitions for smart constructors that take more than one argument.
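For reference, UTerm itself is just the free monad the library builds for us:

data UTerm t v
    = UVar  !v               -- a logic variable
    | UTerm !(t (UTerm t v)) -- one layer of term structure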


In order to use our T data type with the rest of the API, we'll need to give a Unifiable instance for it. Before we do that we'll have to give Functor, Foldable, and Traversable instances. These are straightforward and can be automatically derived with the appropriate language pragmas.
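Concretely, something like:

{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}

data T a = T String [a]
    deriving (Show, Functor, Foldable, Traversable)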

The Unifiable class gives one step of the unification process. Just as we only need to specify one level of the ADT (i.e., T) and then we can use the library's UTerm to generate the recursive ADT, so too we only need to specify one level of the unification (i.e., zipMatch) and then we can use the library's operators to perform the recursive unification, subsumption, etc.

The zipMatch function takes two arguments of type t a. The abstract t will be our concrete T type. The abstract a is polymorphic, which ensures that we can't mess around with more than one level of the term at once. If we abandon that guarantee, then you can think of it as if a is UTerm T v. Thus, t a means T (UTerm T v); and T (UTerm T v) is essentially the type UTerm T v with the added guarantee that the values aren't in fact variables. Thus, the arguments to zipMatch are non-variable terms.

The zipMatch method has the rather complicated return type: Maybe (t (Either a (a,a))). Let's unpack this a bit by thinking about how unification works. When we try to unify two terms, first we look at their head constructors. If the constructors are different, then the terms aren't unifiable, so we return Nothing to indicate that unification has failed. Otherwise, the constructors match, so we have to recursively unify their subterms. Since the T structures of the two terms match, we can return Just t0 where t0 has the same T structure as both input terms. Where we still have to recursively unify subterms, we fill t0 with Right(l,r) values where l is a subterm of the left argument to zipMatch and r is the corresponding subterm of the right argument. Thus, zipMatch is a generalized zipping function for combining the shared structure and pairing up substructures. And now, the implementation:

instance Unifiable T where
    zipMatch (T m ls) (T n rs)
        | m /= n    = Nothing
        | otherwise =
            T n <$> pairWith (\l r -> Right(l,r)) ls rs

Where list-extras:Data.List.Extras.Pair.pairWith is a version of zip which returns Nothing if the lists have different lengths. So, if the names m and n match, and if the two arguments have the same number of subterms, then we pair those subterms off in order; otherwise, either the names or the lengths don't match, so we return Nothing.
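If you'd rather not depend on list-extras, an equivalent definition (my own transcription) is short:

-- Like zipWith, but fails when the lists have different lengths.
pairWith :: (a -> b -> c) -> [a] -> [b] -> Maybe [c]
pairWith _ []     []     = Just []
pairWith f (x:xs) (y:ys) = (f x y :) <$> pairWith f xs ys
pairWith _ _      _      = Nothing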

Feature Structures

For the T example, we don't need to worry about the Left option. The reason it's there is to support feature structures and other sparse representations of terms. That is, consider the following type:

import Data.Map (Map, unionWith)

newtype FS k a = FS (Map k a)

Using this type, our logic terms are sets of key–subterm pairs. When unifying maps like these, what do we do if one argument has a binding for a particular key but the other argument does not? In the T example we assumed that subterms which couldn't be paired together (because the lists were different lengths) meant the unification must fail. But for FS it makes more sense to assume that terms which can't be paired up automatically succeed! That is, we'd like to assume that all the keys which are not explicitly present in the Map k a are implicitly present and each one is bound to a unique logic variable. Since the unique logic variables are implicit, there's no need to actually keep track of them, we'll just implicitly unify them with the subterm that can't be paired off.

This may make more sense if you see the Unifiable instance:

instance (Ord k) => Unifiable (FS k) where
    zipMatch (FS ls) (FS rs) =
        Just . FS $
            unionWith (\(Left l) (Left r) -> Right(l,r))
                (fmap Left ls)
                (fmap Left rs)

We start off by mapping Left over both the ls and the rs. We then call unionWith to pair things up. For any given key, if both ls and rs specify a subterm, then these subterms will be paired up as Right(l,r). If we have extra subterms from either ls or rs, however, then we keep them around as Left l or Left r. Thus, the Unifiable instance for FS performs a union of the FS structure, whereas the instance for T performs an intersection of T structure.

The Left option can be used in any situation where you can immediately resolve the unification of subterms, whereas the Right option says you still have work to do.[1]

Logic Variables

The library ships with two implementations of logic variables. The IntVar implementation uses Int as the names of variables, and uses an IntMap to keep track of the environment. The STVar implementation uses STRefs, so we can use actual mutation for binding logic variables, rather than keeping an explicit environment around. Of course, mutation complicates things, so the two implementations have different pros and cons.

Performing unification has the side effect of binding logic variables to terms. Thus, we'll want to use a monad in order to keep track of these effects. The BindingMonad type class provides the definition of what we need from our ambient monad. In particular, we need to be able to generate fresh logic variables, to bind logic variables, and to lookup what our logic variables are bound to. The library provides the necessary instances for both IntVar and STVar.

You can, of course, provide your own implementations of Variable and BindingMonad. However, doing so is beyond the scope of the current tutorial. For simplicity, we'll use the IntVar implementation below.

Example Programs

When embedding Prolog programs into Haskell, the main operators we want to consider are those in the section titled "Operations on two terms". These are structural equality (i.e., equality modulo substitution), structural equivalence (i.e., structural equality modulo alpha-variance), unification, and subsumption.

Consider the following Horn clause in Prolog:

example1(X,Y,Z) :- X = Y, Y = Z.

To implement this in Haskell we want a function which takes in three arguments, unifies the first two, and then unifies the second two. Thus,[2]

example1 x y z = do
    x =:= y
    y =:= z

To run this program we'd use one of the functions runIntBindingT, evalIntBindingT, or execIntBindingT, depending on whether we care about the binding state, the resulting logic term, or both. Of course, since the unifications may fail, we also need to run the underlying error monad, using something like runErrorT[3],[4]. And since these are both monad transformers, we'll need to use runIdentity or the like in order to run the base monad. Thus, the functions to execute the entire monad stack will look like:

-- Aliases to simplify our type signatures. N.B., the
-- signatures are not actually required to get things
-- to typecheck.
type PrologTerm           = UTerm T IntVar 
type PrologFailure        = UnificationFailure T IntVar
type PrologBindingState   = IntBindingState T

-- N.B., the @FallibleBindingMonad@ isn't yet a monad
-- for Prolog because it doesn't support backtracking.
type FallibleBindingMonad =
    ErrorT PrologFailure (IntBindingT T Identity)

-- N.B., this definition runs into certain issues.
type PrologMonad =
    ErrorT PrologFailure (IntBindingT T Logic)

runFBM
    :: FallibleBindingMonad a
    -> (Either PrologFailure a, PrologBindingState)
runFBM = runIdentity . runIntBindingT . runErrorT

Here are some more examples:

-- A helper function to reduce boilerplate. First we get
-- a free variable, then we embed it into @PrologTerm@,
-- and then we embed it into some error monad (for
-- capturing unification failures).
getFreeVar = lift (UVar <$> freeVar)

-- example2(X,Z) :- X = Y, Y = Z.
example2 x z = do
    y <- getFreeVar
    x =:= y
    y =:= z

-- example3(X,Z) :- example1(X,Y,Z).
example3 x z = do
    y <- getFreeVar
    example1 x y z

-- example4(X) :- X = bar; X = backtrack.
-- (atom is a helper for nullary constructors.)
atom n     = UTerm $ T n []
example4 x = (x =:= bar) <|> (x =:= atom "backtrack")

The complete code for this post can be found here online, or at ./test/tutorial/tutorial1.hs in the Darcs repo. Notably, there are some complications about the semantics of example4; it doesn't mean what you think it should mean. We'll tackle that problem and fix it later on in the tutorial series (in part 4 or thereabouts).

Term Factoring and Clause Resolution Automata (CRAs)

Note that for the above examples, the Haskell functions only execute the right-hand side of the Horn clause. In Prolog itself, there's also a process of searching through all the Horn clauses in a program and deciding which one to execute next. A naive way to implement that search process would be to have a list of all the Horn clauses and walk through it, trying to unify the goal with the left-hand side of each clause and executing the right-hand side if it matches. A more efficient way would be to compile all the left-hand sides into a single automaton, allowing us to match the goal against all of them at once. (The idea here is similar to compiling a bunch of strings together into a trie or regex.)

Constructing optimal CRAs is NP-complete in general, though it's feasible if we have an arbitrary ordering of clauses (e.g., Prolog's top–down order for trying each clause). The unification-fd library does not implement any support for CRAs at present, though it's something I'd like to add in the future. For more information on this topic, see Dawson et al. (1995) Optimizing Clause Resolution: Beyond Unification Factoring and Dawson et al. (1996) Principles and Practice of Unification Factoring.

Other operators

In addition to unification itself, it's often helpful to have various other operators on hand.

One such operator is the subsumption operator. Whereas unification looks for a most-general substitution which when applied to both arguments yields terms which are structurally equal (i.e., l =:= r computes the most general s such that s l === s r), subsumption applies the substitution to only one side. That is, l subsumes r just in case r is a substitution instance of l (i.e., there exists a substitution s such that s l === r). The symbolic name (<:=) comes from the fact that when l subsumes r we also say that l is less defined[5] than r. Subsumption shows up in cases where we have to hold r fixed for some reason, such as when implementing polymorphism or subtyping.

Other operators work on just one term, such as determining the free variables of a term, explicitly applying the ambient substitution to obtain a pure term, or cloning a term to make a copy where all the free variables have been renamed to fresh variables. These sorts of operators aren't used very often in logic programming itself, but are crucial for implementing logic programming languages.
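The first two of those are exported by Control.Unification as getFreeVars and applyBindings. For instance, here's a small sketch on top of the FallibleBindingMonad from earlier (resolveAndReport is my own name): resolve a term against the ambient substitution, then report which variables remain free in the result.

resolveAndReport
    :: PrologTerm
    -> FallibleBindingMonad (PrologTerm, [IntVar])
resolveAndReport t = do
    t' <- applyBindings t
    vs <- lift (getFreeVars t')
    return (t', vs)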


Hopefully that gives a quick idea of how the library's API is set up. Next time I'll walk through an implementation of Hindley–Damas–Milner type inference, and then higher-ranked polymorphism à la Peyton Jones et al. (2011) Practical type inference for arbitrary-rank types. After that, I'll discuss the complications about backtracking choice I noticed when writing this post, and walk through how to fix them. If there's still interest after that, I can get into some of the guts of the library's implementation— like ranked path compression, maximal structure sharing, and so on.

If you have any particular questions you'd like me to address, drop me a line.

[1] Older versions of the library used the type zipMatch :: forall a b. t a -> t b -> Maybe (t (a,b)) in order to ensure that we did in fact properly pair up subterms from the two arguments. Unfortunately I had to relax that guarantee in order to add support for feature structures.

[2] N.B., a more efficient implementation is:

example1' x y z = do
    y' <- x =:= y
    y' =:= z

The unification operator returns a new term which guarantees maximal structure sharing with both of its arguments. The implementation of unification makes use of observable structure sharing, so by capturing y' and using it in lieu of y, the subsequent unifications can avoid redundant work.

[3] The ErrorT transformer was deprecated by the transformers package, though it still works for this tutorial. Unfortunately, the preferred ExceptT does not work, since UnificationFailure doesn't have a Monoid instance as of unification-fd-0.9.0. The definition of UnificationFailure already contains a hack to get it to work with ErrorT, but future versions of the library will remove that hack and will require users to specify their own monoid for combining errors. The First monoid captures the current behavior, though one may prefer other monoids, such as a monoid that gives a trace of the full execution path, or witnesses for all the backtracks, etc.
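
For instance (a hypothetical sketch of mine, not something the library provides): since ExceptT's Alternative instance requires a Monoid constraint on the error type, one could recover the current first-error behavior with a wrapper along these lines:

-- Keep only the first error when errors are combined, mirroring the
-- behavior of the First monoid mentioned above.
newtype FirstError e = FirstError (Maybe e)

instance Semigroup (FirstError e) where
    l@(FirstError (Just _)) <> _ = l
    FirstError Nothing      <> r = r

instance Monoid (FirstError e) where
    mempty = FirstError Nothing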

[4] To be honest, I don't entirely recall why I had the error monad explicitly separated out as a monad transformer over the binding monad, rather than allowing these two layers to be combined. Since it's so awkward, I'm sure there was some reason behind it; I just failed to make note of what that reason was. If there turns out to be no decent reason for it, future versions of the library may remove this fine-grained distinction.

[5] The symbolic name for subsumption is chosen to reflect the meaning of more/less defined (rather than more/less grounded) so that the subsumption ordering coincides with the domain ordering (think of logic variables as being bottom). This is the standard direction for looking at subsumption; though, of course, we could always consider the dual ordering instead.

winterkoninkje: shadowcrane (clean) (Default)

The past few weeks I've been sorely addicted to Destiny, Bungie's new(ish) MMO-FPS. Every other MMO I've seen falls into the action-RPG genre. So, in being an FPS, Destiny has the potential to be a watershed moment for massively multiplayer games. In some ways, it lives up to these possibilities; but in some ways, it falls far short.

First Run

I'm a sucker for good storylines; that's why I love RPGs so much. So my formative experience with Destiny was running through the single-player plot levels, though I've probably spent more time on the post-game MMO aspects. I played the whole thing on hard/heroic, which will take you up to the soft level cap with a minimum of grinding. The initial setup for the world is pretty interesting. In particular, the idea of playing in a post-human universe is pretty cool, and offers a nice counterpoint to the transhuman universes which are starting to become popular in the tabletop RPG community. The presence of post-human technologies also helps some system details make sense in-game. For example, using reprogrammable matter as money makes it natural that you should be able to break items down into money without a vendor.

Many aspects of the universe are reminiscent of Bungie's other FPS franchises: Marathon and Halo. There's a Covenant-like race (individually strong, with shielded elites). There's a Flood-like race (though they're less zombie-like). There are warmind AIs. There are murderous robots (though less entertaining than the Monitors). Introducing a new race for each planet gets a bit stale, especially since every planet has two races fighting among themselves; but it's not too terrible.

The storyline itself is decent, though nothing amazing or unique. There are no major twists (e.g., the introduction of the Flood in the first Halo), but there are a bunch of minor ones to hold your interest and to keep things open for the expansions. The final boss fight is well done, both challenging and interesting. Though I'm not entirely convinced the Vex pose a greater threat than the other races; certainly not enough to make them the first Big Bad. It'd've made more sense to have had the first major villain be Fallen so the heroes can reclaim a chunk of Earth, and then make the Vex the main villain of an expansion.

An FPS and an MMO

As a hybridization of the FPS genre and the action-RPG MMO genre, Destiny works surprisingly well. You can definitely notice the difference from leveling up or fighting higher level enemies. Just as in other MMOs there's the exponential power curve which makes it impossible to confront enemies who are too many levels above you, and lets you ignore enemies too many levels below you— whether you like that sort of thing is up to you, but it definitely gives the MMO feel in ways I never would've expected of an FPS.

The upgrading of equipment fits the FPS genre surprisingly well. There are even some interesting tradeoffs. What is just a single gun in traditional FPSes (e.g., a hand cannon or a sniper rifle) is a class of weapons in Destiny. And while all guns in a particular class behave basically the same way, they differ significantly in terms of recoil, rate of fire, magazine size, etc. Thus, whenever you pick up a more powerful gun, there's always the question of whether the extra damage is worth the change in how it feels. In my first run through I often stuck with an old gun that felt better, only upgrading when it was a few levels behind or when the new gun had a similar or better feel.

Destiny has six character classes, three of which you can choose from the outset, the other three being subclasses available later on. Unlike traditional MMOs, the classes don't really fall into the standard team roles of tank, dps, healer, buff/debuff, etc. Like an FPS, everyone on your team is just someone on your team. But each of the classes does come with different types of grenades and different special powers. So the classes play rather differently, even if they don't contribute to a team dynamic in the way you're used to from MMOs.

Once you reach the soft level cap (i.e., as far as XP will take you), you unlock all the various ways to play the game as a true MMO: daily and weekly challenges, cooperative strikes, raids, etc. From there you work towards the hard level cap (i.e., as far as equipment will take you). Either reaching the soft level cap or beating the single-player storyline (I'm not sure which) also unlocks higher level hard/heroic versions of the storyline levels. All in all, the MMO content is as addictive and entertaining as other MMOs. However, there's not a lot of content there. After a few weeks of running the same half-dozen strikes over and over, you know them inside and out. Perhaps this is just because Destiny is so new, whereas other MMOs I've played had been around much longer and so had more time to accrete expansions. Only time will tell.

The inevitable comparison

Let's get this out of the way: I don't like FPS games. The only FPS I've really enjoyed is Halo, another Bungie offering. What I loved about Halo —the first one especially— is the way that it revolutionized the FPS genre. Instead of being a walking arsenal carrying fifty different guns and going through a complex menu to switch between them, Halo came up with a brilliant innovation: you have a gun and another gun, that's it. Instead of classifying grenades as a type of gun (going through that same complex menu to switch to them and back, rendering them useless in most games), Halo considered grenades integral and gave them their own trigger. Not to mention the vehicles: (a) they had them, (b) they had unique and interesting offerings like the aerial Banshee and the multiplayer Warthog.

In Destiny, rather than two gun slots there are three, but each class of weapons is restricted to fitting only one of them. This can be annoying —in the beginning I'd've loved to have used a hand cannon as a secondary weapon— but it's still simple enough that you can switch freely in the middle of a firefight. However, Destiny nerfs many of the traditional FPS guns.

Shotguns have too wide of a choke to get all the pellets into one target, and not enough pellets to be effective against multiple targets. Also, per FPS tradition, they're only usable as melee weapons— something Halo nicely got away from by making them short-range rather than melee-range weapons. But that's fine, I just won't use shotguns, whatevs, no big loss.

Sniper rifles are an all-time favorite of mine; I'm much more interested in getting a headshot from a mile away than in riddling people with bullets up close. The sniper rifle fills that role, but there are very few scenes with enough distance to make it worthwhile. Besides, there's the scout rifle: a primary weapon which acts as a light sniper rifle. For all the really excellent sniping scenes, a scout rifle has enough range to make the shot if you have the aim. So sniper rifles are a bust (leaving the fusion rifle as far and away the best option for the secondary weapon slot), though this is heartily made up for by the presence of the scout rifle, which operates much like the beloved pistol in the first Halo.

But the most annoying is how they nerfed rocket launchers. Traditionally rocket launchers have a handful of uses: (i) to thin a clump of enemies à la grenades, (ii) to take out vehicles, (iii) to take out or severely wound strong enemies, (iv) PvP. The first use is generally thought of as a waste, given that we have grenades. (But Destiny's rocket launchers do fill that role nicely.) And I don't like PvP, which is no doubt a major component of why I typically dislike FPSes. (Though Destiny's rocket launchers also fill the PvP role nicely.) So that leaves the main roles: vehicles and strong enemies. Alas, there are only two spots where you encounter enemy vehicles: a few Ghost-like speeders in the middle of the first moon mission, which you can just avoid by using your own (gunless) speeder to get to the next section; and a tank miniboss in one of the strikes. Moreover, against any of the major bosses, you can get far better DPS with the scout rifle (and no doubt any of the other primary weapons) or a fusion rifle (a secondary weapon), making the rocket launcher (one of the two heavy weapons) a mildly entertaining waste of time on bosses. I got a legendary rocket launcher in a drop fairly early on so I've used it a fair deal and discovered one or two niche uses (against clusters of Phalanxes, and as a well-targeted grenade against drop ships), but otherwise I stick to my (non-legendary) LMGs.

Nestled in that diatribe against the rocket launcher is what I think is a greater loss compared to the Halo franchise: the lack of vehicles. After playing through the first Halo which had a number of excellently crafted vehicle levels, and the second Halo which had vehicles all over the place, the relegation of vehicles to a few throwaway moments in Destiny feels like a major step back. Destiny deserves to be far greater than an MMO Halo. The Halo franchise is well and thoroughly played out, and players want something new and different. But nerfing the classic FPS weapons and all but eliminating vehicles doesn't seem necessary just to distance the Destiny franchise from Halo.

Missed Opportunities

It'd be cool if the reprogrammable matter ("glimmer") idea was played up a bit. In the early game, glimmer can be useful for buying up your equipment if you're not getting good drops. But by the mid-game, there's nothing worthwhile for sale, and you have more than enough glimmer to meet your needs for leveling up equipment. By the end-game it's easy to hit the glimmer cap, especially if you've been using the items that make all enemies you kill drop a bit of glimmer. But the idea of glimmer could be so much more interesting than just money. Imagine:

Instead of relying on ammo drops at all, you could fabricate all your ammo, making some weapons more economical than others (in practice instead of just in the flavor text). Ammo would take a while to fabricate, but you could have a small pre-fabricated stockpile on hand, much as the current/standard ammo system. Tuned to be fabricated at the right rate, this would make it so you either need to take some downtime between stages or else have to buy ammo off others.

Rather than having a long recharge cycle for grenades, you could treat them more like the ammo suggested above. Instead of having only one at a time and saving it up for the right moment, at the end of the charge cycle you've fabricated a grenade which you can keep in a small stockpile. This way you can use a few at a time, or use them more frequently, but at a cost. You could even introduce new grenade types which recharge faster but cost more, recharge faster but are weaker, and so on.

Instead of buying or finding equipment, you could buy, find, or invent(!) blueprints which allow you to convert glimmer into equipment— as often as you like! With an exchange system for trading items with other players, this could make for some very interesting commercial models. When you've found or invented a new blueprint, do you sell people the blueprint itself, the rights to use the blueprint some number of times, or do you sell them the product? Some players could be dedicated to inventing new equipment, focusing on particular sorts of armor or weapons. Or, rather than inventing items wholesale, perhaps the art is in figuring out nice combinations of level-up features to put on your equipment.

Perhaps the fabrication units are too large or delicate to carry around with you, and so they have to remain on your jumpship. Now, instead of only changing the skin of your ship whenever you get bored with the old one, you could get new ships with different manufacturing capabilities.

The essential core of any MMO is, ultimately, the economy. There's only so much you can level up, and only so slow you can make leveling, so the way you indefinitely extend the length of a game is by adding in various farming tasks: whether that's collecting raw materials, converting those materials into items, or whatever. But all these farming jobs generate products, and there are only so many products of a given sort that a single player can use. Thus, it's imperative to have some sort of exchange system so players can buy and sell items with one another. By allowing trade between players, the costs of items will naturally adjust based on their scarcity. In addition, the auction house will usually take their cut, which helps to siphon currency out of the system.

Alas, Destiny lacks both a trading house and any sort of item fabrication system. Before the patches to make way for the first expansion, you could trade collected raw materials for Crucible points— which was great! It siphoned the materials out of the system, it gave you a farming game, and it gave players like me who hate PvP a way to gain the Crucible points necessary for buying legendary equipment. Unfortunately, the first patch for the expansion (prior to the expansion itself) reversed this: you could only spend Crucible points to buy raw materials. By this point I'd gathered far more raw materials than I could ever use up —via opportunistic collection alone, not even by farming—, so there's no reason to ever buy them. And now I have this huge pile of raw materials I can neither use nor sell, yet I'm forced to PvP for Crucible points.

Another unfortunate departure from traditional MMOs is the reliance on voice chat instead of textual messaging. Technologically it makes sense: Destiny is only available on consoles, so the lack of a keyboard makes typing difficult. But women get a lot of shit for being gamers. And both the FPS and MMO communities are especially renowned for their misogynistic behavior. It'd be nice if I could make friends for going on raids and such without opening myself up for that sort of abuse. When I do strikes with a random team, I often end up making by far the most kills. Whenever a teammate falls, I do my best to rush in and revive them. I switch between crowd control, taking out VIPs, or maximizing dps, depending on what the team needs and where we're falling short. Traits like these are looked up to in (male) players, so I've managed to make a few friends on merit alone; but if people knew I'm a woman, how differently would they interpret these traits? Do I revive teammates because of my "motherly instinct" or because I'm "flirting"? Is my kill ratio "compensating" or because I "have no life"? Do I take on crowd control because I "can't handle" the boss? When I dps the boss instead of doing crowd control am I "focusing on the wrong thing"? When the inevitable death happens, is it "because girls can't play"? Men never have to deal with these accusations, but women are always vulnerable to them if we make our gender known. By relying on voice chat, Destiny makes it difficult for women to form the friendships necessary to make MMOs fun and to tackle the hardest challenges.

End Game

A few days ago the first expansion for Destiny came out. At first I was excited since I'd started to get bored by the repetition of the content from the first/main game. But then the patches started coming out. The first patch introduced a bug whereby precision kills no longer register. You still get the bonus damage and death animations, but you no longer get better drops from headshots. Hence, I went from getting almost exclusively blue/rare drops (my precision kill rate is over 60%), to only ever getting green/uncommon drops— which are utterly useless at my level, even for the raw materials you can break them down into. This is the same patch that reversed the Crucible points for raw materials trade; thus, since I will never get a legendary drop again, and I cannot sell raw materials to gain Crucible points, the only way to get legendary equipment is to grind away in PvP. Have I mentioned how I hate PvP?

In addition, with the official release of the first expansion on December 9, they introduced new currencies of Vanguard and Crucible "commendations" in addition to the Vanguard and Crucible "points". The old legendary equipment which only cost points went away, and now there's new (stronger) legendary equipment which costs both points and commendations. In short, buying legendary equipment went from being extremely time consuming (at least a week and a half of grinding per piece, since there's a cap on how many points you can earn per week) to being effectively impossible. I get that MMOs are all about grinding, but there's a big difference between making slow progress and making zero progress. As things are, I'm permanently stuck at level 27 because the only non-PvP way to improve my level is to go for raid drops— which requires level 28 or higher thanks to the exponential power curve. I might be able to handle the raid if I got an exotic weapon, but that requires beating an epic stage with an effective level requirement of 30 or so. Chicken, meet egg.

So yeah. I'm kinda done with it for now, but it was fun while it lasted. The hours I put into it were well worth the sticker price, but I don't think I'll be getting the expansions for now. Maybe I'll come back in a few years.

winterkoninkje: shadowcrane (clean) (Default)

For all you local folks, I'll be giving a talk about my dissertation on November 5th at 4:00–5:00 in Ballantine Hall 011. For those who've heard me give talks about it before, not much has changed since NLCS 2013. But the majority of current CL/NLP, PL, and logic folks haven't seen the talk, so do feel free to stop by.

Abstract: Many natural languages allow scrambling of constituents, or so-called "free word order". However, most syntactic formalisms are designed for English first and foremost. They assume that word order is rigidly fixed, and consequently these formalisms cannot handle languages like Latin, German, Russian, or Japanese. In this talk I introduce a new calculus —the chiastic lambda-calculus— which allows us to capture both the freedoms and the restrictions of constituent scrambling in Japanese. In addition to capturing these syntactic facts about free word order, the chiastic lambda-calculus also captures semantic issues that arise in Japanese verbal morphology. Moreover, chiastic lambda-calculus can be used to capture numerous non-linguistic phenomena, such as: justifying notational shorthands in category theory, providing a strong type theory for programming languages with keyword-arguments, and exploring metatheoretical issues around the duality between procedures and values.

Edit 2014.11.05: The slides from the talk are now up.

winterkoninkje: shadowcrane (clean) (Default)

October first marks the beginning of Domestic Violence Awareness Month. I don't have the emotional energy to write anything about it at the moment, but y'all should read this (Trigger warnings ahoy): Because If I Was Honest, Everything I Knew Would Explode.

winterkoninkje: shadowcrane (clean) (Default)

Judith Butler's incisive discussion of the public aftermath of the 9/11 terrorist attacks —overwhelming anti-intellectualism, self-censorship in a "you're with us or you're a terrorist" regime, refusal to seek understanding of the attacks, etc— also applies more broadly to other social issues under public deliberation/renegotiation. (To be more fully explicated in another post.) Mostly this chapter is about posing questions and questioning the "inevitability" of our interpretations and framing of events, rather than providing answers to those questions.

Themes and ideas:

  • US flag as ambiguous symbol of (a) solidarity with those lost in the attacks, vs (b) support for the US military campaign; thereby insinuating that these are one and the same, and that the former leads in a single stroke to the latter.
  • Disallowing the story we tell to begin earlier than the 9/11 attacks themselves, thereby predetermining the sorts of stories that can be told, and preventing any real answer to the question "why do they hate us so much?"
  • Shoring up the first-person perspective and, hence, the presumption of US supremacy and centrality. Any attempt to decenter the US being perceived as a component of the psychological wound of the attacks themselves. The "we're reaping what we've sown" response is just another way of asserting the centrality of the US. The refusal to acknowledge the UN and other supra-governmental bodies rooted in the fact that such acknowledgement would decenter the US.
  • The distinction between conditions and causes. And, hence, the distinction between explanation and exoneration. The need for moving beyond a framework of "justification" and "culpability".
  • "the failure to conceive of Muslim and Arab lives as lives." (Butler 2004: 12, emphasis hers) More generally, the unwillingness to show or see the faces of those we've killed. Facelessness of the "enemy".
  • "Our fear of understanding a point of view belies a deeper fear that we shall be taken up by it, find it is contagious, become infected in a morally perilous way by the thinking of the presumed enemy." (Butler 2004: 8)
  • "Dissent is quelled, in part, through threatening the speaking subject with an uninhabitable identification. Because it would be heinous to identify as treasonous, as a collaborator, one fails to speak, or one speaks in throttled ways, in order to sidestep the terrorizing identification that threatens to take hold. This strategy for quelling dissent and limiting the reach of critical debate happens not only through a series of shaming tactics which have a certain psychological terrorization as their effect, but they work as well by producing what will and will not count as a viable speaking subject and a reasonable opinion within the public domain." (Butler 2004: xix, emphasis mine)

winterkoninkje: shadowcrane (clean) (Default)

I talk about a number of sensitive topics: depression, cPTSD, child abuse, sexism. Every one of these is a topic where you risk damaging or ruining your career simply by bringing it up. And, believe it or not, I'm actually a rather private person; I do not discuss my personal life as part of casual chit-chat, I do not "accidentally" share personal details, and even mundane goings-on I prefer not to mention except among friends. So why, then, do I speak about these topics? Why make myself uncomfortable and risk devastating financial repercussions? The reasons I speak haven't changed much over the years (though the way I think about those reasons has). And yet, I don't think I've ever given a clear explication of these reasons and what I hope to achieve by my words. It's time to correct that.

I speak as a form of active political resistance against the silence culture that pervades the US (and surely the rest of the world). I do not discuss my history as a form of confession, nor as a form of exhibitionism; I talk about my past in order to perform my politics. I've talked about silence culture before, and today I want to unpack that a bit more by distinguishing between two different forms that silence culture takes.

A Note on Terminology

Complex Post Traumatic Stress Disorder (cPTSD) is a variant of PTSD that arises from long-term exposure to trauma. Simple PTSD results from acute trauma —e.g., a single violent encounter, or short-term exposure to warfare—, whereas complex PTSD results from repeated exposure to trauma over an extended period —e.g., childhood abuse where the child lives with the abuser, or living in a war-torn region. The complexity of cPTSD stems both from the extended/repeated nature of the trauma, and from the pervasive/inescapable nature of the traumatic environment. Together these undermine any sense of self-worth, personal safety, or ability to trust others. (cf., cPTSD Light, Out of the Fog, EMDR-web)

In the past I've always talked about "PTSD", because "cPTSD" did not exist when I was diagnosed. However, I'm a textbook example for cPTSD— both symptomatically and etiologically. Consequently, henceforth I will refer to my experiences as cPTSD, both for clarity/specificity and to raise awareness of the distinction. As if it needed stating: I still stand with those who live with "simple" PTSD, and will continue to advocate for awareness of and sensitivity to their experiences.

Internal Silence Culture

The internal culture of silence is a specific form of psychological violence used to sustain abusive environments. This is the kind of culture that says, "what happens in the family, stays in the family." This apophthegm shows up in alcoholic families, in families with mental illness1, in sexually abusive families, in intimate partner violence, in sexist and racist work environments, in acquaintance rape, in fraternity gang rape, and countless other abusive social groups. In all these situations, enculturated victims teach the new generation of victims that the outside world is dangerous, that they cannot trust it because it will betray them ("unlike the family"), that talking about the physical, sexual, and psychological violence will delegitimize and harm the one who speaks out, that talking about this violence will "embarrass" the family or other social unit, and that the damage of this loss of face will be greater than the damage from the violence itself. Sometimes abusers also teach this message, either by presenting themselves as "the real victim" or by threatening increased violence should the victim go public. But the culture of silence cannot be maintained without the complicity of enculturated victims.

The discussions and disclosures in my youth had the goal of disrupting this internal silence. By speaking out about the specifics of my own trauma, disclosing who enacted which forms of violence, and how they were themselves victims of violence, I took control of my life away from my abusive family. By speaking openly, I ensure that others in similar situations know they are not alone, and know that they too can break their silence. By naming the demon, we can tame it. By disclosing generations of abuse we make ourselves open to the possibility of breaking that chain instead of repeating the cycle once more with our own children.

Most of these discussions of my personal history happened years ago. I don't know how much they can be traced back to me now, since the bulk of them happened in the mid- and late 1990s when anonymity and pseudonymity were de rigueur. But I don't care. The only thing I regret is my youthful naivete and whatever ways I hurt people along the way— including my previous inability to see my abusers as fully human, as simply reenacting the abuse they themselves were raised with2. Though I do not give the specific details anymore, I am fine with these details being part of the public record. I am not —and will not be— embarrassed by my history. The past is a mere collection of facts; it cannot compel my silence. By neutralizing my own embarrassment, I know that at least on a personal level my politics have been successful. The more I can help others to neutralize their embarrassment and guilt about speaking out, the better. And I am not above enumerating the details once more if that is what's required.

External Silence Culture

More recently I've switched away from talking about my history with abuse, and towards talking about how living with depression and cPTSD affects my daily life. This is a shift in focus, not intent. I used to only be aware of internal silence, and tailored my activism accordingly; but now I realize there is another form of silence, one which is much less discussed and therefore more in need of awareness.

The external culture of silence is the systemic means by which a community fails to recognize (or refuses to admit) that it has a problem. It is the silence which allows each compagnie to realize and internally discuss the problem, while the community at large remains ignorant. It is how failure of communication prevents local/specific knowledge from becoming global/universal knowledge.

External silence culture often takes the form of superficial acknowledgement of a problem while perpetuating it via ignorance or misunderstanding of its root causes. A prime example is the way certain men acknowledge that sexism is a problem, but then turn around and enact the same sexist culture they decry in their blog posts. Another is how we can admit that, in the US, one-in-11 people are affected by some form of depression and one-in-24 people live with major depressive disorder (CDC), but then on an individual level most Americans refuse to believe they know anyone with depression. Or how we can admit that depression costs US employers 44 billion dollars per annum in lost productive time, but then we turn around and stigmatize people for seeking treatment.

I do not believe this failure to recognize how our communities perpetuate the problems they nominally oppose can be brushed off as "mere" hypocrisy. There's a deeper issue than logical inconsistency going on here. Systemic failures can never be explained by idiosyncratic failures. This failure of communication is due to enculturated systems of silence and to a failure to appreciate how global interactions are constructed through local, personal interactions. As a species, we do not understand how to interpret facts like "one-in-N people"; we can believe these facts to be true without recognizing how that truth must be immanent in the personal microcosm of our daily surroundings.2.5 This mode of cognitive failure is perfectly natural, which is why we must take explicit steps to disrupt and forestall it.

This is why I talk about the fact that I live with depression and cPTSD. By making this public knowledge, in refuting the sanist assumption, I deny people the ability to believe they don't know anyone with these conditions. I don't say "there are lots of people with depression", I say "I am a person with depression", I say "you know someone who has attempted suicide". I make it personal. The logic here is the same as the logic behind Out & Proud. By making it personal, by going on with our daily lives and yet not allowing these lives to be masked over by normative assumptions, we help those around us to realize that we are everywhere, that we have always been here, and that we are already part of that thing they call "normal".

One of the ways silence is encultured is in all the things we don't feel the need to say when among "our own kind". With people like ourselves there is less need to discuss certain details; they already know, and so we can speak in analogies, allusions, and assumptions. But when we take this dialect of implicit communication into larger communities, our words are misunderstood and misconstrued because the audience fails to hear the content in what we do not-say and presumes the content of what they expect us to not-say. The problem here is not that we communicate meaning through the not-saying of particular things— the use of implicit communication typically increases the efficiency of communication, and it's an essential component of how groups foster intimacy. The problem is being unaware of the fact that there are many dialects of subtle communication, and being uncritical of our assumption that the audience knows which dialect we are using and understands how it should be interpreted. Subtle communication is why satire is such an effective shibboleth (Arthur Chu), and why George W. Bush can give speeches which are interpreted radically differently by evangelical Christians and the rest of the country (Craig Unger, William Safire, inter alia).

This is why I talk about what it is like to live with depression and cPTSD. By discussing their specific, ongoing influence on the ways I interact with the world, I'm explaining one of those dialects of assumed experience. Just knowing that I have depression is not enough to understand what it means, how it affects who I am and what I do. We need explanations like the spoon theory, Depression Quest, and Mainichi. Before the spoon theory was vocalized in 2003, people with chronic conditions would try to explain how our lives are just plain harder, but non-spoonies misinterpreted our words as "laziness" or "whininess". That references to spoons are so ubiquitous now is because of the specific ways that explanation serves to make this aspect of life real to those who do not live it.

Beyond "illness", "cures", and the eventive mythology of trauma

In cognitive science there's the concept of umwelt3: an organism's lived experience of their surroundings. To understand why animals behave in certain ways it is not sufficient to know about their environment: the animal does not make decisions based on our perception of their environment, they make decisions based on their own observations and internal conceptions of these surroundings. The senses at our disposal are not like tools in a drawer that we can pick up or discard at a whim. Our senses are fundamentally ingrained in the fabric of reality: reality cannot exist to us without our perceiving it, and we cannot perceive any aspects of reality which lie outside our sensory capabilities. There is no unified "reality", at least not in any way that's helpful to daily life. Each of us inhabits different worlds, and each of these worlds is structured by our personal and specific perceptive, physical, cognitive, and social capabilities.

When people hear "PTSD" they think of veterans waking from nightmares, as if the condition is a punctual event with the effects of rewatching a scary movie. People don't realize that when we say we are "haunted" by our pasts, we mean it in the full gothic sense of the word. Our pasts, our memories are a demonic possession. They cast a pall over interactions, corrode friendships, corrupt vision with spectral intruder-attackers, poison socialization with projected phantasmagoric intentions. The haunting leeches into every waking moment, draining their color —except for harsh glaring garish splotches—, and when all those waking moments have been polluted it seeps into the dreamworld, hungry for more. The severity of the haunting varies, of course. And the possession sometimes goes into remission, hiding away to strike again later. And some of us aren't plagued by revenants or sluagh, we have poltergeists: spiritual agents that move things when we aren't looking, that are mostly quiet but always directing us down their crooked paths, with loud bangs and broken furniture if need be.

PTSD —like depression, and like anxiety— is not an event; it's a mode of living. Although we must talk about the events which gave rise to these conditions (to end our internalized silence), doing so allows our society to continue believing in and perpetuating the mythology that trauma causes an event of suffering rather than causing a state of suffering. This mythology includes the belief that mental "illness" can be "cured", that it is a transient condition without long-term repercussions. While acute depression4 is short-term, I'm not so sure it should be considered transient: 80% of people who've had a depressive episode go on to have another (Fava, Park, & Sonino). By contrast, major depressive disorder —aka "depression"— is, by definition, an ongoing condition. Similarly, while simple PTSD can also be short-term5, the existence of complex PTSD suggests that simple PTSD is not transient. Conditions like cPTSD, depression, and anxiety are not something that goes away. We need to accept that fact and talk about them accordingly. We need to teach people how to live productive and fulfilling lives, not teach them to pin hopes on a "cure". Pretending it will go away causes disappointment, guilt, and embarrassment about not having "gotten over" it yet. And pretending it'll go away contributes to the continued marginalization of people who live with these chronic conditions. Pretending that depression, anxiety, and cPTSD are "illnesses" that will "go away" causes real and lasting psychological damage; and it does so in the exact same way as pretending child abuse will "go away" if we don't talk about it, or pretending one's sexual orientation is an "illness" that might "go away" if ignored.

By refocusing my speech on what it's like to live with depression and cPTSD, one goal is to combat this mythology of a "disease" with a "cure", but the bigger goal is to inject some humanity into the public conception of people with these conditions. The spoon theory works because snatching spoons out of someone's hand creates a visceral reaction, which induces a personal experience of the claim that "life is harder", and thus causes one to view people with chronic conditions as expressing authentic human responses to living in a particular kind of world— even when that world isn't the one the snatchee is used to inhabiting. When I speak of my paranoia about releasing writings into the world, or the fear that masculinized cultures induce in women, or the panic I experience whenever I have the presumption to share deep personal opinions, or the rage I feel at people dismissing the monumental pain of living, my goal in all these disclosures is to help people see —for once— that I am human. That my responses are authentic, that they are responses shared by countless others who live in worlds like the one I inhabit. We are not diagnoses, we are not labels, we are not identities; we are human. We do not feel the need to describe our umwelt to one another, but you know not how to read the meaning we utter in the gaps between words. And this is why I speak, if you can but hear.


While editing the above post, I became aware of an abusive campaign against Zoë Quinn (the creator of Depression Quest) orchestrated by a malicious ex. This sort of invasive, dehumanizing, personalized attack is not at all uncommon— for women. If you are a man, you cannot comprehend what this is like because it is a form of gendered violence tailored exclusively to women. This is part of our umwelt. As women, our realities are shaped by the ever-present threat that our sexualities will be weaponized against us.

This highlights a very different reason I am open about my past. By being open about these things, I ensure that they cannot be used against me in the sort of "revelatory" smear campaign Quinn is facing. Make no mistake, my sexuality and psychology will be used against me. Openly handing men the ammunition only prevents them from engaging in this one specific form of violence. While I prefer to think of my openness as active political resistance, the fact is my openness is also a form of triage. A way of defending against the worst of the misogyny I must face for daring to exist. Perhaps I will unpack this motivation a bit more in another post.

[1] I really hate the term "mental illness". It's medicalizing and pathologizing. When people find out I wear glasses they barely even notice; why should finding out whether I take meds be any different? My height also has significant health repercussions, but no one talks about my "vertical illness". My brain is no more magical than my eyes or my spine. Unfortunately, I don't know of a suitable alternative to using the term MI. If you know of one, please do drop me a line.

[2] The surviving members of my family have matured significantly since my childhood, and it is not my intention to malign the people they have become by discussing the harm they inflicted on me throughout my childhood.

[2.5] Even without getting into cognitive issues, linguistic issues already forestall our ability to comprehend what statistics like these mean. Mark Liberman has written a number of excellent articles at Language Log about how, at present, natural language fails to adequately capture or convey probabilistic information and how this causes undue confusion between frequencies, rates, risk, odds, likelihoods, ratios of any of these, and various other probabilistic and statistical measurements. Here's one on how it's misleading to report odds ratios, counterbalanced by this piece on why (log) odds ratios are useful and have a cognitive basis.

[3] I refer here to the variation that occurs in embodied cognitive science and ecological psychology, rather than the variation that occurs in semiotics per se. So more like Gibson and less like Uexküll.

[4] The form of depression brought on by a novel traumatic event, like the loss of a loved one.

[5] EMDR-web reports being able to relieve 9/11 survivors of their simple PTSD using Eye Movement Desensitization and Reprocessing (EMDR). Prior to writing this article I was unaware of any reliable treatment for PTSD, and I'm still not familiar with all the literature here.

