All this stuff is "well known", but I want to put it out there for folks who may not have encountered it, or not encountered it all together in one picture.

The Damas–Hindley–Milner type system (i.e., the type system that Algorithm W is inferring types for) is propositional logic extended with **rank-1 second-order universal** quantifiers. It is interesting because it is so particularly stable with respect to inference, decidability, etc. That is, we can come up with many other algorithms besides Algorithm W and they enjoy nice properties like the fact that adding type signatures won't cause inference to fail. (It's worth noting that Algorithm W is DEXPTIME-complete; so while in practice it's often linear time, for pathological inputs it can take exponentially long. However, if we put a constant bound on the depth of nested let-bindings, then the upper bound becomes polynomial.)
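A minimal sketch of what those rank-1 universals buy you in practice (plain GHC Haskell; the names here are just for illustration): a let-bound definition is generalized to a polytype and can be instantiated at several different types in the same body, whereas a lambda-bound variable stays monomorphic.

```haskell
-- DHM let-polymorphism: 'identity' is generalized to the rank-1
-- polytype (forall a. a -> a), so the two uses below instantiate it
-- at Bool and at Char. Replacing the let with a lambda binding,
--   (\identity -> (identity True, identity 'a')) (\x -> x)
-- would fail, since lambda-bound variables are not generalized.
example :: (Bool, Char)
example =
  let identity = \x -> x   -- inferred: forall a. a -> a
  in (identity True, identity 'a')

main :: IO ()
main = print example
```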

The extension of DHM with **rank-1 second-order existential** quantifiers is strictly more powerful. It is interesting because it allows unrestricted use of both of the quantifiers in prenex position; thus, it is the limit/top of the alternating quantifier hierarchy (à la the arithmetical hierarchy) that starts with DHM. Surely there are other interesting properties here, but this system is understudied relative to the ones above and below.

*Edit:* Although GHC gets by with encoding existentials away, it's worth noting that MLF allows existentials where the unpacking is implicit rather than requiring an "unseal" or case eliminator (Leijen 2006); and also that UHC does in fact offer first-class existentials (Dijkstra 2005).
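For concreteness, here is a sketch of what that explicit "unseal" ceremony looks like in GHC (the `Showable`/`unseal` names are made up for the example): the existential hides the element type, and the only way back in is an explicit pattern match.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- 'MkShowable' packs a value together with Show evidence, hiding the
-- value's type. In MLF the unpacking would be implicit; in GHC we must
-- go through an explicit case/pattern match.
data Showable = forall a. Show a => MkShowable a

-- The explicit "unseal"/case eliminator.
unseal :: Showable -> String
unseal (MkShowable x) = show x

-- The payoff: a heterogeneous list under one type.
heterogeneous :: [Showable]
heterogeneous = [MkShowable (3 :: Int), MkShowable True, MkShowable 'x']
```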

The extension with **rank-2 second-order universals** (i.e., where the universal quantifier can appear to the left of one function arrow) is strictly more powerful still. Here we can encode rank-1 existentials, but my point in this whole post is that rank-1 existentials themselves are strictly weaker than the rank-2 universals it takes to encode them! Also, one little-known fact: this type system is interesting because it is the last one in this progression where type inference is decidable (Kfoury & Wells 1993). The decidability of rank-2 universal quantification is part of the reason why GHC distinguishes between `-XRank2Types` vs `-XRankNTypes`. Alas, although inference is decidable —and thus of mathematical interest— it is not decidable in the same robust way that DHM is. That is, if we care about human factors like good error messages or not breaking when the user adds type signatures, then we don't get those properties here. Still, the fact that this system is at the cusp of decidable inference is important to know. *Edit:* Also of interest, this system has the same typeable terms as simply-typed λ-calculus with **rank-2 intersection types**, and the type inference problem here is fundamentally DEXPTIME-complete (Jim 1995).

Things keep alternating back and forth between existentials and universals of each rank; so far as I'm aware, none of these systems are of any particular interest until we hit the limit: **rank-ω (aka: rank-N) second-order quantification**. This type system is often called "System F", but that's a misnomer. It is important to differentiate between the syntactic system (i.e., actual System F) we're inferring types for, vs the type system (aka: propositional logic with second-order quantifiers) in which the inferred types live. That is, we can perfectly well have a syntactic system which doesn't have explicit type abstractions/applications but for which we still ascribe rank-ω types. It so happens that the type inference problem is undecidable for that syntactic system, but it was already undecidable way back at rank-3, so the undecidability isn't particularly novel.
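The rank-2 encoding of rank-1 existentials mentioned above can be sketched in a few lines of GHC Haskell (the `Exists`/`pack`/`Listy` names are hypothetical, chosen for the example): the existential `exists a. f a` becomes a function over all possible results `r` of a continuation that must handle every `a`, and that continuation's `forall` sitting to the left of an arrow is exactly what makes the type rank 2.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Encoding (exists a. f a) as the rank-2 universal
--   forall r. (forall a. f a -> r) -> r
-- The inner 'forall a' appears to the left of one function arrow,
-- which is what pushes this to rank 2.
newtype Exists f = Exists { unpack :: forall r. (forall a. f a -> r) -> r }

pack :: f a -> Exists f
pack x = Exists (\k -> k x)

-- Example use: hide a list's element type, keeping only what the
-- continuation can observe (here, the length).
newtype Listy a = Listy [a]

len :: Exists Listy -> Int
len e = unpack e (\(Listy xs) -> length xs)
```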

## Re: Rank2Types

Date: 2016-05-20 04:50 am (UTC)
From: winterkoninkje

Because of the issues about stability under extra annotations & decent error messages, if any compiler is going so far as to deal with rank-N types I'd say it's prolly good not to actually treat rank-2 specially (since this will be unintuitive to / unhelpful for users). It's more of a technical curiosity for folks who really care about fine-grained boundaries of power/provability, and are willing to sacrifice everything else to map out the cliff's edge. This used to be a big concern in the late 20th century, though I'm not sure how popular it is anymore. In CS circles at least we've kinda thrown all that to the wind and keep looking for more and more powerful systems (MLTT, CIC, IR, ...), willing to discard all but the most crucial forms of decidability (keeping only decidability of checking type-annotations, but even that is starting to show fault lines). Most pure logicians I know are similarly focused on other issues, whether they're following the CS folks or doing their own stuff. Don't hang out with enough philosophers to know what their current foci are.