
A while back I released the unification-fd library, which gives a generic implementation of first-order unification of non-cyclic terms. I've given a few talks on how the library is implemented and what optimizations it performs, but that's not the topic for today. Today, I'm going to talk about how to use it.

Unification is a widely useful operation and, consequently, comes in many different flavors. The version currently supported by the library is the sort used by logic programming languages like Prolog, Curry, Dyna, and MiniKanren, which is the same sort that's used for unification-based type inference algorithms like Hindley–Damas–Milner. Of these two examples, logic programming is the simpler one to discuss, at least for folks who've used a language like Prolog before. So let's start from there.

Caveat Emptor: This post is something of a stream of consciousness. I've gotten a few requests for tutorials on how to use the library, but the requests weren't terribly specific about what problems people've had or what's been difficult to figure out. So I'm shooting in the dark as far as what folks need and how much background they have. I'm going to assume you're familiar with Prolog and the basics of what unification is and does.

Preemptive apology: I started writing this post months and months (and months) ago, but unintentionally dropped it after running into a certain issue and then getting distracted and moving onto other things. Actually, this happened at least twice. I'm terribly sorry about that. So, apologies for not tackling the disjunction issue in this post. I'll come back to it later, but figured this post really needs to get out the door already.

Logic Terms

A term, in Prolog, is just a fancy name for a value of some algebraic data type. In most variants of Prolog there's no explicit definition of the ADT, no restriction on what the constructors are, and no type checking to ensure that subterms have a particular shape. That is, Prolog is what's called a single-sorted logic; in other words, Prolog is an untyped/unityped language. With unification-fd we can implement multi-sorted (aka typed) logics, but for this tutorial we're going to stick with Prolog's single-sorted approach.

Opening up Control.Unification we'll see a handful of types and type classes, followed by a bunch of operators. The UTerm data type captures the recursive structure of logic terms. (UTerm is the free monad, if you're familiar with that terminology.) That is, given some functor t which describes the constructors of our logic terms, and some type v which describes our logic variables, the type UTerm t v is the type of logic terms: trees with multiple layers of t structure and leaves of type v. For our single-sorted logic, here's an implementation of t:

data T a = T String [a]

The String gives the name of the term constructor, and the list gives the ordered sequence of subterms. Thus, the Prolog term foo(bar,baz(X)) would be implemented as UTerm$T "foo" [UTerm$T "bar" [], UTerm$T "baz" [UVar x]]. If we're going to be building these terms directly, then we probably want to define some smart constructors to reduce the syntactic noise:

foo x y = UTerm$T "foo" [x,y]
bar     = UTerm$T "bar" []
baz x   = UTerm$T "baz" [x]

Now, we can implement the Prolog term as foo bar (baz x). If you prefer a more Prolog-like syntax, you can use uncurried definitions for smart constructors that take more than one argument.

Unifiable

In order to use our T data type with the rest of the API, we'll need to give a Unifiable instance for it. Before we do that we'll have to give Functor, Foldable, and Traversable instances. These are straightforward and can be automatically derived with the appropriate language pragmas.
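
For example, with GHC's deriving extensions this is just a one-line addition to the data declaration; something along these lines should do:

{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}

data T a = T String [a]
    deriving (Show, Functor, Foldable, Traversable)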

The Unifiable class gives one step of the unification process. Just as we only need to specify one level of the ADT (i.e., T) and then we can use the library's UTerm to generate the recursive ADT, so too we only need to specify one level of the unification (i.e., zipMatch) and then we can use the library's operators to perform the recursive unification, subsumption, etc.

The zipMatch function takes two arguments of type t a. The abstract t will be our concrete T type. The abstract a is polymorphic, which ensures that we can't mess around with more than one level of the term at once. If we set that guarantee aside, you can think of a as being UTerm T v. Thus, t a means T (UTerm T v); and T (UTerm T v) is essentially the type UTerm T v with the added guarantee that the values aren't in fact variables. Thus, the arguments to zipMatch are non-variable terms.

The zipMatch method has the rather complicated return type: Maybe (t (Either a (a,a))). Let's unpack this a bit by thinking about how unification works. When we try to unify two terms, first we look at their head constructors. If the constructors are different, then the terms aren't unifiable, so we return Nothing to indicate that unification has failed. Otherwise, the constructors match, so we have to recursively unify their subterms. Since the T structures of the two terms match, we can return Just t0 where t0 has the same T structure as both input terms. Where we still have to recursively unify subterms, we fill t0 with Right(l,r) values where l is a subterm of the left argument to zipMatch and r is the corresponding subterm of the right argument. Thus, zipMatch is a generalized zipping function for combining the shared structure and pairing up substructures. And now, the implementation:

instance Unifiable T where
    zipMatch (T m ls) (T n rs)
        | m /= n    = Nothing
        | otherwise =
            T n <$> pairWith (\l r -> Right(l,r)) ls rs

Here, pairWith (from the list-extras package's Data.List.Extras.Pair module) is a version of zipWith which returns Nothing if the two lists have different lengths. So, if the names m and n match, and if the two arguments have the same number of subterms, then we pair those subterms off in order; otherwise, either the names or the lengths don't match, so we return Nothing.
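
If you'd rather not depend on list-extras, a hand-rolled stand-in is only a few lines; here's a minimal sketch:

pairWith :: (a -> b -> c) -> [a] -> [b] -> Maybe [c]
pairWith _ []     []     = Just []
pairWith f (l:ls) (r:rs) = (f l r :) <$> pairWith f ls rs
pairWith _ _      _      = Nothing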

Feature Structures

For the T example, we don't need to worry about the Left option. The reason it's there is to support feature structures and other sparse representations of terms. That is, consider the following type:

newtype FS k a = FS (Map k a)

Using this type, our logic terms are sets of key–subterm pairs. When unifying maps like these, what do we do if one argument has a binding for a particular key but the other argument does not? In the T example we assumed that subterms which couldn't be paired together (because the lists had different lengths) meant the unification must fail. But for FS it makes more sense to assume that subterms which can't be paired up automatically succeed! That is, we'd like to assume that all the keys which are not explicitly present in the Map k a are implicitly present, each bound to a unique logic variable. Since those logic variables are unique and implicit, there's no need to actually keep track of them; we just implicitly unify each one with the subterm it couldn't be paired against.

This may make more sense if you see the Unifiable instance:

instance (Ord k) => Unifiable (FS k) where
    zipMatch (FS ls) (FS rs) =
        Just . FS $
            unionWith (\(Left l) (Left r) -> Right(l,r))
                (fmap Left ls)
                (fmap Left rs)

We start off by mapping Left over both the ls and the rs. We then call unionWith to pair things up. For any given key, if both ls and rs specify a subterm, then these subterms will be paired up as Right(l,r). If we have extra subterms from either ls or rs, however, then we keep them around as Left l or Left r. Thus, the Unifiable instance for FS performs a union of the FS structure, whereas the instance for T performs an intersection of T structure.
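
To see the union behavior concretely, here's a quick check (a sketch, assuming Data.Map is imported qualified as Map and that the Functor, Foldable, and Traversable instances for FS are derived just like for T):

-- The shared key "num" gets paired up as Right, while the key that
-- appears on only one side is kept as Left. The result is
--   Just (FS (fromList [("gen", Left 2), ("num", Right (1, 3))]))
demoFS :: Maybe (FS String (Either Int (Int, Int)))
demoFS = zipMatch (FS (Map.fromList [("num", 1), ("gen", 2)]))
                  (FS (Map.fromList [("num", 3)]))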

The Left option can be used in any situation where you can immediately resolve the unification of subterms, whereas the Right option says you still have work to do.[1]

Logic Variables

The library ships with two implementations of logic variables. The IntVar implementation uses Int as the names of variables, and uses an IntMap to keep track of the environment. The STVar implementation uses STRefs, so we can use actual mutation for binding logic variables, rather than keeping an explicit environment around. Of course, mutation complicates things, so the two implementations have different pros and cons.

Performing unification has the side effect of binding logic variables to terms. Thus, we'll want to use a monad in order to keep track of these effects. The BindingMonad type class provides the definition of what we need from our ambient monad. In particular, we need to be able to generate fresh logic variables, to bind logic variables, and to look up what our logic variables are bound to. The library provides the necessary instances for both IntVar and STVar.
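
For a feel of that interface, here's a tiny (hypothetical) helper built from the class's freeVar, bindVar, and lookupVar methods: allocate a fresh variable, bind it to a term, and read the binding back.

roundTrip :: (BindingMonad t v m) => UTerm t v -> m (Maybe (UTerm t v))
roundTrip t = do
    v <- freeVar     -- generate a fresh logic variable
    bindVar v t      -- bind it to the given term
    lookupVar v      -- look up what it's now bound to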

You can, of course, provide your own implementations of Variable and BindingMonad. However, doing so is beyond the scope of the current tutorial. For simplicity, we'll use the IntVar implementation below.

Example Programs

When embedding Prolog programs into Haskell, the main operators we want to consider are those in the section titled "Operations on two terms". These are structural equality (i.e., equality modulo substitution), structural equivalence (i.e., structural equality modulo alpha-variance), unification, and subsumption.

Consider the following Horn clause in Prolog:

example1(X,Y,Z) :- X = Y, Y = Z.

To implement this in Haskell we want a function which takes in three arguments, unifies the first two, and then unifies the second two. Thus:[2]

example1 x y z = do
    x =:= y
    y =:= z

To run this program we'd use one of the functions runIntBindingT, evalIntBindingT, or execIntBindingT, depending on whether we care about the binding state, the resulting logic term, or both. Of course, since the unifications may fail, we also need to run the underlying error monad, using something like runErrorT.[3][4] And since these are both monad transformers, we'll need to use runIdentity or the like in order to run the base monad. Thus, the functions to execute the entire monad stack will look like:

-- Aliases to simplify our type signatures. N.B., the
-- signatures are not actually required to get things
-- to typecheck.
type PrologTerm           = UTerm T IntVar 
type PrologFailure        = UnificationFailure T IntVar
type PrologBindingState   = IntBindingState T

-- N.B., the @FallibleBindingMonad@ isn't yet a monad
-- for Prolog because it doesn't support backtracking.
type FallibleBindingMonad =
    ErrorT PrologFailure (IntBindingT T Identity)

-- N.B., this definition runs into certain issues.
type PrologMonad =
    ErrorT PrologFailure (IntBindingT T Logic)

runFBM
    :: FallibleBindingMonad a
    -> (Either PrologFailure a, PrologBindingState)
runFBM = runIdentity . runIntBindingT . runErrorT

Here are some more examples:

-- A helper function to reduce boilerplate. First we get
-- a free variable, then we embed it into @PrologTerm@,
-- and then we embed it into some error monad (for
-- capturing unification failures).
getFreeVar = lift (UVar <$> freeVar)

-- example2(X,Z) :- X = Y, Y = Z.
example2 x z = do
    y <- getFreeVar
    x =:= y
    y =:= z

-- example3(X,Z) :- example1(X,Y,Z).
example3 x z = do
    y <- getFreeVar
    example1 x y z

-- example4(X) :- X = bar; X = backtrack.
example4 x = (x =:= bar) <|> (x =:= atom "backtrack")
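
To put these pieces together, here's one way to exercise example1 with runFBM, assuming an atom smart constructor along the lines of atom n = UTerm $ T n []:

-- Unify two fresh variables with an atom, then apply the final
-- bindings to read back the term that x ended up bound to.
test1 :: (Either PrologFailure PrologTerm, PrologBindingState)
test1 = runFBM $ do
    x <- getFreeVar
    y <- getFreeVar
    _ <- example1 x y (atom "baz")
    applyBindings x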

The complete code for this post can be found here online, or at ./test/tutorial/tutorial1.hs in the Darcs repo. Notably, there are some complications about the semantics of example4; it doesn't mean what you think it should mean. We'll tackle that problem and fix it later on in the tutorial series (in part 4 or thereabouts).

Term Factoring and Clause Resolution Automata (CRAs)

Note that for the above examples, the Haskell functions only execute the right-hand side of the Horn clause. In Prolog itself, there's also a process of searching through all the Horn clauses in a program and deciding which one to execute next. A naive way to implement that search process would be to have a list of all the Horn clauses and walk through it, trying to unify the goal with the left-hand side of each clause and executing the right-hand side if it matches. A more efficient way would be to compile all the left-hand sides into a single automaton, allowing us to match the goal against all of them at once. (The idea here is similar to compiling a bunch of strings together into a trie or regex.)
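
As a sketch of the naive strategy (nothing like this ships with the library), we could represent each clause as a function that unifies the goal with the clause's head and then executes the body, and try the clauses in order under whatever backtracking Alternative monad we're working in:

import Control.Applicative (Alternative)
import Data.Foldable (asum)

-- asum folds the clauses together with (<|>), so a failure in one
-- clause backtracks into the next.
resolve :: (Alternative m)
        => [PrologTerm -> m ()]   -- one entry per clause of the program
        -> PrologTerm             -- the goal to solve
        -> m ()
resolve clauses goal = asum [ clause goal | clause <- clauses ]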

Constructing optimal CRAs is NP-complete in general, though it's feasible if we have an arbitrary ordering of clauses (e.g., Prolog's top–down order for trying each clause). The unification-fd library does not implement any support for CRAs at present, though it's something I'd like to add in the future. For more information on this topic, see Dawson et al. (1995) Optimizing Clause Resolution: Beyond Unification Factoring and Dawson et al. (1996) Principles and Practice of Unification Factoring.

Other operators

In addition to unification itself, it's often helpful to have various other operators on hand.

One such operator is the subsumption operator. Whereas unification looks for a most-general substitution which when applied to both arguments yields terms which are structurally equal (i.e., l =:= r computes the most general s such that s l === s r), subsumption applies the substitution to only one side. That is, l subsumes r just in case r is a substitution instance of l (i.e., there exists a substitution s such that s l === r). The symbolic name (<:=) comes from the fact that when l subsumes r we also say that l is less defined[5] than r. Subsumption shows up in cases where we have to hold r fixed for some reason, such as when implementing polymorphism or subtyping.
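
For example, with the definitions from the tutorial above, foo(bar,Y) subsumes foo(bar,baz(X)), since instantiating Y to baz(X) makes the left side structurally equal to the right. A quick (hypothetical) check:

-- The result of (<:=) reports whether the left term subsumes the right.
checkSubsumption = runFBM $ do
    x <- getFreeVar
    y <- getFreeVar
    foo bar y <:= foo bar (baz x)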

Other operators work on just one term, such as determining the free variables of a term, explicitly applying the ambient substitution to obtain a pure term, or cloning a term to make a copy where all the free variables have been renamed to fresh variables. These sorts of operators aren't used very often in logic programming itself, but are crucial for implementing logic programming languages.

Conclusion

Hopefully that gives a quick idea of how the library's API is set up. Next time I'll walk through an implementation of Hindley–Damas–Milner type inference, and then higher-ranked polymorphism à la Peyton Jones et al. (2007) Practical type inference for arbitrary-rank types. After that, I'll discuss the complications about backtracking choice I noticed when writing this post, and walk through how to fix them. If there's still interest after that, I can get into some of the guts of the library's implementation— like ranked path compression, maximal structure sharing, and so on.

If you have any particular questions you'd like me to address, drop me a line.


[1] Older versions of the library used the type zipMatch :: forall a b. t a -> t b -> Maybe (t (a,b)) in order to ensure that we did in fact properly pair up subterms from the two arguments. Unfortunately I had to relax that guarantee in order to add support for feature structures.

[2] N.B., a more efficient implementation is:

example1' x y z = do
    y' <- x =:= y
    y' =:= z

The unification operator returns a new term which guarantees maximal structure sharing with both of its arguments. The implementation of unification makes use of observable structure sharing, so by capturing y' and using it in lieu of y, the subsequent unifications can avoid redundant work.

[3] The ErrorT transformer was deprecated by transformers-0.4.1.0, though it still works for this tutorial. Unfortunately, the preferred ExceptT does not work since UnificationFailure doesn't have a Monoid instance as of unification-fd-0.9.0. The definition of UnificationFailure already contains a hack to get it to work with ErrorT, but future versions of the library will remove that hack and will require users to specify their own monoid for combining errors. The First monoid captures the current behavior, though one may prefer to use other monoids such as a monoid that gives a trace of the full execution path, or witnesses for all the backtracks, etc.

[4] To be honest, I don't entirely recall why I had the error monad explicitly separated out as a monad transformer over the binding monad, rather than allowing these two layers to be combined. Since it's so awkward, I'm sure there was some reason behind it, I just failed to make note of why. If there turns out to be no decent reason for it, future versions of the library may remove this fine-grain distinction.

[5] The symbolic name for subsumption is chosen to reflect the meaning of more/less defined (rather than more/less grounded) so that the subsumption ordering coincides with the domain ordering (think of logic variables as being bottom). This is the standard direction for looking at subsumption; though, of course, we could always consider the dual ordering instead.


unification-fd 0.7.0

The unification-fd package offers generic functions for single-sorted first-order structural unification (think of programming in Prolog, or of the metavariables in type inference)[1][2]. The library is sufficient for implementing higher-rank type systems à la Peyton Jones, Vytiniotis, Weirich, Shields, but bear in mind that unification variables are the metavariables of type inference— not the type-variables.

An effort has been made to make the package as portable as possible. However, because it uses the ST monad and the mtl-2 package it can't be H98 or H2010. That said, it only uses the following common extensions, which should be well supported[3]:

Rank2Types
MultiParamTypeClasses
FunctionalDependencies -- Alas, necessary for type inference
FlexibleContexts       -- Necessary for practical use of MPTCs
FlexibleInstances      -- Necessary for practical use of MPTCs
UndecidableInstances   -- For Show instances due to two-level types

Changes (since 0.6.0)

This release is another major API breaking release. Apologies, but things are a lot cleaner now and hopefully the API won't break again for a while. The biggest change is that the definition of terms has changed from the previous:

    data MutTerm v t
        = MutVar  !v
        | MutTerm !(t (MutTerm v t))

To the much nicer:

    data UTerm t v
        = UVar  !v
        | UTerm !(t (UTerm t v))

The old mnemonic of "mutable terms" was inherited from the code's previous life implementing a logic programming language; but when I was playing around with implementing a type checker I realized that the names don't really make sense outside of that original context. So the new mnemonic is "unification terms". In addition to being a bit shorter, it should help clarify the separation of concerns (e.g., between unification variables vs lambda-term variables, type variables, etc.).

The swapping of the type parameters is so that UTerm can have instances for Functor, Monad, etc. This change should've been made along with the re-kinding of variable types back in version 0.6.0, since the UTerm type is the free monad generated by t. I've provided all the category-theoretic instances I could imagine some plausible reason for wanting. Because it's free, there are a bunch more I haven't implemented, since they don't really make sense for structural terms (e.g., MonadTrans, MonadWriter, MonadReader, MonadState, MonadError, MonadCont). If you can come up with some compelling reason to want those instances, I can add them in the future.
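
To illustrate: since UTerm t is the free monad over t, the Monad instance's bind is exactly substitution at the variable leaves. A quick sketch (assuming the Functor t context that instance requires):

    -- Replace every variable in a term according to the given mapping.
    substVars :: (Functor t) => (v -> UTerm t w) -> UTerm t v -> UTerm t w
    substVars f term = term >>= f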

Since the order of type parameters to BindingMonad, UnificationFailure, Rank, and RankedBindingMonad was based on analogy to the order for terms, I've also swapped the order in all of them for consistency.

I've removed the eqVar method of the Variable class, and instead added an Eq superclass constraint. Again, this should've happened with the re-kinding of variables back in version 0.6.0. A major benefit of this change is that now you can use all those library functions which require Eq (e.g., many of the set-theoretic operations on lists, like (\\) and elem).

I've added new functions: getFreeVarsAll, applyBindingsAll, and freshenAll; these are like the versions without "All", except they're lifted to operate over Foldable/Traversable collections of terms. This is crucial for freshenAll because it allows you to retain sharing of variables among the collection of terms; for the others it's merely an optimization (it saves time for getFreeVarsAll and space for applyBindingsAll).

The type of the seenAs function has also changed, to ensure that variables can only be seen as structure rather than as any UTerm.

Thanks to Roman Cheplyaka for suggesting many of these changes.

Description

The unification API is generic in the type of the structures being unified and in the implementation of unification variables, following the two-level types pearl of Sheard (2001). This style mixes well with Swierstra (2008), though an implementation of the latter is not included in this package.

That is, all you have to do is define the functor whose fixed-point is the recursive type you're interested in:

    -- The non-recursive structure of terms
    data S a = ...

    -- The recursive term type
    type PureTerm = Fix S

And then provide an instance for Unifiable, where zipMatch performs one level of equality testing for terms and returns the one-level spine filled with pairs of subterms to be recursively checked (or Nothing if this level doesn't match).

    class (Traversable t) => Unifiable t where
        zipMatch :: t a -> t b -> Maybe (t (a,b))
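
For instance, a Unifiable instance for a tiny arithmetic functor might look like the following sketch (using the usual deriving extensions); zipMatch only handles one level, and the library takes care of the recursion:

    data S a = SInt Int | SAdd a a
        deriving (Functor, Foldable, Traversable)

    instance Unifiable S where
        zipMatch (SInt x)     (SInt y)     | x == y = Just (SInt x)
        zipMatch (SAdd l1 l2) (SAdd r1 r2) = Just (SAdd (l1, r1) (l2, r2))
        zipMatch _            _            = Nothing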

The choice of which variable implementation to use is defined by similarly simple classes Variable and BindingMonad. We store the variable bindings in a monad, for obvious reasons. In case it's not obvious, see Dijkstra et al. (2008) for benchmarks demonstrating the cost of naively applying bindings eagerly.

There are currently two implementations of variables provided: one based on STRefs, and another based on a state monad carrying an IntMap. The former has the benefit of O(1) access time, but the latter is plenty fast and has the benefit of supporting backtracking. Backtracking itself is provided by the logict package and is described in Kiselyov et al. (2005).

In addition to this modularity, unification-fd implements a number of optimizations over the algorithm presented in Sheard (2001)— which is also the algorithm presented in Cardelli (1987).

  • Their implementation uses path compression, which we retain, though we modify the compression algorithm in order to make sharing observable.
  • In addition, we perform aggressive opportunistic observable sharing, a potentially novel method of introducing even more sharing than is provided by the monadic bindings. Basically, we make it so that we can use the observable sharing provided by the modified path compression as much as possible (without introducing any new variables).
  • And we remove the notoriously expensive occurs-check, replacing it with visited-sets (which detect cyclic terms more lazily and without the asymptotic overhead of the occurs-check). A variant of unification which retains the occurs-check is also provided, in case you really need to fail fast.
  • Finally, a highly experimental branch of the API performs weighted path compression, which is asymptotically optimal. Unfortunately, the current implementation is quite a bit uglier than the unweighted version, and I haven't had a chance to perform benchmarks to see how the constant factors compare. Hence moving it to an experimental branch.

These optimizations pass a test suite for detecting obvious errors. If you find any bugs, do be sure to let me know. Also, if you happen to have a test suite or benchmark suite for unification on hand, I'd love to get a copy.

Notes and limitations

[1] At present the library does not appear amenable for implementing higher-rank unification itself; i.e., for higher-ranked metavariables, or higher-ranked logic programming. To be fully general we'd have to abstract over which structural positions are co/contravariant, whether the unification variables should be predicative or impredicative, as well as the isomorphisms of moving quantifiers around. It's on my todo list, but it's certainly non-trivial. If you have any suggestions, feel free to contact me. [back]

[2] At present it is only suitable for single-sorted (aka untyped) unification, à la Prolog. In the future I aim to support multi-sorted (aka typed) unification, however doing so is complicated by the fact that it can lead to the loss of MGUs; so it will likely be offered as an alternative to the single-sorted variant, similar to how the weighted path-compression is currently offered as an alternative. [back]

[3] With the exception of fundeps which are notoriously difficult to implement. However, they are supported by Hugs and GHC 6.6, so I don't feel bad about requiring them. Once the API stabilizes a bit more I plan to release a unification-tf package which uses type families instead, for those who feel type families are easier to implement or use. There have been a couple requests for unification-tf, so I've bumped it up on my todo list. [back]

References

Luca Cardelli (1987)
Basic polymorphic typechecking. Science of Computer Programming, 8(2): 147–172.
Atze Dijkstra, Arie Middelkoop, S. Doaitse Swierstra (2008)
Efficient Functional Unification and Substitution. Technical Report UU-CS-2008-027, Utrecht University.
Simon Peyton Jones, Dimitrios Vytiniotis, Stephanie Weirich, Mark Shields (2007)
Practical type inference for arbitrary-rank types. JFP 17(1). The online version has some minor corrections/clarifications.
Oleg Kiselyov, Chung-chieh Shan, Daniel P. Friedman, and Amr Sabry (2005)
Backtracking, Interleaving, and Terminating Monad Transformers. ICFP.
Tim Sheard (2001)
Generic Unification via Two-Level Types and Parameterized Modules, Functional Pearl. ICFP.
Tim Sheard and Emir Pasalic (2004)
Two-Level Types and Parameterized Modules. JFP 14(5): 547–587. This is an expanded version of Sheard (2001) with new examples.
Wouter Swierstra (2008)
Data types à la carte, Functional Pearl. JFP 18: 423–436.



unification-fd 0.5.0

The unification-fd package offers generic functions for first-order structural unification (think Prolog programming or Hindley–Milner type inference). I've had this lying around for a few years, so I figured I might as well publish it.

An effort has been made to make this package as portable as possible. However, because it uses the ST monad and the mtl-2 package it can't be H98 or H2010. That said, it only uses the following common extensions, which should be well supported[1]:

Rank2Types
MultiParamTypeClasses
FunctionalDependencies
FlexibleContexts
FlexibleInstances
UndecidableInstances

[1] With the exception of fundeps which are notoriously difficult to implement. However, they are supported by Hugs and GHC 6.6, so I don't feel bad about requiring them. Once the API stabilizes a bit more I plan to release a unification-tf package which uses type families instead, for those who feel type families are easier to implement or use.

Description

The unification API is generic in the type of the structures being unified and in the implementation of unification variables, following the two-level types pearl of Sheard (2001). This style mixes well with Swierstra (2008), though an implementation of the latter is not included in this package.

That is, all you have to do is define the functor whose fixed-point is the recursive type you're interested in:

    -- The non-recursive structure of terms
    data S a = ...
    
    -- The recursive term type
    type PureTerm = Fix S

And then provide an instance for Unifiable, where zipMatch performs one level of equality testing for terms and returns the one-level spine filled with pairs of subterms to be recursively checked (or Nothing if this level doesn't match).

    class (Traversable t) => Unifiable t where
        zipMatch :: t a -> t b -> Maybe (t (a,b))

The choice of which variable implementation to use is defined by similarly simple classes Variable and BindingMonad. We store the variable bindings in a monad, for obvious reasons. In case it's not obvious, see Dijkstra et al. (2008) for benchmarks demonstrating the cost of naively applying bindings eagerly.

There are currently two implementations of variables provided: one based on STRefs, and another based on a state monad carrying an IntMap. The former has the benefit of O(1) access time, but the latter is plenty fast and has the benefit of supporting backtracking. Backtracking itself is provided by the logict package and is described in Kiselyov et al. (2005).

In addition to this modularity, unification-fd implements a number of optimizations over the algorithm presented in Sheard (2001)— which is also the algorithm presented in Cardelli (1987).

  • Their implementation uses path compression, which we retain, though we modify the compression algorithm in order to make sharing observable.
  • In addition, we perform aggressive opportunistic observable sharing, a potentially novel method of introducing even more sharing than is provided by the monadic bindings. Basically, we make it so that we can use the observable sharing provided by the previous optimization as much as possible (without introducing any new variables).
  • And we remove the notoriously expensive occurs-check, replacing it with visited-sets (which detect cyclic terms more lazily and without the asymptotic overhead of the occurs-check). A variant of unification which retains the occurs-check is also provided, in case you really need to fail fast for some reason.
  • Finally, a highly experimental branch of the API performs weighted path compression, which is asymptotically optimal. Unfortunately, the current implementation is quite a bit uglier than the unweighted version, and I haven't had a chance to perform benchmarks to see how the constant factors compare. Hence moving it to an experimental branch.

I haven't had a chance to fully debug these optimizations, though they pass some of the obvious tests. If you find any bugs, do be sure to let me know. Also, if you happen to have a test suite or benchmark suite for unification on hand, I'd love to get a copy.

References

Luca Cardelli (1987)
Basic polymorphic typechecking. Science of Computer Programming, 8(2): 147–172.
Atze Dijkstra, Arie Middelkoop, S. Doaitse Swierstra (2008)
Efficient Functional Unification and Substitution, Technical Report UU-CS-2008-027, Utrecht University.
Oleg Kiselyov, Chung-chieh Shan, Daniel P. Friedman, and Amr Sabry (2005)
Backtracking, Interleaving, and Terminating Monad Transformers, ICFP.
Tim Sheard (2001)
Generic Unification via Two-Level Types and Parameterized Modules, Functional Pearl, ICFP.
Tim Sheard and Emir Pasalic (2004)
Two-Level Types and Parameterized Modules. JFP 14(5): 547–587. This is an expanded version of Sheard (2001) with new examples.
Wouter Swierstra (2008)
Data types à la carte, Functional Pearl. JFP 18: 423–436.
