So I got to thinking about monads for computation, in particular about the "no escape" clause. In practice you actually can escape from most monads (ST, State, List, Logic (for backtracking search), Maybe, Cont, even IO), and these monads all have some "run" function for doing so (though it may be "unsafe"). These functions all have different signatures because there are different ways of catamorphing the computation away in arriving at your result, but in principle it's possible to subtype these computations into having appropriate generic execution functions.
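For concreteness, these are the run functions I have in mind, with their signatures as they actually appear (in base, transformers, and the logict package):

    runST           :: (forall s. ST s a) -> a
    runState        :: State s a -> s -> (a, s)
    foldr           :: (a -> b -> b) -> b -> [a] -> b   -- for List
    observeAll      :: Logic a -> [a]
    fromMaybe       :: a -> Maybe a -> a
    runCont         :: Cont r a -> (a -> r) -> r
    unsafePerformIO :: IO a -> a                        -- the "unsafe" one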
The simplest computation is a constant computation which successfully returns a value; ST and Identity are just such computations. One way of extending this is to say that a computation is nondeterministic and can return many answers[1]. A different way of extending it is to say that the computation might fail to return any value; Maybe is just such a monad. And of course we could take both of those extensions and either fail to return any answers or return nondeterministically many answers; List and Logic are our examples here. Another extension is that a computation may have different kinds of failure, like (Either e), and this is where things start to get interesting.
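To make those shapes concrete, here are inhabitants of each (the names are mine, purely for illustration):

    import Data.Functor.Identity (Identity (..))

    constant :: Identity Int          -- exactly one success
    constant = Identity 1

    fallible :: Maybe Int             -- zero or one success
    fallible = Nothing

    nondet :: [Int]                   -- zero or many successes
    nondet = [1, 2, 3]

    recoverable :: Either String Int  -- failure with a vocabulary
    recoverable = Left "something specific went wrong"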
To generalize what I've said so far we can talk about the topological space of a computation's result. For a computation m a, a constant computation has a result in a (we can think of the type a as the alphabet of possible kinds of success). An infallible nondeterministic computation has a result in a^k, where k ranges over the positive natural numbers and indicates all the different ways we can succeed.
A fallible computation could be thought of as a^k where k is either 0 or 1, and where the empty result signifies failure. A different way of thinking about fallibility which doesn't rely on empty results is as e | a, where e is our error type. From this perspective the difference between Maybe and (Either e) is that Maybe has a unit type for failures, whereas (Either e) has a vocabulary/alphabet of e kinds of failure. Simply-fallible nondeterministic computations like List have results in the space a^(0|k) == () | a^k. For nondeterministic computations which may have some variety of possible failures, their result is in the space e | a^k for an e with cardinality greater than 1. (Infallible computations also have this form, but for an e of cardinality 0.)
To run a constant computation your function is of the type m a → a, easy as pie. For simply-fallible computations you need to have some default value to return on failure in order to make it safe, hence m a → a → a. For nondeterministic computations you need a way of combining successes, hence m a → (a → a → a) → a. And for simply-fallible nondeterministic computations you need both: m a → (a → a → a) → a → a (which should look suspiciously similar to fold, though we've abstracted away the mapping aspect of fold which allows us to inject the values of a into values of a space b). For computations which have a variety of failure modes, instead of a default value, we need a function from errors to default values.
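Sketched as code (the class names and methods here are my own invention for illustration, not from any library), those generic execution functions might look like this:

    import Data.Functor.Identity (Identity, runIdentity)
    import Data.Maybe (fromMaybe)

    class RunConstant m where
      runConstant :: m a -> a

    class RunFallible m where
      runFallible :: m a -> a -> a                 -- default on failure

    class RunNondet m where
      runNondet :: m a -> (a -> a -> a) -> a       -- combine successes
      -- (wants an infallible nondeterministic monad; see [1])

    class RunFallibleNondet m where
      runFallibleNondet :: m a -> (a -> a -> a) -> a -> a

    instance RunConstant Identity where
      runConstant = runIdentity

    instance RunFallible Maybe where
      runFallible m def = fromMaybe def m

    instance RunFallibleNondet [] where
      runFallibleNondet xs combine def = foldr combine def xs

    -- With a vocabulary of failures, a handler replaces the default:
    runRecoverable :: Either e a -> (e -> a) -> a
    runRecoverable m handler = either handler id m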
But let's put aside the idea of escaping monads and go back to the more interesting track of the typology of result spaces for computations. From symmetry, one may wonder whether it's possible to have a computation which may fail nondeterministically many times, i.e. has results in the space e^j | a^k for some j larger than one. Certainly it's conceivable. In fact, one reasonable example of this shows up in things like compilers. A good compiler does not simply exit after finding one error but rather tries to continue as long as possible in order to return as many errors as it can. Since there are different varieties of errors, compilers have results in e^j | a for some non-unit vocabulary of errors e and some vocabulary of compiled programs a (for compilers that batch separate compilation we may rather think of the result as living in the full e^j | a^k, where each object file is a separate value in a).
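Here's a hand-rolled sketch of such an error-accumulating computation (it's essentially the Validation applicative that Hackage's validation package polishes up; note it has no Monad instance consistent with this error-accumulating Applicative):

    -- Either one success or many accumulated errors: e^j | a.
    data Validation e a = Failures [e] | Success a
      deriving Show

    instance Functor (Validation e) where
      fmap f (Success a)   = Success (f a)
      fmap _ (Failures es) = Failures es

    instance Applicative (Validation e) where
      pure = Success
      Success f   <*> Success a    = Success (f a)
      Failures es <*> Failures es' = Failures (es ++ es')  -- keep going
      Failures es <*> Success _    = Failures es
      Success _   <*> Failures es  = Failures es

    -- A toy pass that reports every problem rather than stopping:
    checkPositive :: Int -> Validation String Int
    checkPositive n
      | n > 0     = Success n
      | otherwise = Failures ["not positive: " ++ show n]

    -- (+) <$> checkPositive (-1) <*> checkPositive (-2)
    --   == Failures ["not positive: -1","not positive: -2"]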
With simply-fallible nondeterministic computations we noticed that a^0 == ()^1; one may wonder, then, how to think of many errors in a simply-fallible computation. I posit that one way of thinking of multiple unit failures is as a^j for some negative integer j. In some interesting corners of formal language theory and type theory the idea of such negative-length strings is explored in some depth; unfortunately I'm not familiar enough with these extensions to say more, but I'm intrigued by the connection if it does in fact hold. This whole thing is also somewhat evocative of imaginary and complex numbers, another area I'm insufficiently schooled in to make any astute observations.
One thing such theories allow is the intermingling of positive and negative letters (or real and imaginary parts). So far I've presented the space of computations as e^j | a^k, but why do we need to have the disjunctive union there? Could a nondeterministic computation return both success and also failure? Usually when we think of nondeterminism we implicitly assume we're going to throw away all the errors, leaving zero successes as our only error result. There's no reason why we couldn't keep the errors as well as the successes, though there's some question about whether such a thing would be helpful as a particular model of computation.
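To make this concrete, here's a hand-rolled monad that keeps failed branches alongside successful ones (it coincides with ExceptT e [] from transformers), so a single computation can inhabit the mixed space e^j a^k:

    newtype Superpose e a = Superpose { branches :: [Either e a] }
      deriving Show

    instance Functor (Superpose e) where
      fmap f (Superpose bs) = Superpose (map (fmap f) bs)

    instance Applicative (Superpose e) where
      pure x = Superpose [Right x]
      mf <*> mx = mf >>= \f -> fmap f mx

    instance Monad (Superpose e) where
      Superpose bs >>= f = Superpose (concatMap step bs)
        where step (Left e)  = [Left e]        -- failed branches persist
              step (Right a) = branches (f a)  -- successes branch further

    failure :: e -> Superpose e a
    failure e = Superpose [Left e]

    -- Choice keeps both alternatives, successes and failures alike:
    orElse :: Superpose e a -> Superpose e a -> Superpose e a
    orElse (Superpose xs) (Superpose ys) = Superpose (xs ++ ys)

    -- branches (pure "alive" `orElse` failure "dead")
    --   == [Right "alive", Left "dead"]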
Again I'd argue that such a thing may indeed be helpful. One of the core ideas in quantum mechanics is the notion that things can exist in superposition. Nondeterminism captures the idea of a superposition among many choices. The notorious example from quantum is that something may be in a superposition not only between different locations but also between existing and not existing (or living and not living for the sake of some poor felines). I'd posit that a computation which nondeterministically returns both success and also failure captures just such a situation. Again, I know too little about quantum to say much of import here, but I think the connection is real. With many kinds of failure (non-unit error type) and the ability to fail in many ways (non-unit exponent) we can capture an entire multidimensional space of nonexistence/failure which could go in superposition with all the kinds and ways of existence/success.
Fuzzy set theory and fractal decision boundaries (and hence all of chaos theory) are other areas which may benefit from the ability to have computations that succeed while failing. This in turn brings us back to the theory of computation. e^j | a^k obviously models the ability of a Turing machine, but does e^j a^k have any extra formal power? That the latter model allows for contradictions (both succeeding and also failing at the same time) may seem to imply that it does, but of course our first-blush intuitions about formal power don't always match reality (e.g. nondeterminism adds formal power at certain levels of the language/automata hierarchy, but not at other higher levels).
In all, an interesting thought experiment. It also makes clear to me that I need to learn more higher math in order to flesh out the connections to complex numbers, negative length strings, and quantum mechanics.
[1] Of which I can conspicuously think of no examples offhand which aren't also simply-fallible. I'm sure there're some out there. We could construct one by having a new kind of List which has a singleton constructor instead of nil (some tree ADTs are similar), though I'm not sure how "natural" these are for expressing any particular variety of computation in practice; a sketch follows.
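For what it's worth, here is a minimal sketch of that singleton-based list (it's the same shape base now ships as Data.List.NonEmpty): as a computation it is nondeterministic but infallible, with results in a^k for k >= 1.

    data NonEmpty a = a :| [a]

    instance Functor NonEmpty where
      fmap f (x :| xs) = f x :| map f xs

    instance Applicative NonEmpty where
      pure x = x :| []
      mf <*> mx = mf >>= \f -> fmap f mx

    instance Monad NonEmpty where
      (x :| xs) >>= f = y :| (ys ++ concatMap (toList . f) xs)
        where y :| ys          = f x
              toList (z :| zs) = z : zs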