In formal treatments of lazy functional languages like Haskell, appeal is often made to domain theory. For those who aren't familiar, a domain is nothing more than a set equipped with a partial order that obeys certain laws. The ordering captures a notion of "informativeness" whereby "greater" elements are more defined, or better known, than "lesser" elements. One of those laws is that the domain must have an element, ⊥, which is less than all the others (and therefore contains no information other than which domain you're in).
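To make the ordering concrete, here is roughly how it plays out for lazy pairs in Haskell (just an illustrative sketch; the names p0 through p3 are mine):

```haskell
-- Four points in the domain of lazy (Int, Int) pairs, each strictly more
-- informative (more defined) than the one before it.
p0, p1, p2, p3 :: (Int, Int)
p0 = undefined               -- ⊥: nothing is known
p1 = (undefined, undefined)  -- the pair constructor is known, its components are not
p2 = (1, undefined)          -- the first component is known as well
p3 = (1, 2)                  -- fully defined

-- Forcing just the outermost constructor separates p0 from the rest:
-- p0 `seq` () is ⊥ (in GHC, an "undefined" error), while p1 `seq` (),
-- p2 `seq` (), and p3 `seq` () all yield ().
```

Note that not every pair of elements is comparable: (1, undefined) and (undefined, 2) each contain information the other lacks, so neither is below the other.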
Another way to think about domain theory is from the context of logic programming. The domains we often encounter in functional programming are defined structurally. That is, given an algebraic data type, we can construct a domain for the type where the elements of the domain are generated by its initial algebra on a set of variables representing unknown substructure. Indeed, this is usually the domain we are most interested in. In this case, ⊥ is simply a free variable (and thus, is actually ⊤). If we treat the variables as logic variables, then the domain ordering is defined by the subsumption relation induced by unification (aka pattern matching). Thus, a domain is simply the set of all patterns we could write for some type, and the subsumption relation between those patterns.
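To make that concrete, here is a sketch of the pattern domain for Peano naturals (data Nat = Zero | Succ Nat); the names Pat and subsumes are my own:

```haskell
-- Patterns over Peano naturals, with Var standing for unknown substructure
-- (a logic variable). Var is the least-informative pattern, i.e. the ⊥ of
-- this domain.
data Pat = Var        -- matches anything
         | PZero      -- the pattern Zero
         | PSucc Pat  -- the pattern (Succ n), with a sub-pattern for n
         deriving Show

-- p `subsumes` q holds when every term matching q also matches p, i.e. when
-- q is at least as informative as p. Subsumption is the domain ordering.
subsumes :: Pat -> Pat -> Bool
subsumes Var       _         = True
subsumes PZero     PZero     = True
subsumes (PSucc p) (PSucc q) = subsumes p q
subsumes _         _         = False
```

So Var `subsumes` PSucc PZero (that is, ⊥ ⊑ Succ Zero), while PZero and PSucc Var are incomparable.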
In Haskell we often conflate "⊥" the domain element we know nothing about with "⊥" the non-terminating computation. There are natural reasons for doing so, but it's unfortunate because it loses a critical distinction between what is already known and what is knowable. This raises questions about how much of our domain-theoretic training we can carry over to total lazy languages, a topic I've become quite interested in lately.
The idea that non-termination and free variables are the same comes from the observation that if we try to pattern match on an unbound variable then we will deadlock, waiting for that variable to become bound to something before we can proceed. In a fully interpreted language this holds up fairly well. However, in languages which aren't fully interpreted (e.g. dependently typed languages, languages with non-ground terms, parallel data-flow languages) it doesn't hold up, because variables serve multiple roles. Depending on the context, we will want to think of a variable either as being bound to no element of the domain, or as being bound nondeterministically to every element of the domain. The former matches our functional programming intuitions about non-termination, whereas the latter matches our logic programming intuitions about non-ground terms presenting the type of all the terms they subsume.
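The two readings are easy to contrast in Haskell notation (illustrative names only):

```haskell
-- Reading 1: the variable is bound to nothing. Case analysis waits for
-- information that never arrives; in Haskell the scrutinee is simply ⊥
-- (here, an "undefined" error when forced).
stuck :: Int
stuck = case (undefined :: Bool) of
          False -> 0
          True  -> 1

-- Reading 2: the variable is bound, nondeterministically, to every element
-- of the domain. Case analysis then means considering every branch, i.e.
-- reasoning by cases.
byCases :: (Bool -> a) -> [a]
byCases f = [f False, f True]
-- e.g. byCases (\b -> if b then 1 else 0) == [0, 1]
```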
In particular, the idea of variables bound to everything lets us consider a hypothetical input: an input about which we know nothing, but which we are permitted to inspect for more information. Case analysis on hypothetical variables is the same as reasoning by cases in proof theory. These are different from abstract variables, which are values about which we know nothing and which we are not permitted to inspect. Loosely speaking, hypothetical variables are those introduced by universal quantification, whereas abstract variables are those introduced by existential quantification. (However, existentially bound variables may still be hypothetically analyzable rather than being held abstract; it depends on the situation.)
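In Haskell-ish terms (just a sketch; the names and the Opaque type are mine):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A hypothetical variable: we know nothing about the Bool we are handed,
-- but we are permitted to inspect it, so reasoning by cases is available.
hyp :: Bool -> String
hyp False = "no"
hyp True  = "yes"

-- An abstract variable: the existentially quantified witness packaged inside
-- Opaque can only be fed to the operation bundled alongside it; its type is
-- hidden, so it can never be cased on directly.
data Opaque = forall a. Opaque a (a -> String)

useOpaque :: Opaque -> String
useOpaque (Opaque x f) = f x
```

Whether an existential witness is held abstract like this or opened up for hypothetical analysis is exactly the situational choice mentioned above.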
In a total language we are not permitted to write computations that we cannot prove will terminate. Thus, we can never obtain a non-terminating 'value' inhabiting the bottom element of a domain. However, we can still use bottom to represent things we hold abstract or whose evaluation we never force. So, in a total lazy language, domain theory should still have a role in strictness analysis and certain other kinds of formal reasoning. Given a domain we can interpret it hypothetically, as we do when performing strictness analysis or reasoning by cases, or we can interpret it abstractly, as we do when performing termination analysis. This conflict feels very similar to the conflict between parametricity and dependency. I wonder if any work has been done on this domain-theoretic duality?
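As one concrete instance of the hypothetical reading, here is a toy strictness analysis over the familiar two-point abstract domain; the Expr language and every name in it are invented for illustration:

```haskell
-- Two-point abstract domain: ABot ⊑ ATop, where ABot means "certainly ⊥"
-- and ATop means "possibly defined".
data Abs = ABot | ATop deriving (Eq, Show)

meet, join :: Abs -> Abs -> Abs
meet ATop ATop = ATop
meet _    _    = ABot
join ABot ABot = ABot
join _    _    = ATop

-- A tiny language of one-argument function bodies; Arg is the argument.
data Expr = Arg | Lit Int | Add Expr Expr | IfZero Expr Expr Expr

-- Abstract interpretation: given only an abstract value for the argument,
-- what do we learn about the result?
absEval :: Abs -> Expr -> Abs
absEval a Arg            = a
absEval _ (Lit _)        = ATop
absEval a (Add x y)      = absEval a x `meet` absEval a y  -- (+) is strict in both arguments
absEval a (IfZero c t e) = absEval a c `meet` (absEval a t `join` absEval a e)

-- A function is strict iff feeding it a hypothetical ⊥ yields ⊥.
isStrict :: Expr -> Bool
isStrict body = absEval ABot body == ABot

-- e.g. isStrict (Add Arg (Lit 1))            == True
--      isStrict (IfZero (Lit 0) (Lit 1) Arg) == False
```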