2013-02-02 10:47 pm

Why integrate?

Yesterday I was trying to explain some of the paradoxes of probability theory to a friend who disbelieves in the real numbers. It's not always clear whether this disbelief is actual, or just an affectation; constructivist and devil's-advocate that he is, it could go either way really. In any case, he's always amusing to spar with (not that I have any especial concern for the un/reality of the reals). Midway through, Dylan Thurston came over to listen in and raised a question I've mulled over before but have been turning over again and again since then. What is it that I mean when describing a space (as opposed to a function, etc.) as "continuous"?

The knee-jerk response is that continuity is the antithesis of discreteness. That is, given some collection or space or other arrangement of things, often we are interested in accumulating some value over the lot of them. In the easiest setting, finite collections, we just sum over each element of that collection. But this process isn't limited to finite collections; we sum over infinite collections like the natural numbers with nary a care, and use the same large sigma notation to do so. So mere countable infinity isn't a problem for the notion of summation or accumulation. In programming we oft take our infinitudes even further. There's nothing special about the natural numbers. We can sum over the collection of trees, or lists, or any other polynomial type with just as little (or as much) concern for how many values are in these types as for how many natural numbers there are. But at some point this breaks down. Somewhere between the polynomial types and the real numbers, everything falls apart. We cannot in any meaningful sense use large sigma to accumulate a value over the vast majority of subsets of the reals. Instead we must turn to a different notion of accumulation: integration. For discrete collections summation is fine, but when we enter the continuous setting we must switch to integration.
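To make the discrete side concrete, here's a minimal Haskell sketch (the names sumFinite, partialSums, and sumTree are mine, purely illustrative): the same fold-style accumulation handles a finite list, a convergent series over the naturals, and a polynomial type of binary trees.

    -- Summing a finite collection is an ordinary fold.
    sumFinite :: Num a => [a] -> a
    sumFinite = foldr (+) 0

    -- Summing over the naturals: the partial sums of a convergent
    -- series, e.g. the sum over n of (1/2)^n, approach a limit (here, 2).
    partialSums :: [Double]
    partialSums = scanl1 (+) [ (1/2)^n | n <- [0 :: Integer ..] ]

    -- A polynomial type: binary trees with values at the leaves.
    data Tree a = Leaf a | Branch (Tree a) (Tree a)

    -- Accumulating over a tree is again just a fold; the shape of the
    -- collection is irrelevant, so long as it is discrete.
    sumTree :: Num a => Tree a -> a
    sumTree (Leaf x)     = x
    sumTree (Branch l r) = sumTree l + sumTree r

For instance, take 5 partialSums yields [1.0, 1.5, 1.75, 1.875, 1.9375], creeping toward 2. No analogous enumerate-and-fold trick is available once the index set is uncountable.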

The problem, of course, is that integrals are not really well-defined. Regardless of your choice of formalization, they all run into paradoxes and problems[1]. One of these problems rears its head in that probability-theoretic paradox I was attempting to explain: namely, the conception of inhabited sets of measure zero. The paradox arises even before probabilities enter the picture. Colloquially, an integral is the area under a curve over some interval of the curve's domain. How do we get the area of some curvy shape? Well, we can approximate the shape with a bunch of rectangles, and our approximation becomes better and better as those rectangles become thinner and thinner. In the limit, this approximation matches the actual shape, and so we can get its area. But, in the limit, those rectangles have thickness zero; and thus, they must have area zero. So how is it that summing all those slivers with area zero can ever result in a non-zero total area? Such is the paradox.
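The paradox is easy to watch happen numerically. Here is a minimal sketch of the rectangle approximation as a left-endpoint Riemann sum (the name riemann is mine, not from any library): each individual rectangle's area shrinks toward zero as the rectangles multiply, yet the total creeps toward the true area.

    -- Approximate the area under f over [a,b] with n rectangles of
    -- width (b - a) / n, sampling f at the left endpoint of each.
    riemann :: Int -> (Double -> Double) -> Double -> Double -> Double
    riemann n f a b = sum [ f (a + fromIntegral i * dx) * dx
                          | i <- [0 .. n - 1] ]
      where dx = (b - a) / fromIntegral n

    -- The area under x^2 over [0,1] is exactly 1/3:
    --   riemann 10    (^2) 0 1  ~>  0.285
    --   riemann 100   (^2) 0 1  ~>  0.32835
    --   riemann 10000 (^2) 0 1  ~>  0.33328...
    -- Each summand f x * dx tends to zero, and yet the sum tends to 1/3.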

But pulling things back to the original question: what does it mean for a space to be continuous in the first place? What is it, exactly, that causes summation to fail and forces us into this problematic regime of integration? Is the notion of continuity (or of the reals, or of infinite divisibility, however you want to phrase it) itself a hack? And if it is a hack, then how do we get away from it? Classical mathematicians are fond of hacks; but while I respect a good hack, as a constructivist I prefer to be on surer footing than simply believing something must be the case because the alternative is too absurd to conceive of. So, why do we integrate? I've yet to find a reason I can believe in...

[1] One can make the same complaint about logics (and other areas of mathematics) too. Impredicativity is much the same as the situation in probability theory; the idea is so simple and obvious that we want to believe in it, but to do so naively opens the door to demonstrable unsoundness. The liar's paradox is another close analogy, what with making perfect sense except in the limit where everything breaks down. Indeed, the paradoxes of impredicativity are of the exact same sort as the liar's paradox. But in spite of all these issues, we do not usually say that logic is ill-defined; so perhaps my judgment of calculus is unfair. Though, to my knowledge, people seem to have a better handle on the problems of logic. Or perhaps it's just that the lack of consensus has led to the balkanization of logic, with constructivists and classicalists avoiding one another, whereas in calculus the different sides exchange ideas more freely and so the confusion and disagreements are more in the open...

2012-01-26 07:29 pm

On Laplace's method of reasoning under uncertainty

Only recently, thanks to the computer, has it become feasible to solve real, nontrivial problems of reasoning from incomplete information, in which we use probability theory as a form of logic in situations where both intuition and "random variable" probability theory would be helpless. This has brought out the facts in a way that can no longer be obscured by arguments over philosophy. One can always argue with a philosophy; it is not so easy to argue with a computer printout, which says to us: "Independently of all your philosophy, here are the facts about what this method actually gives when applied."

Daaamn. That's some gettin' told right there.

The above quote (emphasis added) is from the eminently readable Probability in Quantum Theory by E.T. Jaynes, which presents a critique of, and an alternative perspective on, the role of probability within quantum mechanics. If you've any interest in the philosophy of science or the philosophical disputes between frequentism and Bayesianism, even if you've no real knowledge of physics, then I highly recommend reading it. While the frequentist vs Bayesian argument is well known, the details of what is actually at stake are less well known and often quite subtle. I think Jaynes does a good job of highlighting what the argument is really about, and why it is relevant to the future of science (especially physics).

For my part, I've been well indoctrinated into the Bayesian philosophy. This semester I'm taking a course on frequentism, or rather on "experimental methods" as they call it. A professor here has been pushing hard for Bayesian methods in the behavioral sciences, and the professor of my class delights in teasing him about it (though he admits to no investment in the philosophical debate). It's been a very long time since I've seen the frequentist perspective, and I'm always of the opinion that it's good to keep an eye on one's philosophical enemies. I've known that frequentism has long dominated the behavioral sciences, but I must shamefully admit that I'd attributed this to their being "soft" (even as much as I identify with my undergrad training in anthropology and the humanities). However, coming from machine learning, where Bayesianism is de rigueur, one thing I found startling about the article is that in physics too it's apparently the frequentists who've dominated the conversation for decades. Indeed, as Jaynes portrays it, it's the frequentists who ousted Laplace, rather than the other way around as is portrayed in AI/ML circles.