
Yesterday I was trying to explain some of the paradoxes of probability theory to a friend who disbelieves in the real numbers. It's not always clear whether this disbelief is actual, or if it's just an affectation; constructivist and devil's-advocate that he is, it could go either way really. In any case, he's always amusing to spar with (not that I have any especial concern for the un/reality of the reals). Midway through, Dylan Thurston came over to listen in and raised a question I've mulled over before but have been turning over again and again since then. What is it that I mean when describing a space (as opposed to a function etc) as "continuous"?

The knee-jerk response is that continuity is the antithesis of discreteness. That is, given some collection or space or other arrangement of things, often we are interested in accumulating some value over the lot of them. In the easiest setting, finite collections, we just sum over each element of that collection. But this process isn't limited to finite collections; we sum over infinite collections like the natural numbers with nary a care, and use the same large sigma notation to do so. So mere countable infinity isn't a problem for the notion of summation or accumulation. In programming we oft take our infinitudes even further. There's nothing special about the natural numbers. We can sum over the collection of trees, or lists, or any other polynomial type with just as little (or as much) concern for how many values inhabit these types as for how many natural numbers there are. But at some point this breaks down. Somewhere between the polynomial types and the real numbers, everything falls apart. We cannot in any meaningful sense use large sigma to accumulate a value over the vast majority of subsets of the reals. Instead we must turn to a different notion of accumulation: integration. For discrete collections summation is fine, but when we enter the continuous setting we must switch to integration.
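To make the programming claim concrete, here is a minimal sketch in Haskell (my choice of language; the type and names are purely illustrative): the same fold-style accumulation works uniformly over a finite tree and, modulo convergence, over a series indexed by the naturals.

    -- A polynomial type: binary trees carrying values at the nodes.
    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- "Large sigma" over a tree: accumulate every value it contains.
    sumTree :: Num a => Tree a -> a
    sumTree Leaf         = 0
    sumTree (Node l x r) = sumTree l + x + sumTree r

    -- A finite collection poses no problem at all.
    exampleFinite :: Integer
    exampleFinite = sumTree (Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf))  -- 6

    -- Summing over the naturals works just the same, provided the series
    -- converges (here 1/2 + 1/4 + ..., truncated at a cutoff for computation).
    examplePartial :: Double
    examplePartial = sum [ 1 / 2 ^ n | n <- [1 .. 50 :: Int] ]  -- ~1.0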

The problem, of course, is that integrals are not really well-defined. Regardless of your choice of formalization, they all run into paradoxes and problems[1]. One of these problems rears its head in that probability-theoretic paradox I was attempting to explain: namely, the conception of inhabited sets of measure zero. The paradox arises even before probabilities enter the picture. Colloquially, integrals are the area under a curve over some interval of the curve's domain. How do we get the area of some curvy shape? Well, we can approximate the shape by making a bunch of rectangles, and our approximation becomes better and better as those rectangles become thinner and thinner. In the limit, this approximation matches the actual shape and so we can get its area. But, in the limit, those rectangles have thickness zero; and thus, they must have area zero. So how is it that summing all those slivers with area zero can ever result in a non-zero total area? Thus the paradox.
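For concreteness, here is a hedged sketch of the rectangle picture in Haskell; the choice of f(x) = x^2 on [0,1] and the left-endpoint rule are my own illustrative assumptions, not anything from the discussion above. Each sliver's area shrinks toward zero as the rectangles thin out, yet the totals converge to 1/3.

    -- Approximate the area under f on [0,1] with n rectangles of width 1/n.
    riemannSum :: Int -> Double
    riemannSum n = sum [ f (fromIntegral i * w) * w | i <- [0 .. n - 1] ]
      where
        w   = 1 / fromIntegral n   -- width of each sliver
        f x = x * x

    -- riemannSum 10    ~ 0.285
    -- riemannSum 10000 ~ 0.33328
    -- Each individual term f(x_i) * w tends to zero as n grows,
    -- even though the totals tend to 1/3.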

But pulling things back to the original question: what does it mean for a space to be continuous in the first place? What is it ---exactly--- that causes summation to fail and forces us into this problematic regime of integration? Is the notion of continuity, or of the reals, or of infinite divisibility, or however you want to phrase it, itself a hack? And if it is a hack, then how do we get away from it? Classical mathematicians are fond of hacks but, while I respect a good hack, as a constructivist I prefer surer footing than simply believing something must be the case because the alternative is too absurd to conceive of. So, why do we integrate? I've yet to find a reason I can believe in...

[1] One can make the same complaint about logics (and other areas of mathematics) too. Impredicativity is much the same as the situation in probability theory; the idea is so simple and obvious that we want to believe in it, but to do so naively opens the door to demonstrable unsoundness. The liar's paradox is another close analogy, what with making perfect sense except in the limit where everything breaks down. Indeed, the paradoxes of impredicativity are of the exact same sort as the liar's paradox. But in spite of all these issues, we do not usually say that logic is ill-defined; so perhaps my judgment of calculus is unfair. Though, to my knowledge, people seem to have a better handle on the problems of logic. Or perhaps it's just that the lack of consensus has led to the balkanization of logic, with constructivists and classicalists avoiding one another, whereas in calculus the different sides exchange ideas more freely and so the confusion and disagreements are more in the open...

Date: 2013-02-04 03:20 pm (UTC) From: [identity profile] qedragon.livejournal.com
Lebesgue integration may offer a way out for some functions (namely, those with discrete outputs, even if not connected), as there you calculate a weighted sum over the measures of the supports of each output. This is one way to argue that the integral from 0 to 1 of the indicator function of the irrationals is 1, for example. Sadly, while you never quite end up with rectangles of width zero, you do something sufficiently equivalent in general (IIRC).
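As a toy illustration of that weighted sum (the representation below is my own simplification, treating a simple function as a finite list of (output value, measure of its preimage) pairs):

    -- Lebesgue-style accumulation for a simple function: weight each output
    -- value by the measure of the set on which it is taken.
    lebesgueSimple :: [(Double, Double)] -> Double
    lebesgueSimple levels = sum [ v * mu | (v, mu) <- levels ]

    -- Indicator of the irrationals on [0,1]: value 1 on a set of measure 1
    -- (the irrationals), value 0 on a set of measure 0 (the rationals).
    indicatorIrrationals :: Double
    indicatorIrrationals = lebesgueSimple [(1, 1), (0, 0)]  -- = 1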

But I'm not so sure there's a paradox after all: we never actually consider the rectangles of width zero (or the analog in Lebesgue), as we are taking the limit as (something) approaches zero. All the paradox really says is that the summation is discontinuous at width=0, which is possibly less alarming?
