Over on Reddit there's a discussion where one commenter admitted:

"the whole `(^)` vs `(^^)` vs `(**)` [distinction in Haskell] confuses me."

It's clear to me, but it's something they don't teach in primary school, and it's something most programming languages fail to distinguish. The main problem, I think, for both primary ed and for other PLs, is that they have an impoverished notion of what "numbers" could be, and this leads to artificially conflating things which should be kept distinct. I wrote a reply over on Reddit, hoping to elucidate the distinction, but I thought I should repeat it in a more persistent venue so it can reach a wider audience.

First, let us recall the basic laws of powers:

a^0 = e
a^1 = a
(a^x)^y = a^(x*y)
(a^x)*(a^y) = a^(x+y)
(a*b)^x = (a^x)*(b^x)
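As a quick sanity check, these laws can be spot-checked in Haskell, taking `Integer` as the underlying algebra and `Natural` as the powers (a sketch of mine, not part of the original discussion):

```haskell
import Numeric.Natural (Natural)

-- Spot-check the five laws of powers over Integer, with Natural powers.
checkLaws :: Integer -> Integer -> Natural -> Natural -> Bool
checkLaws a b x y = and
  [ a ^ (0 :: Natural) == 1          -- a^0 = e (here e = 1)
  , a ^ (1 :: Natural) == a          -- a^1 = a
  , (a ^ x) ^ y == a ^ (x * y)       -- (a^x)^y = a^(x*y)
  , (a ^ x) * (a ^ y) == a ^ (x + y) -- (a^x)*(a^y) = a^(x+y)
  , (a * b) ^ x == (a ^ x) * (b ^ x) -- (a*b)^x = (a^x)*(b^x), needs commutativity
  ]
```

Note that the last law relies on `Integer` being commutative, a point which comes up again below.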

There are two very important things to notice here. First off, our underlying algebra (the `a`s and `b`s) only needs to have the notion of multiplication, `(*)`, with identity, `e`. Second, our powers (the `x`s and `y`s) have an entirely different structure; in particular, they form at least a semiring `(+,0,*,1)`. Moreover, if we're willing to give up some of those laws, then we can weaken these requirements. For example, if we get rid of `a^0 = e` then we no longer need our underlying algebra to be a monoid; being a semigroup is enough. And actually, we don't even need it to be a semigroup: we don't need full associativity, since all we need for this to be consistent is power-associativity.

So we can go weaker and more abstract, but let's stick here for now. Any time we have a monoid, we get a notion of powers for free. This notion is simply iterating our multiplication, and we use the commutative semiring `(Natural,+,0,*,1)` in order to represent our iteration count. This is the notion of powers that Haskell's `(^)` operator captures. Unfortunately, since Haskell lacks a standard `Natural` type (or `Semiring` class), the type signature for `(^)` lies and says we could use `Integer` (or actually `Num`, which is the closest thing we have to `Ring`), but the documentation warns that negative powers will throw exceptions.
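This free notion of monoid powers can be written out directly. The name `mpow` is my own; in modern `base` roughly the same operation is provided by `stimes`/`mtimesDefault` from `Data.Semigroup`:

```haskell
import Numeric.Natural (Natural)

-- Monoid powers, by iterating the multiplication.
-- The exponent is drawn from the commutative semiring of naturals,
-- so a negative power cannot even be expressed.
mpow :: Monoid a => a -> Natural -> a
mpow _ 0 = mempty
mpow a n = a <> mpow a (n - 1)
```

For the list monoid, `mpow "ab" 3` concatenates three copies of `"ab"`; no group structure is assumed anywhere.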

Moving on to the `(^^)` operator: suppose our monoid is actually a group, i.e., it has a notion of reciprocals. Now we need some way to represent those reciprocals; so if we add subtraction to our powers (yielding the commutative ring `(Integer,+,-,0,*,1)`), we get the law `a^(-x) = 1/(a^x)`. The important thing here is to recognize that not all monoids form groups. For example, take the monoid of lists with concatenation. We do have a `(^)` notion of powers, which may be more familiar as the `replicate` function from the Prelude. But what is the reciprocal of a string? What is the inverse of concatenation? The `replicate` function simply truncates things and treats negative powers as if they were zero, which is on par with `(^)` throwing exceptions. It is because not all monoids are groups that we need a notion of powers for monoids (i.e., `(^)`) which is different from the notion of powers for groups (i.e., `(^^)`). And though Haskell fails to do so, we can cleanly capture this difference in the type signatures for these operations.
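The extra structure that `(^^)` demands can be made explicit with a hypothetical `Group` class; none of these names are standard, this is just a sketch of how the types would capture the difference:

```haskell
import Numeric.Natural (Natural)

-- A hypothetical Group class: a monoid with reciprocals.
class Monoid a => Group a where
  inverse :: a -> a

-- Monoid powers, as before: iterate the multiplication.
mpow :: Monoid a => a -> Natural -> a
mpow _ 0 = mempty
mpow a n = a <> mpow a (n - 1)

-- Group powers: subtraction in the exponent is paid for by
-- reciprocals in the algebra, per the law a^(-x) = 1/(a^x).
gpow :: Group a => a -> Integer -> a
gpow a n
  | n >= 0    = mpow a (fromIntegral n)
  | otherwise = inverse (mpow a (fromIntegral (negate n)))
```

The list monoid simply has no `Group` instance, so the type system would reject `gpow "ab" (-1)` outright, rather than truncating or throwing.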

Further up, we get another notion of powers which Haskell doesn't highlight; namely, the notion of powers that arises from the field `(Rational,+,-,0,*,/,1)`. To get here, we take our group and add to it the ability to take roots. The fractions in powers are now taken to represent the roots, as in the law `a^(1/y) = root y a`. Again, note that there's a vast discrepancy between our underlying algebra, which has multiplication, reciprocals, and roots, and our powers, which have addition, subtraction, multiplication, and division.
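On a concrete carrier like `Double`, where the floating-point operations approximate this structure, the root law can be observed directly; a small sketch of mine:

```haskell
-- Spot-check a^(1/y) = root y a on Doubles, up to floating-point error.
-- Here sqrt plays the role of "root 2".
rootLawHolds :: Double -> Bool
rootLawHolds a = abs (a ** (1 / 2) - sqrt a) < 1e-9
```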

Pulling it back a bit, what if our monoid has a notion of roots, but does not have inverses? Here our powers form a semifield; i.e., a commutative semiring with multiplicative inverses; e.g., the non-negative rationals. This notion is rather obscure, so I don't fault Haskell for lacking it, though it's worth mentioning.

Finally, `(**)` is another beast altogether. In all the previous examples of powers there is a strong distinction between the underlying algebra and the powers over that algebra. But here we get exponentiation; that is, our algebra has an *internal* notion of powers over *itself!* This is remarkably powerful and should not be confused with the basic notion of powers. Again, this is easiest to see by looking at where it fails. Consider multiplication of (square) matrices over some semiring. This multiplication is associative, so we can trivially implement `(^)`. Assuming our semiring is actually a commutative ring, then almost all (though not all) matrices have inverses, so we can pretend to implement `(^^)`. For some elements we can even go so far as taking roots, though we run into the problem of there being multiple roots. But as for exponentiation? It's not even clear that `(**)` should be meaningful on matrices. Or rather, to the extent that it is meaningful, it's not clear that the result should be a matrix.
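To make the matrix example concrete, here is a minimal sketch with 2x2 integer matrices (the names are mine): the monoid structure hands us `(^)`-style powers for free, while nothing in it suggests what `(**)` should mean.

```haskell
-- 2x2 matrices over Integer: a monoid under matrix multiplication.
type M2 = ((Integer, Integer), (Integer, Integer))

mmul :: M2 -> M2 -> M2
mmul ((a, b), (c, d)) ((e, f), (g, h)) =
  ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

identity2 :: M2
identity2 = ((1, 0), (0, 1))

-- Iterated multiplication: the (^) notion of powers, and nothing more.
mpow2 :: M2 -> Int -> M2
mpow2 _ 0 = identity2
mpow2 m n = mmul m (mpow2 m (n - 1))
```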

N.B., I refer to `(**)` as exponentials, in contrast to `(^)`, `(^^)`, etc. as powers, following the standard distinction in category theory and elsewhere. Do note, however, that this notion of exponentials is different again from the notion of the antilog `exp`, i.e., the inverse of `log`. The log and antilog are maps between additive monoids and multiplicative monoids, with all the higher structure arising from that. We can, in fact, give a notion of antilog for matrices if we assume enough about the elements of those matrices.
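That last remark can be illustrated: assuming `Double` entries, a truncated power series gives an antilog (matrix exponential) for 2x2 matrices. A rough sketch of mine, not an efficient or numerically careful implementation:

```haskell
type M2D = ((Double, Double), (Double, Double))

mmulD :: M2D -> M2D -> M2D
mmulD ((a,b),(c,d)) ((e,f),(g,h)) =
  ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

maddD :: M2D -> M2D -> M2D
maddD ((a,b),(c,d)) ((e,f),(g,h)) = ((a+e, b+f), (c+g, d+h))

mscaleD :: Double -> M2D -> M2D
mscaleD s ((a,b),(c,d)) = ((s*a, s*b), (s*c, s*d))

-- Antilog via the truncated series exp(M) = sum over n of M^n / n!.
mexp :: M2D -> M2D
mexp m = foldl1 maddD [ mscaleD (1 / factorial n) (mpowD m n) | n <- [0 .. 20] ]
  where
    factorial n = product [1 .. fromIntegral n]
    mpowD _ 0 = ((1, 0), (0, 1))
    mpowD x n = mmulD x (mpowD x (n - 1))
```

For a nilpotent matrix the series terminates, so e.g. `mexp ((0,1),(0,0))` comes out as `((1,1),(0,1))` up to rounding.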


## Comment from winterkoninkje
Date: 2013-07-22 09:43 pm (UTC)

That works fairly well. But, in general, the left argument can be far simpler. All of the power operations only assume, on the left, the existence of a single binary operator (semigroup, monoid, group, ...); they don't require two (semiring, ring, field, ...). This means the Haskell types are far too stringent in requiring `Num` (~= ring) or `Fractional` (~= field). Mathematically, these operators should have types (equivalent to ones) where the left argument carries only the one-operator structure and the powers carry the semiring/ring structure; here the natural numbers with addition and multiplication are the most salient commutative semiring, and the integers with addition/subtraction and multiplication are the most salient commutative ring. The presumed `Ring` and `Semiring` classes need not be commutative, though commutativity is required to validate the `(a^x)^y = (a^y)^x` law.

The reason I think it's important to really drive this point home is because the whole idea of powers extends to other monoids besides just multiplication. For instance, consider Presburger arithmetic: the natural numbers with addition only. In PA we get a weak notion of "multiplication", which is specifically the one that arises from addition-powers. The fact that it's an addition-power rather than true multiplication captures much about its weakness. For another example, we often talk about tensor-powers: the notion of powers that arises from the tensor product. For yet another, we often use notation like `f^n(x)` to mean that we should iterate the function some number of times. Naturally, endomorphisms should use `(^)` whereas automorphisms can use `(^^)`.

(Edited 2013-07-22 09:57 pm UTC)
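The type signatures the comment refers to did not survive extraction; under hypothetical `Semiring` and `Ring` classes (none of these exist in `base`), they would presumably read something like the commented signatures below:

```haskell
import Numeric.Natural (Natural)

-- Hypothetical classes sketching the comment's intended signatures.
class Semiring r where
  zero, one   :: r
  plus, times :: r -> r -> r

class Semiring r => Ring r where
  minus :: r -> r -> r

-- The intended shapes, with Natural/Integer the most salient instances:
--   (^)  :: (Monoid a, Semiring r) => a -> r -> a
--   (^^) :: (Group  a, Ring     r) => a -> r -> a

instance Semiring Natural where
  zero = 0; one = 1; plus = (+); times = (*)
```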