winterkoninkje: Shadowcrane (Default)

I ended up spending this weekend learning a lot about topos theory. Why? I have no idea. But for whatever reason, this happened. Like many areas of category theory, the biggest hurdle is figuring out the terminology— in particular, figuring out what terminology you actually need to know now vs the terminology you can put off learning until later.

So, with that in mind, I threw together a quick sketch of topos/logos theory. I should emphasize that this is only a very quick sketch. It shows when one kind of category is also another kind of category, but for the most part[1] it doesn't say what the additional characteristics of the subkind are (unlike in my other terminology maps). One major thing missing from this sketch is a notation for when one kind of category is exactly the intersection of two other kinds (e.g., a pretopos is precisely an extensive exact category). Once I figure out how to show that in PSTricks, I'll post a new version. As before, the dashed lines indicate things you get for free (e.g., every pretopos is coherent for free). The dotted line from Heyting categories to well-powered geometric categories is meant to indicate that, technically, all WPG categories are also Heyting categories, but the Heyting structure is not preserved by geometric functors and therefore should not be "relied on" when reasoning about WPG categories as a whole. And finally, the table at the bottom shows what operators exist in the internal logic of these various categories, as well as what structure the subobject posets have.

Despite being far less polished than my other maps, hopefully it'll give you enough to go on so that you can read through the pages at the nLab in the correct order.

[1] For the arcs which are explained, the following abbreviations are used: "So.C." = subobject classifier; "AMC" = axiom of multiple choice; "NNO" = natural numbers object; "LCCC" = locally cartesian closed.

Saturday night I had a fainting spell. Sunday my eyes were burning, I was feverish, weak, and had the beginnings of a migraine. Monday was completely lost in the blaze of a migraine. Tuesday I was starting to feel better— and then, nope; threw up that night. Wednesday I awoke with what felt like a raging sinus infection; spent the whole day in a haze of sudafed and ibuprofen, and went through literally an entire box of tissues.

Starting to feel a little better this morning, so I figure this is the end. It was a nice life. Y'might want to bar your doors today, just in case it's locusts.

I've been mucking about with real analysis lately. One of the things I've always hated about real analysis (and complex analysis, and other calculus) is the unrepentant lack of types. It's easy to get confused about what's going on and what goes where when you think everything is a real number (or complex number, or tensors thereof). In particular, because of this "everything is a real array" assumption, libraries for performing optimization etc end up with utterly inscrutable APIs and other nastiness like needing to manually flatten your parameters into a single array. So this has got me thinking, what should the API for optimization libraries look like? Which in turn brings me back to an age-old conundrum I've toyed around with before, namely: what is the type of the derivative operator?

From the "everything is a real number" perspective we'd say deriv : (ℝ ↝ ℝ) → ℝ → ℝ, where the A ↝ B function space consists of "nice" functions (e.g., continuous, smooth, analytic, whatever). However, this only works out because we're conflating a number of important distinctions. For example, for any affine space A we have the difference space ∆A— that is, values in ∆A denote differences or differentials of values in A (e.g., for any a : A and b : A we have a-b : ∆A). However, it turns out that there's a bijection between ℝ and ∆ℝ. It also turns out that the difference of arrays, ∆(ℝn), is isomorphic to an array of differences, (∆ℝ)n. Putting these together we get the common conflation between points (ℝn) and vectors (∆(ℝn)). For another example, consider linear transformations, written A ⊸ B. We have a bijection between linear transformations (ℝm ⊸ ℝn) and matrices (ℝn×m), and hence more specifically a bijection between ℝ and ℝ ⊸ ℝ.
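To make the point/difference distinction concrete, here's a little Haskell sketch of affine spaces with an associated difference space. The class and operator names (Affine, Diff, (.-.), (.+^)) are made up for this post— the vector-space package has a similar AffineSpace class— so treat this as an illustration, not an established API:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- A sketch of affine spaces with an associated difference space.
class Affine p where
    type Diff p
    (.-.) :: p -> p -> Diff p    -- a - b lives in the difference space
    (.+^) :: p -> Diff p -> p    -- translate a point by a difference

-- For the reals, points and differences happen to coincide:
-- this is the bijection between ℝ and ∆ℝ.
instance Affine Double where
    type Diff Double = Double
    (.-.) = (-)
    (.+^) = (+)

-- Differences of pairs are pairs of differences: ∆(A×B) ≅ ∆A × ∆B,
-- the binary version of ∆(ℝn) ≅ (∆ℝ)n.
instance (Affine a, Affine b) => Affine (a, b) where
    type Diff (a, b) = (Diff a, Diff b)
    (x1, y1) .-. (x2, y2) = (x1 .-. x2, y1 .-. y2)
    (x, y) .+^ (dx, dy)   = (x .+^ dx, y .+^ dy)
```

The type distinction is what keeps us honest: a point and a perturbation of a point only coincide for ℝ because of that particular bijection.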

So here's what I'm currently thinking:

deriv : (F X ↝ Y) → F X → F(∆X ⊸ ∆Y)

For now, just ignore F; instantiate it as the identity functor. Thus, the total derivative of f : X ↝ Y at the point x : X is a linear transformation from minor perturbations about x to minor perturbations about f x. We can think of ∆X ⊸ ∆Y as being the "slope" of this mapping between minor perturbations. In particular, when X=ℝ and Y=ℝ, we get a bijection between ∆ℝ ⊸ ∆ℝ and ℝ, so this coincides with the traditional notion of the slope of a one-dimensional curve.
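As a sanity check on that reading, here's a numerical sketch of the X = Y = ℝ case, using the bijection between ∆ℝ ⊸ ∆ℝ and ℝ to represent the resulting linear map by its slope. Central finite differences stand in for real (symbolic or automatic) differentiation, and the name derivAt and the epsilon are my own:

```haskell
-- deriv at X = Y = ℝ: the derivative at a point x is the linear map
-- sending a perturbation dx about x to a perturbation about f x.
derivAt :: (Double -> Double) -> Double -> (Double -> Double)
derivAt f x = \dx -> slope * dx    -- the linear map ∆ℝ ⊸ ∆ℝ
  where
    eps   = 1e-6
    slope = (f (x + eps) - f (x - eps)) / (2 * eps)
```

So derivAt (\x -> x*x) 3 is the linear map that scales a perturbation by (approximately) 6, which is the usual slope of x² at 3.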

Now, let's think about the gradient rather than the total derivative. Assume that F is a "nice" functor (i.e., representable, traversable, etc). Because the functor is representable, we have an isomorphism between F A and I → A (natural in A), where I is the type of positions/indices in F. Thus, the gradient of a function g : F X ↝ Y at the point z : F X is essentially a function from F-indices, i : I, to the i-th partial derivative of g at z. Why partial derivative? Because the linear transformation only takes ∆X as an argument —not ∆(F X)—, thus we can only modify a single "scalar" at a time, and the other scalars in z must remain fixed.
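Here's a toy version of that story with F instantiated to pairs, whose position type I is Bool. The names (index, tabulate, gradientAt) mirror the representable-functor vocabulary but the code itself is only a numerical sketch, again with finite differences standing in for real differentiation:

```haskell
-- A toy representable functor: pairs, with positions indexed by Bool.
data Pair a = Pair a a deriving Show

index :: Pair a -> Bool -> a
index (Pair x _) False = x
index (Pair _ y) True  = y

tabulate :: (Bool -> a) -> Pair a
tabulate h = Pair (h False) (h True)

-- Numeric gradient: a function from positions i to the i-th partial
-- derivative, perturbing only the scalar at position i and leaving
-- the rest of z fixed.
gradientAt :: (Pair Double -> Double) -> Pair Double -> (Bool -> Double)
gradientAt g z i = (g (bump eps) - g (bump (-eps))) / (2 * eps)
  where
    eps    = 1e-6
    bump d = tabulate (\j -> index z j + (if j == i then d else 0))

-- Example: g(x,y) = x*y, so the partials at (3,4) are 4 and 3.
gExample :: Pair Double -> Double
gExample (Pair x y) = x * y
```

The point of the representable-functor phrasing is exactly what the type shows: the gradient is not a "vector" primitively, it's a function from positions to partial derivatives.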

Edit 2014.02.17 (#1): Actually, things are a bit more complicated than the above. For example, consider a function f : ℝn ↝ ℝ. The derivative of f with respect to its vector of inputs is not a vector of partial derivatives— rather, it's a covector of partial derivatives! (i.e., deriv (f : ℝn×1 ↝ ℝ) : ℝn×1 → ℝ1×n.) Thus, we should really have that the return type of deriv is some Fop(∆X ⊸ ∆Y) where Fop is some sort of "dual" of F. It's not clear a priori which notion of "duality" is in play here, however.

Edit 2014.02.17 (#2): And here's what I'm currently thinking about how to incorporate the Jacobian into the above. Consider this particular instance of the derivative operator, deriv : (F X ↝ G Y) → F X → Fop(∆X ⊸ ∆(G Y)). When G is some type of vectors (e.g., G Y = Yn for some fixed n), we get ∆(G Y) = G(∆Y) as mentioned before. And let's assume ∆X ⊸ G(∆Y) = G(∆X ⊸ ∆Y); presumably this follows immediately by linearity, though I haven't verified that yet. And finally, in our standard real analysis we get that G∘Fop and Fop∘G are the same— that is, vectors-of-covectors and covectors-of-vectors are essentially the same thing; they're just the row-major and column-major representations of matrices, respectively. And then, putting all these together we get that the original return type Fop(∆X ⊸ ∆(G Y)) is isomorphic to G(Fop(∆X ⊸ ∆Y)). So when X=ℝ and Y=ℝ we get the expected deriv : (ℝm ↝ ℝn) → ℝm → ℝn×m. Now, it remains to show how this extends to other choices of F and G...
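A quick numerical sketch of that final shape, flattening F X = ℝm and G Y = ℝn to lists: the result is read row-major, so row i, column j holds the j-th partial of the i-th output, i.e. the usual n×m Jacobian. As before, finite differences are only a stand-in, and the names and epsilon are mine:

```haskell
-- Numeric Jacobian sketch for f : ℝm ↝ ℝn, represented on lists.
jacobianAt :: ([Double] -> [Double]) -> [Double] -> [[Double]]
jacobianAt f x =
    [ [ (f (bump j eps) !! i - f (bump j (-eps)) !! i) / (2 * eps)
      | j <- [0 .. m - 1] ]   -- columns range over input coordinates
    | i <- [0 .. n - 1] ]     -- rows range over output coordinates
  where
    eps      = 1e-6
    m        = length x
    n        = length (f x)
    bump j d = [ if k == j then xk + d else xk | (k, xk) <- zip [0 ..] x ]

-- Example: f(x,y) = (xy, x+y) has Jacobian [[y, x], [1, 1]].
fExample :: [Double] -> [Double]
fExample [x, y] = [x * y, x + y]
fExample _      = error "fExample: expects exactly two inputs"
```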

Edit 2014.02.17 (#3): It was pointed out on reddit that the above is "well known"— at least for those who are familiar with differentiable manifolds and tangent spaces. Now, I don't know much about manifolds, but it certainly wasn't well-known to me— which, in itself, says a lot about how little this knowledge has spread to the rest of mathematics. I'm glad to hear there's some followup reading to be done here, but I still don't think the necessary work has been done in terms of turning this into a decent API for optimization libraries.

So there was a discussion recently on the libraries mailing list about how to deal with MonadPlus. In particular, the following purported law fails all over the place: x >> mzero = mzero. The reason it fails is that we are essentially assuming that any "effects" that x has can be undone once we realize the whole computation is supposed to "fail". Indeed this rule is too strong to make sense for our general notion that MonadPlus provides a notion of choice or addition. I propose that the correct notion that MonadPlus should capture is that of a right-seminearring. (The name right-nearsemiring is also used in the literature.) Below I explain what the heck a (right-)seminearring is.

Monoids

First, I will assume you know what a monoid is. In particular, it's any associative binary operation with a distinguished element which serves as both left- and right-identity for the binary operation. These are ubiquitous and have become fairly well-known in the Haskell community of late. A prime example is (+,0) —that is, addition together with the zero element; for just about any notion of "numbers". Another prime example is (*,1)— multiplication together with unit; again, for just about any notion of "numbers".

An important caveat regarding intuitions is that: both "addition" and "multiplication" of our usual notions of numbers turn out to be commutative monoids. For the non-commutative case, let's turn to regular expressions (regexes). First we have the notion of regex catenation, which captures the notion of sequencing: first we match one regex and then another; let's write this as (*,1) where here we take 1 to mean the regex which matches only the empty string. This catenation of strings is very different from multiplication of numbers because we can't swap things around. The regex a*b will first match a and then match b; whereas the regex b*a will match b first. Nevertheless, catenation (of strings/sequences/regexes/graphs/...) together with the empty element still forms a monoid because catenation is associative and catenating the empty element does nothing, no matter which side you catenate on.

Importantly, the non-deterministic choice for regexes also forms a monoid: (+,0) where we take 0 to be the absurd element. Notably, the empty element (e.g., the singleton set of strings, containing only the empty string) is distinct from the absurd element (e.g., the empty set of strings). We often spell 1 as ε and spell 0 as ∅; but I'm going to stick with the arithmetical notation of 1 and 0.
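To make the two monoids concrete, here's a Haskell sketch using finite languages (sets of strings, modeled as sorted, duplicate-free lists so we don't need the containers package). The names are mine; the algebra is exactly the catenation and choice monoids described above:

```haskell
import Data.List (nub, sort)

type Lang = [String]   -- a finite language, kept sorted and duplicate-free

norm :: Lang -> Lang
norm = nub . sort

-- Catenation: the non-commutative "multiplication", with 1 = {ε}.
cat :: Lang -> Lang -> Lang
cat xs ys = norm [ x ++ y | x <- xs, y <- ys ]

one :: Lang
one = [""]

-- Choice: union of languages, with 0 = ∅ as its identity.
choice :: Lang -> Lang -> Lang
choice xs ys = norm (xs ++ ys)

zero :: Lang
zero = []
```

Note that cat is genuinely non-commutative (cat ["a"] ["b"] gives {"ab"} while cat ["b"] ["a"] gives {"ba"}), yet both (cat, one) and (choice, zero) satisfy the monoid laws— and catenation distributes over choice from the right, foreshadowing the seminearring structure below.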

Seminearrings

Okay, so what the heck is a right-seminearring? First, we assume some ambient set of elements. They could be "numbers" or "strings" or "graphs" or whatever; but we'll just call them elements. Second, we assume we have a semigroup (*)— that is, our * operator is associative, and that's it. Semigroups are just monoids without the identity element. In our particular case, we're going to assume that * is non-commutative. Thus, it's going to work like catenation— except we don't necessarily have an empty element to work with. Third, we assume we have some monoid (+,0). Our + operator is going to act like non-deterministic choice in regexes— but, we're not going to assume that it's commutative! That is, while it represents "choice", it's some sort of biased choice. Maybe we always try the left option first; or maybe we always try the right option first; or maybe we flip a biased coin and try the left option first only 80% of the time; whatever, the point is it's not entirely non-deterministic, so we can't simply flip our additions around. Finally, we require that our (*) semigroup distributes from the right over our (+,0) monoid (or conversely, that we can factor the monoid out from under the semigroup, again only factoring out parts that are on the right). That is, symbolically, we require the following two laws to hold:

0*x = 0
(x+y)*z = (x*z)+(y*z)

So, what have we done here? Well, we have these two interlocking operations where "catenation" distributes over "choice". The first law means that if we first do something absurd or impossible and then do x, the whole thing is impossible— we'll never get around to doing x, so we might as well drop that part. The second law means that if we first have a choice between x and y and then catenate whichever one we chose with z, that's the same as choosing between doing x followed by z and doing y followed by z.

MonadPlus

Okay, so what does any of this have to do with MonadPlus? Intuitively, our * operator is performing catenation or sequencing of things. Monads are all about sequencing. So how about we use the monad operator (>>) as our "multiplication"! This does what we need it to since (>>) is associative, by the monad laws. In order to turn a monad into a MonadPlus we must define mplus (aka the + operator) and mzero (aka the 0 element). And the laws our MonadPlus instance must uphold are just the two laws about distributing/factoring on the right. In restating them below, I'm going to generalize the laws to use (>>=) in lieu of (>>):

mzero >>= f = mzero
(x `mplus` y) >>= f = (x >>= f) `mplus` (y >>= f)

And the reason why these laws make sense is just as described before. If we're going to "fail" or do something absurd followed by doing something else, well, we'll never get around to that something else because we've already "failed". And if we first make a choice and then end up doing the same thing regardless of the choice we made, we can just push that continuation down underneath the choice.
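For a MonadPlus instance that does satisfy both laws, we can spot-check them concretely in the list monad. The particular values below are arbitrary (a proper check would quantify over them, e.g. with QuickCheck), but they make the laws executable:

```haskell
import Control.Monad (mplus, mzero)

-- An example continuation that "looks inside" the monad.
f :: Int -> [Int]
f n = [n * 10, n * 10 + 1]

-- mzero >>= f = mzero, at this particular f.
leftZero :: Bool
leftZero = ((mzero :: [Int]) >>= f) == mzero

-- (x `mplus` y) >>= f = (x >>= f) `mplus` (y >>= f), at particular x, y, f.
rightDistrib :: Bool
rightDistrib = (([1, 2] `mplus` [3]) >>= f)
            == (([1, 2] >>= f) `mplus` ([3] >>= f))
```

For lists, both sides of the second law work out to [10,11,20,21,30,31]: mplus is (++) and (>>=) is concatMap, and concatMap distributes over (++).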

Both of these laws make intuitive sense for what we want out of MonadPlus. And given that seminearrings are something which have shown up often enough to be named, it seems reasonable to assume that's the actual pattern we're trying to capture. The one sticking point I could see is my generalization to using (>>=). In the second law, we allow f to be a function which "looks inside" the monad, rather than simply being some fixed monadic value z. There's a chance that some current MonadPlus implementations will break this law because of that insight. If so, then we can still back off to the weaker claim that MonadPlus should implement a right-seminearring exactly, i.e., with the (>>) operator as our notion of multiplication/catenation; I leave this as an exercise for the reader, and discuss it further in the addendum below.

Notably, from these laws it is impossible to derive x*0 = 0, aka x >> mzero = mzero. And indeed that is a stringent requirement to have, since it means we must be able to undo the "effects" of x, or else avoid doing those "effects" in the first place by looking into the future to know that we will eventually "fail". If we could look into the future to know we will fail, then we could implement backtracking search for logic programming in such a way that we always pick the right answer. Not just return results consistent with always choosing the right answer, which backtracking allows us to do; but rather, to always know the right answer beforehand and so never need to backtrack! If we satisfy the x*0 = 0 law, then we could perform all the "search" during compile time when we're applying the rewrite rule associated with this law.

Addendum

There's a long history of debate between proponents of the generalized distribution law I presented above, vs the so-called "catch" law. In particular, Maybe, IO, and STM obey the catch law but do not obey the generalized distribution law. To give an example, consider the following function:

f a' = if a == a' then mzero else return a'

Which is used in the following code and evaluation trace for the Maybe monad:

mplus (return a) b >>= f
⟶ Just a >>= f
⟶ f a
⟶ if a == a then mzero else return a
⟶ mzero

As opposed to the following code and evaluation trace:

mplus (return a >>= f) (b >>= f)
⟶ mplus (f a) (b >>= f)
⟶ mplus mzero (b >>= f)
⟶ b >>= f

But b >>= f is not guaranteed to be identical to mzero. The problem here arises, as I suspected, because the generalized distribution law allows the continuation to "look inside". If we revert to the non-generalized distribution law which uses (>>), then this problem goes away— at least for the Maybe monad.
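The two traces above are directly executable. Here they are for the Maybe monad at the particular values a = 0 and b = Just 1 (the values are mine; any b other than mzero exhibits the failure):

```haskell
import Control.Monad (mplus, mzero)

a :: Int
a = 0

b :: Maybe Int
b = Just 1

-- The continuation that "looks inside": it fails exactly on a.
f :: Int -> Maybe Int
f a' = if a == a' then mzero else return a'

lhs, rhs :: Maybe Int
lhs = mplus (return a) b >>= f            -- the first trace: Nothing
rhs = mplus (return a >>= f) (b >>= f)    -- the second trace: Just 1
```

So the two sides of the generalized distribution law really do disagree for Maybe: the left side commits to the first branch before f gets a chance to fail it, while the right side lets the second branch rescue the computation.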

Second Addendum (2014.02.06)

Even though Maybe satisfies the non-generalized distributivity laws, it's notable that other problematic MonadPlus instances like IO fail even there! For example:

First consider mplus a b >> (c >> mzero). Whenever a succeeds, we get that this is the same as a >> c >> mzero; and if a fails, then this is the same as a' >> b >> c >> mzero where a' is the prefix of a up until failure occurs.

Now instead consider mplus (a >> c >> mzero) (b >> c >> mzero). Here, if a succeeds, then this is the same as a >> c >> b >> c >> mzero; and if a fails, then it's the same as a' >> b >> c >> mzero. So the problem is, depending on whether we distribute or not, the effects of c will occur once or twice.

Notably, the problem we're running into here is exactly the same one we started out with, the failure of x >> mzero = mzero. Were this law to hold for IO (etc) then we wouldn't run into the problem of running c once or twice depending on distributivity.
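We can reproduce this effect-duplication without actually using IO, via a tiny hand-rolled logging monad with failure: the "effects" are log entries, the mplus analogue runs its second argument only when the first fails, and a failed first attempt keeps whatever log it produced before failing (the a' prefix above). All the names here are my own invention, and this is only a model of IO's behavior, not IO itself:

```haskell
newtype Trace x = Trace { runTrace :: ([String], Maybe x) }

instance Functor Trace where
    fmap f (Trace (w, m)) = Trace (w, fmap f m)

instance Applicative Trace where
    pure x = Trace ([], Just x)
    Trace (w, Nothing) <*> _             = Trace (w, Nothing)
    Trace (w, Just f)  <*> Trace (w', m) = Trace (w ++ w', fmap f m)

instance Monad Trace where
    Trace (w, Nothing) >>= _ = Trace (w, Nothing)
    Trace (w, Just x)  >>= k = let Trace (w', m) = k x
                               in  Trace (w ++ w', m)

tell :: String -> Trace ()
tell s = Trace ([s], Just ())

failT :: Trace ()                 -- the mzero analogue
failT = Trace ([], Nothing)

orElse :: Trace x -> Trace x -> Trace x   -- the mplus analogue
orElse (Trace (w, Nothing)) (Trace (w', m)) = Trace (w ++ w', m)
orElse t                    _               = t

a, b, c :: Trace ()
a = tell "a"   -- a succeeds in this scenario
b = tell "b"
c = tell "c"

-- mplus a b >> (c >> mzero): c's effect happens once.
lhsLog :: [String]
lhsLog = fst (runTrace (orElse a b >> (c >> failT)))

-- mplus (a >> c >> mzero) (b >> c >> mzero): c's effect happens twice.
rhsLog :: [String]
rhsLog = fst (runTrace (orElse (a >> c >> failT) (b >> c >> failT)))
```

Since a succeeds here, the undistributed side logs ["a","c"] while the distributed side logs ["a","c","b","c"]— exactly the once-vs-twice discrepancy described above.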

Last friday I passed my qualifying examinations! So now, all I have left is a bunch of paperwork about that and then proposing, writing, and defending the dissertation itself. So, in about a year or so I'll be on the job market. And, much as I despise job hunting, I can't wait!

Since defending the quals I've been spending far too much time playing Persona 3 Portable. I've played P3FES, but P3P adds a female protagonist option which changes a bunch of the social interactions, so I've been playing through that side of things. Other than the heterosexual assumptions about the relationships, I've been loving it. More rpgs should have female protagonists. That's one of the reasons I've always loved FF6. Also a big part of why I found FF13 compelling. (Though, tbh: while Lightning is awesome as a protagonist, Vanille is definitely my favorite character :) And a big part of the powerfulness of Kreia as a character in KotOR2 stems from her interactions with the canonically-female protagonist.

Speaking of women. I've been presenting as female for a couple months now, and since I have no intention of stopping nor hiding that fact, I've decided to move T-Day forward. Basically, for those who haven't already switched over to the right pronouns etc: T-Day is today. I've sent emails to the department heads in order to get them to send out the "official" memo; so if you haven't gotten it yet, that should show up on monday or tuesday.

The next couple months are going to be hectic with paper writing. I'm hoping to get a paper on syntax-based sentiment-analysis using matrix-space semantics into one of the CL conferences with deadlines this March. No Haskell involved in that one, though I'll probably spend a few posts discussing the semantic model, which may be of interest to y'all. I'm also planning on getting the work from my first qual paper published; that paper was about Posta, a functional library for interactive/online/incremental tagging with HMMs. Here I'm planning to target journals rather than conferences, and it'll spread out over a few papers: one on the overall system (which I need to actually push up to Hackage), one on the higher-order anytime n-best extraction algorithm, and one on reformulating HMM algorithms in terms of foldl and scanl (this may be combined with the HO-AnB paper, length permitting). All of these would be targeting the linguistics audience. Using folds and scans is old-hat in functional programming; my particular goal with that paper is exposing linguists to the tools of FP and how they can be used to greatly simplify how we describe our algorithms. Once those are out of the way I might also see about writing up a functional pearl on the smoothing library I presented at AMMCS a few years back.

Now that the hectic chaos of last semester is past, I've been thinking of getting back into blogging again. I started tweeting a few months back, and since then I've gotten back into activism and gathered a new batch of friends who talk about all manner of interesting things. Both of these —friendships and activism— have long been motivating forces for my thinking and writing. And so, excited by tweeting, I've been wanting to write again in a less ephemeral medium.

But I face something of a conundrum.

When I started blogging it was always just a place for me to ramble out thoughts on whatever passes through my head. It was never my goal to keep the blog focused on any particular topic. After leaving Portland, and lacking the wide network of friends I was used to there, I dove headlong into the Haskell community. In addition, a few years back, I started working on a major Haskell project (from which most of my published Haskell code derives). So, for a time, the vast majority of my blogging was dominated by Haskell, which is why I signed up to be syndicated on Haskell Planet.

To be clear, I have no intention of leaving the Haskell community for the foreseeable future. I still use Haskell regularly, still teach it to others, etc. However, of late, my thoughts have been elsewhere. Computationally I've been focusing more on type theory and category theory themselves, rather than their uses and applications in Haskell per se. Linguistically I've been looking more into semantic issues, as well as some of the newer models for incorporating syntax into NLP. Sociologically I've been, as I said, thinking a lot more about social justice issues. Not to mention more casual things like reading, gaming, cooking, and all that.

Back when I joined the Planet it was pretty active and had lots of material which was only loosely related to Haskell; e.g., all the musicians and live coders who used Haskell for their work. I loved this wide-ranging view of Haskell, and this diversity is a big part of why I fell in love with the community. In such an environment, I think my blog fits well enough. However, over the years I've noticed the Planet becoming far more narrow and focused on code development alone. I think Phil Wadler is probably the only one who goes on about other stuff these days. Given this trend, it's not so clear that my ramblings would mesh well with the Planet as it stands.

So that's where I'm at. Not sure whether to quit syndicating to Haskell Planet, or to make a special filtered feed for the Haskell-only stuff, or what. If you have any opinions on the matter, please do comment. Otherwise I'll prolly just start writing and wait for people to complain.

Heads up y'all. My email at Free Geek will be going away in about a month. The following addresses are all still valid and point to the right place:

  • cpan.org — for Perl folks
  • community.haskell.org — for Haskell folks
  • alumni.reed.edu — for Reedies
  • indiana.edu — for business
  • gmail.com — for friends
If you don't have one of those, email me (at freegeek.org) and I'll hook you up.

There are three parts to this post. Everyone should read the first two sections, especially the second section. Haskellers should also be sure to read the third section.

The Announcement

If you don't yet know: I'm transgender. My sense of gender and self have never aligned with my physical appearance, and I've spent most of my life dealing with this fact. This is not an acquired condition nor a recent change; it is an intrinsic and life-long part of who I am. I began the process of transitioning half a year ago and, over the next six months or so, I will complete the transition to living as a woman full-time.

Many of my followers are already familiar with transgender issues, but since this is a public announcement I assume many of you are not. There are numerous resources online for learning more, but I find the PFLAG pamphlet to be a particularly good place to start. If you still have any questions after reading that, I can provide additional resources and am willing to answer questions.

How to respond

This is going to depend on how you know me.

If we interact predominantly online
This includes everyone in the Haskell community (both online and academically), as well as everyone from Reddit, Twitter, etc. Henceforth, please use feminine pronouns (she/her/hers) exclusively when referring to me. I understand this will take some getting used to, but it will soon become second nature.
If we interact predominantly in person
I'd prefer you use feminine pronouns (she/her/hers) when referring to me, especially when online and when mentioning me anonymously. But, for the time being, masculine pronouns (he/him/his) are still acceptable. Sometime in the spring I will send another announcement around letting you know when "T-day" is. After that date, I will be presenting as female full-time and will no longer tolerate masculine pronouns.

PSA for Haskellers

I shouldn't have to say this, but since there were some complaints about the "homosexual propaganda" in my recent posts, may I remind my readers of The Planet Haskell policies regarding political and religious content. I rarely post political content, but am well within the guidelines in doing so. The stated mission of Planet Haskell is to "show what is happening in the community, what people are thinking about or doing". I am an active and well-known member of the Haskell community, and the violence endured by trans people is something I've been thinking a lot about lately. When Chung-chieh Shan gave the 2013 Haskell Symposium program chair report, he made a specific point of highlighting the effects of sexism, racism, homophobia, and transphobia in driving people out of the Haskell community. Therefore, I think it is fair to say that these issues are pertinent, above and beyond my personal involvement with them.

That said, I do not intend to discuss trans issues at length on this blog. Nevertheless, on occasion, these issues will come up because I refuse to live in silence and shame for who I am.

Things done today
  • Gave my advisors The Letter. The public announcement is scheduled for monday
  • Handed in the revised version of qual #2. Clocked in at 59 pages. I'm not entirely satisfied with it, but sometimes y'just gotta let it go.
What remains before I'm ABD
  • P550 final paper. Target length 10 pages.
  • Qual #3, due by x-mas. This one is just a lit-review, and I've already read the lit, so it shouldn't take too long (I hope!)
  • Qual oral defense, the first week or two of Spring. Cakewalk expected
  • Dissertation proposal. Aiming to get it done by January

TDOR

Nov. 20th, 2013 02:06 am

On this day we remember our dead.

When right-wing bigots lie and fabricate stories about trans* people, you look at our dead and tell me with a straight face who should fear whom. While you worry about your kids feeling nervous about nothing happening, I'm too worried for the children who will one day soon be shot, strangled, suffocated, stabbed, tortured, beheaded, lit on fire, and thrown off bridges simply for existing.

And you on the left: I love all you queers, and I'm glad for your victories; but the next time you celebrate an "LGBT" victory you take a long hard look at your history of throwing that "T" under the bus and you look at our dead and tell me with a straight face how it's not yet time to fight for trans* rights.

Heads up, the host I've been using for my website llama.freegeek.org will be going away on Sunday, 17 November 2013. Since I'm strapped for time, I won't be able to migrate it to another host for a while. Once I get a chance to set up a new one, I'll let y'all know.

Finally, hard evidence of what we've known all along: Everything the TSA has been doing is a waste of time and money, and violates our privacy for no security gain whatsoever.

quoted from the TSA's own statements: "As of mid-2011, terrorist threat groups present in the Homeland are not known to be actively plotting against civil aviation targets or airports; instead, their focus is on fundraising, recruiting, and propagandizing."
Elsewhere, in the redacted portions, the TSA is quoted as admitting that "there have been no attempted domestic hijackings of any kind in the 12 years since 9/11."
Amazingly, it appears that the government forced Corbett to redact the revelation that the TSA's own threat assessments have shown "literally zero evidence that anyone is plotting to blow up an airline leaving from a domestic airport."

Anyone who thinks sexism isn't such a big thing anymore needs to read the following articles. Anyone who has been raised as male and thinks women's lives are essentially the same needs to read the following articles. Anyone who wants to believe they aren't sexist, or who wants to think of themselves as an "ally" to women, needs to read the following articles. Anyone who lives or works in academia needs to read the following articles.

The terrible bargain we have regretfully struck
quoth @juliepagano: "If you are a man and have been confused about some of my anger and frustration recently, read the post."
Teaching Naked, Part 1
quoth @jenebbeler: "Incredibly thoughtful post about how a young female prof handled an inappropriate student comment"
Teaching Naked, Part 2
Followup to the first post, on how the administration responded to how she handled the sexual harassment.

It's been a few months since my last dieting update, and things've been going great.

Weight
In the first month I lost 13 pounds, but over the second month I didn't lose any. Apparently this sort of plateauing happens; so I've tweaked things a bit, hoping to get back on track. Last week I finally got a scale, so I can keep track of things more often than once a month. And if it's to be believed, I've lost an additional 5 pounds over the last couple weeks.
Cholesterol
Had another round of bloodwork done a fortnight ago. My cholesterol numbers are down by 90 points. I didn't get a copy of the results, so I can't remember which of the numbers that is exactly; but either way, it's a huge step from extremely high to borderline high.
Blood Sugar
My A1c is down from 6.2 (borderline pre-diabetic) to 5.3 (normal), which is awesome. Don't recall what the fasting and random numbers were, but those are less reliable anyways.

And here's the graphic for my macronutrients over the last four weeks and the last three months. Looks like I've been averaging around 5::6, protein to carbs; which ain't bad, but it'd be nice if I can figure out a way to get it closer to 6::5 or 3::2.

It's been a while since I've posted under this tag, but I made a map of common normal modal logics based off the diagram in SEP. I don't have much to say about it, but I'm taking a course on the topic now so hopefully I'll have a bit more to say in the future.

While I was at it, I revised the map for binary relations (first published back here). Not too much has changed overall. I added the definition of "dense" (op-transitive) and "shift-reflexive" relations, since these show up in modal logics and so could be interesting elsewhere. I also tightened up the list of entailments to more explicitly capture the difference between the weak and strong definitions of (anti)symmetry. In the new versions, the entailments all hold in minimal logic assuming the substitution principle for equality (if x=y and P(x) then P(y), for any elements x and y and any property P)— except for those marked with a subscript I, which require intuitionistic logic. In addition, any of the entailments with strong (anti)symmetry as a premise can be strengthened to only require weak (anti)symmetry if we have decidable equality or are working in classical logic.

Edit 2013.10.12: Updated these two again. For the relations, just added a note that shift-reflexivity entails density. For the modal logics, added a bunch of new goodies and cleaned things up a bit.
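The note that shift-reflexivity entails density can even be checked exhaustively on a small carrier. Taking shift-reflexive to mean Rxy → Ryy and dense to mean Rxy → ∃z. Rxz ∧ Rzy, the witness is z = y; the sketch below verifies this over all 512 relations on a 3-element set (the names are mine):

```haskell
import Data.List (subsequences)

type Rel = [(Int, Int)]

carrier :: [Int]
carrier = [0, 1, 2]

-- All relations on the carrier: all subsets of carrier × carrier.
allRels :: [Rel]
allRels = subsequences [ (x, y) | x <- carrier, y <- carrier ]

shiftReflexive :: Rel -> Bool
shiftReflexive r = and [ (y, y) `elem` r | (_, y) <- r ]

dense :: Rel -> Bool
dense r = and [ or [ (x, z) `elem` r && (z, y) `elem` r | z <- carrier ]
              | (x, y) <- r ]

entailmentHolds :: Bool
entailmentHolds = all (\r -> not (shiftReflexive r) || dense r) allRels
```

Of course, an exhaustive check on one finite carrier is no substitute for the one-line proof (z = y works once Ryy is available), but it's a cheap way to catch a misstated definition. The converse fails, as the same enumeration shows: there are dense relations that aren't shift-reflexive.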

wtf.

Oct. 1st, 2013 03:21 pm

So, I just found out that an old friend of mine:

I'm totally torn up about this; both about whatever happened, and about the ongoing reaction. Meanwhile, I can't help but recall the all too timely post of a different friend of mine about what it means to say "innocent until proven guilty".


The final API for (<κ)-commutative operators

Last time I talked about generalizing the notion of quasi-unordered pairs to the notion of quasi-unordered multisets. The final API from last time was:
type Confusion :: * → *

isSingleton :: Confusion a → Maybe a

size :: Confusion a → Cardinal

observe :: Ord r ⇒ (a → r) → Confusion a → [(r, Either (Confusion a) a)]

Now, every function of type Confusion a → b is guaranteed to be a (<κ)-commutative operator, where κ is implicitly given by the definition of Confusion. However, we have no way to construct the arguments to those functions! We need to add a function confuse :: ∀λ. 0<λ<κ ⇒ Vector a λ → Confusion a so that we can construct arguments for our (<κ)-commutative operators. Of course, rather than using bounded quantification and the Vector type, it'd be a lot easier to just define a type which incorporates this quantification directly:

data BoundedList (a::*) :: Nat → * where
    BLNil  :: BoundedList a n
    BLCons :: a → BoundedList a n → BoundedList a (1+n)

data NonEmptyBoundedList (a::*) :: Nat → * where
    NEBLSingleton :: a → NonEmptyBoundedList a 1
    NEBLCons      :: a → NonEmptyBoundedList a n → NonEmptyBoundedList a (1+n)
Now we have:
confuse :: NonEmptyBoundedList a κ → Confusion a

type Commutative a b = Confusion a → b

runCommutative :: Commutative a b → NonEmptyBoundedList a κ → b
runCommutative f xs = f (confuse xs)
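As an aside, the BoundedList type above already compiles in today's GHC once we promote a unary Nat kind with DataKinds (using Succ in place of 1+; toList is mine, added so the structure can be inspected):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Promoted unary naturals, serving as the bound index.
data Nat = Zero | Succ Nat

-- Lists of length at most n; BLNil may sit under any (slack) bound.
data BoundedList (a :: *) (n :: Nat) where
    BLNil  :: BoundedList a n
    BLCons :: a -> BoundedList a n -> BoundedList a (Succ n)

-- Forget the bound and recover a plain list.
toList :: BoundedList a n -> [a]
toList BLNil         = []
toList (BLCons x xs) = x : toList xs
```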

Ideally, we'd like to take this a step further and have a version of runCommutative which returns a variadic function of type a → ... a → b for the appropriate number of arguments. This way we'd be able to call them like regular curried functions rather than needing to call them as uncurried functions. There are a number of ways to do variadic functions in Haskell, but discussing them would take us too far afield. Naturally, implementing them will amount to taking advantage of the 4-tuple for folding over multisets, which was defined last time.

Handling κ-commutative operators

Continuing the theme, suppose we really want to handle the case of κ-commutative operators rather than (<κ)-commutative operators. For simplicity, let's restrict ourselves to finite κ, and let's pretend that Haskell has full dependent types. If so, then we can use the following API:

type Confusion :: * → Nat → *

extractSingleton :: Confusion a 1 → a

size :: Confusion a n → Nat
size _ = n

data ConfusionList (r :: *) (a :: *) :: Nat → * where
    CLNil  :: ConfusionList r a 0
    CLCons :: r → Confusion a n → ConfusionList r a m → ConfusionList r a (n+m)

observe :: Ord r ⇒ (a → r) → Confusion a n → ConfusionList r a n

confuse :: Vector a (1+n) → Confusion a (1+n)

type Commutative a n b = Confusion a n → b

runCommutative :: Commutative a n b → Vector a n → b
runCommutative f xs = f (confuse xs)


Recently gelisam posted an article on a combinatorial approach to provably commutative binary operators, which has a decent discussion on reddit. Here, I'm going to generalize that idea.

Edit (2013-08-03T05:06:58Z): Okay, I should be done tweaking things now.

Edit (2013-08-03T06:02:40Z): See also the followup post.

Preliminary Definitions

Let A and B be arbitrary types, and let κ be any non-zero cardinal number (we only really care about finite cardinals, or maybe at most aleph-naught). Let the notation A^{κ} denote a multiset of elements drawn from A with cardinality exactly κ. And let the notation A^{<κ} denote a non-empty multiset of elements drawn from A with cardinality strictly less than κ.

Commutative operators

A κ-commutative operator from A to B can be thought of as a function of type A^{κ} → B. For example:

  • The 1-commutative functions are all the functions of type A → B.
  • The 2-commutative functions are commutative binary functions, thus a subtype of A → A → B.
  • The 3-commutative functions are ternary functions which return the same result for all permutation orders of their arguments, thus a subtype of A → A → A → B.
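For instance, the median of three arguments is a 3-commutative function (this example is mine, not from the post):

```haskell
import Data.List (nub, permutations, sort)

-- The median of three is invariant under every permutation of its
-- arguments, so it is 3-commutative: a subtype of a -> a -> a -> a.
median3 :: Ord a => a -> a -> a -> a
median3 x y z = sort [x, y, z] !! 1
```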

A (<κ)-commutative operator from A to B can be thought of as a function of type A^{<κ} → B, i.e., a polymorphic function of type ∀λ. 0<λ<κ ⇒ A^{λ} → B. Thus, the (<κ)-commutative functions can be thought of as a family of λ-commutative functions, for all non-zero λ less than κ, satisfying suitable coherence conditions. However, for finite κ, a simpler representation is to think of them in terms of folds over non-empty multisets; that is, as 4-tuples:

(C, h:A→C, f:A→C→C, g:C→B)
    such that
    ∀a,a'∈A, c∈C. f a (f a' c) == f a' (f a c)
    ∀a,a'∈A.      f a (h a')   == f a' (h a)

Alternatively, if you are willing to include the case where λ=0, then use 4-tuples:

(C, c0:C, f:A→C→C, g:C→B)
    such that
    ∀a,a'∈A, c∈C. f a (f a' c) == f a' (f a c)
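To make the 4-tuple (C, h, f, g) representation concrete, here's a runnable sketch (the CommFold type and all names are mine), using the mean as an example of a fold whose step commutes with itself:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A (<κ)-commutative operator packaged as the 4-tuple (C, h, f, g).
data CommFold a b = forall c. CommFold
    (a -> c)       -- h : A → C, inject the first element
    (a -> c -> c)  -- f : A → C → C, must satisfy f a . f a' == f a' . f a
    (c -> b)       -- g : C → B, finish

-- Run the fold over a non-empty list standing in for the multiset.
runCommFold :: CommFold a b -> [a] -> b
runCommFold (CommFold h f g) (x:xs) = g (foldr f (h x) xs)
runCommFold _ [] = error "runCommFold: empty multiset"

-- Example: the mean, with C = (running sum, count). Both laws hold,
-- so the result is independent of the order of the inputs.
mean :: CommFold Double Double
mean = CommFold (\a -> (a, 1 :: Int))
                (\a (s, n) -> (s + a, n + 1))
                (\(s, n) -> s / fromIntegral n)
```

The existential C is what makes this a fold representation: callers can only run the whole 4-tuple, never inspect the accumulator.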

Quasi-unordering pairs

Gelisam's solution to the higher-order problem is to generalize unordered pairs to quasi-unordered pairs, which are either an unordered pair or else some summary information about a pair which can't be unordered. The way we do this is to have an observation function we apply to each argument. If that observation function can distinguish the arguments, then we can pass the arguments to the function in a canonical order without betraying the original order of the inputs. However, if we cannot distinguish the two arguments, then the best we can do is to return the observation we made on both of them— since, if we returned the actual arguments themselves, we'd betray their input order. Rather than hard-coding the choice of Bool as gelisam did, we can generalize to any totally ordered set of observations:

distinguish :: Ord r ⇒ (a → r) → Commutative a (Either r (a,a))
distinguish f = Commutative $ \ x y ↦
    let fx = f x
        fy = f y
    in
    case compare fx fy of
    LT ↦ Right (x,y)
    EQ ↦ Left fx
    GT ↦ Right (y,x)
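Here's a self-contained version of the same code, with Commutative reconstructed as a bare function wrapper (an assumption on my part — gelisam's actual module hides the constructor so that only provably commutative operators can be built):

```haskell
-- Sketch: a binary operator wrapped to mark it as commutative.
newtype Commutative a b = Commutative (a -> a -> b)

runC :: Commutative a b -> a -> a -> b
runC (Commutative f) = f

-- distinguish from above, with ASCII arrows: if the observations
-- differ, emit the pair in canonical order; if not, emit only the
-- shared observation.
distinguish :: Ord r => (a -> r) -> Commutative a (Either r (a, a))
distinguish f = Commutative $ \x y ->
    case compare (f x) (f y) of
        LT -> Right (x, y)
        EQ -> Left (f x)
        GT -> Right (y, x)
```

Swapping the two arguments provably cannot change the result, which is the whole point.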

Quasi-unordering multisets

Now, we want to generalize this to multisets with any non-zero cardinality. Note, because the observations r are totally ordered, the observation function induces an almost-linear partial order on the original domain a. Assume we have some suitable abstract implementation of multisets: Confusion a, which implements a^{<κ} for some particular κ. Because the observation function induces an order on our original domain, there exists a function:

observe_0 :: Ord r ⇒ (a → r) → Confusion a → [Confusion a]
    such that
    ∀ r f xs0. isOrdered (observe_0 f xs0)
        where
        isOrdered (xs:xss) =
            (∀x,x'∈xs. f x == f x')
            ⋀ case xss of
              []   ↦ True
              ys:_ ↦
                  (∀ x∈xs, y∈ys. f x < f y)
                  ⋀ isOrdered xss
That is, every element of the result is a multiset of elements which are equivalent under the observation function, and the elements of earlier multisets precede the elements of later multisets. In other words, we take the input multiset, which is inherently unordered, and return a nearly-linear DAG.

As written, observe_0 doesn't get us that far, since the implementation of multisets is abstract, and all we've done is break one multiset up into a bunch of multisets. Now, assume our multisets support the function isSingleton :: Confusion a → Maybe a, which tells us whether a multiset is a singleton or not, and gives us the element if it is. Using this, we can strengthen our observing function to:

observe_1 :: Ord r ⇒ (a → r) → Confusion a → [Either (Confusion a) a]
observe_1 f
    = map (\xs ↦ maybe (Left xs) Right (isSingleton xs))
    . observe_0 f

That gets us pretty much what we want. However, in order to gain the full functionality we had in the binary case, we'd like to have some way to eliminate the remaining multisets. Probably the most general way is to define the following function which also returns the shared observation for classes of inputs which are still confused:

observe_2 :: Ord r ⇒ (a → r) → Confusion a → [Either (r, Confusion a) a]
-- or, even better
observe_2' :: Ord r ⇒ (a → r) → Confusion a → [(r, Either (Confusion a) a)]
We could implement this in terms of observe_0; however, that would mean calling the observation function redundantly, since observe_0 throws away the results. Thus, it's more efficient to implement this version directly, in lieu of implementing observe_0 at all. The overall complexity is the same, and this version is strictly more powerful.

If we're returning the result of observing those confusion sets, then why bother returning the confusion set too? The reason is that this way we can apply our observing function again in order to further distinguish those confusion sets. Therefore, this version is complete for composition of observation functions whereas the version that discards the confusion set is not.

And, as a final note, we'd also want to ensure that our multisets support the operation size :: Confusion a → Cardinal; otherwise, we need to return a triple of the multiset, its size, and its observation. The binary case could avoid this detail since there all confusion sets must be of cardinality 2.
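To tie the pieces together, here's a toy model where the abstract Confusion a is just a list whose order we promise not to export (the representation is mine; a real implementation would keep the type abstract behind a module boundary):

```haskell
import Data.Function (on)
import Data.List (groupBy, sortOn)

-- Toy representation of an abstract multiset.
type Confusion a = [a]

isSingleton :: Confusion a -> Maybe a
isSingleton [x] = Just x
isSingleton _   = Nothing

-- observe_2': split the multiset into observation-equivalence classes,
-- ordered by observation, exposing singletons and tagging every class
-- with its shared observation.
observe2' :: Ord r => (a -> r) -> Confusion a -> [(r, Either (Confusion a) a)]
observe2' f xs =
    [ (f (head g), maybe (Left g) Right (isSingleton g))
    | g <- groupBy ((==) `on` f) (sortOn f xs) ]
```

Classes that remain confused carry their observation along, so a second, finer observation function can be applied to them later — the composition property discussed above.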


It's been three weeks since I got the bad news about cholesterol and blood-sugar levels. Three weeks since I've started this crazy diet. So, I figure it's time for an update on how things are going.

First off: I feel amazing! After just one day I felt more energetic than I have in a long time: I had a lot more pep like I'd upgraded to a more-powerful or smoother-running engine, but it felt like the gas tank was on empty. Makes sense, of course. The former feeling has continued, whereas the latter has gone away as I've gotten used to not relying on the quick boost that sugars give. Also, that feeling of getting winded after climbing a steep hill or those slight stomach cramps after a long hike? I haven't had the slightest glimmer of either since starting. Even after I'm done with the dieting per se, this is definitely going to change the way I eat from now on. The difference is just obscene.

One thing I learned, which apparently everyone else already knows, is that it's the protein what makes you feel full. For the first week, I was so full/unhungry that I had to be careful to keep my calories up. For someone of my stature, it's dangerous when you don't feel like eating more than 1000-or-so calories a day. Protein shakes helped a lot here. By the second week it was easier to get enough calories "naturally", and still easier during the third week. However, I'm still averaging 444 below my stated goal of 2247 (which would amount to losing around 2 pounds/week); which is better than the 514 below of the second week, but not so good as the 266 below of the first week. Even though 2247 is the stated goal according to my phone app, this last week I've been aiming more for 2000. Still, now that I've run the numbers, it looks like I should add the shakes back in. Lo-cal is good and all, but I don't want my body to trigger starvation mode. That'd suck.

So, I've been doing good on calories (as in the graphic above). However, getting the 1::1 balance between protein and carbs has been a lot harder (as in the graphic below). The upswing in carbs and downswing in fat on the right should be taken with a grain of salt. The graphics include today, but I've only entered my breakfast so far. Still, I have been allowing myself some more carbs the last two days, so I should be sure to keep that in check. I've got a solid breakfast recipe which isn't quite 1::1 but it's close, and a few nights back Licia made an amazing lasagne which was exactly 1::1, so I'll try to post those in a couple-few days.


data-fin 0.1.0

The data-fin package offers the family of totally ordered finite sets, implemented as newtypes of Integer, etc. Thus, you get all the joys of:

data Nat = Zero | Succ !Nat

data Fin :: Nat -> * where
    FZero :: (n::Nat) -> Fin (Succ n)
    FSucc :: (n::Nat) -> Fin n -> Fin (Succ n)

But with the efficiency of native types instead of unary encodings.
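The real package tracks the bound in the type via the reflection tricks of [1] and [2]; purely for illustration, here's a value-level sketch of the invariant — not the actual data-fin API:

```haskell
-- Illustration only: the bound lives at the value level here, whereas
-- data-fin reflects it into the type. The carrier is a native Integer
-- rather than a unary encoding.
data Fin = Fin { bound :: Integer, value :: Integer }
  deriving (Eq, Show)

-- Bounds-checked smart constructor: the members of {0, ..., n-1}.
mkFin :: Integer -> Integer -> Maybe Fin
mkFin n i
  | 0 <= i && i < n = Just (Fin n i)
  | otherwise       = Nothing
```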

Notes

I wrote this package for a linear algebra system I've been working on, but it should also be useful for folks working on Agda, Idris, etc, who want something more efficient to compile down to in Haskell. The package is still highly experimental, and I welcome any and all feedback.

Note that we implement type-level numbers using [1] and [2], which works fairly well, but not as nicely as true dependent types since we can't express certain typeclass entailments. Once the constraint solver for type-level natural numbers becomes available, we'll switch over to using that.

[1] Oleg Kiselyov and Chung-chieh Shan. (2007) Lightweight static resources: Sexy types for embedded and systems programming. Proc. Trends in Functional Programming. New York, 2–4 April 2007.

[2] Oleg Kiselyov and Chung-chieh Shan. (2004) Implicit configurations: or, type classes reflect the values of types. Proc. ACM SIGPLAN 2004 workshop on Haskell. Snowbird, Utah, USA, 22 September 2004. pp.33–44.

wren gayle romano
