
I offer you a thought experiment. For your current project you've set up some inter-process communication. Nothing tricky involved, just your standard client–server deal. You've even outsourced the protocol design and are using Google's code to handle all the grody details of serializing and deserializing. Well, okay, on the server side you're using someone else's code, but it implements the same protocol, right? Now you run into a bug.

The vast majority of the time everything works smoothly, even verified by taking SHA-1 hashes of the messages on both sides and comparing them. But every so often the Java client crashes. In particular, it crashes whenever reading a result message (from the server) of length 255 or 383, and maybe some larger sizes. It does, however, work perfectly fine for intervening message lengths (including 254 and 256). So what's wrong?

Knowing the answer, you'd predict difficulties with length 511 as well, though you haven't observed it to fail (or succeed) in practice.

We're using the "delimited" version of protocol buffers, the one that writes the message/payload length (as a varint) just before the payload.
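For anyone playing along at home, it may help to see what that length prefix actually looks like on the wire. Here's a minimal sketch (not the protobuf library itself, just the same base-128 varint scheme its delimited format uses) encoding the interesting lengths from the puzzle:

```java
import java.io.ByteArrayOutputStream;

public class VarintDemo {
    // Encode a non-negative int as a little-endian base-128 varint:
    // 7 payload bits per byte, high bit set on every byte except the last.
    static byte[] encodeVarint(int value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((value & ~0x7F) != 0) {
            out.write((value & 0x7F) | 0x80);
            value >>>= 7;
        }
        out.write(value);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        for (int len : new int[] {254, 255, 256, 383, 511}) {
            StringBuilder sb = new StringBuilder();
            for (byte b : encodeVarint(len)) {
                sb.append(String.format("%02X ", b));
            }
            System.out.println(len + " -> " + sb.toString().trim());
        }
    }
}
```

Running it prints `254 -> FE 01`, `255 -> FF 01`, `256 -> 80 02`, `383 -> FF 02`, `511 -> FF 03`. Note which byte the failing lengths have in common, and which the succeeding ones don't.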

It's a one-line bugfix.

Solution posted here.

Date: 2011-03-28 05:02 am (UTC) From: [personal profile] lindseykuper
Well, I don't know the answer, but thanks for giving me an excuse to read about how protocol buffers are encoded.
Heh, that's a cute one. Hazard a guess as to wrong parsing with repeated 1s in the binary representation? Signed representation?
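For readers weighing that "signed representation" guess: whether or not it's the posted solution, here's a sketch of the sort of trap it points at. Java bytes are signed, so the byte 0xFF is numerically indistinguishable from -1, the usual `InputStream` end-of-stream sentinel:

```java
public class SignedByteDemo {
    public static void main(String[] args) {
        byte b = (byte) 0xFF;          // a byte that might appear in a stream
        System.out.println(b == -1);   // true: looks exactly like an EOF check firing
        System.out.println(b & 0xFF);  // 255: masking recovers the unsigned value
    }
}
```

This is why `InputStream.read()` returns an `int` in 0–255 (or -1 for EOF) rather than a `byte`: casting the result down to `byte` before the EOF comparison silently conflates the two.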
