On Variable-Length Integer Encoding
Suppose you want to represent data in a serialized form with the length prepended to the data. You can do something like what Pascal does with strings, and prefix the data with an 8-bit length. But that only gives you 0 through 255 bytes of data. So you need something larger, such as a 64-bit value. But then a single-byte data value takes up 9 bytes including the length indicator. We'd really want small data values to use a small amount of overhead to encode the length, and we'd want the large data values to be representable, too. And thus, we want a variable-length encoding of the length. The length is an integer, so we want a variable-length encoding of an integer. We'll start with representing a non-negative value, but a variable-length encoding of a signed integer may be worth a look too.
You can find some interesting articles on Wikipedia about universal codes, which are variable-length encodings of integers, but they focus on representations of integers in bit streams. Given our use case, we're really dealing with byte streams.
So let's start with a simple idea: Count the number of leading 1 bits and call that N. The total size of the numeric representation is 2^{N} bytes. Take those bytes, mask off the N leading 1 bits, and interpret the number as a binary integer.
Let's try that:
0b00000000                    =     0
0b00000001                    =     1
    :
0b01111111                    =   127
0b10000000 00000000           =     0
0b10000000 00000001           =     1
    :
0b10111111 11111111           = 16383
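Reading a value in this unbiased scheme takes only a few lines of Python; this is my own sketch of the decoding side (the name decode_naive is mine, not from the attached script):

```python
def decode_naive(data: bytes) -> int:
    """Decode the unbiased scheme: N leading 1 bits -> 2^N total bytes."""
    n = 0
    while data[0] & (0x80 >> n):  # count the leading 1 bits of the first byte
        n += 1
    size = 2 ** n
    value = int.from_bytes(data[:size], "big")
    return value & ((1 << (8 * size - n)) - 1)  # mask off the N leading 1 bits
```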
That gives us a way to represent any non-negative integer value. But there is one undesirable characteristic of this approach: there are multiple correct ways to represent any given number. For instance, the number 0 can be represented in a single byte as 0b00000000 or in two bytes as 0b10000000 00000000. This makes the encoding ambiguous: an encoder gets to choose among several valid representations of the same value. There may be situations where this is a desirable property, but in this case, I want there to be one and only one representation of each integer.
A simple solution is to make the representations not overlap by adding the number of valid shorter representations to the integer representation. That is, interpret the 2-byte value as an integer, then add the number of valid 1-byte values to it. And for the 4-byte value, add the number of valid 2-byte and 1-byte values to it. An alternative way to state this is to add to the integer the largest number you can represent in the 2^{(N-1)}-byte representation, plus one.
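The bias for a given N is just the sum of the capacities of all the shorter representations. A small helper (my own, for illustration) makes the arithmetic concrete:

```python
def bias(n: int) -> int:
    """Number of values covered by all representations shorter than 2^n bytes."""
    total = 0
    for k in range(n):
        size = 2 ** k                      # bytes in the shorter representation
        payload_bits = 8 * size - (k + 1)  # k leading 1 bits plus the 0 that ends them
        total += 1 << payload_bits         # how many values that size holds
    return total
```

For example, bias(1) is 128 (the 1-byte values) and bias(2) is 128 + 16384 = 16512, matching where the 2-byte and 4-byte ranges start below.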
That gives us:
0b00000000                              =         0
0b00000001                              =         1
    :
0b01111111                              =       127
0b10000000 00000000                     =       128
0b10000000 00000001                     =       129
    :
0b10111111 11111111                     =     16511
0b11000000 00000000 00000000 00000000   =     16512
0b11000000 00000000 00000000 00000001   =     16513
    :
0b11011111 11111111 11111111 11111111   = 536887423
Here is a simplistic Python implementation. One of the nice things about using Python is that it natively handles huge integers, so only the serialization aspect needs to be implemented.
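As a rough sketch of what such an implementation involves (function names are mine; this is not the attached varlenint.py), both directions of the biased scheme fit in a few lines:

```python
def encode(value: int) -> bytes:
    """Encode a non-negative integer with the biased leading-1-bits scheme."""
    if value < 0:
        raise ValueError("non-negative integers only")
    n = 0
    while True:
        size = 2 ** n
        count = 1 << (8 * size - (n + 1))  # values this size can hold
        if value < count:
            prefix = ((1 << n) - 1) << (8 * size - n)  # n leading 1 bits
            return (prefix | value).to_bytes(size, "big")
        value -= count  # bias: skip past the values covered by this size
        n += 1

def decode(data: bytes) -> int:
    """Inverse of encode: strip the prefix bits, then add back the bias."""
    n = 0
    while data[0] & (0x80 >> n):  # count leading 1 bits
        n += 1
    size = 2 ** n
    raw = int.from_bytes(data[:size], "big") & ((1 << (8 * size - n)) - 1)
    return raw + sum(1 << (8 * 2 ** k - (k + 1)) for k in range(n))
```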
This approach can be generalized in a couple of ways.
The first is that this could be done using leading 0 bits instead of leading 1 bits. I prefer the leading 1 bits because the 1-byte values 0 through 127 are the same as your normal unsigned char. But whether it is defined as the number of leading 1 bits or 0 bits, it still gives us a way to determine the value of N.
The second is in the translation of N into a representation size in bytes. I chose 2^{N}, but it could just as easily be any function of N. If you wanted to have the size of the representation grow more slowly, you could use f(N) = N + 1. I like f(N) = 2^{N} in part because it gives 1-byte, 2-byte, 4-byte, and 8-byte representations that fit well into the natural integer sizes on modern computers.
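To compare growth functions, note that an f(N)-byte representation carries 8·f(N) − (N+1) payload bits: N leading 1 bits plus the 0 bit that terminates them. A throwaway helper (mine, for illustration) shows the capacities side by side:

```python
def capacity(f, n):
    """Values an f(N)-byte representation holds: 8*f(N) - (N+1) payload bits."""
    return 1 << (8 * f(n) - (n + 1))

# exponential growth (2^N bytes) vs. linear growth (N+1 bytes)
for n in range(4):
    print(n, capacity(lambda n: 2 ** n, n), capacity(lambda n: n + 1, n))
```

The two agree for N = 0 and N = 1 (both give 1- and 2-byte representations); after that, the linear function trades away the huge 4- and 8-byte ranges for more gradual 3- and 4-byte steps.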
This can also be generalized to signed integers, as long as you define a mapping from the set of non-negative integers to the set of all integers. A trivial solution would be to take the least significant bit to be a sign bit, though this gives you a way to represent negative zero. I suppose you could use that as a representation of Not-a-Number (NaN) or something along those lines. Alternatively, use a two's complement representation, though care would have to be taken with sign-extending the value and adding to it the largest-magnitude negative or positive value that would overflow the next-smaller representation. This is left as an exercise for the reader.
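One mapping that avoids the negative-zero slot is the "zigzag" interleaving used by Protocol Buffers: essentially the sign-bit-in-the-LSB idea with the redundancy removed. A sketch (not from the attached code):

```python
def zigzag(x: int) -> int:
    """Map 0, -1, 1, -2, 2, ... onto 0, 1, 2, 3, 4, ..."""
    return (x << 1) if x >= 0 else ((-x << 1) - 1)

def unzigzag(u: int) -> int:
    """Inverse mapping: even codes are non-negative, odd codes are negative."""
    return (u >> 1) if (u & 1) == 0 else -((u + 1) >> 1)
```

Small-magnitude integers of either sign then land on small non-negative codes, so they get the short representations.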
Returning to our original problem statement, we now have a way to prepend a length to a data value that keeps the overhead low for small values while still supporting very large values. One byte of overhead to represent the length for data of 0 through 127 bytes is acceptable. Two bytes for 128 through 16511 bytes is also fine. By the time the overhead reaches 8 bytes, you're dealing with over half a gigabyte of data.
But such a representation has additional possible uses. One that I have toyed with is using such a representation for a binary network communication protocol. Each message you define gets assigned an integer value, and you don't have to commit to a specific maximum number of message types when you define your protocol. Were I to use this for a protocol, I would want to make a 'version check' message have a numeric value < 128 so it fits in a single byte, and most messages would get a number that maps to a 2-byte value. That way, as messages are determined to be bandwidth "hot spots", they can be moved to a value < 128 to cut a byte off their representation. The other thing I would probably do with protocol numbers would be to define a different f(N) that grows the size of the integer representation more slowly. For that matter, it would be possible to map f(0) → 1, f(1) → 2, f(2) → 2, f(3) → 2, f(4) → 3, etc.; this would complicate some of the math, but would allow packing more values into 2 bytes. (The number of values represented by the second and third 2-byte representations would be half or a quarter of what the first 2-byte representation supports.) In a case like this, I would probably only define f(N) for the values of N I actually expect to use, and extend the definition as the need arose.
Network protocols are also a case where the uniqueness of the representation is important. When you are dealing with systems that you want to secure (such as a network protocol), you do not want the ambiguity that a non-unique encoding implies. You want one and only one representation of each possible value, so an attacker has no flexibility to do something strange like using 256 bytes to represent the number 0.
I was prompted to post this by a question on programmers.stackexchange.com.
Attachments

varlenint.py (12.7 KB) - added by retracile 5 years ago.
Simple implementation of a variable-length integer encoding