Code Wrong: Expand Your Mind

The really nice thing about doing something the “wrong” way is that there’s just so much variety! If you’re doing something the right way, the fastest way, or the optimal way, well, there’s just one way. But if you’re going to do it wrong, you’ve got a lot more design room.

Case in point: esoteric programming languages. The variety is stunning. There are languages intended to be unreadable, or to sound like Shakespearean sonnets, or cooking recipes, or hair-rock ballads. Some of the earliest esoteric languages were just jokes: compilations of all of the hassles of “real” programming languages of the time, yet still made to function. Some represent instructions as a grid of colored pixels. Some represent the code in a fashion that’s tantamount to encryption, and the only way to program them is by brute-forcing the code space. Others, including the notorious Brainf*ck, are actually not half as bad as their rap: Brainf*ck is a very direct implementation of a Turing machine.
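If you have never met Brainf*ck, the whole language is eight commands acting on a data pointer and a tape of cells. The usual way to explain it is by the C equivalent of each command, sketched below (the tape length and cell width here are arbitrary choices, not part of the language):

```c
/* Brainf*ck in a nutshell: a tape of byte-sized cells and a data pointer 'p'. */
unsigned char tape[30000];
unsigned char *p = tape;

/*  >    ++p;               move the pointer right            */
/*  <    --p;               move the pointer left             */
/*  +    ++*p;              increment the cell under p        */
/*  -    --*p;              decrement the cell under p        */
/*  .    putchar(*p);       output the cell as one byte       */
/*  ,    *p = getchar();    read one byte into the cell       */
/*  [    while (*p) {       loop while the cell is nonzero    */
/*  ]    }                  close the loop                    */
```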

So you have a set of languages that are designed to be maximally unlike each other, or traditional programming languages, and yet still be able to do the work of instructing a computer to do what you want. And if you squint your eyes just right, and look at as many of them all together as you can, what emerges out of this blobby intersection of oddball languages is the essence of computing. Each language tries to be as wrong as possible, so what they have in common can only be the unavoidable core of coding.

While it might be interesting to compare and contrast Java, C++, and Python, nearly every serious programming language has so much in common that it’s just not as instructive. They are all doing it mostly right, and that means that they’re mostly about the human factors. Yawn. To really figure out what’s fundamental to computing, you have to get it wrong.

Character Generation In 144 Bytes

[Jaromir Sukuba] has an awesome BrainF*ck interpreter project going. He’s handling the entire language in less than 1 kB of code. Sounds like a great entry in the 1 kB Challenge. The only problem is the user interface. The original design used a 4-line character-based LCD. The HD44780 controllers in these LCDs have their own character table ROM, which takes up more than 1 kB of space alone.
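To give a sense of how little machinery the language needs, here is a sketch of an interpreter core in plain C. This is emphatically not [Jaromir]’s code (his squeezes the whole thing into less than 1 kB on real hardware); it just shows the shape of the problem, with no bounds checking or error handling:

```c
#include <stdio.h>

/* A desktop-sized Brainf*ck interpreter core -- a sketch only. */
static unsigned char tape[30000];

void bf_run(const char *prog)
{
    unsigned char *p = tape;
    const char *ip = prog;

    while (*ip) {
        switch (*ip) {
        case '>': ++p;            break;
        case '<': --p;            break;
        case '+': ++*p;           break;
        case '-': --*p;           break;
        case '.': putchar(*p);    break;
        case ',': *p = getchar(); break;
        case '[':                 /* if the cell is 0, skip forward to the matching ']' */
            if (*p == 0) {
                int depth = 1;
                while (depth) {
                    ++ip;
                    if (*ip == '[') ++depth;
                    else if (*ip == ']') --depth;
                }
            }
            break;
        case ']':                 /* if the cell is nonzero, jump back to the matching '[' */
            if (*p != 0) {
                int depth = 1;
                while (depth) {
                    --ip;
                    if (*ip == ']') ++depth;
                    else if (*ip == '[') --depth;
                }
            }
            break;
        }
        ++ip;
    }
}

int main(void)
{
    /* Five passes of adding 13 leaves 65 in the second cell: prints 'A'. */
    bf_run("+++++[>+++++++++++++<-]>.");
    return 0;
}
```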

[Jaromir] could have submitted the BrainF*ck interpreter without the LCD, and probably would have done well in the contest. That wasn’t quite enough for him, though. He knew he could get character-based output going within the rules of the contest. The solution was a bit of creative compression.

Rather than a pixel-by-pixel representation of the characters, [Jaromir] created a palette of 16 single-byte vectors of commonly used patterns. Characters are created by combining these vectors. Each character is 4 x 8 pixels, so 4 vectors are used per character. The hard part was picking commonly used bit patterns for the vectors.
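The exact vector values and packing order are [Jaromir]’s and aren’t reproduced here, but the shape of the scheme is easy to sketch in C: a 16-entry table of single-byte patterns (most likely one 8-pixel column each), and four 4-bit indices per character packed into two bytes. The nibble ordering below is an assumption:

```c
#include <stdint.h>

/* Sketch of palette-compressed character generation. The zeroed tables are
   placeholders -- the real vector values are [Jaromir]'s design. */

/* 16 shared single-byte patterns, assumed to be 8-pixel columns: 16 bytes. */
static const uint8_t palette[16] = { 0 };

/* Per character: four 4-bit palette indices packed into two bytes. */
static const uint8_t font[64][2] = { { 0 } };

/* Expand one character code (0..63) into the four columns of a 4 x 8 glyph.
   Nibble order (high nibble = leftmost column) is assumed, not documented. */
void render_char(uint8_t c, uint8_t columns[4])
{
    columns[0] = palette[font[c][0] >> 4];
    columns[1] = palette[font[c][0] & 0x0F];
    columns[2] = palette[font[c][1] >> 4];
    columns[3] = palette[font[c][1] & 0x0F];
}
```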

The first iteration was quite promising – the text was generally readable, but a few characters were pretty bad. [Jaromir] kept at it, reducing and optimizing his vector palette twice more. The final design is pretty darn good. Each character uses 16 bits of storage (four 4-bit vector lookup values). The vector palette itself uses 16 bytes. That means 64 characters only eat up 144 bytes of flash.

This is exactly the kind of hack we were hoping to see in the 1 kB Challenge. A bit of creative thinking finds a way around a seemingly impossible barrier. The best part of all is that [Jaromir] has documented his work, so now anyone can use it in the 1 kB Challenge and beyond.

If you have a cool project in mind, there is still plenty of time to enter the 1 kB Challenge! Deadline is January 5, so check it out and fire up your assemblers!