Accelerate Your Large Builds Locally With Distcc

The motto of Sun Microsystems back in the day was “The Network Is The Computer,” a slogan that made more sense when CPUs were slower, single-core affairs; these days, to get a faster compile, you’d simply throw more cores and memory at the problem. The thing is, most of us don’t do huge compilations all that often; we can’t remember the last time we even attempted a Linux kernel build. However, if you do find yourself with a sudden need to do so, and have access to a pile of machines hooked to a network, then why not check out distcc, the fast distributed C/C++ compiler? We’ve seen a few mentions of it in comments and a HaD links article, but we’ve never explicitly covered the tool. So here we go.

Continue reading “Accelerate Your Large Builds Locally With Distcc”

Linux Fu: Customizing Printf

When it comes to programming in C and, sometimes, C++, the printf function is a jack-of-all-trades. It does a nice job of quickly writing output, but it can also do surprisingly intricate formatting. For debugging, it is a quick way to dump some data. But what if you have data that printf can’t format? Sure, you can just write a function to pick your data apart into pieces printf knows about. But if you are using the GNU C library, you can also extend printf with custom conversion specifiers. It isn’t that hard, and it makes using custom data types easier.

An Example

Suppose you are writing a program that studies coin flips. Even numbers are considered tails, and odd numbers are heads. Of course, you could just print out the number or even mask off the least significant bit and print that. But what fun is that?

Here’s a very simple example of using our new printf specifier “%H”:

printf("%H %H %H %H\n",1,2,3,4);
printf("%1H %1H\n",0,1);

When you have a width specification of 1 (like you do in the second line), the output will be H or T. With any other width, the output will be HEADS or TAILS.
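
The full article walks through the real implementation; as a minimal sketch of the mechanism, here is how such a handler pair might look using glibc’s register_printf_specifier() (the function names and details below are ours, not necessarily those from the article):

#include <stdio.h>
#include <printf.h>   /* glibc extension: register_printf_specifier() */

/* Write HEADS/TAILS (or H/T when the field width is 1) for an int argument */
static int print_coin(FILE *stream, const struct printf_info *info,
                      const void *const *args)
{
    int value = *(const int *)args[0];
    const char *s = (value & 1) ? (info->width == 1 ? "H" : "HEADS")
                                : (info->width == 1 ? "T" : "TAILS");
    return fprintf(stream, "%s", s);
}

/* Tell printf that %H consumes exactly one int-sized argument */
static int print_coin_arginfo(const struct printf_info *info, size_t n,
                              int *argtypes, int *size)
{
    if (n > 0) {
        argtypes[0] = PA_INT;
        size[0] = sizeof(int);
    }
    return 1;
}

int main(void)
{
    register_printf_specifier('H', print_coin, print_coin_arginfo);
    printf("%H %H %H %H\n", 1, 2, 3, 4);  /* HEADS TAILS HEADS TAILS */
    printf("%1H %1H\n", 0, 1);            /* T H */
    return 0;
}

One wrinkle to know about: GCC’s -Wformat checking has no idea about specifiers registered at run time, so it will warn about the unknown %H conversion unless you quiet it.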

Continue reading “Linux Fu: Customizing Printf”

Porting DOOM To A Forgotten Apple OS

Apple hasn’t always had refined user experiences in their operating systems. In the distant past of the ’90s, their machines were still kind of clunky, far from the polished, high-end consumer computers of the modern era. That wasn’t all Apple offered back then, though. They also had a long-forgotten alternative operating system called A/UX, designed for government applications, and [Keriad] is here to show us this relic of an operating system and port DOOM to it.

A/UX was designed in the pre-PowerPC days, when Macintosh computers ran on Motorola 68000-series chips. Luckily, [Keriad] has a still-fully-functional Mac Quadra 800 with just such a chip. DOOM was developed with the NeXTSTEP operating system, which can run on old Macs thanks to another tool called MacX that allows X11 applications to run on the Mac. A version of gcc for A/UX was found as well, and with the source code in hand, [Keriad] was eventually able to compile a binary. There were several hiccups along the way (including a lack of sound), but eventually DOOM was running on this forgotten operating system.

The main problem with the build in the end, besides the lack of sound, is that the game only runs at 2 to 3 frames per second. [Keriad] speculates that this is due to all the compatibility layers needed to compile and run the game at all, but it’s still impressive. As far as we know, [Keriad] is the first person to port DOOM to this OS, although if you’re looking for something more straightforward, we would recommend this purpose-built Linux distribution whose sole task is to get you slaying demons as quickly as possible.

Making The Case For COBOL

Perhaps rather unexpectedly, on the 14th of March this year the GCC mailing list received an announcement regarding the release of the first-ever COBOL front-end for the GCC compiler. For the uninitiated, COBOL saw its first release in 1959, making it, at 63 years old, one of the oldest programming languages still in regular use. The reason for its persistence is mostly its focus, from the very beginning, on being a transaction-oriented, domain-specific language (DSL).

Its acronym stands for Common Business-Oriented Language, which clearly references the domain it targets. Even in the current COBOL 2014 standard, it is still essentially the same, primarily transaction-oriented language, though with added support for structured, procedural, and object-oriented programming styles. Deriving most of its core from Admiral Grace Hopper’s FLOW-MATIC language, it allows the business logic one would encounter at financial institutions and other businesses to be described efficiently, in clear English.

Unlike the older GnuCOBOL project, which translates COBOL to C, the new GCC-COBOL front-end does away with that intermediate step and compiles COBOL source code directly to binary. All of which may raise the question of why an entire man-year was invested in this effort for a language that has been declared ‘dead’ for probably at least half of its 63-year existence.

Does it make sense to learn or even use COBOL today? Do we need a new COBOL compiler?

Continue reading “Making The Case For COBOL”

Even More Firmware In Your Firmware

There are many ways to update an embedded system in the field. Images can fly through the air one bit at a time, travel by sneakernet, or hitch a ride on other passing data. OK, maybe that’s a stretch, but there is certainly a plethora of ways to get those sweet update bytes into a target system. How are those bytes assembled, and what are the tools that do the assembly? This is the problem I needed to solve.

Recall, my system wasn’t a particularly novel one (see the block diagram below): just a few computers asking each other for an update over some serial busses. I had chosen to bundle the payload firmware images into the binary for the intermediate microcontroller, which was to carry out the update process. The additional constraint was that the blending of the three firmware images (one carrier and two payload) needed to happen long after compile time, on a different system with a separate toolchain. There were ultimately two options that fit the bill.

The system thirsty for an update
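
The article goes on to weigh the two options, but to make the idea concrete, here is a minimal sketch of one well-known way to splice a payload into an already-compiled carrier image; this is our illustration, not necessarily either of the author’s two options, and the section name and objcopy invocation are assumptions for the example:

/* Carrier firmware reserves a named flash section that a later build step,
 * on a completely different machine, can overwrite in the compiled ELF:
 *
 *   objcopy --update-section .payload=payload.bin carrier.elf
 *
 * The array is padded to worst-case size so the section layout survives
 * the post-build splice. All names here are illustrative. */
#include <stdint.h>

#define PAYLOAD_MAX 65536u  /* largest payload we ever expect to carry */

__attribute__((section(".payload"), used))
static const uint8_t payload_image[PAYLOAD_MAX] = { 0xFF };

/* The update task can then stream payload_image out over the serial bus. */

The cost of an approach like this is reserving worst-case space up front; the benefit is that the carrier’s toolchain never needs to see the payload images at all.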

Continue reading “Even More Firmware In Your Firmware”

Putting The Firmware In Your Firmware

Performing over-the-air updates of devices in the field can be a tricky business. Reliability and recovery are, of course, key, but even getting the right bits to the right storage sectors can be a challenge. Recently I’ve been working on a project that called for the design of a new pathway to update some small microcontrollers that were decidedly inconvenient.

There are many pieces to a project like this: a bootloader to perform the actual updating, a robust communication protocol, recovery pathways, a file transfer mechanism, and more. What made these micros particularly inconvenient was that they weren’t network-connected themselves, but instead required a hop through another intermediate controller, which itself was also not connected to the network. Predictably, the otherwise simple “file transfer” step quickly ballooned out into a complex onion of tasks to complete before the rest of the project could continue. As they say, it’s micros all the way down.

The system du jour
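
To make the “hop” concrete, here is a hypothetical sketch of what one frame of such a forwarding protocol could look like; every name and field size below is invented for illustration and is not from the article:

/* Hypothetical update frame relayed across the serial hop. */
#include <stdint.h>
#include <stddef.h>

#define CHUNK_BYTES 128
#define MY_ID       1  /* this controller's own address on the bus */

struct update_frame {
    uint8_t  dest_id;            /* which micro the chunk is destined for */
    uint32_t offset;             /* where the chunk lands in target flash */
    uint8_t  len;                /* number of valid bytes in data[]       */
    uint8_t  data[CHUNK_BYTES];  /* the firmware bytes themselves         */
    uint16_t crc;                /* integrity check over the frame        */
} __attribute__((packed));

/* The intermediate controller consumes frames addressed to itself and
 * relays everything else one hop further down the chain. */
void handle_frame(const struct update_frame *f,
                  void (*write_flash)(uint32_t, const uint8_t *, size_t),
                  void (*forward)(const struct update_frame *))
{
    if (f->dest_id == MY_ID)
        write_flash(f->offset, f->data, f->len);
    else
        forward(f);
}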

Continue reading “Putting The Firmware In Your Firmware”

Compiler Explorer, Explored

It wasn’t long ago that we introduced you to a website, the Godbolt compiler explorer, that allows the visitor to compile code using a slew of compilers and compare their output. We suspect some readers said, “Wow! I can use that!”, while perhaps everyone else said, “Huh?” Well, if you were in the second group, you ought to watch [What’s a Creel’s] video below, where he walks through using the website. He looks at four different algorithms using four different compilers, and it is a good example of how you might use the tool to make decisions about how you write software.
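
If you want a taste before watching, paste something like this into the explorer and flip between compilers and optimization levels; some compilers recognize this classic bit-counting loop and collapse it into a single population-count instruction, while others faithfully emit the loop (this example is ours, not one from the video):

/* Kernighan's bit-counting loop: clears the lowest set bit each pass. */
unsigned count_bits(unsigned x)
{
    unsigned n = 0;
    while (x) {
        x &= x - 1;  /* drop the lowest set bit */
        n++;
    }
    return n;
}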

Continue reading “Compiler Explorer, Explored”