Tracking Binary Changes: Learn The DIFF-erent Ways Of The ELF

Source control is often the first step when starting a new project (or it should be, we’d hope!). Breaking work down into smaller chunks and managing the changes between them makes it easier to share work between developers and to catch and revert mistakes after they happen. As project complexity increases it’s often desirable to add other nice-to-have features on top, like automatic build, test, and deployment.

These are less common for firmware, but automatic builds (“Continuous Integration” or CI) are relatively easy to set up and instantly give you an eye on a range of potential problems. Forget to check in that new header? Source won’t build. Tweaked the linker script and broke something? Software won’t build. Renamed a variable but forgot a few references? Software won’t build. But just building the software is only the beginning. [noseglasses] put together a tool called elf_diff to make tracking binary changes easier, and it’s a nifty addition to any build pipeline.

In firmware-land, where flash space can be limited, it’s nice to keep a handle on code size. This can be done a number of ways. Manual inspection of .map files (colloquially “mapfiles”) is the easiest place to start, but it isn’t conducive to automatic tracking over time. Mapfiles are generated by the linker and track the compiled sizes of object files generated during the build, as well as the flash and RAM layouts of the final output files. Here’s an example generated by GCC from a small electronic badge. This is a relatively simple single-purpose device, and the file is already about 4,000 lines long. Want to figure out how much code space a function takes up? That’s in there, but you’re going to have to dig for it.
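If you just need a quick answer for one symbol, a few lines of scripting will pull it out of the mapfile. Here’s a minimal sketch, assuming a GNU ld mapfile, a build done with -ffunction-sections (so each function lands in its own .text.<name> input section), and made-up file and function names:

#!/usr/bin/env python3
# Rough mapfile grep: print the flash footprint of one function.
# Assumes GNU ld mapfile output where an input-section line looks like
# ".text.some_function  0x08001234  0x5c  obj/foo.o", possibly wrapped
# onto two lines when the section name is long.
import re

def section_size(map_path, symbol):
    text = open(map_path, encoding="utf-8", errors="replace").read()
    pattern = re.compile(
        r"^ *\.\w+\." + re.escape(symbol) + r"\s*\n?\s*"
        r"0x[0-9a-fA-F]+\s+(0x[0-9a-fA-F]+)",
        re.MULTILINE)
    match = pattern.search(text)
    return int(match.group(1), 16) if match else None

size = section_size("firmware.map", "badge_draw_frame")  # hypothetical names
print(f"badge_draw_frame: {size} bytes" if size is not None else "symbol not found")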

elf_diff automates that process by wrapping it up in a handy report which can be generated automatically as part of a CI pipeline. Fundamentally the tool takes as inputs an old and a new ELF file and generates HTML or PDF reports like this one that include readouts like the image shown here. The resulting table highlights a few classes of binary changes. The most prominent is size change for the code and RAM sections, but it also breaks down code size changes in individual symbols (think structures and functions). [noseglasses] has a companion script to make the CI process easier by compiling a pair of firmware files and running elf_diff over them to generate reports. This might be a useful starting point for your own build system integration.
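We won’t vouch for elf_diff’s exact command line here (check its README), but the heart of the trick, diffing per-symbol sizes between two builds, is easy to approximate with GNU binutils and a little Python. A hand-rolled sketch of the same idea, with placeholder file names, might look like this:

#!/usr/bin/env python3
# Poor man's elf_diff: compare per-symbol sizes between two ELF files.
# Shells out to "nm --print-size --size-sort"; point nm at your cross
# toolchain's binary (e.g. arm-none-eabi-nm) for firmware images.
import subprocess

def symbol_sizes(elf_path, nm="nm"):
    out = subprocess.run([nm, "--print-size", "--size-sort", elf_path],
                         capture_output=True, text=True, check=True).stdout
    sizes = {}
    for line in out.splitlines():
        parts = line.split()
        if len(parts) == 4:          # address  size  type  name
            sizes[parts[3]] = int(parts[1], 16)
    return sizes

old = symbol_sizes("firmware_old.elf")   # placeholder paths
new = symbol_sizes("firmware_new.elf")
for name in sorted(set(old) | set(new)):
    delta = new.get(name, 0) - old.get(name, 0)
    if delta:
        print(f"{delta:+6d} bytes  {name}")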

Thanks [obra] for the tip! Have any tips and tricks for applying modern software practices to firmware development? Tell us in the comments!

Tucoplexing: A New Charlieplex For Buttons And Switches

Figuring out the maximum number of peripherals which can be sensed or controlled with a minimum number of IOs is a classic optimization trap with a lot of viable solutions. The easiest might be something like an I2C IO expander, which would give you N outputs for 4 wires (SDA, SCL, Power, Ground). IO expanders are easy to interface with and not too expensive, but that ruins the fun. This is Hackaday, not optimal-cost-saving-engineer-aday! Accordingly there are myriad schemes for using high impedance modes, the directionality of diodes, analog RCs, and more to accomplish the same thing with maximum cleverness and minimum part cost. Tucoplexing is the newest variant we’ve seen, proven out by the prolific [Micah Elizabeth Scott] (AKA [scanlime]) and not the first thing to be named after her cat Tuco.

[Micah’s] original problem was that she had a great 4 port USB switch with a crummy one button interface. Forget replacement; the hacker’s solution was to reverse and reprogram the micro to build a new interface that was easier to relocate on the workbench. Given limited IO the Tucoplex delivers 4 individually controllable LEDs and 4 buttons by mixing together a couple different concepts in a new way.

Up top we have 4 LEDs from a standard 3 wire Charlieplex setup. Instead of the remaining 2 LEDs from the 3 wire ‘plex, at the bottom we have a two-button Charlieplex pair plus two bonus buttons on an RC circuit. Given the scary analog circuit, the scan method is pleasingly simple. By driving the R and T lines quickly the micro can check for a short, which indicates a pressed switch. Once that’s established it can run the same scan again, this time pausing to let the cap charge before sensing. After releasing the line, if there is no charge then the cap must have been shorted out, meaning the switch across it was pressed; otherwise it must be the other, non-cap switch. Check out the repo for hardware and firmware sources.
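Here’s our reading of that two-pass scan, boiled down to Python-flavored pseudocode. The helper functions are hypothetical stand-ins for the actual GPIO and timing work; [Micah]’s repo has the real firmware:

def scan_rc_button_pair():
    """Figure out which (if any) of the two switches on the RC leg is pressed."""
    # Pass 1: drive the R/T pair quickly and look for a short.
    drive_line()
    shorted = line_reads_shorted()
    release_line()
    if not shorted:
        return None                    # neither switch is pressed

    # Pass 2: drive again, but pause long enough for the cap to charge.
    drive_line()
    wait_for_cap_to_charge()
    release_line()
    if not line_reads_charged():
        # No charge survived, so the cap must have been shorted out:
        # the switch wired across the cap is the one being held down.
        return "cap switch"
    return "plain switch"              # the other, non-cap switch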

Last time we talked about a similar topic a bunch of readers jumped in to tell us about their favorite ways to add more devices to limited IOs. If you have more clever solutions to this problem, leave them below! If you want to see the Twitter thread with older schematics and naming of Tucoplexing look after the break.

Continue reading “Tucoplexing: A New Charlieplex For Buttons And Switches”

Faster Computers Lead To Slower Experiences?

Ever get that funny feeling that things aren’t quite what they used to be? Not in the way that a new washing machine has more plastic parts than one 40 years its senior. More like “my laptop can churn through hundreds of gigaflops, but when I scroll it doesn’t feel great.” That perception of smoothness might be based on a couple of factors, including system latency. A couple of years ago [danluu] had that feeling too and measured the latency of “devices I’ve run into in the past few months” (based on this list, he lives a more interesting life than we do). It turns out his hunch was objectively correct. What he wrote is a wonderful deep dive into how and why a wide variety of devices work, and into the hardware and software contributors to latency.

Let’s be clear about what “latency” means in this context. [danluu] was checking the time between a user input and some response on screen. For desktop systems he measured the response to a keystroke; for mobile devices, the response to scrolling in a browser. If you’re here on Hackaday (or maybe at a Vintage Computer Festival) the cause of the apparent contradiction at the top of the charts might be obvious.
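As a rough feel for what’s being timed, here’s a crude software-only sketch using pygame. It only measures from the moment a keypress event reaches the application to the moment a redrawn frame is handed to the display driver, so it misses the keyboard scan, USB, compositor, and panel latency that [danluu] captured end-to-end with a high-speed camera:

# Crude app-side latency probe: keydown event received -> frame flipped.
# This deliberately ignores everything outside the application, which is
# a large part of what [danluu] actually measured.
import time
import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            start = time.perf_counter()
            screen.fill((255, 255, 255))        # "respond" to the keystroke
            pygame.display.flip()               # hand the frame to the driver
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"app-side latency: {elapsed_ms:.2f} ms")
pygame.quit()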

Q: Why are some older systems faster than devices built decades later? A: The older systems just didn’t do much! Instead of complex multi-tasking operating systems doing hundreds of things at once, the CPU’s entire attention was bent on whatever user process was running. There are obvious practical drawbacks here but it certainly reduces context switching!

In some sense this complexity that [danluu] describes is at the core of how we solve problems with programming. Writing code is all about abstraction. While it’s true that any program could be written directly in machine code and customized to an individual machine’s hardware configuration, it would be pretty inconvenient for both developer and user. So over time layers of sugar have been added on top to hide raw hardware behind nicer interfaces written in higher-level programming languages.

And instead of writing every program to target exact hardware configurations there is a kernel to handle the lowest layers, then layers adding hotplug systems, power management, pluggable module and driver infrastructure, and more. When considering solutions to a programming problem the approach is always recursive: you can solve the problem, or add a layer of abstraction and reframe it. Enough layers of the latter makes the former trivial. But it’s abstractions all the way down.

[danluu]’s observation is that we’re just now starting to curve back around and hit low latency again, but this time by brute force! Modern solutions to latency largely look like increasingly exotic display technologies and complex optimizations that reach from UI draw functions all the way down to the silicon, rather than the removal of software and system infrastructure. It turns out the benefits of software complexity in terms of user experience and ease of development are worth it most of the time.

For a very tangible illustration of latency as applied to touchscreen devices, check out the Microsoft Research video after the break (linked to in [danluu]’s piece).

Continue reading “Faster Computers Lead To Slower Experiences?”

KiCon Gets Our KiCad Conference On

Oh, what’s KiCon, you say? KiCon is the first dedicated conference on our favorite libre EDA tool, KiCad. It’s organized by friend of Hackaday Chris Gammell and scheduled for April 26th and 27th in Chicago.

Having stuffed ourselves full of treats through the holidays, followed by sleeping through the calm winter months, we find ourselves once again facing the overwhelming tsunami of conference season. This year things are heating up early, and you’ll find a lot of Hackaday staff are headed to Chicago for KiCon.

Now that an early selection of talks has been released, the end of April can’t come soon enough. Being user focused, the conference centers on what people make using the tool and how it can be leveraged to improve your next project. Wayne Stambaugh, the project lead for KiCad itself, will be on hand to talk about the state of the tool and what the road map looks like from here. There will be a pair of talks on effective version control and on applying the practice of continuous integration and deployment to the EDA world. We’ll hear about methods for working with distributed project members and tips for designing easy-to-learn beginner soldering kits. And there will be two talks on RF and microwave design, one of which we hope will teach us how to use that mysterious toolbar with the squiggly lines.

For an extra dash of flavor, a few Hackaday staff will be participating in the festivities. One of our writers is making the flight over to present a talk about how to quickly generate and use 3D models in FreeCAD, something we’re very interested in applying to our messy part libraries. Kerry Scharfglass will be around to walk through how to lay out a manufacturing line and design the test tools that sit on it. And our illustrious Editor in Chief Mike Szczys will be roaming the halls in search of excellent hacks to explore and brains to pick.

Interested in attending or volunteering for the conference? Now is the time to buy your tickets and/or apply as a volunteer!

Of course there’s a ton of fun and games surrounding KiCon. Hackaday will be hosting another edition of our always-exciting bring-a-hack on the evening of Saturday, April 27th, after official activities wrap up. Plan to stop by and enjoy a beverage at this gathering of like-minded hackers showing off awesome toys. We’ll get more location details out soon, but for now, grab a ticket to the con and make your travel arrangements.

The Woeful World Of Worldwide E-Waste

How large is the cache of discarded electronics in your home? They were once expensive and cherished items, but now they’re a question mark for responsible disposal. I’m going to dig into this problem — which goes far beyond your collection of dead smartphones — as well as the issues of where this stuff ends up versus where it should end up. I’m even going to demystify the WEEE mark (that crossed-out trashcan icon you’ve been noticing on your gadgets), talk about how much jumbo jets weigh, and touch on circular economies, in pursuit of a better understanding of the waste streams modern gadgets generate.

Our lives include an increasing number of “how do I dispose of this [X]” moments, where X is piles of old batteries, LCDs, desktop towers, etc. This leads to relationship-testing piles of potential garbage in a garage or at the bottom of a closet. Sometimes that old gear gets sold or donated. Sometimes there’s a handy e-waste campaign that swings through the neighborhood to scoop that pile up, and sometimes it eventually ends up in the trash, wrapped in that dirty feeling that we did something wrong. We’ve all been there; it’s easy to discover that responsible disposal of our old electronics can be hard.

Fun fact: the average person who lives in the US generates 20 kg of e-waste annually (or about 44 freedom pounds). That’s not unique; in the UK it’s about 23 kg (that’s 23 in common kilograms), 24 kg for Denmark, and on and on. That’s quite a lot for an individual human, right? What makes up that much waste for one person? For that matter, what sorts of waste are tracked in the bogus-sounding e-waste statistics you see bleated out in pleading Facebook posts? Unsurprisingly there are some common definitions. And the Very Serious People at the World Economic Forum who bring you the definitions have some solutions to consider too.

We spend a lot of time figuring out how to build this stuff. Are we spending enough time planning for what to do with the gear once it falls out of favor? Let’s get to the bottom of this rubbish.
Continue reading “The Woeful World Of Worldwide E-Waste”

Putting An Out Of Work IPod Display To Good Use

[Mike Harrison] produces so much quality content that sometimes excellent material slips through the editorial cracks. This time we noticed that one such lost gem was [Mike]’s reverse engineering of the 6th generation iPod Nano display from 2013, caught when the also prolific [Greg Davill] used one on a recent board. Despite the march of progress in mobile device displays, small screens which are easy to connect to hobbyist-style devices are still typically fairly low quality. It’s easy to find fancier displays as salvage, but interfacing with them electrically can be brutal, never mind the reverse engineering required to figure out what signal goes where. Suffice to say you probably won’t find a manufacturer data sheet, and it won’t conveniently speak SPI or I2C.

After a few generations of strange form factor exploration, Apple has all but abandoned the stand-alone portable media player market; witness the sole surviving member of that once-mighty species, the woefully outdated iPod Touch. Luckily, thanks to vibrant sales, replacement parts for the little square sixth-generation Nano are still inexpensive and easily available. If only there were a convenient interface, this would be a great source of comparatively very high quality displays. Enter [Mike].

Outer edge of FPGA and circuit

This particular display speaks a protocol called DSI over a low voltage differential MIPI interface, a common combination still used to drive big, rich, modern displays. The specifications are somewhat available…if you’re an employee of a company that is a member of the working group that standardizes them. There are membership discounts for companies with yearly revenue below $250 million, and dues are thousands of dollars a quarter.

Fortunately for us, after some experiments [Mike] figured out enough of the command set and signaling to generate easily reproduced schematics and references for the data packets, checksums, etc. The project page has a smattering of information, but the circuit includes some unusual provisions to adjust signal levels and other goodies so try watching the videos for a great explanation of what’s going on and why. At the time [Mike] was using an FPGA to drive the display and that’s certainly only gotten cheaper and easier, but we suspect that his suggestion about using a fast micro and clever tricks would work well too.
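As a taste of the “checksums” part: MIPI DSI long packets end in a 16-bit CRC, usually described as the CCITT polynomial (x^16 + x^12 + x^5 + 1) run LSB-first with a 0xFFFF seed. Treat those parameters as assumptions to verify against [Mike]’s packet notes; the sketch below just shows what such a checksum routine looks like:

def dsi_payload_crc(payload):
    """Reflected CRC-16 with the CCITT polynomial and a 0xFFFF seed, as
    commonly cited for MIPI DSI long-packet checksums. Check the
    polynomial, bit order, and seed against real captures."""
    crc = 0xFFFF
    for byte in payload:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1  # 0x1021 reversed
    return crc

print(hex(dsi_payload_crc(bytes([0x01, 0x02, 0x03, 0x04]))))  # made-up payload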

It turns out we made incidental mention of this display when covering [Mike]’s tiny thermal imager, but it hasn’t turned up much since then. As always, thanks for the accidental tip, [Greg]! We’re waiting to see the final result of your experiments with this.

Hack Your File Hierarchy With Johnny Decimal System (Dewey’s Older Brother)

Most of us have our fair share of digital debris. After all, with drives measured in one-million-million byte increments it’s tempting to never delete anything. The downside is you may never be able to find anything either. [Johnny Noble] must have gotten pretty fed up with clutter when he decided to formalize and publish his own numeric system for organizing everything he comes in contact with. It’s called Johnny Decimal and it’s actually pretty simple!

This is of course a play on words for the Dewey Decimal system. Dewey is one of a variety of information organization systems used by libraries to sort the books on their shelves. It’s based on placing books into sets of fixed, predefined categories which are uniform across all users of the system. To locate a volume, the user composes categories of increasing specificity to build a number which specifies the approximate shelf space where a particular book should live. Each individual volume has a slightly more verbose assigned number which includes the author’s name to reduce confusion in cases where there are multiple works. Wikipedia has an instructive example which you can see here.

Johnny Decimal works similarly, but [Johnny] has devised a specific method for users to create their own categories with somewhat less specificity than Dewey. This makes it less onerous for users to adapt the system to their needs, and if it’s easier to use it’s more likely to be used. I won’t spoil the process here; go read his site for instructions.

Ok so why bother? [Johnny] hints at it, but part of the point is to force the user to think about organization in the first place. With no system and an endless torrent of incoming files it’s easy to end up with the giant “~/Downloads” of doom and never improve from there. But with a clearly defined system (which is easy to execute!) the bar to improve things gets much lower. Certainly the thought of a well-organized file system gives us the shivers!
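If you want to see what the skeleton looks like on disk, bootstrapping a Johnny-Decimal-style folder tree takes only a few lines of scripting. Here’s a minimal sketch, where the areas and categories are placeholder examples rather than a recommended taxonomy:

#!/usr/bin/env python3
# Bootstrap a Johnny-Decimal-style folder tree under ~/Documents.
# The areas and categories below are placeholders; design your own
# after reading the instructions on [Johnny]'s site.
from pathlib import Path

SYSTEM = {
    "10-19 Finance":  ["11 Tax returns", "12 Receipts", "13 Invoices"],
    "20-29 Projects": ["21 Badge firmware", "22 Home automation"],
}

root = Path("~/Documents").expanduser()
for area, categories in SYSTEM.items():
    for category in categories:
        folder = root / area / category
        folder.mkdir(parents=True, exist_ok=True)
        print(folder)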

If you’re interested in implementing it in your own systems, the Johnny Decimal site has many pages devoted to explaining how to put together areas and categories, how to handle running out of buckets, the process for developing your own system, and more. If you try it and have luck, send us a note! We’d love to hear about anything you discover. If you’ll excuse us, we’re off to go fix up our parts bins with a marker and some sticky notes.

Continue reading “Hack Your File Hierarchy With Johnny Decimal System (Dewey’s Older Brother)”