RTEMS Statement Deepens Libogc License Controversy

Earlier this month we covered the brewing controversy over libogc, the community-developed C library that functions as the backbone for GameCube and Wii homebrew software. Questions about how much of the library was based on leaked information from Nintendo had been circulating for decades, but the more recent accusations that libogc included code from other open source projects without proper attribution brought the debate to a head — ultimately leading Wii Homebrew Channel developer Hector Martin to archive the popular project and use its README as a central point to collect evidence against libogc and its developers.

At the time, most of the claims had to do with code being taken from the Real-Time Executive for Multiprocessor Systems (RTEMS) project. Martin and others in the community had performed their own investigations, and found some striking similarities between the two codebases. A developer familiar with both projects went so far as to say that as much as half the code in libogc was actually lifted from RTEMS and obfuscated so as to appear as original work.

While some of these claims included compelling evidence, they were still nothing more than accusations. For their part, the libogc team denied any wrongdoing. Contributors to the project explained that any resemblance between libogc code and that of either leaked Nintendo libraries or other open source projects was merely superficial, and the unavoidable result of developing for a constrained system such as a game console.

But that all changed on May 6th, when the RTEMS team released an official statement on the subject. It turns out that they had been following the situation for some time, and had conducted their own audit of the libogc code. Their determination was that not only had RTEMS code been used without attribution, but that at least some code also appeared to have been copied verbatim from the Linux kernel, a finding that makes the license dispute (and its resolution) far more complex.

Continue reading “RTEMS Statement Deepens Libogc License Controversy”

Version Control To The Max

There was a time when version control was an exotic idea. Today, Git and a handful of other tools let developers rewind the clock or work on several versions of the same thing with very little effort. I’m here to encourage you not only to use version control but also to go a step further, at least for important projects.

My First Job

The QDP-100 with — count ’em — two 8″ floppies (from an ad in Byte magazine)

I remember my first real job back in the early 1980s. We made a particular type of sensor that had a 6805 CPU onboard and, of course, had firmware. We did all the development on physically big CP/M machines with the improbable name of Quasar QDP-100s. No, not that Quasar. We’d generate a hex file, burn an EPROM, test, and eventually, the code would make it out in the field.

Of course, you always have to make changes. We might send a technician out with a tube full of EPROMs or, in an emergency, buy the EPROMs space on a Greyhound bus. Nothing like today.

I was just getting started, and the guy who wrote the code for those sensors wasn’t much older than me. One day, we got a report that something was misbehaving out in the field. I asked him how we knew what version of the code was on the sensor. The blank look I got back worried me. Continue reading “Version Control To The Max”
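These days there is an easy answer to that question: have the build system stamp the version control revision into the firmware itself. Here is a minimal sketch, assuming a GCC toolchain and Git; the file name, macro, and variable are just placeholders for illustration, not anything from the article.

```c
/* version.c -- a minimal sketch of baking the Git revision into a build.
 * Build with something like:
 *   gcc -DGIT_VERSION="\"$(git describe --always --dirty)\"" version.c -o version
 */
#include <stdio.h>

#ifndef GIT_VERSION
#define GIT_VERSION "unknown"   /* fallback if the build system didn't define it */
#endif

/* Kept in the image so a technician (or an EPROM dump) can reveal what's running. */
const char firmware_version[] = "fw-" GIT_VERSION;

int main(void)
{
    printf("firmware version: %s\n", firmware_version);
    return 0;
}
```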

Remembering Memory: EMS, And TSRs

You often hear that Bill Gates once proclaimed, “640 kB is enough for anyone,” but, apparently, that’s a myth — he never said it. On the other hand, early PCs did have that limit, and, at first, that limit was mostly theoretical.

After all, earlier computers often topped out at 64 kB or less, or (if you had some fancy bank switching) maybe 128 kB. It was hard to justify the cost of that much memory, though. Before long, 640 kB became a real limit, and the industry found workarounds. Mercifully, the need for these eventually evaporated, but for a number of years, they were a part of configuring and using a PC.

Why 640 kB?

The original IBM PC sported an Intel 8088 processor, essentially an 8086 16-bit processor with an 8-bit external data bus. The narrower bus allowed for cheaper computers, and both chips shared the same strange memory addressing scheme, which could access up to 1 MB of memory.

In fact, the 8088’s instructions could only address 64 kB at a time, very much like the old 8080 and Z80 computers. What made things different was that the 8086 and 8088 included a set of 16-bit segment registers. This was almost like bank switching: the 1 MB space could be used 64 kB at a time, with segments starting on 16-byte boundaries.

So a full address was a 16-bit segment and a 16-bit offset, with the physical address being the segment multiplied by 16 plus the offset. Segment 0x600D, offset 0xF00D would be written as 600D:F00D. Because each segment started just 16 bytes after the previous one, 0000:0020, 0001:0010, and 0002:0000 were all the same memory location. Confused? Yeah, you aren’t the only one.
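Here is a small C sketch of that arithmetic, just to make the aliasing concrete:

```c
/* Real-mode address calculation on the 8086/8088: physical = segment * 16 + offset.
 * This is only an illustration; a real 8086 wraps the result to 20 bits.
 */
#include <stdio.h>
#include <stdint.h>

static uint32_t physical(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;   /* shift left 4 bits = multiply by 16 */
}

int main(void)
{
    /* All three segment:offset pairs point at the same byte, 0x00020. */
    printf("0000:0020 -> %05X\n", (unsigned)physical(0x0000, 0x0020));
    printf("0001:0010 -> %05X\n", (unsigned)physical(0x0001, 0x0010));
    printf("0002:0000 -> %05X\n", (unsigned)physical(0x0002, 0x0000));

    /* And the example from above: 600D:F00D lands at 0x6F0DD. */
    printf("600D:F00D -> %05X\n", (unsigned)physical(0x600D, 0xF00D));
    return 0;
}
```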

Continue reading “Remembering Memory: EMS, And TSRs”

Trackside Observations Of A Rail Power Enthusiast

The life of a Hackaday writer often involves hours spent at a computer searching for all the cool hacks you love, but its perks include not being tied to an office and periodically traveling around our community’s spaces. This suits me perfectly, because as well as having an all-consuming interest in technology, I am a lifelong rail enthusiast. I am rarely without an Interrail pass, and for me Europe’s railways serve as both a comfortable mobile office and a relatively stress-free way to cover distance compared to the hell of security theatre at the airport. Along the way I find myself looking at the infrastructure which passes my window, and I have become increasingly fascinated with the power systems behind electric railways. There are so many different voltage and distribution standards as you cross the continent, so just how are they all accommodated? This deserves a closer look.

So Many Different Ways To Power A Train

A British Rail Class 165 "Networker" train at a platform on Marylebone station, London.
Diesel trains like this one are for the dinosaurs.

In Europe, where this is being written, the majority of main line railways run on electric power, as do many subsidiary routes. It’s not universal; my stomping ground in north Oxfordshire, for example, is still served by diesel trains, but in most cases if you take a long train journey it will be powered by electricity. This is a trend reflected in many other countries with large railway networks, except sadly for the United States, which has electrified only a small proportion of its huge network.

Of those many distribution standards, there are two main groups at the trackside: those with an overhead wire from which the train takes its power via a pantograph on its roof, and those with a third rail on which the train uses a sliding contact shoe. It’s more usual to see third rails in use on suburban and metro services, but if you take a trip to Southern England you’ll find third-rail electric long-distance express services. There are even four-rail systems such as the London Underground, where the fourth rail serves as an insulated return conductor to prevent electrolytic corrosion in the cast-iron tunnel linings. Continue reading “Trackside Observations Of A Rail Power Enthusiast”


Keebin’ With Kristina: The One With The MingKwai Typewriter

Sometimes, a little goes a long way. I believe that’s the case with this tiny media control bar from [likeablob] that uses an ESP32-C3 Super Mini.

An in-line media control bar with four purple-capped key switch buttons and a knob.
Image by [likeablob] via Hackaday.IO
From left to right, you’ve got a meta key that enables a second function on each of the other keys. The base functions are play/pause, previous track, and next track, while the knob handles volume.

And because it uses this Wi-Fi-enabled microcontroller, it can seamlessly integrate with Home Assistant via ESPHome.

What else is under the hood? Four low-profile Cherry MX Browns and a rotary encoder underneath that nicely printed knob.
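With one switch acting as the meta key, the other three yield six actions in all. The real device handles this in ESPHome configuration, so the following is purely a hypothetical C sketch of how a meta layer maps keys to actions:

```c
/* Hypothetical sketch of a meta-key layer: three base keys, each with an
 * alternate action when the meta key is held. Names are placeholders only.
 */
#include <stdbool.h>
#include <stdio.h>

enum action {
    PLAY_PAUSE, PREV_TRACK, NEXT_TRACK,   /* base layer */
    ALT_1, ALT_2, ALT_3                   /* whatever the second layer is bound to */
};

/* key is 0..2; the meta key itself only selects the layer. */
static enum action resolve(int key, bool meta_held)
{
    static const enum action base[] = { PLAY_PAUSE, PREV_TRACK, NEXT_TRACK };
    static const enum action alt[]  = { ALT_1, ALT_2, ALT_3 };
    return meta_held ? alt[key] : base[key];
}

int main(void)
{
    printf("key 0 alone: %d, key 0 with meta: %d\n",
           resolve(0, false), resolve(0, true));
    return 0;
}
```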

If you want to build one of these for yourself, all the files are available on GitHub, including the customizable enclosure, which [likeablob] designed with OpenSCAD. Continue reading “Keebin’ With Kristina: The One With The MingKwai Typewriter”

Radio Apocalypse: Meteor Burst Communications

The world’s militaries have always been at the forefront of communications technology. From trumpets and drums to signal flags and semaphores, anything that let a commander quickly relay orders to troops in the field or call for reinforcements was seized upon and optimized. So once radio was invented, it’s little wonder how quickly military commanders capitalized on it for field communications.

Radiotelegraph systems were already in military use by the First World War, but World War II was the first real radio war, with every belligerent taking full advantage of the latest radio technology. Chief among these developments was the ability of signals in the high-frequency (HF) bands to reflect off the ionosphere and propagate around the world, an important capability when prosecuting a global war.

But not long after, in the less kinetic but equally dangerous Cold War period, military planners began to see the need to move more information around than HF radio could support, while still being able to do it over the horizon. What they needed was the greater bandwidth available at higher frequencies, along with some way to bend those signals around the curvature of the Earth. What they came up with was a fascinating application of practical physics: meteor burst communications.

Continue reading “Radio Apocalypse: Meteor Burst Communications”


Hackaday Links: May 11, 2025

Did artificial intelligence just jump the shark? Maybe so, and it came from the legal world of all places, with this report of an AI-generated victim impact statement. In an apparent first, the family of an Arizona man killed in a road rage incident in 2021 used AI to bring the victim back to life to testify during the sentencing phase of his killer’s trial. The video was created by the sister and brother-in-law of the 37-year-old victim using old photos and videos, and was quite well done, despite the normal uncanny valley stuff around lip-syncing that seems to be the fatal flaw for every deep-fake video we’ve seen so far. The victim’s beard is also strangely immobile, which we found off-putting.

Continue reading “Hackaday Links: May 11, 2025”