Jenny’s (Not Quite) Daily Drivers: Raspberry Pi 1

An occasional series of mine on these pages has been Daily Drivers, in which I try out operating systems from the point of view of using them for my everyday Hackaday work. It has mostly featured esoteric or lesser-used systems, some of which have turned out to be unexpected gems, while others were not quite ready for the big time.

Today I’m testing another system, but it’s not quite the same as the previous ones. Instead I’m looking at a piece of hardware, and I’m looking at it for use in my computing projects rather than as my desktop OS. You’ll all be familiar with it: the original Raspberry Pi appeared at the end of February 2012, though it would be May of that year before all but a lucky few received one. Since then it has become a global phenomenon and spawned a host of ever-faster successors, but what of that original board from 2012 here in 2025? If you have a working piece of hardware, it makes sense to use it, so how does the original stack up? I have a project that needs a Linux machine, so I’m dusting off a Model B and taking a trip down memory lane.


Turning Old Cellphones Into SBCs

[David] sent us a tip about a company in Belgium, Citronics, that is looking to turn old cellphones into single-board computers for embedded Linux applications. We think it’s a great idea, and have long lamented how many pocket supercomputers simply get tossed in the recycling stream, when they could be put to use in hacker projects. So far, it looks like Citronics only has a prototyping breakout board for the Fairphone 2, but it’s a promising idea.

One of the things that’s stopping us from re-using old phones, of course, is the lack of easy access to the peripherals. On the average phone, you’ve got one USB port and that’s it. The Citronics dev kit provides all sorts of connectivity: 4x USB 2.0, 1x 10/100M Ethernet, and a Raspberry Pi-style header (UART, SPI, I2C, GPIO). At the same time, for better or worse, they’ve done away with the screen and its touch interface, and the camera too, but they seem to be keeping all of the RF capabilities.
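We don’t know exactly how Citronics exposes those header pins to userspace, but if they show up as a standard Linux gpiochip – an assumption on our part – driving them would look much like it does on a Raspberry Pi. Here’s a minimal sketch using the libgpiod v1 C API, with the chip name and line offset as placeholders:

```c
// Minimal sketch: toggle one GPIO line through the Linux libgpiod v1 API.
// Assumes the dev kit exposes its header pins as a standard gpiochip;
// the chip name and line offset below are placeholders, not Citronics specs.
#include <gpiod.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct gpiod_chip *chip = gpiod_chip_open_by_name("gpiochip0");
    if (!chip) {
        perror("gpiod_chip_open_by_name");
        return 1;
    }

    struct gpiod_line *line = gpiod_chip_get_line(chip, 17); /* hypothetical pin */
    if (!line || gpiod_line_request_output(line, "blink-demo", 0) < 0) {
        perror("gpiod line setup");
        gpiod_chip_close(chip);
        return 1;
    }

    for (int i = 0; i < 10; i++) {       /* five on/off blink cycles */
        gpiod_line_set_value(line, i & 1);
        sleep(1);
    }

    gpiod_line_release(line);
    gpiod_chip_close(chip);
    return 0;
}
```

If the usual kernel drivers are wired up, the UART, SPI, and I2C lines should likewise appear as the familiar /dev/ttyS*, /dev/spidev*, and /dev/i2c-* device nodes.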

The whole thing runs Linux, which means that this won’t work with every phone out there, but projects like postmarketOS and others will certainly broaden the range of usable devices. And stripping off the camera and screen has the secondary advantage of removing the parts that are most easily broken and have the least support from custom Linux distros.

We wish we had more details about the specifics of the breakout boards, but we like the idea. How long before we see an open-source implementation of something similar? There are so many cheap used and broken cellphones out there that it’s certainly a worthwhile project!


How A Tiny Relay Became A USB Swiss Army Knife

Meet the little board that could: [alcor6502]’s tiny USB relay controller, now evolved into a multifunction marvel. Originally built as a simple USB relay to probe the boundaries of JLCPCB’s production chops, it has become a compact utility belt for any hacker’s desk drawer. Not only has [alcor6502] actually built the thing, he has even provided instructions. If you happened to be at Hackaday Europe in Berlin, you might even own one already, as he handed out twenty of them during his visit. If not, read on and build it yourself.

This thing is not just a relay, and that is what makes it special. Depending on a few solder bridges and minimal components, it shape-shifts into six different tools: a fan controller (both 3- and 4-pin!), servo driver, UART interface, and of course, the classic relay. It even swaps out the crystal oscillator for USB self-sync using the STM32F042’s internal RC oscillator – no quartz, less cost, same precision. A dual-purpose BOOT0 button lets you flash firmware and toggle outputs, depending on timing. Clever reuse, just like our mothers taught us.
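That crystal-less USB trick deserves a closer look: the STM32F042 has a Clock Recovery System (CRS) that continuously trims the internal 48 MHz RC oscillator (HSI48) against the USB start-of-frame packets the host sends every millisecond. Here’s a rough sketch of that setup using ST’s HAL – the general technique only, not [alcor6502]’s actual firmware:

```c
// Rough sketch of crystal-less USB on an STM32F042 with ST's HAL -- the
// general technique, not [alcor6502]'s actual firmware. The internal
// 48 MHz RC oscillator (HSI48) feeds the USB peripheral, and the Clock
// Recovery System (CRS) continuously trims it against the host's 1 ms
// USB start-of-frame packets, so no quartz crystal is needed.
#include "stm32f0xx_hal.h"

void clock_config_crystalless_usb(void)
{
    /* Turn on the internal 48 MHz RC oscillator. */
    RCC_OscInitTypeDef osc = {0};
    osc.OscillatorType = RCC_OSCILLATORTYPE_HSI48;
    osc.HSI48State     = RCC_HSI48_ON;
    HAL_RCC_OscConfig(&osc);

    /* Clock the USB peripheral straight from HSI48. */
    RCC_PeriphCLKInitTypeDef pclk = {0};
    pclk.PeriphClockSelection = RCC_PERIPHCLK_USB;
    pclk.UsbClockSelection    = RCC_USBCLKSOURCE_HSI48;
    HAL_RCCEx_PeriphCLKConfig(&pclk);

    /* Let the CRS hardware auto-trim HSI48 against USB SOF packets. */
    __HAL_RCC_CRS_CLK_ENABLE();
    RCC_CRSInitTypeDef crs = {0};
    crs.Prescaler             = RCC_CRS_SYNC_DIV1;
    crs.Source                = RCC_CRS_SYNC_SOURCE_USB;
    crs.Polarity              = RCC_CRS_SYNC_POLARITY_RISING;
    crs.ReloadValue           = RCC_CRS_RELOADVALUE_DEFAULT;
    crs.ErrorLimitValue       = RCC_CRS_ERRORLIMIT_DEFAULT;
    crs.HSI48CalibrationValue = RCC_CRS_HSI48CALIBRATION_DEFAULT;
    HAL_RCCEx_CRSConfig(&crs);
}
```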

It’s the kind of design that makes you want to tinker again. Fewer parts. More function. And that little smile when it just works. If this kind of clever compactness excites you too, read [alcor6502]’s build log and instructions here.

Fitting A Spell Checker Into 64 KB

By some estimates, the English language contains over a million unique words. This is perhaps overly generous, but even conservative estimates generally put the number at over a hundred thousand. Regardless of where the exact number falls between those two extremes, it’s certainly many more words than could fit in the 64 kB of memory allocated to the spell-checking program on some of the first Unix machines. This article by [Abhinav Upadhyay] takes a deep dive into how the early Unix engineers accomplished the feat despite the extreme limitations of the computers they were working with.

Perhaps the most obvious way to build a spell checker is by simply looking up each word in a dictionary. With modern hardware this wouldn’t be too hard, but disks in the ’70s were extremely slow and expensive. To fit the dictionary into memory, it was first whittled down to around 25,000 words, in part by using an algorithm to strip affixes, and the lookups were then performed with a Bloom filter. Even so, the team found that the dictionary still wasn’t large enough, and had to change strategies to expand the number of words the spell checker could handle: hash compression was used at first, followed by storing only the differences between sorted hashes, and finally a compression scheme that came close to the theoretical minimum size.
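The Bloom filter is the clever bit: rather than storing any words at all, you keep a single bit array and set the bits selected by several hashes of each dictionary word. If a lookup finds any of its bits clear, the word is definitely not in the dictionary; if all are set, it’s either present or an occasional false positive. Here’s a toy sketch in C – arbitrary sizes and hash functions, not the original program’s parameters:

```c
// Toy Bloom filter in the spirit of the early Unix spell checker:
// no words are stored, just a bit array indexed by several hashes.
// Sizes and hash functions here are arbitrary, not the historical ones.
#include <stdint.h>
#include <stdio.h>

#define NBITS (1u << 20)                /* 1 Mbit = 128 kB bit array */
static uint8_t bits[NBITS / 8];

/* Simple FNV-1a hash, salted so we get several "different" hash functions. */
static uint32_t hash(const char *s, uint32_t salt)
{
    uint32_t h = 2166136261u ^ salt;
    while (*s) {
        h ^= (uint8_t)*s++;
        h *= 16777619u;
    }
    return h % NBITS;
}

static void bloom_add(const char *word)
{
    for (uint32_t k = 0; k < 4; k++) {  /* set one bit per hash function */
        uint32_t i = hash(word, k);
        bits[i / 8] |= 1u << (i % 8);
    }
}

/* Returns 1 if the word is *probably* in the dictionary, 0 if definitely not. */
static int bloom_check(const char *word)
{
    for (uint32_t k = 0; k < 4; k++) {
        uint32_t i = hash(word, k);
        if (!(bits[i / 8] & (1u << (i % 8))))
            return 0;   /* any clear bit means the word was never added */
    }
    return 1;           /* all bits set: present, or a rare false positive */
}

int main(void)
{
    bloom_add("hacker");
    bloom_add("keyboard");
    printf("hacker: %d, keybored: %d\n",
           bloom_check("hacker"), bloom_check("keybored"));
    return 0;
}
```

The catch, as the Unix team discovered, is that the false-positive rate climbs as the dictionary grows relative to the bit array, which is part of why a bigger word list eventually forced a change of strategy.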

Although most computers running spell checkers today have far more memory, as well as disks that are orders of magnitude larger and faster, much of this early Unix team’s innovation is still relevant in showing how various compression algorithms can be applied to data in general. Large language models, for one, are proving to be the new frontier for text-based data compression.

Brazilian Modders Upgrade NVidia GeForce GTX 970 To 8 GB Of VRAM

Although NVidia’s disastrous RTX 50-series is getting all the attention right now, it isn’t the company’s first misstep. Back in 2014, when NVidia released the GTX 970, users were quickly dismayed to find that their ‘4 GB VRAM’ GPU actually had just 3.5 GB of full-speed memory, with the remaining 512 MB accessed at just 1/7th of the normal speed. NVidia eventually agreed to a $30-per-card settlement with disgruntled customers, but there’s a way to at least partially fix these GPUs, as demonstrated by a group of Brazilian modders (original video with horrid English auto-dub).

The mod itself is quite straightforward: the original 512 MB, 7 Gbps GDDR5 memory chips are replaced with 1 GB, 8 Gbps parts, and a resistor is added to the PCB so the GPU recognizes the higher-density VRAM ICs. Although this doesn’t fix the fundamental split-VRAM issue of the ASIC, it does give the card access to 7 GB of faster, higher-density VRAM. In benchmarks, performance increased massively, with the Unigine Superposition score nearly doubling.

In addition to giving this GTX 970 a new lease on life, the mod shows just how much a GPU benefits from extra VRAM, which is ironic in an era when manufacturers somehow still deem 8 GB acceptable in 2025.


Musings On A Good Parallel Computer

Until the late 1990s, the concept of a 3D accelerator card was something generally associated with high-end workstations. Video games and their kin ran happily on the CPU in one’s desktop system, with later extensions like MMX, 3DNow!, and SSE providing a significant performance boost for games that supported them. As 3D accelerator cards (colloquially called graphics processing units, or GPUs) became prevalent, they took over almost all SIMD vector tasks, but one thing they’re not good at is being a general-purpose parallel computer. This really ticked [Raph Levien] off, and it inspired him to air his grievances.

Although the interaction between CPUs and GPUs has become tighter over the decades, with PCIe in particular being a big improvement over AGP and PCI, GPUs are still terrible at running arbitrary computing tasks, and even PCIe links remain glacial compared to communication within the CPU and GPU dies themselves. With the introduction of asynchronous graphics APIs, this divide has become even more pronounced. [Raph]’s proposal is to invert this relationship.

There’s precedent for this already: Intel’s Larrabee and IBM’s Cell processor both merged CPU and GPU characteristics on a single die, though developing for such novel architectures proved to be a struggle, and Sony’s PlayStation 3 was ultimately forced to add a discrete GPU because of these issues. There is also the DirectStorage API in DirectX, which bypasses the CPU when loading assets from storage, effectively adding CPU features to GPUs.

As [Raph] notes, so-called AI accelerators share these characteristics, often sporting multiple SIMD-capable, CPU-like cores. Maybe the future is Cell after all.


Modern Computing’s Roots Or The Manchester Baby

In the heart of Manchester, UK, a groundbreaking event took place in 1948: the first modern computer, known as the Manchester Baby, ran its very first program. The Baby’s ability to execute stored programs, developed with guidance from John von Neumann’s theory, marks it as a pioneer of the digital age. This fascinating chapter in computing history not only reshapes our understanding of technology’s roots but also highlights the incredible minds behind it. The original article, including a video transcript, can be found over at [TheChipletter]’s.

So, what made this hack so special? The Manchester Baby, though a relatively simple prototype, was the first fully electronic computer to successfully run a program from memory. Built by a team with little formal experience in computing, the Baby used a cathode-ray tube (CRT) – the Williams-Kilburn tube – as its memory store, a bold step towards modern computing. It didn’t just crunch numbers; it laid the foundation for all future machines that would use memory to store both data and instructions. Running a test to find the highest factor of a number, the Baby performed 3.5 million operations over 52 minutes. Impressive for its time.
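For a sense of scale: the task usually cited as the Baby’s first program was finding the highest proper factor of 2^18 (262,144), and with no divide instruction – not even a multiply – it had to do so by repeated subtraction. Here’s a modern restatement of that brute-force approach in C, not a translation of the original machine code:

```c
// A modern restatement of the Manchester Baby's first program:
// find the highest proper factor of 2^18 by testing candidates via
// repeated subtraction, since the Baby had no divide (or even multiply).
#include <stdio.h>

int main(void)
{
    const long n = 262144;              /* 2^18, the historically cited input */

    for (long candidate = n - 1; candidate >= 1; candidate--) {
        long remainder = n;
        while (remainder > 0)           /* "divide" by repeated subtraction */
            remainder -= candidate;
        if (remainder == 0) {           /* candidate divides n exactly */
            printf("highest proper factor of %ld is %ld\n", n, candidate);
            break;
        }
    }
    return 0;
}
```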

Despite criticisms that it was just a toy computer, the Baby’s significance shines through. It was more than just a prototype; it was proof of concept for the von Neumann architecture, showing us that computers could be more than complex calculators. While debates continue about whether it or the ENIAC should be considered the first true stored-program computer, the Baby’s role in the evolution of computing can’t be overlooked.
