A New Cartridge For An Old Computer

Although still recognizable to anyone who had a video game console in the 80s or 90s, cartridges have long since disappeared from the computing world. These slabs of plastic holding a few ROM chips were a major way to distribute software for a time, not only for consoles but for PCs as well. Perhaps most famously, the Commodore VIC-20 and Commodore 64 had cartridge slots for both games and other software packages. As part of the Chip Hall of Fame created by IEEE Spectrum, [James] found himself building a Commodore cartridge more than three decades after last sitting in front of one of these computers.

[James] points out that even by early-80s standards, Commodore cartridges were pretty low on specs. They’re limited to 16 kB, which means programming in assembly and doing things like talking to the video hardware directly. Luckily there’s a treasure trove of C64 documentation available nowadays, as well as a number of modern programming tools for the platform, in contrast to the 80s when tools and documentation were scarce or nonexistent. Hardware is cheap these days as well; the cartridge PCB and other hardware cost only a few dollars, and the case can easily be 3D printed.
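For a taste of what that bare-metal programming involves: a C64 cartridge announces itself with nine magic bytes at $8000, which the KERNAL checks on reset before jumping into the cartridge’s own code. Below is a minimal sketch of that standard autostart header written out as a plain C array (our illustration, not necessarily how [James] laid out his ROM).

```c
/* Minimal sketch of the standard C64 cartridge autostart header.
 * On reset the KERNAL looks for the "CBM80" signature at $8004; if it
 * finds it, it jumps through the cold-start vector at $8000/$8001.
 * Illustration only, not necessarily [James]'s actual layout. */
#include <stdint.h>

const uint8_t cart_header[9] = {
    0x09, 0x80,        /* cold-start vector, little-endian -> $8009      */
    0x09, 0x80,        /* warm-start (NMI) vector -> $8009               */
    0xC3, 0xC2, 0xCD,  /* PETSCII 'C', 'B', 'M' with the high bit set    */
    0x38, 0x30         /* '8', '0' -- together spelling "CBM80"          */
};
```

Everything after those nine bytes is whatever 6502 code you want the machine to run at power-on, which is where the 16 kB budget starts to bite.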

Burning the software to the $3 ROM chip was straightforward as well with a TL866 programmer, although [James] left a piece of memory management code in the first pass which caused the C64 to lock up. Removing this code and flashing the chip again got the demo up and running though, and it’ll be on display at their travelling “Chips that Changed the World” exhibit. If you find yourself in the opposite situation, though, we’ve also seen projects that cleverly pull the data off of ancient C64 ROM chips for preservation.

Static Electricity Remembers

As humans we often think we have a pretty good handle on the basics of how the world works, from an intuition about gravity good enough to let us walk around, play baseball, and land spacecraft on the moon, to an understanding of electricity good enough to build everything from indoor lighting to supercomputers. But zeroing in on any one phenomenon often reveals a world full of mystery and surprise in an area we might have assumed was fully understood by now. One such area is static electricity, and the way it forms within certain materials shows that it can impart a kind of memory to them.

The video demonstrates a number of common ways of generating static electricity that most of us have experimented with in the past, whether on purpose or by accident, from rubbing a balloon on one’s head and sticking it to the wall to shocking ourselves on a polyester blanket. It turns out that materials like these tend to charge positively or negatively depending on what material they were rubbed against, but some researchers wondered what would happen if an object were rubbed against itself. In that situation, small imperfections in the materials cause them to eventually self-order into a kind of hierarchy, and repeated charging of these otherwise identical objects only deepens that hierarchy over time, essentially imparting a static electricity memory to them.

The tendency of materials to gain or lose electrons in this way is known as the triboelectric effect, and there is an ordering of materials, the triboelectric series, that describes which materials are more likely to gain or lose electrons when brought into contact with others. The ability of some materials, like the quartz in this experiment, to develop this kind of memory is certainly an interesting consequence of an otherwise well-understood phenomenon, much like generating free power from the static electricity that’s always present in the atmosphere might surprise some as well.

Continue reading “Static Electricity Remembers”

Your Own Core Rope Memory

If you want read-only memory today, you might be tempted to use flash memory or, if you want old-school, maybe an EPROM. But there was a time when that wasn’t feasible. [Igor Brichkov] shows us how to make a core rope memory using a set of ferrite cores and wire. This was famously used in early UNIVAC computers and the Apollo guidance computer. You can see how it works in the video below.

While rope memory superficially resembles core memory, the principle of operation is different. In core memory, the core’s magnetization is what determines any given bit. In rope memory, the cores act more like sensing elements. A set wire tries to flip the polarity of all cores. An inhibit signal stops that from happening except on the cores you want to read. Finally, a sense wire weaves through the cores and detects a blip when a core changes polarity. The second video, below, is an old MIT video that explains how it works (about 20 minutes in).
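If the set/inhibit/sense dance is hard to picture, a toy software model can help: treat each core as one stored word, with a bit reading 1 only when that bit’s sense wire is threaded through the core. The sketch below is just that simplified model (our own illustration, not [Igor Brichkov]’s circuit or the exact Apollo addressing scheme).

```c
/* Toy model of core rope readout: a bit is 1 if its sense wire threads
 * the addressed core, 0 if the wire bypasses it. "Reading" flips only
 * the addressed core (the inhibit lines block the rest), and the flip
 * induces a blip on exactly the threaded sense wires. */
#include <stdio.h>
#include <stdint.h>

#define WORD_BITS 8
#define NUM_CORES 4

/* threading[core][bit] == 1 means sense wire 'bit' passes through 'core' */
static const uint8_t threading[NUM_CORES][WORD_BITS] = {
    {1, 0, 1, 0, 0, 1, 1, 0},   /* word 0 */
    {0, 1, 1, 1, 0, 0, 0, 1},   /* word 1 */
    {1, 1, 0, 0, 1, 0, 1, 0},   /* word 2 */
    {0, 0, 0, 1, 1, 1, 0, 1},   /* word 3 */
};

static uint8_t read_word(int core)
{
    uint8_t word = 0;
    for (int bit = 0; bit < WORD_BITS; bit++)
        if (threading[core][bit])           /* threaded wires see the flip */
            word |= (uint8_t)(1u << bit);
    return word;
}

int main(void)
{
    for (int c = 0; c < NUM_CORES; c++)
        printf("word %d = 0x%02X\n", c, read_word(c));
    return 0;
}
```

The point the model captures is that the data lives in the wiring pattern, not in the cores themselves, which is why the memory is read-only once it has been woven.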

Why not just use core memory? Density. These memories could store much more data than a core memory system in the same volume. Of course, you could write to core memory, too, but that’s not always a requirement.

We’ve seen a resurgence of core rope projects lately. Regular old core is fun, too.

Continue reading “Your Own Core Rope Memory”

A Delay Line Memory Demo Board

Delay line memory is a technology from yesteryear, but it’s not been entirely forgotten. [P-Lab] has developed a demo board for delay-line memory, which shows how it worked in a very obvious way with lots of visual aids.

If you’re unfamiliar with the technology, it’s a form of memory that was used in classic computers like the UNIVAC I and the Olivetti Programma 101. It’s a sequential-access technology, where data is stored as pulses in some kind of medium, and read out in order. Different forms of the technology exist, such as using acoustic pulses in mercury or torsional waves passing through coiled nickel wire.

In this case, [P-Lab] built a solid-state delay line using TTL ICs, capable of storing a full 64 bits of information and running at speeds of up to 150 kHz. It also features a write-queuing system to ensure bits are written at exactly the right time, since the sequential-access nature of the technology means truly random writes and reads aren’t possible. The really cool thing is that [P-Lab] paired the memory with lots of LEDs to show how it works. There are lights to indicate the operation of the clock and the read and write cycles, as well as an LED showing the state of each individual bit as it rolls around the delay line. Combined with the hexadecimal readouts, this makes it easy to get to grips with this old-school way of doing things.
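The write-queuing requirement is easier to see with a software analogue: in a recirculating 64-bit loop there is only one read/write tap, so any access has to wait until the addressed bit marches past it. Here’s a rough model of that behaviour (a simplification of the concept, not [P-Lab]’s actual TTL design):

```c
/* Software analogue of a recirculating delay line: 64 bits march past a
 * single read/write tap, one per clock tick, so an access must wait
 * ("queue") until its target slot comes around. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define LINE_BITS 64

static uint8_t line[LINE_BITS];   /* the pulses circulating in the line   */
static int head = 0;              /* which bit is at the read/write tap   */

static void tick(void) { head = (head + 1) % LINE_BITS; }

/* Queue a write: wait until the addressed slot reaches the tap, then
 * overwrite it as it recirculates. */
static void write_bit(int addr, bool value)
{
    while (head != addr)
        tick();
    line[head] = value;
}

static bool read_bit(int addr)
{
    while (head != addr)          /* reads have to wait their turn too    */
        tick();
    return line[head];
}

int main(void)
{
    write_bit(10, true);
    write_bit(42, true);
    printf("bit 10 = %d, bit 42 = %d, bit 5 = %d\n",
           read_bit(10), read_bit(42), read_bit(5));
    return 0;
}
```

In the real hardware the bits never stop moving, of course; the queue simply holds the pending value until the right clock tick comes around.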

We’ve seen previous work from [P-Lab] in this regard using old-school core rope memory, too. Continue reading “A Delay Line Memory Demo Board”

Fitting A Spell Checker Into 64 KB

By some estimates, the English language contains over a million unique words. This is perhaps overly generous, but even conservative estimates generally put the number at over a hundred thousand. Wherever the true count falls between those extremes, it’s certainly far more words than could fit in the 64 kB of memory allocated to the spell-checking program on some of the first Unix machines. This article by [Abhinav Upadhyay] takes a deep dive into how the early Unix engineers pulled off the feat despite the extreme limitations of the computers they were working with.

Perhaps the most obvious way to build a spell checker is to simply look up each word in a dictionary. With modern hardware this wouldn’t be too hard, but disks in the ’70s were extremely slow and expensive. To fit the dictionary into memory, it was first whittled down to around 25,000 words by various methods, including an algorithm that stripped affixes, and a Bloom filter was then used to perform the lookups. The team found this dictionary still wasn’t big enough, and had to change strategies to expand the number of words the spell checker could handle. Hash compression was used at first, followed by hash differences, and finally a special compression method that achieved almost theoretically perfect compression.
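A Bloom filter earns its keep here because it answers “is this word possibly in the dictionary?” with nothing but a bit array and a few hash functions, at the cost of occasional false positives and no false negatives. The sketch below shows the idea in miniature, with made-up sizes and a toy hash rather than anything from the original Unix code:

```c
/* Minimal Bloom filter sketch: each word sets a few pseudo-random bits;
 * a lookup only says "no" if any of its bits are clear, so misses are
 * certain but hits are only probable. Sizes and hash are illustrative. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define FILTER_BITS (1u << 18)        /* 256 kbit = 32 kB of filter       */
#define NUM_HASHES  4

static uint8_t filter[FILTER_BITS / 8];

/* FNV-1a style hash, salted per hash function (toy hash, not the real one) */
static uint32_t hash_word(const char *w, uint32_t salt)
{
    uint32_t h = 2166136261u ^ salt;
    while (*w) {
        h ^= (uint8_t)*w++;
        h *= 16777619u;
    }
    return h % FILTER_BITS;
}

static void add_word(const char *w)
{
    for (uint32_t i = 0; i < NUM_HASHES; i++) {
        uint32_t bit = hash_word(w, i);
        filter[bit / 8] |= (uint8_t)(1u << (bit % 8));
    }
}

static bool maybe_in_dictionary(const char *w)
{
    for (uint32_t i = 0; i < NUM_HASHES; i++) {
        uint32_t bit = hash_word(w, i);
        if (!(filter[bit / 8] & (1u << (bit % 8))))
            return false;             /* definitely not in the dictionary */
    }
    return true;                      /* probably in the dictionary       */
}

int main(void)
{
    add_word("memory");
    add_word("cartridge");
    printf("%d %d\n", maybe_in_dictionary("memory"),
                      maybe_in_dictionary("memroy"));
    return 0;
}
```

Tuning the filter size and the number of hash functions trades memory against the false-positive rate, which is exactly the kind of knob you have to turn when the whole thing has to fit in a tiny memory budget.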

Although most computers running spell checkers today have far more memory, as well as disks that are orders of magnitude larger and faster, a lot of this early Unix team’s innovations are still relevant for showing how various compression schemes can be applied to data in general. Large language models, for example, are proving to be the new frontier of text-based data compression.

Dog Plays Chess On ESP32

The ESP32 is a remarkably powerful microcontroller; its dual-core processor and relatively high clock speed can do some impressive work. But getting this microcontroller, designed for embedded systems, to take on tasks usually handed to a much more powerful PC-class computer takes a little more willpower. Inspired by his dog, [Folkert] decided to program an ESP32 to play chess, a task that famously challenged computer scientists in the past. He calls this ESP32 chess system Dog.

One of the other major limitations of this platform for a task like this is memory. The ESP32 [Folkert] is using only has 320 kB of RAM, so things like the transposition table have to fit in even less space than that. With modern desktop computers often having 32 or 64 GB, this is a fairly significant challenge, especially for a memory-intensive task like a chess engine. But with the engine running on the microcontroller it’s ready to play, either in text mode or with something that can use the Universal Chess Interface (UCI). A set of LEDs on the board lets the user know what’s going on while gameplay is taking place.
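To get a feel for the squeeze, consider just the transposition table: a desktop engine might hand it hundreds of megabytes, while here it has to share 320 kB with everything else. Below is one way a small, fixed-size table can be packed into a few tens of kilobytes (a sketch of a common chess-engine technique, not necessarily how Dog lays out its data):

```c
/* Sketch of a fixed-size transposition table sized for a small RAM budget.
 * Entries are packed to 8 bytes and indexed directly by the position hash,
 * with deeper results replacing shallower ones on collision. Illustrative
 * only; Dog's actual data layout may differ. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define TT_ENTRIES (1u << 12)          /* 4096 entries * 8 bytes = 32 kB   */

typedef struct {
    uint32_t key;      /* upper hash bits, used to verify a probe          */
    int16_t  score;    /* evaluation found for this position               */
    uint8_t  depth;    /* search depth the score is valid for              */
    uint8_t  flags;    /* exact score / lower bound / upper bound          */
} TTEntry;             /* 8 bytes per entry, no padding                    */

static TTEntry tt[TT_ENTRIES];

static void tt_store(uint64_t hash, int16_t score, uint8_t depth, uint8_t flags)
{
    TTEntry *e = &tt[hash & (TT_ENTRIES - 1)];
    if (depth >= e->depth) {           /* depth-preferred replacement      */
        e->key   = (uint32_t)(hash >> 32);
        e->score = score;
        e->depth = depth;
        e->flags = flags;
    }
}

static const TTEntry *tt_probe(uint64_t hash)
{
    const TTEntry *e = &tt[hash & (TT_ENTRIES - 1)];
    return (e->key == (uint32_t)(hash >> 32)) ? e : NULL;
}

int main(void)
{
    uint64_t key = 0x0123456789ABCDEFull;
    tt_store(key, 42, 5, 0);
    const TTEntry *hit = tt_probe(key);
    printf("probe: %s, score %d\n", hit ? "hit" : "miss", hit ? hit->score : 0);
    return 0;
}
```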

Continue reading “Dog Plays Chess On ESP32”

Quake In 276 KB Of RAM

Porting the original DOOM to various pieces of esoteric hardware is a rite of passage in some software circles. But these days, hardware that outperforms the 386 required to run the 1993 shooter can be had for the price of dinner at a nice restaurant, with plenty of embedded systems blowing the original minimum system requirements out of the water.

For a much tougher challenge, a group from Silicon Labs decided to port DOOM‘s successor, Quake, to the Arduino Nano Matter Board instead, even though that platform has some pretty significant limitations for a game as advanced as Quake.

To tackle the memory problem, the group started with a port of Quake originally written for Windows, which let them whittle down memory usage on a modern Windows machine before moving over to the hardware. A flash memory module is available as well, but that type of memory carries a speed penalty. To improve speed they did what any true gamer would do with their system: overclock the processor. This got them to around 10 frames per second, which is playable but not particularly enjoyable. Pushing the frame rate further required a much deeper dive, including generating lookup tables instead of relying on computation, optimizing some of the original C code, rewriting some functions in assembly, and only refreshing certain sections of the screen when needed.
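The lookup-table trick is the easiest of those optimizations to show in isolation: spend some flash on a precomputed table so the hot path never has to call an expensive math routine. Something along these lines (our illustration, not Silicon Labs’ actual code):

```c
/* Trade flash for cycles: precompute a sine table once, then replace
 * runtime trig calls with an array index and a mask. Illustrative only. */
#include <stdint.h>
#include <stdio.h>
#include <math.h>

#define SIN_TABLE_SIZE 1024

static int16_t sin_table[SIN_TABLE_SIZE];   /* Q1.14 fixed point           */

static void sin_table_init(void)
{
    for (int i = 0; i < SIN_TABLE_SIZE; i++)
        sin_table[i] = (int16_t)lroundf(
            sinf(6.28318530718f * (float)i / SIN_TABLE_SIZE) * 16384.0f);
}

/* angle is in table units: 0..1023 covers one full turn; the mask wraps it */
static inline int16_t fast_sin(uint32_t angle)
{
    return sin_table[angle & (SIN_TABLE_SIZE - 1)];
}

int main(void)
{
    sin_table_init();
    printf("fast_sin(256) = %d (expect ~16384)\n", fast_sin(256));
    return 0;
}
```

The same pattern applies to anything the engine computes repeatedly from a small input range, which is why tables are such a cheap win on a microcontroller with spare flash but few cycles to burn.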

On a technical level, Quake was a dramatic improvement over DOOM, allowing for things like real-time 3D rendering, polygonal models instead of sprites, and much more intricate level design. As a result, ports of this game tend to demand much more powerful processors than DOOM ports, and this team shows real mastery of their hardware to pull off the build on a system with these limitations. Other Quake ports we’ve seen, like this one running on an iPod Classic, require a similar level of knowledge of the code and the ability to drop into assembly to make optimizations.

Thanks to [Nicola] for the tip!