Slicing And Dicing The Bits: CPU Design The Old Fashioned Way

Writing for Hackaday can be somewhat hazardous. Sure, we don’t often have to hide from angry spies or corporate thugs. But we do often write about something and then want to buy it. Expensive? Hard to find? Not needed? Doesn’t really matter. My latest experience with this effect was due to a recent article I wrote about the AM2900 bitslice family of chips. Many vintage computers and video games have them inside, and, as I explained before, they are like a building block you use to build a CPU with the capabilities you need. I had read about these back in the 1970s but never had a chance to work with them.

As I was writing, I wondered if there was anything left for sale with these chips. Turns out you can still get the chips — most of them — pretty readily. But I also found an eBay listing for an AM2900 “learning and evaluation kit.” How many people would want such a thing? Apparently enough that I had to bid a fair bit of coin to take possession of it, but I did. The board looked like it was probably never used. It had the warranty card and all the paperwork, and it appeared to be in pristine condition. When I powered it up, it seemed to work well.

What Is It?

The board hardly looks like it’s at least 40 years old.

The board is a bit larger than a letter-sized sheet of paper. Along the top, there are three banks of four LEDs. The bottom edge has three banks of switches. One bank has three switches, and the other two each have four switches. Two more switches control the board’s operation, and there are two momentary pushbuttons.

The heart of the device, though, is the AM2901, a 4-bit “slice.” It isn’t quite a CPU; it’s more like just the ALU portion of one. There’s also an AM2909 microprogram sequencer, which generates the addresses that step through the microcode memory. In addition, there’s a small amount of memory spread out over several chips.
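
To get a feel for the idea, here’s a toy sketch in Python. The function set is invented for illustration (the real AM2901 has eight ALU functions, a 16-word register file, and shifters), but the key feature is the carry-in and carry-out, which lets you cascade slices into whatever word width you need:

```python
# Toy model of a 4-bit ALU "slice" with a carry chain. The function
# set is made up for illustration; it is not the AM2901's real set.
MASK = 0xF  # each slice handles 4 bits

def alu_slice(func, a, b, carry_in):
    """One slice: returns a 4-bit result plus a carry-out."""
    if func == "ADD":
        total = (a & MASK) + (b & MASK) + carry_in
        return total & MASK, total >> 4
    if func == "AND":
        return a & b & MASK, 0
    if func == "OR":
        return (a | b) & MASK, 0
    if func == "XOR":
        return (a ^ b) & MASK, 0
    raise ValueError(func)

def alu_8bit(func, a, b):
    """Cascade two slices: the low carry-out feeds the high carry-in."""
    lo, carry = alu_slice(func, a & MASK, b & MASK, 0)
    hi, carry = alu_slice(func, (a >> 4) & MASK, (b >> 4) & MASK, carry)
    return (hi << 4) | lo, carry

# The carry ripples between slices: 0x7F + 0x01 = 0x80
assert alu_8bit("ADD", 0x7F, 0x01) == (0x80, 0)
```

Chain four slices instead of two and you have the 16-bit ALU that many minicomputers of the era actually built this way.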

A real computer would probably have many slices that work together. It would also have a lot more microprogram memory and then more memory to store the actual program. Microcode is a very low-level program that tells the CPU how to execute each of its instructions. Continue reading “Slicing And Dicing The Bits: CPU Design The Old Fashioned Way”

A standard-compliant MXM card installed into a laptop, without heatsink

MXM: Powerful, Misused, Hackable

Today, we’ll look into yet another standard in the embedded space: MXM. It stands for “Mobile PCI Express Module”, and is basically intended as a PCIe GPU interface for laptops, but there’s way more to it – it can work for any high-power, high-throughput PCIe device, with a fair few DisplayPort links if you need them!

You will see MXM sockets in older generations of laptops, barebones desktop PCs, servers, and even automotive computers – certain generations of Tesla cars used to ship with MXM-socketed Nvidia GPUs! Given that GPUs are in vogue today, it pays to know how you can get one in a low-profile form factor and avoid putting a giant desktop GPU inside your device.

I only had a passing knowledge of the MXM standard until recently, but my friend, [WifiCable], has been playing with it for a fair bit now. On a long Discord call, she guided me through all the cool things we should know about the MXM standard, its history, compatibility woes, and hackability potential. I’ve summed all of it up in this article – let’s take a look!

This article has been written based on info that [WifiCable] has given me, and it’s certainly not the last one where I interview a hacker and condense their knowledge into a writeup. If you are interested, let’s chat!

Continue reading “MXM: Powerful, Misused, Hackable”

The 1970s Computer: A Slice Of Computing

What do the HP-1000 and the DEC VAX 11/730 have in common with the video games Tempest and Battlezone? More than you might think. All of those machines, along with many others from that time period, used AM2900-family bit slice CPUs.

The bit slice CPU was a very successful product that could only have existed in the 1970s. Today, if you need a computer system, there are many CPUs and even entire systems on a chip to choose from. You can also get many small board-level systems that would probably do anything you want. In the 1960s, you had no choices at all. You built circuit boards with gates on them using transistors, tubes, relays, or — maybe — small-scale ICs. Then you wired the boards up.

It didn’t take a genius to realize that it would be great to offer people a CPU on a chip like you can get today. The problem is the semiconductor technology of the day wouldn’t allow it — at least, not with any significant amount of resources on a single die. For example, the Motorola MC14500B from 1977 was a one-bit microprocessor, and while that had its uses, it wasn’t for everyone or everything.

The Answer

The answer was to produce as much of a CPU as possible in a chip and make provisions to use multiple chips together to build the CPU. That’s exactly what AMD did with the AM2900 family. If you think about it, what is a CPU? Sure, there are variations, but at the core, there’s a place to store instructions, a place to store data, some way to pick instructions, and a way to operate on data (like an ALU — arithmetic logic unit). Instructions move data from one place to another and set the state of things like I/O devices, ALU operations, and the like.
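
As a sketch of how those pieces fit together, here’s a toy machine in Python: instruction storage, data storage, a program counter to pick the next instruction, and a crude ALU. The instruction set is invented for illustration, and it’s nothing like real AM2900 microcode:

```python
# Toy stored-program machine: the four ingredients the text lists.
data = [0] * 16                 # a place to store data
program = [                     # a place to store instructions
    ("LOAD", 0, 5),             # reg0 <- 5
    ("LOAD", 1, 7),             # reg1 <- 7
    ("ADD", 0, 1),              # reg0 <- reg0 + reg1
    ("STORE", 0, 0),            # data[0] <- reg0
    ("HALT", 0, 0),
]
regs = [0, 0]
pc = 0                          # a way to pick instructions

while True:
    op, x, y = program[pc]      # fetch
    pc += 1
    if op == "LOAD":
        regs[x] = y
    elif op == "ADD":           # a way to operate on data (the ALU)
        regs[x] = (regs[x] + regs[y]) & 0xFF
    elif op == "STORE":
        data[y] = regs[x]
    elif op == "HALT":
        break

print(data[0])  # prints 12
```

In a microcoded design, each of those instructions would itself be carried out by a short sequence of micro-operations fetched from the microprogram memory, which is the part a sequencer like the AM2909 handles.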

Continue reading “The 1970s Computer: A Slice Of Computing”

Error-Correcting RAM On The Desktop

When running a server, especially one with mission-critical applications, it’s common practice to use error-correcting code (ECC) memory. As the name suggests, it uses an error-correcting algorithm to continually check for and fix certain errors in memory. We don’t often see these memory modules on the desktop for plenty of reasons, among which are increased cost and overhead and decreased performance for only marginal gains, but if your data is of utmost importance even when working on a desktop machine, it is possible to get these modules up and running in certain modern AMD computers.
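
To see the flavor of how that correction works, here’s a toy Hamming(7,4) example in Python. Real ECC DIMMs use a wider SECDED code (typically 72 bits stored for every 64 bits of data), but the single-bit-correction principle is the same: extra parity bits are computed on write, and on read, a non-zero syndrome points directly at the flipped bit.

```python
# Toy Hamming(7,4): 4 data bits protected by 3 parity bits. Any single
# flipped bit in the 7-bit codeword can be located and corrected.

def encode(nibble):
    """Pack 4 data bits into a 7-bit Hamming codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # Bit positions 1..7 hold: p1 p2 d0 p3 d1 d2 d3
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    return sum(b << i for i, b in enumerate(bits))

def decode(word):
    """Return the nibble, fixing a single flipped bit if present."""
    bits = [(word >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)  # 0 means "no error seen"
    if syndrome:
        bits[syndrome - 1] ^= 1            # syndrome = bad bit position
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)

word = encode(0b1011)
assert decode(word ^ (1 << 4)) == 0b1011  # a flipped bit gets corrected
```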

Specifically, this feature was available on AMD Ryzen CPUs, but when the 7000 series with the AM5 socket launched, the feature was no longer officially supported. [Rain] decided to upgrade their computer anyway, since there were rumors floating around the Internet that the feature might still be functional. An upgrade to the new motherboard’s UEFI was required, as well as some tweaks to the Linux kernel to make sure it supported these memory modules. After probing the system’s behavior, [Rain] verified that the ECC RAM was working and properly reporting errors to the operating system.
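
For anyone who wants to poke at the same thing on a Linux box, here’s a minimal sketch that reads the kernel’s EDAC error counters. It assumes an EDAC driver (such as amd64_edac) is loaded and exposing the usual sysfs files:

```python
# Minimal sketch: read Linux EDAC counters to see whether ECC events
# are being reported. Assumes an EDAC driver is loaded; the counter
# files live under /sys/devices/system/edac/mc/mc*/ on such systems.
from pathlib import Path

EDAC = Path("/sys/devices/system/edac/mc")

def read_counts():
    counts = {}
    for mc in sorted(EDAC.glob("mc*")):
        ce = int((mc / "ce_count").read_text())  # corrected errors
        ue = int((mc / "ue_count").read_text())  # uncorrected errors
        counts[mc.name] = (ce, ue)
    return counts

if __name__ == "__main__":
    for name, (ce, ue) in read_counts().items():
        print(f"{name}: {ce} corrected, {ue} uncorrected")
```

A slowly incrementing ce_count with a ue_count of zero is exactly what you want to see: errors happen, and the hardware quietly fixes them.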

Reporting to the OS and enabling the correct modules is one thing; actually correcting an error is another. It turns out that introducing errors manually and letting the memory correct them is possible too, and [Rain] was able to perform that check as part of the process. While ECC RAM may be considered overkill for most desktop users, it offers valuable data integrity for professional or work-related tasks. Just don’t use it for your Super Mario 64 speedruns.

A Dedicated GPU For Your Favorite SBC

The Raspberry Pi is famous for its low cost, versatile and open Linux environment, and plentiful I/O, making it a perfect device not only for its originally-intended educational purposes but for basically every hobbyist from gardeners to roboticists to amateur radio operators. Most builds tend to make use of the GPIO pins, which allow easy connections to various peripherals and sensors, but the Pi also supports PCIe devices, which means that, in theory, it could use a GPU in much the same way that a modern computer would. After plenty of testing and development, [Jeff Geerling] brings us this custom graphics card interface for the Raspberry Pi.

The testing for all of these graphics cards has been done with a Pi Compute Module 4, and the end result is an interface device which looks much like a graphics card itself. It breaks the PCIe bus out onto a more familiar x16 slot connector and adds physical connections for power, USB, and Ethernet. When plugged into the carrier board, the Compute Module can be attached to any of a number of graphics cards, including the latest and highest-end of Nvidia and AMD offerings.

Perhaps unsurprisingly, though, the 4090 and 7900 cards don’t work with the Raspberry Pi. This is partially due to the 32-bit limitations of the Pi and other memory-mapping issues, but even after attempting some workarounds, Nvidia’s drivers aren’t open-source enough to test properly (although the card is recognized by the Pi), and AMD’s drivers crash the system even after compiling a custom kernel. [Jeff] did find an Nvidia card that worked, although it requires using the USB interface, and second-hand cards are selling for around $3,000 USD. For a more economical choice, there are some other graphics cards that he was eventually able to get working, albeit not with perfect performance, including some of the ones we’ve seen him test already.

Continue reading “A Dedicated GPU For Your Favorite SBC”

a full gaming rig built into an LCD-386

A Portable Computer Living In 1988 But Also In The Future

Every once in a while, there will be a project that is light on details but inundated with glorious, drool-worthy pictures. [Nexaner7] recently showed off the cyberdeck he spent over a year building inside an old LCD-386. So what’s special about it? This isn’t just a Raspberry Pi or some SBC inside, but a complete AMD Ryzen 5600, Nvidia RTX 3060, screen, and keyboard in a 19.5-liter space (0.68 cubic feet). Since there wouldn’t be enough space inside for decent airflow, he decided to water-cool everything, which added to the complexity of the build.

the back of the sleeper LCD-386 cyberdeck

While [Nexaner7] doesn’t have a video walkthrough, he does have a build log with dozens of pictures in two parts: part 1 and part 2. As you can imagine, there were copious amounts of 3D printing for brackets and holders, and he tried various screens and GPUs to see what fit and what didn’t. He tried to use the original keyboard, even with a 5-pin DIN to PS/2 to USB adapter, but the keyboard was flaky, likely due to rust. He dropped in a CM Quickfire TK PCB with a few modifications, as it was close to the same size. He swapped the display for a 1440p portable monitor, with a thin ribbon HDMI cable routed from the GPU to the screen.

We’re happy to report that the parts inside were sold to someone who restores old PCs, so a somewhat rare LCD-386 wasn’t destroyed. With a gorgeous build like this, perhaps he should enter the Cyberdeck contest. Eagle-eyed readers might notice that we recently covered an LCD-386 with its contents retrieved via a hacked-together serial bus.


Hackaday Links: July 3, 2022

Looks like we might have been a bit premature in our dismissal last week of the Sun’s potential for throwing a temper tantrum, as that’s exactly what happened when a G1 geomagnetic storm hit the planet early last week. To be fair, the storm was very minor — aurora visible down to the latitude of Calgary isn’t terribly unusual — but the odd thing about this storm was that it sort of snuck up on us. Solar scientists first thought it was a coronal mass ejection (CME), possibly related to the “monster sunspot” that had rapidly tripled in size and was being hyped up as some kind of planet killer. But it appears this sneak attack came from another, less-studied phenomenon, a co-rotating interaction region, or CIR. These sound a bit like eddy currents in the solar wind, which can bunch up plasma that can suddenly burst forth from the Sun, all without showing the usual telltale sunspots.

Then again, even people who study the Sun for a living don’t always seem to agree on what’s going on up there. Back at the beginning of Solar Cycle 25, NASA and NOAA, the National Oceanic and Atmospheric Administration, were calling for a relatively weak showing during our star’s eleven-year cycle, as recorded by the number of sunspots observed. But another model, developed by heliophysicists at the U.S. National Center for Atmospheric Research, predicted that Solar Cycle 25 could be among the strongest ever recorded. And so far, it looks like the latter group might be right. Where the NASA/NOAA model called for 37 sunspots in May of 2022, for example, the Sun actually threw up 97 — much more in line with what the NCAR model predicted. If the trend holds, the peak of the eleven-year cycle in April of 2025 might see over 200 sunspots a month.

So, good news and bad news from the cryptocurrency world lately. The bad news is that cryptocurrency markets are crashing, with the flagship Bitcoin falling from its high of around $67,000 down to $20,000 or so, and looking like it might fall even further. But the good news is that’s put a bit of a crimp in the demand for NVIDIA graphics cards, as the economics of turning electricity into hashes starts to look a little less attractive. So if you’re trying to upgrade your gaming rig, that means there’ll soon be a glut of GPUs, right? Not so fast, maybe: at least one analyst has a different view, based mainly on the distribution of AMD and NVIDIA GPU chips in the market as well as how much revenue they each draw from crypto rather than from traditional uses of the chips. It’s important mainly for investors, so it doesn’t really matter to you if you’re just looking for a graphics card on the cheap.

Speaking of businesses, things are not looking too good for MakerGear. According to a banner announcement on their website, the supplier of 3D printers, parts, and accessories is scaling back operations, to the point where everything is being sold on an “as-is” basis with no returns. In a long post on “The Future of MakerGear,” founder and CEO Rick Pollack says the problem basically boils down to supply chain and COVID issues — they can’t get the parts they need to make printers. And so the company is looking for a buyer. We find this sad but understandable, and wish Rick and everyone at MakerGear the best of luck as they try to keep the lights on.

And finally, if there’s one thing Elon Musk is good at, it’s keeping his many businesses in the public eye. And so it is this week with SpaceX, which is recruiting Starlink customers to write nasty-grams to the Federal Communications Commission regarding Dish Network’s plan to gobble up a bunch of spectrum in the 12-GHz band for their 5G expansion plans. The 3,000 or so newly minted experts on spectrum allocation wrote to tell FCC commissioners how much Dish sucks, and how much they love and depend on Starlink. It looks like they may have a point — Starlink uses the lowest part of the Ku band (12 GHz – 18 GHz) for data downlinks to user terminals, along with big chunks of about half a dozen other bands. It’ll be interesting to watch this one play out.