The 1970s Computer: A Slice Of Computing

What do the HP 1000 and the DEC VAX-11/730 have in common with the arcade games Tempest and Battlezone? More than you might think. All of those machines, along with many others from that era, used Am2900-family bit-slice CPUs.

The bit-slice CPU was a very successful product that could only have existed in the 1970s. Today, if you need a computer system, there are many CPUs and even entire systems-on-a-chip to choose from. You can also get plenty of small board-level systems that will do just about anything you want. In the 1960s, you had no such choices. You built circuit boards with gates on them using transistors, tubes, relays, or — maybe — small-scale IC gates. Then you wired the boards up.

It didn’t take a genius to realize that it would be great to offer people a single-chip CPU like you can get today. The problem was that the semiconductor technology of the day wouldn’t allow it — at least, not for a CPU of any significant width. For example, the Motorola MC14500B from 1977 was a one-bit microprocessor, and while that had its uses, it wasn’t for everyone or everything.

The Answer

The answer was to produce as much of a CPU as possible in a chip and make provisions to use multiple chips together to build the CPU. That’s exactly what AMD did with the AM2900 family. If you think about it, what is a CPU? Sure, there are variations, but at the core, there’s a place to store instructions, a place to store data, some way to pick instructions, and a way to operate on data (like an ALU — arithmetic logic unit). Instructions move data from one place to another and set the state of things like I/O devices, ALU operations, and the like.
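To make the slicing idea concrete, here is a small sketch (our illustration, not anything from AMD’s data sheets) of how four 4-bit ALU slices, chained through their carry signals, act together as a single 16-bit adder; cascading Am2901 slices in a real design works on the same principle. The function names here are invented for the example.

# Toy model of bit-slice cascading: each "slice" is a 4-bit ALU with a
# carry-in and a carry-out, and a wider ALU is built by chaining slices.
# Illustrative sketch only; not a model of the actual Am2901 internals.

def alu_slice_add(a_nibble, b_nibble, carry_in):
    # One 4-bit slice: add two nibbles plus carry, return (sum, carry_out).
    total = a_nibble + b_nibble + carry_in
    return total & 0xF, total >> 4

def add16(a, b):
    # A 16-bit add built from four cascaded 4-bit slices.
    result, carry = 0, 0
    for i in range(4):                      # slice 0 is the least significant
        a_nib = (a >> (4 * i)) & 0xF
        b_nib = (b >> (4 * i)) & 0xF
        s, carry = alu_slice_add(a_nib, b_nib, carry)
        result |= s << (4 * i)
    return result & 0xFFFF, carry           # carry out of the top slice

total, carry_out = add16(0x1234, 0x0FFF)
print(hex(total), carry_out)                # 0x2233 0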


How To Build Your Own 16-Bit System-on-Spreadsheet

Back in the hazy days of the early home computers, many of us would rejoice at running our first BASIC programs, and some of us even built our own 8-bit systems from a handful of ICs and felt elated the moment the connected LEDs, screen, or other output device showed signs of life. It is this kind of excitement that [Inkbox] has managed to bring to the bane of every office worker: spreadsheet programs like Excel. How, you may ask? Why, by implementing a completely functional 16-bit system with 16 general-purpose registers, 128 kB of RAM, and a 128×128-pixel color display, all inside an Excel spreadsheet, making it conceivably the world’s first System-on-Spreadsheet (SoS).

Perhaps the most tantalizing aspect of this approach is that it provides a very good visual way to show what is happening inside the system, using color codes and clearly segregated and marked functional elements. Not only can it be programmed manually, but [Inkbox] also created an assembler for the CPU’s ISA – called Excel-ASM16 – all of which is available from the ExcelCPU GitHub project page. The assembly source is assembled into a ROM.xlsx file, which the CPU.xlsx file can then run once you trigger the Read ROM button. After this you are confronted with the realization that although it all works, it’s also incredibly slow, at about 2-3 Hz.
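If you are wondering how a grid of spreadsheet cells can act like a CPU in the first place, the trick is that each recalculation produces the next machine state purely from the previous one, just as cells recompute from the cells they reference. Below is a tiny conceptual sketch of that idea in Python; the instruction format and register file are invented for the example and are not [Inkbox]’s actual design.

# Toy model of the "CPU as pure recalculation" idea: each tick, the next
# state (program counter plus registers) is a pure function of the current
# state, much like spreadsheet cells recomputing from referenced cells.
# Illustrative only; the real ExcelCPU layout and ISA differ.

def step(state, rom):
    pc, regs = state
    op, dst, val = rom[pc]                 # made-up three-field instructions
    regs = list(regs)
    if op == "LOADI":                      # load an immediate into a register
        regs[dst] = val & 0xFFFF
    elif op == "ADD":                      # dst = dst + regs[val], 16-bit wrap
        regs[dst] = (regs[dst] + regs[val]) & 0xFFFF
    return (pc + 1, tuple(regs))

rom = [("LOADI", 0, 40), ("LOADI", 1, 2), ("ADD", 0, 1)]
state = (0, (0,) * 16)                     # pc = 0, sixteen 16-bit registers
for _ in rom:                              # one "recalculation" per instruction
    state = step(state, rom)
print(state[1][0])                         # 42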

Still, with all the elegance of an IMSAI 8080 front panel, we cannot help but give full points for this achievement. Plus it gives many of us something to do during those exceedingly dull meetings where only serious applications like office suites are allowed.


Clockhands For Faster CPU Execution

When you design your first homebrew CPU, you’re probably happy if it works at all and don’t worry much about performance. But, eventually, you’ll start thinking about how to make things run faster. For a single CPU, the standard strategy is to execute multiple instructions at the same time, which is feasible because different instructions can be working in different parts of the processor at once. But like most solutions, this one comes with a new set of problems. Japanese researchers are proposing a novel way to work around some of those problems in a recent paper about a technique they call Clockhands.

Suppose you have a set of instructions like this:

LOAD A, 10
LOAD B, 20
SUB A, B
LOAD B, 30
JMPZ DONE
INC B

If you do these one at a time, you have no problem. But if you try to execute them all together, there are a variety of problems. First, the subtract has to wait for A and B to have the proper values in them. Also, the INC B may or may not execute, and unless we know the values of A and B ahead of time (which, of course, we do here), we can’t tell until run time. But the biggest problem is that the subtract has to use B before B contains 30, and the increment has to use it afterward. In CPU-design terms, those are a read-after-write hazard, a control hazard, and a write-after-read hazard on B, and if everything is running together, they can be hard to keep straight.
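For comparison, here is a small sketch of the conventional fix, register renaming: every write to an architectural register is given a fresh physical register, so the subtract reads the old copy of B while LOAD B, 30 writes a new one, and the two no longer collide. This illustrates the standard out-of-order approach, not the Clockhands technique itself.

# Sketch of classic register renaming applied to the sequence above. Each
# write allocates a fresh physical register, so SUB reads the old B (p2)
# while LOAD B, 30 writes a new one (p4). Illustrative only.

program = [                     # (text, register written, registers read)
    ("LOAD A, 10", "A", []),
    ("LOAD B, 20", "B", []),
    ("SUB A, B",   "A", ["A", "B"]),
    ("LOAD B, 30", "B", []),
    ("JMPZ DONE",  None, []),
    ("INC B",      "B", ["B"]),
]

rename, counter = {}, 0
for text, dst, srcs in program:
    reads = [rename[s] for s in srcs]      # look up sources before renaming
    if dst is not None:
        counter += 1
        rename[dst] = "p" + str(counter)   # fresh physical register per write
    print(f"{text:12} reads {reads} writes {rename.get(dst)}")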


Intel’s Chips Light The Way To Faster Processor Arrays

It’s very likely indeed that whatever you are reading this on will have a multi-core processor. They’re now the norm, but the path to the octa-or-more-core chip in your phone has run from individual processors connected over a PCB through many generations of ever-faster on-chip interconnects.

But what if your processing needs are so high-end that you need more cores than can be fitted on one chip, but without the slow PCB interconnect to another? If you’re Intel, you develop a multi-core processor with an on-chip photonic interconnect. It talks to the neighboring chips in its cluster at full speed, via light.

The chip in question isn’t one you’ll see in a machine near you; instead, it’s inspired by the extremely demanding requirements of DARPA’s HIVE graph analytics program. So this is a chip for supercomputers in huge data centers rather than desktop computers, and it will be assembled into multi-die packages with that chip-to-chip optical networking built in. But your computer today is the equal of a supercomputer from not that many years ago, so never say you won’t one day be using its descendant technologies.

A Turing-Complete CPU In SunVox? Why Not!

Day-time software engineer and part-time musician [Logickin] knows a thing or two about programming the SunVox modular synthesiser and tracker software. Whilst the software is normally used for creating music and sound effects, they decided to really push it and create the VOXCOM-1610, a functional Turing-complete CPU inside SunVox, just for fun.

For those who haven’t come across SunVox before now, this software is a highly programmable visual environment for building up custom synthesisers, piecing signals together to create rhythms — that’s the ‘tracker’ bit — as well as interfacing to input devices such as MIDI and many others. It does look like a lot of fun, and just as people build CPUs in Minecraft simply because they can, this seems to be the first time someone has built one inside this particular music app. The VOXCOM 1610 is a fully functional 10 Hz, 16-bit computer. It boasts 2 kB of ROM, 256 bytes of RAM (expandable to 128 kB), and 8 general-purpose registers for data exchange between components. If you don’t fancy manually poking bits into the ROM to enter your software, then you’re in luck, as [Logickin] has provided an assembler (written in Java) that should ease the process a lot. The assembly language will look very familiar to anyone who’s ever touched assembler before, although, as you’d expect, it is quite light on addressing modes.

Now, all that is needed is for someone to port Doom to this and we’ll have it all. We think that is unlikely to happen. For those who pay attention, we did see one neat SunVox project in the past, which is certainly eye-catching as well as eardrum-bursting.

Thanks to [elbien] for the tip!

Minecraft In Minecraft On The CHUNGUS II

Minecraft is a simple video game. Well, it’s a simple video game that also has within it the ability to create all of the logic components that you’d need to build a computer. And building CPUs in Minecraft is by now a long-standing tradition.

Enter CHUNGUS II. The Computational Humongous Unconventional Number and Graphics Unit by [Sammyuri] is the biggest and baddest Minecraft computer that we’ve ever seen. So big, in fact, that it was finally reasonable to think about porting a stripped-down version of Minecraft to the computer itself. Yes, that’s right, Minecraft running in Minecraft. (Video embedded below.) Writing the compiler and programming the game brought two more hackers to the party, [Uwerta] and [StackDoubleFlow], and quite honestly, we’re amazed that a team as small as three people pulled this off.

Anyway, once you’ve picked your jaw up off the floor, also check out [Sammyuri]’s video on just the CHUNGUS II computer itself. (Also embedded below.) Seeing the architecture is interesting, even if you don’t speak Redstone as fluently as our heroes here. We love that the assembler creates a block of ROM – out of Minecraft blocks – that you can then cut/paste into the game’s reality.

For a “simple” game about breaking blocks and punching trees, Minecraft has inspired hackers to make the game better both inside and outside of the game itself. For instance, for the latest in performant open-source Minecraft servers, check out Folia. Maybe, one day, they’ll build CHUNGUS II in the real world. It could happen.

Thanks [dbcdr] for the tip!


8086 Multiply Algorithm Gets Reverse Engineered

The 8086 has been around since 1978, so it’s pretty well understood. As the namesake of the prevalent x86 architecture, it’s often studied by those looking to learn more about microprocessors in general. To this end, [Ken Shirriff] set about reverse engineering the 8086’s multiplication algorithm.

[Ken] worked from die photos of the 8086 chip. Taken under a microscope, they can be used to map out the various functional blocks of the microprocessor. The multiplication algorithm can be nutted out by looking at the arithmetic/logic unit, or ALU. However, it’s also important to understand the role that microcode plays. Even as far back as 1978, designers were using microcode to simplify the control logic used in microprocessors.

[Ken] breaks down his investigation into manageable chunks, exploring how the chip achieves both 8-bit and 16-bit multiplication in detail. He covers how the numbers make their way through various instructions and registers to come out with the right result in the end.
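For a feel for what that microcoded loop amounts to, here is a minimal shift-and-add multiply sketch in Python of the general kind [Ken] describes; it restates the idea in software and is not a transcription of the actual 8086 microcode.

# Conceptual shift-and-add multiplication: walk the multiplier one bit at a
# time, conditionally add the (shifted) multiplicand, and accumulate. This
# restates the general idea, not the 8086's actual microcode sequence.

def mul16(a, b):
    # Unsigned 16-bit multiply returning the 32-bit product (as in DX:AX).
    product = 0
    for i in range(16):
        if (b >> i) & 1:                   # if this multiplier bit is set...
            product += a << i              # ...add in the shifted multiplicand
    return product & 0xFFFFFFFF

print(hex(mul16(0x1234, 0x00FF)))          # 0x1221cc, i.e. 0x1234 * 0xFF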

It’s a fun look at what’s going on at the ground level in a chip that’s been around since before the personal computer revolution. For any budding chip designers, it’s a great academic exercise to follow along at home. If you’ve been doing your own digging deep into CPU architectures, don’t hesitate to drop us a line!