How IBM Stumbled Onto RISC

There are a ton of inventions out in the world that are almost complete accidents but are still ubiquitous in our day-to-day lives. Things like bubble wrap, which was originally intended to be wallpaper, or even superglue, a plastic compound whose sticky properties were only discovered later on. IBM found themselves in a similar situation in the 1970s after working on a type of mainframe computer designed to be a phone switch. The phone switch was eventually abandoned in favor of a general-purpose processor, but not before they stumbled onto what would become the RISC processor known as the IBM 801.

As [Paul] explains, the major design philosophy at the time was to use a large number of instructions to do specific tasks within the processor. When designing the special-purpose phone switch processor, IBM removed many of these instructions and then, after the project was cancelled, performed some testing on the incomplete platform to see how it fared as a general-purpose computer. They found that by eliminating all but a few instructions and running those without a microcode layer, the performance gains were much greater than expected: up to three times as fast on comparable hardware.

These first forays into the world of simplified processor architecture not only paved the way for the RISC platforms we know today, such as ARM and RISC-V, but also helped CISC platforms make tremendous performance gains. In fact, RISC-V is a direct descendant of these early RISC processors, with three intermediate designs between then and now. If you want to play with RISC-V yourself, our own [Jonathan Bennett] took a look at a recent RISC-V SBC and its software this past March.

Thanks to [Stephen] for the tip!

Photo via Wikimedia Commons

A 32-Bit RISC-V CPU Core In 600 Lines Of C

If you have ever wanted to implement a RISC-V CPU core in about 600 lines of C, you’re in luck! [mnurzia]’s rv project does exactly that, providing a simple two-function API.

Technically, it’s a user-level RV32IMC implementation in ANSI C. There are many different possible flavors of RISC-V, and this one is the 32-bit base integer instruction set (RV32I) with the multiplication and division extension (M) and the compressed instruction extension (C).
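To get a feel for how simple this can stay, here’s a minimal sketch in the same spirit: a two-function toy core (the names and scope here are ours, not rv’s actual API; the repo has the real thing) that executes a single ADDI and shows the one genuinely C-extension-specific wrinkle, namely that the bottom two bits of each instruction distinguish 16-bit compressed encodings from full 32-bit ones.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy sketch only: names and signatures are ours, not rv's API. */
typedef struct {
    uint32_t pc;
    uint32_t x[32];   /* integer registers; x0 is hardwired to zero */
    uint8_t *mem;
} toy_cpu;

void toy_init(toy_cpu *c, uint8_t *mem, uint32_t pc) {
    memset(c, 0, sizeof *c);
    c->mem = mem;
    c->pc = pc;
}

/* Execute one instruction; returns 0 on success, -1 on anything this
   toy doesn't implement. */
int toy_step(toy_cpu *c) {
    uint32_t insn;
    memcpy(&insn, c->mem + c->pc, 4);
    /* C extension: bottom two bits != 0b11 marks a 16-bit compressed
       instruction (not handled in this sketch). */
    if ((insn & 0x3) != 0x3)
        return -1;
    uint32_t opcode = insn & 0x7f;
    uint32_t funct3 = (insn >> 12) & 0x7;
    uint32_t rd     = (insn >> 7) & 0x1f;
    uint32_t rs1    = (insn >> 15) & 0x1f;
    int32_t  imm    = (int32_t)insn >> 20;  /* sign-extended I-type */
    if (opcode == 0x13 && funct3 == 0) {    /* ADDI */
        if (rd)                             /* writes to x0 are ignored */
            c->x[rd] = c->x[rs1] + (uint32_t)imm;
        c->pc += 4;
        return 0;
    }
    return -1;
}

int main(void) {
    uint8_t mem[4];
    uint32_t addi = 0x02a00093;  /* addi x1, x0, 42 */
    memcpy(mem, &addi, 4);
    toy_cpu c;
    toy_init(&c, mem, 0);
    toy_step(&c);
    printf("x1 = %u\n", c.x[1]);  /* prints 42 */
    return 0;
}
```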

There’s a full instruction list and examples of use on the GitHub repository. As for readers wondering what something like RISC-V emulation might be good for, it happens to be the not-so-secret sauce to running Linux on an RP2040.

Behind The X86 Pipeline Curtain

We’ve often heard that modern x86 CPUs don’t really execute x86 instructions. Instead, they decode them into RISC instructions that are easier to schedule, pipeline, and execute. But we never really looked into that statement to see if it is true. [Fanael] did, though, and the results are very interesting.

The post starts with a very simple loop containing four instructions. In a typical RISC CPU (RISC-V, in this case) the same loop requires six instructions. However, a modern CPU is likely to do much more than just blindly convert one instruction set to another.
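The post’s exact loop is behind the link, but a loop of roughly this shape shows where the counts come from; the mnemonic tallies in the comments are our illustration, not [Fanael]’s listing.

```c
/* Illustrative only: the instruction sketches below are ours, not
 * [Fanael]'s listing or verified compiler output.
 *
 * x86-64 can fold the memory read into the add, so the loop body can
 * land in about four instructions:
 *     add rax, [rdi]   ; load and accumulate in one instruction
 *     add rdi, 8       ; advance the pointer
 *     dec rsi          ; count down
 *     jnz loop         ; branch back
 *
 * RISC-V is a load/store architecture, so the same body needs a
 * separate load plus the pointer and counter arithmetic, landing
 * closer to six instructions. */
long sum_array(const long *p, long n) {
    long sum = 0;
    while (n--)
        sum += *p++;
    return sum;
}
```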

Continue reading “Behind The X86 Pipeline Curtain”

The Apple Silicon That Never Was

Over Apple’s decades-long history, they have been quick to adapt to new processor technology when they see an opportunity. Their switch from PowerPC to Intel in the mid-2000s made Apple machines more accessible to the wider PC world, which was already accustomed to using x86 processors, and a decade earlier they moved from Motorola 68000 processors to take advantage of the scalability, performance per watt, and raw performance of the PowerPC platform. They’ve recently made the switch to their own in-house silicon but, as reported by [The Chip Letter], this wasn’t the first time they attempted to design their own chips from the ground up rather than using chips from other companies like Motorola or Intel.

In the mid-1980s, Apple was already looking to move away from the Motorola 68000 for performance reasons, and part of the reason the switch took so long is that in the intervening years they launched Project Aquarius to design their own silicon. As the article linked above explains, they needed a large amount of computing power to get this done and purchased a Cray X-MP/48 supercomputer to help, as well as assigning a large number of engineers and designers to see the project through to the finish. A critical error was made, though, when they decided to build their design around a stack architecture rather than a RISC one. They eventually switched to a RISC design, but the project still struggled to ever get a prototype working. In the end the entire project was scrapped, and the company moved on to PowerPC, but not without a tremendous loss of time and money.

Interestingly enough, another team was designing their own architecture at about the same time and ended up creating what would eventually become the modern-day ARM architecture, which Apple was involved with and currently licenses to build their M1 and M2 chips as well as their mobile processors. It was almost by accident, then, that Apple didn’t end up with a RISC design in time for their personal computers. The computing world might look a lot different today if Apple hadn’t languished in the early 00s as the ultimate result of their failure to develop a competitive system in the mid ’80s. Apple’s distance from PowerPC now doesn’t mean that architecture has been completely abandoned, though.

Thanks to [Stephen] for the tip!

History Of The SPARC CPU Architecture

[RetroBytes] nicely presents the curious history of the SPARC processor architecture. SPARC, short for Scalable Processor Architecture, defined some of the most commercially successful RISC processors of the 1980s and 1990s. SPARC was initially developed by Sun Microsystems, the company most of us associate with SPARC, but while most computer architectures are controlled by a single company, SPARC was championed by dozens of players. The history of SPARC is not simply the history of Sun.

A Reduced Instruction Set Computer (RISC) design is based on an Instruction Set Architecture (ISA) that runs a limited number of simpler instructions, whereas a Complex Instruction Set Computer (CISC) is based on an ISA comprising more, and more complex, instructions. Because RISC leverages simpler instructions, it generally requires a longer sequence of those instructions to complete the same task that a CISC machine handles with fewer complex ones. The trade-off is that the simple RISC instructions usually run faster (at a higher clock rate) and in a highly pipelined fashion; the sketch below makes the comparison concrete. Our overview of the modern ISA battles presents how the days of CISC are essentially over. Continue reading “History Of The SPARC CPU Architecture”
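Purely as illustration of that trade-off, here’s how a single accumulate statement might map to each style; the mnemonics sketch the two philosophies and aren’t verified compiler output.

```c
/* Illustrative only: mnemonics are sketches, not real compiler output. */
long total = 0;

void add_to_total(long x) {
    /* CISC-flavored (x86-style): one instruction can read memory, add,
     * and write the result back:
     *     add [total], rdi
     * RISC-flavored (RISC-V-style): the same work becomes a load, a
     * register add, and a store; three simple, easily pipelined
     * instructions instead of one complex one:
     *     ld  t0, 0(a0)
     *     add t0, t0, a1
     *     sd  t0, 0(a0)
     */
    total += x;
}
```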

The 10 Kinds Of Programmers That Use Calcutron-33

It is interesting how, if you observe long enough, things tend to be cyclical. Back in the old days, some computers didn’t use binary; they used decimal. This was especially true of made-up educational computers like TUTAC or CARDIAC, but there was real decimal hardware out there, too. Then everyone decided that binary made much more sense, and now it’s very hard to find a computer that doesn’t use it.

But [Erik] has written a simulator, assembler, and debugger for Calcutron-33, a “decimal RISC” CPU. Why? The idea is to provide a teaching platform for explaining assembly language concepts to people who might stumble over binary numbers. Once they understand Calcutron, they can move on to more conventional CPUs with some measure of confidence.

To that end, there are several articles covering the basic architecture, the instruction set, and how to write assembly for the machine. The CPU has much in common with modern microprocessors other than the use of decimal throughout.
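[Erik]’s instruction-set article has Calcutron’s actual encoding; purely to show the decimal flavor, here’s a sketch of how such a machine can split instruction fields with powers of ten instead of bit masks (the 4-digit opcode/register/address layout is our hypothetical, not necessarily Calcutron’s).

```c
#include <stdio.h>

/* Hypothetical 4-digit decimal instruction word: digit 1 is the
   opcode, digit 2 a register number, digits 3-4 an address. This
   layout is our sketch, not necessarily Calcutron-33's encoding. */
void decode(int word) {
    int opcode  = (word / 1000) % 10;  /* divide instead of shifting */
    int reg     = (word / 100) % 10;   /* modulo instead of masking  */
    int address = word % 100;
    printf("op=%d reg=%d addr=%02d\n", opcode, reg, address);
}

int main(void) {
    decode(2147);  /* hypothetical: opcode 2, register 1, address 47 */
    return 0;
}
```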

There have been several versions of the virtual machine, with various improvements and bug fixes. We’ll be honest: we admire the work and its scope. However, if you already know about binary, this might not be your best bet. What’s more, maybe you should understand binary before tackling assembly language programming, at least in modern times. Still, it does cover a lot of ground that applies regardless.

Made-up computers like TUTAC and CARDIAC were all the rage when computer time was too expensive to waste on mere students. There was also MIX from computer legend Donald Knuth.

MikroLeo, A 4-Bit Retro Learning Platform

MikroLeo is a discrete TTL-logic microcomputer intended for educational purposes, created by [Edson Junior Acordi], an electronics professor at the Brazilian Federal Institute of Paraná. The 4-bit CPU has a Harvard RISC architecture built entirely from 74HCT-series logic mounted on a two-sided PCB using only through-hole parts. With 2K words of instruction RAM and 2K words of addressable RAM, the CPU has a similar resource level to comparable machines of old, giving students a feel for how to work within tight constraints.

Simulation of the circuit is possible with Digital, and the dedicated PCB was designed with KiCad, so there should be enough there to get cracking with it. Four 4-bit I/O ports make interfacing easy, with dedicated INput and OUTput instructions for the purpose. An assembler, compiler, and emulator are all being worked on (as far as we can tell), so keep an eye out for those if this project is of interest to you.

We like computers a bit around these parts; the “hackier” and weirder the better. Even just in the 4-bit retro space we’ve seen so many, from those built around ancient ALU chips to those built from discrete transistors and diodes. But you don’t need to go down that road: an emulation platform can scratch that retro itch without the same level of pain.