The ’80s Multi-Processor System That Never Was

Until the early 2000s, the computer processors available on the market were essentially all single-core chips. There were some niche systems that used multiple processors on the same board for improved parallel operation, and it wasn’t until the POWER4 processor from IBM in 2001, and later things like the AMD Opteron and Intel Pentium D, that we got multi-core processors. If things had gone just slightly differently with this experimental platform, though, we might have had multi-processor systems available for general use as early as the ’80s instead of two decades later.

The team behind this chip was from the University of California, Berkeley, a place known for such other innovations as RAID, BSD, SPICE, and some of the first RISC processors. This processor architecture would be based on RISC as well, and would be known as Symbolic Processing Using RISC (SPUR). It was specially designed to integrate with the Lisp programming language, but its major feature was a set of parallel processors on a common bus, which allowed parallel operations to be computed at much greater speed than on comparable systems of the time. The use of RISC also allowed a smaller group to develop something like this, and although a RISC design has to execute more instructions to do the same work, each one is simpler and can often run faster than on other architectures.

The linked article from [Babbage] goes into much more detail about the architecture of the system as well as some of the things about UC Berkeley that made projects like this possible in the first place. It’s a fantastic deep-dive into a piece of somewhat obscure computing history that, had it been more commercially viable, could have changed the course of computing. Berkeley RISC did go on to have major impacts in other areas of computing and was a significant influence on the SPARC system as well.

Hacked Oscilloscope Plays Breakout, Hints At More

You know things are getting real when the Dremel is one of the first tools you turn to after unboxing your new oscilloscope. But when your goal is to hack the scope to play Breakout, sometimes plastic needs to be sacrificed.

Granted, the scope in question, a Fnirsi DSO152, only cost [David Given] from Poking Technology a couple of bucks. And while the little instrument really isn’t that bad inside, it’s limited to a single channel and 200 kHz of bandwidth, so it’s not exactly lab quality. The big attractions for [David] were the CH32F103 microcontroller and the prominent debug port inside, not to mention the large color LCD panel.

[David]’s attack began with the debug port and case mods to allow access, but quickly ground to a halt when he accidentally erased the original firmware. But no matter — tracing out the pins is always an option. [David] made that easier by overlaying large photos of both sides of the board, which let him figure out which buttons went to which pins and map out the display’s parallel interface. He didn’t mess with any of the analog stuff except to create a quick “Hello, oscilloscope!” program to output a square wave to the calibration pin. He did, however, create a display driver and port a game of Breakout to the scope — video after the hop.
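For a sense of what that “Hello, oscilloscope!” step involves, here’s a minimal bare-metal sketch. It assumes the CH32F103 mirrors the STM32F103 register map (the part is marketed as compatible) and uses PA4 as a hypothetical stand-in for whichever pin actually reaches the calibration pad; [David]’s real code may well differ on both counts.

    /* Minimal square-wave "hello" sketch for the scope's MCU.
     * Assumptions: the CH32F103 is STM32F103 register-compatible,
     * and PA4 stands in for whichever pin feeds the calibration
     * pad -- check the board traces before trusting either. */
    #include <stdint.h>

    #define RCC_APB2ENR (*(volatile uint32_t *)0x40021018u)
    #define GPIOA_CRL   (*(volatile uint32_t *)0x40010800u)
    #define GPIOA_ODR   (*(volatile uint32_t *)0x4001080Cu)

    int main(void)
    {
        RCC_APB2ENR |= (1u << 2);                /* clock GPIOA */
        GPIOA_CRL = (GPIOA_CRL & ~(0xFu << 16))  /* clear PA4 config */
                  | (0x2u << 16);                /* push-pull out, 2 MHz */
        for (;;) {
            GPIOA_ODR ^= (1u << 4);              /* toggle PA4 */
            for (volatile int i = 0; i < 4000; ++i)
                ;                                /* crude delay sets the frequency */
        }
    }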

We’ve been seeing a lot of buzz around the CH32xx MCUs lately; seeing them start to show up in retail products is perhaps a leading indicator of where these cheap RISC chips are headed. We’ve seen a few interesting hacks with them, but we’ve also heard tell that they can be hard to come by. Maybe getting one of these scopes to tear apart can fix that, though.

Continue reading “Hacked Oscilloscope Plays Breakout, Hints At More”

How IBM Stumbled Onto RISC

There are a ton of inventions out in the world that are almost complete accidents but are still ubiquitous in our day-to-day lives. Things like bubble wrap, which was originally intended to be wallpaper, or even superglue, a plastic compound whose sticky properties were only discovered later on. IBM found themselves in a similar predicament in the 1970s after working on a mainframe-class processor intended to power a telephone switch. Eventually the phone switch was abandoned in favor of a general-purpose processor, but not before they stumbled onto the RISC concept and the design that eventually became the IBM 801.

As [Paul] explains, the major design philosophy at the time was to use a large number of instructions to do specific tasks within the processor. When designing the special-purpose phone switch processor, IBM removed many of these instructions and then, after the project was cancelled, performed some testing on the incomplete platform to see how it performed as a general-purpose computer. They found that by eliminating all but a few instructions and running those without a microcode layer, the performance gains were much larger than they would have expected: up to three times as fast on comparable hardware.
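To get a feel for what eliminating instructions means in practice, consider a single statement like a = b + c. The instruction sequences in the comments below are hand-written sketches for flavor; they are not from the IBM 801 or from any real compiler.

    /* One "complex" operation versus a few simple ones. */
    int a, b, c;

    void add_them(void)
    {
        a = b + c;
        /* A CISC machine with memory-to-memory arithmetic (VAX-style)
         * can express this as a single microcoded instruction:
         *     ADDL3  b, c, a      ; read b and c, write the sum to a
         *
         * A load/store RISC machine spells out the same work:
         *     LOAD   r1, b
         *     LOAD   r2, c
         *     ADD    r3, r1, r2
         *     STORE  r3, a
         * More instructions, but each one is simple enough to skip
         * the microcode layer and run directly in hardware. */
    }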

These first forays into the world of simplified processor architecture not only paved the way for the RISC platforms we know today, such as ARM and RISC-V, but also helped CISC platforms make tremendous performance gains. In fact, RISC-V is a direct descendant of these early RISC processors, with three intermediate designs between then and now. If you want to play with RISC-V yourself, our own [Jonathan Bennett] took a look at a recent RISC-V SBC and its software this past March.

Thanks to [Stephen] for the tip!

Photo via Wikimedia Commons

A 32-Bit RISC-V CPU Core In 600 Lines Of C

If you have ever wanted to implement a RISC-V CPU core in about 600 lines of C, you’re in luck! [mnurzia]’s rv project does exactly that, providing a simple two-function API.

Technically, it’s a user-level RV32IMC implementation in ANSI C. There are many possible flavors of RISC-V, and this one implements the 32-bit base integer instruction set (RV32I) along with the multiplication and division extension (M) and the compressed instruction extension (C).

There’s a full instruction list and examples of use on the GitHub repository. As for readers wondering what something like RISC-V emulation might be good for, it happens to be the not-so-secret sauce to running Linux on an RP2040.
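For a taste of what driving such a core might look like, here’s a hedged sketch. Everything below (rv_init(), rv_step(), the rv struct, the bus callback shape, and the result codes) is an assumption extrapolated from the project’s description of a two-function API; check rv.h in the repository for the real names and signatures.

    /* Hypothetical host program for a two-function emulator API.
     * NOTE: all names and signatures here are guesses based on the
     * project's description; consult the actual rv.h for the truth. */
    #include <string.h>
    #include "rv.h"                    /* assumed header name */

    #define MEM_BASE 0x80000000u

    static unsigned char mem[0x10000]; /* guest RAM */

    /* The core calls back here for every fetch, load, and store;
     * the host supplies the entire memory map. */
    static rv_res bus_cb(void *user, rv_u32 addr, rv_u8 *data,
                         rv_u32 is_store, rv_u32 width)
    {
        (void)user;
        if (addr < MEM_BASE || addr - MEM_BASE + width > sizeof(mem))
            return RV_BAD;             /* assumed fault code */
        if (is_store)
            memcpy(mem + (addr - MEM_BASE), data, width);
        else
            memcpy(data, mem + (addr - MEM_BASE), width);
        return RV_OK;                  /* assumed success code */
    }

    int main(void)
    {
        rv cpu;
        /* ...load a guest program into mem[] here... */
        rv_init(&cpu, NULL, bus_cb);   /* function one: set up the core */
        while (rv_step(&cpu) == RV_OK) /* function two: one instruction */
            ;                          /* stop on ecall, fault, etc. */
        return 0;
    }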

Behind The X86 Pipeline Curtain

We’ve often heard that modern x86 CPUs don’t really execute x86 instructions. Instead, they decode them into RISC-like micro-operations that are easier to schedule, pipeline, and execute. But we never really looked into that statement to see if it is true. [Fanael] did, though, and the results are very interesting.

The post starts with a very simple loop containing four instructions. In a typical RISC CPU — RISC-V, in this case — the same loop requires six instructions. However, a modern x86 CPU is likely to do much more than just blindly convert one instruction set to another.
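The post’s actual loop isn’t reproduced here, but the flavor of that instruction-count gap is easy to sketch. The assembly in the comments below is hand-written for illustration, not verified output from any particular compiler.

    /* Summing an array: an illustrative stand-in for the post's loop. */
    int sum(const int *a, unsigned n)
    {
        int s = 0;
        for (unsigned i = 0; i < n; ++i)
            s += a[i];
        return s;
    }

    /* A plausible x86-64 loop body is four instructions, because one
     * of them folds a scaled index, an address computation, a load,
     * and an add together:
     *   loop: add  eax, [rdi + rcx*4]   ; s += a[i]
     *         inc  rcx                  ; i++
     *         cmp  rcx, rsi             ; i < n ?
     *         jb   loop
     *
     * A plausible RV32I rendition spells the same work out in six:
     *   loop: slli t0, t1, 2            # byte offset = i * 4
     *         add  t0, a0, t0           # address of a[i]
     *         lw   t2, 0(t0)            # load a[i]
     *         add  a2, a2, t2           # s += a[i]
     *         addi t1, t1, 1            # i++
     *         bltu t1, a1, loop         # i < n ?
     *
     * Inside a modern x86 core, that dense add-from-memory gets split
     * into micro-operations anyway -- exactly the machinery the post
     * goes digging through. */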

Continue reading “Behind The X86 Pipeline Curtain”

The Apple Silicon That Never Was

Over Apple’s decades-long history, they have been quick to adapt to new processor technology when they see an opportunity. Their switch from PowerPC to Intel in the mid-2000s made Apple machines more accessible to a wider PC world already accustomed to using x86 processors, and a decade earlier they moved from Motorola 68000 processors to take advantage of the scalability, power efficiency, and performance of the PowerPC platform. They’ve recently made the switch to their own in-house silicon but, as reported by [The Chip Letter], this wasn’t the first time they attempted to design their own chips from the ground up rather than using chips from other companies like Motorola or Intel.

In the mid-1980s, Apple was already looking to move away from the Motorola 68000 for performance reasons, and part of the reason it took so long to make the switch is that in the intervening years they launched Project Aquarius to design their own silicon. As the article linked above explains, they needed a large amount of computing power to get this done and purchased a Cray X-MP/48 supercomputer to help, as well as assigning a large number of engineers and designers to see the project through to the finish. A critical error was made, though, when they decided to build their design around a stack architecture rather than RISC. They eventually switched to a RISC design, but the project still struggled to ever get a working prototype. The entire project was scrapped and the company eventually moved on to PowerPC, but not without a tremendous loss of time and money.

Interestingly enough, another team was designing their own architecture at about the same time and ended up creating what would eventually become the modern-day ARM architecture, which Apple was involved with and currently licenses to build their M1 and M2 chips as well as their mobile processors. It was only by accident that Apple didn’t land on a RISC design in time for their personal computers. The computing world might look a lot different today if Apple hadn’t languished through the 1990s as the ultimate result of their failure to develop a competitive system in the mid-’80s. Apple’s distance from PowerPC now doesn’t mean that architecture has been completely abandoned, though.

Thanks to [Stephen] for the tip!

History Of The SPARC CPU Architecture

[RetroBytes] nicely presents the curious history of the SPARC processor architecture. SPARC, short for Scalable Processor Architecture, defined some of the most commercially successful RISC processors of the 1980s and 1990s. SPARC was initially developed by Sun Microsystems, the company most of us associate with SPARC, but while most computer architectures are controlled by a single company, SPARC was championed by dozens of players. The history of SPARC is not simply the history of Sun.

A Reduced Instruction Set Computer (RISC) design is based on an Instruction Set Architecture (ISA) with a limited number of simpler instructions, while a Complex Instruction Set Computer (CISC) is based on an ISA that comprises more, and more complex, instructions. Because RISC leverages simpler instructions, it generally requires a longer sequence of those simple instructions to complete the same task that fewer complex instructions accomplish on a CISC computer. The trade-off is that the simpler RISC instructions usually run faster (at a higher clock rate) and in a highly pipelined fashion. Our overview of the modern ISA battles presents how the days of CISC are essentially over.

Continue reading “History Of The SPARC CPU Architecture”