The ’80s Multi-Processor System That Never Was

Until the early 2000s, the processors available on the consumer market were essentially all single-core chips. Multi-processor machines did exist, but mostly in niche layouts that put several discrete processors on one board for parallel operation, and it wasn’t until IBM’s POWER4 in 2001, followed by chips like the AMD Opteron and Intel Pentium D, that multiple cores arrived on a single die. If things had gone just slightly differently with this experimental platform, though, we might have had parallel multi-processor systems on ordinary desktops as early as the ’80s instead of two decades later.

The team behind this chip was from the University of California, Berkeley, a place known for such other innovations as RAID, BSD, SPICE, and some of the first RISC processors. This processor architecture was based on RISC as well, and was known as SPUR, for Symbolic Processing Using RISC. It was designed specifically to integrate with the Lisp programming language, but its major feature was a set of parallel processors sharing a common bus, which allowed parallel operations to be computed at much greater speed than on comparable systems of the time. The use of RISC also allowed a smaller group to develop something like this: although a RISC design has to execute more instructions, each one can often complete faster than on other architectures.
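
As a rough illustration of the shared-memory model a machine like this targets, here is a minimal sketch in C using POSIX threads. This is a modern stand-in, since SPUR itself was aimed at Lisp, and the worker count and the summing workload below are purely hypothetical. Each thread plays the role of one processor on the common bus, working on its own slice of a shared array:

/* Illustrative sketch only: SPUR-style shared-memory parallelism
 * approximated with POSIX threads. N_WORKERS stands in for a small
 * number of CPUs on a common bus; the workload is hypothetical.
 * Build with: cc -O2 -pthread spur_sketch.c */
#include <pthread.h>
#include <stdio.h>

#define N_WORKERS 4
#define N_ITEMS   (1 << 20)

static double data[N_ITEMS];
static double partial[N_WORKERS];

/* Each "processor" sums its own slice of the shared array. */
static void *worker(void *arg) {
    long id = (long)arg;
    long chunk = N_ITEMS / N_WORKERS;
    double sum = 0.0;
    for (long i = id * chunk; i < (id + 1) * chunk; i++)
        sum += data[i];
    partial[id] = sum;   /* each worker writes its own slot: no lock needed */
    return NULL;
}

int main(void) {
    for (long i = 0; i < N_ITEMS; i++)
        data[i] = 1.0;

    pthread_t tid[N_WORKERS];
    for (long id = 0; id < N_WORKERS; id++)
        pthread_create(&tid[id], NULL, worker, (void *)id);

    double total = 0.0;
    for (long id = 0; id < N_WORKERS; id++) {
        pthread_join(tid[id], NULL);
        total += partial[id];   /* combine the per-processor results */
    }
    printf("total = %.1f\n", total);   /* expect 1048576.0 */
    return 0;
}

The speedup comes from splitting the work, not from any single processor being faster; on a real shared-bus design, coherence and memory traffic on that common bus become the limiting factor as the processor count grows.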

The linked article from [Babbage] goes into much more detail about the architecture of the system, as well as some of the things about UC Berkeley that made projects like this possible in the first place. It’s a fantastic deep dive into a somewhat obscure piece of computing history that, had it been more commercially viable, could have changed the course of computing. Berkeley RISC did go on to have a major impact in other areas of computing, and was a significant influence on the SPARC architecture as well.

25 thoughts on “The ’80s Multi-Processor System That Never Was”

  1. Unless you count Plan 9, which IMHO you should. It was an early distributed OS, and IIRC the original Cisco PIX used a modified version of Plan 9. Granted, it’s an OS rather than a CPU, but IMHO it shouldn’t be ignored as a path leader either.

    1. The IBM 360/65, released in March 1965, could be configured as a dual-processor system supported by the MVT MP65 operating system. Although it was not fully symmetric with respect to I/O, the later 370/158 and 370/168 models were.

      1. The late 1960s / early 1970s saw the introduction of the IBM 9020 system, which was an S/360 multiprocessor. The maximum configuration was four modified Model 65s, based on the standard Model 65 but 4-way multiprocessors rather than 2-way. Incidentally, the never-delivered Model 62 was a 4-way multiprocessor.

  2. Inmos did something similar with the Transputer in the ’80s…
    Multi-processor parallel systems connected by serial links.
    This looks like a development of that idea, with RISC and a better memory model.

        1. No one you know of; someone I knew made a very nice career for themselves developing military stuff using Transputers.

          There were plenty of very expensive, niche, and bespoke systems using them; quite a few are listed on Wikipedia, and there were more. So I suspect you’re using ‘seriously’ to mean it wasn’t a mass-market success, rather than that there were no serious uses of it?

          They definitely weren’t ‘bad’ chips, but they weren’t able to compete with the pricing and ease of use of other (less advanced?) chips on the market.

        2. That is not true. Transputers were the go-to solution for problems requiring massively parallel processing, in both civilian and military applications. They were particularly successful in image processing.

  3. ??? OS/2 and BeBox were SMP, and OS/2 supported non-symmetrical configurations too. I owned a Compaq server that had a 386 and a 486 processor, and I preferred OS/2.
    The Be machines supported 2 processors from the beginning.

  4. The first paragraph is very incoherent. We’ve had multi-processor systems for “general use” since the eighties. The AS/400 supported dual processors by 1991. SMP DEC Alpha, SPARC, x86, RISC, and PA-RISC systems were a common sight in the 1990s, with even the odd BeBox in between. Linux got SMP support in 1995. Even the SPARCstation 10 on my desk had a dual-CPU module. Most NetWare servers I had seen were dual machines.

    And we can’t be talking about “for general use” in the sense of “for consumer use” here, because neither the POWER4 nor Opterons nor SPUR were aimed at consumers.

  5. Mmmmm, the summary makes it sound a lot like multiple processors and multiple cores on the same die are interchangeable concepts.

    Didn’t all our software get slower a few years back because of the tradeoff of patching some security holes that were found to be enabled by multi-core chips? Security holes that don’t exist in multi-CPU machines where it’s just one core per die, not sharing any cache or whatever other internals multiple cores share?

    Clearly I don’t know all the technical details but.. I’m still going to say that multi-core isn’t really the same as multi-cpu.

    I did have a two-processor computer back sometime around 2000 or 2001. That thing ran great for the time! It sure was expensive to upgrade the RAM though as it used ECC. And the electricity it used… and the heat it generated… did I mention those were AMD CPUs? Like computing on an electric stove!

  6. Sorry, but the summary of multiprocessor systems doesn’t correspond to actual history. Not even vaguely.

    I worked on 36-processor R4400 (64-bit) SGI systems (Challenge XLs) in 1994. I managed an 18-processor i386 Sequent Symmetry system in 1991.
