So What Is A Supercomputer Anyway?

Over the decades, many terms have been coined to classify computer systems, usually when they found use in new fields or when technological improvements caused significant shifts. While the very first electronic computers were very limited and often not programmable, they would soon morph into something that we’d recognize today as a computer, starting with World War 2’s Colossus and ENIAC, which saw use in cryptanalysis and military weapons programs, respectively.

The first commercial digital electronic computer wouldn’t appear until 1951, however, in the form of the Ferranti Mark 1. These 4.5 ton systems mostly found their way to universities and kin, where they found welcome use in engineering, architecture and scientific calculations. That kind of number-crunching became the focus of new computer systems: effectively the equivalent of a scientific calculator. Until the invention of the transistor, the idea of a computer being anything but a hulking, room-sized monstrosity was preposterous.

A few decades later, more computing power could be crammed into less space than ever before, along with ever higher density storage. Computers were even found in toys, and amidst a whirlwind of mini-, micro-, super-, home-, minisuper- and mainframe computer systems, one could be excused for asking the question: what even is a supercomputer?

Today’s Supercomputers

ORNL’s Summit supercomputer, fastest until 2020 (Credit: ORNL)

Perhaps a fair way to classify supercomputers is to say that being a ‘supercomputer’ is a highly time-limited property. During the 1940s, Colossus and ENIAC were without question the supercomputers of their era, while 1976’s Cray-1 wiped the floor with everything that came before, yet all of these are archaic curiosities next to today’s top two supercomputers. Both the El Capitan and Frontier supercomputers are exascale machines, capable of more than 10^18 double-precision IEEE 754 floating point operations per second (an exaFLOPS), and both are built around commodity x86_64 CPUs in a massively parallel configuration.

Taking up 700 m² of floor space at the Lawrence Livermore National Laboratory (LLNL) and drawing 30 MW of power, El Capitan is built around 43,808 AMD Instinct MI300A APUs, each of which combines 24 Zen 4 (EPYC) CPU cores with a CDNA3 GPU and 128 GB of HBM3 RAM. Unlike the monolithic ENIAC, El Capitan’s 11,136 nodes, each containing four MI300As, rely on a number of high-speed interconnects to distribute computing work across all cores.

At LLNL, El Capitan is used for effectively the same top secret government things as ENIAC once was, while Frontier at Oak Ridge National Laboratory (ORNL) held the title of fastest supercomputer until El Capitan came online about three years later. Although LLNL and ORNL currently operate the two fastest supercomputers, there are many more of these systems in use around the world, even for innocent scientific research.

Looking at the current list of supercomputers, such as today’s Top 9, it’s clear that not only can supercomputers perform a lot more operations per second, they are also invariably massively parallel computing clusters. This wasn’t an easy shift to make, as parallel computing brings with it a whole stack of complications and problems.

The Parallel Computing Shift

ILLIAC IV massively parallel computer’s Control Unit (CU). (Credit: Steve Jurvetson, Wikimedia)

The first massively parallel computer was the ILLIAC IV, conceptualized by Daniel Slotnick in 1952 and first successfully put into operation in 1975 when it was connected to ARPANET. Although only one quadrant was fully constructed, it produced 50 MFLOPS compared to the Cray-1’s 160 MFLOPS a year later. Despite the immense construction costs and spotty operational history, it provided a most useful testbed for developing parallel computation methods and algorithms until the system was decommissioned in 1981.

There was a lot of pushback against the idea of massively parallel computation, however, with Seymour Cray famously likening the use of many parallel vector processors instead of a single large one to ‘plowing a field with 1024 chickens instead of two oxen’.

Ultimately there is only so far you can scale a singular vector processor, of course, while parallel computing promised much better scaling, as well as the use of commodity hardware. A good example of this is a so-called Beowulf cluster, named after the original 1994 parallel computer built by Thomas Sterling and Donald Becker at NASA. Such a cluster can be built from plain desktop computers wired together with, for example, Ethernet, with open source libraries like Open MPI enabling massively parallel computing without a lot of effort.
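To give a taste of what a Beowulf-style cluster actually runs, here is a minimal sketch of a message-passing program in C using the standard MPI API that Open MPI implements. The workload (summing a range of integers) and the process count in the run command are purely illustrative.

/* Minimal MPI sketch: every process sums its own slice of a range,
 * then the partial results are combined on rank 0 with a reduction.
 * Build with: mpicc -o psum psum.c
 * Run with, for example: mpirun -np 8 ./psum
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID         */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    /* Split the range 1..N across all processes. */
    const long N = 1000000;
    long chunk = N / size;
    long start = (long)rank * chunk + 1;
    long end   = (rank == size - 1) ? N : start + chunk - 1;

    double local = 0.0;
    for (long i = start; i <= end; i++)
        local += (double)i;

    /* Combine all partial sums on rank 0. */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum computed by %d processes: %.0f\n", size, total);

    MPI_Finalize();
    return 0;
}

The same program runs unchanged whether the ‘cluster’ is a couple of old desktops on a shelf or tens of thousands of nodes; only the launcher configuration and the interconnect underneath differ.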

Not only does this approach enable the assembly of a ‘supercomputer’ from cheap-ish, off-the-shelf components, it’s also effectively the approach used for LLNL’s El Capitan, just with decidedly less cheap compute and interconnect hardware. Even after taking the messaging overhead of a cluster into account, this still works out cheaper than trying to build a monolithic vector processor with the same raw processing power.
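That messaging and coordination overhead is also where Cray’s chickens-versus-oxen jab has a point, usually expressed as Amdahl’s law: if only a fraction p of a program can run in parallel, the speedup on n processors is capped at 1 / ((1 - p) + p / n). The sketch below, with purely illustrative parallel fractions rather than measurements of any real machine, shows how a serial remainder limits what extra hardware can buy.

/* Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
 * of the work that can run in parallel and n is the number of processors.
 * The values of p and n below are purely illustrative.
 */
#include <stdio.h>

static double amdahl(double p, double n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    const double fractions[]  = { 0.50, 0.90, 0.99 };
    const double processors[] = { 2.0, 1024.0, 1048576.0 };

    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            printf("p = %.2f, n = %9.0f -> speedup %8.2f\n",
                   fractions[i], processors[j],
                   amdahl(fractions[i], processors[j]));

    return 0;
}

Even with 99% of the work parallelized, a million processors top out at a speedup of roughly 100, which is why so much effort in supercomputing goes into shrinking serial sections and communication overhead rather than simply adding nodes.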

Mini And Maxi

David Lovett of Usagi Electric fame sitting among his FPS minisupercomputer hardware. (Credit: David Lovett, YouTube)

One way to look at supercomputers is that it’s not about the scale, but about what you do with it. Much like how governments, large businesses and universities would end up with ‘Big Iron’ in the form of mainframes and supercomputers, there was a big market for minicomputers too. (At this time ‘mini’ meant something like a PDP-11 that’d comfortably fit in the corner of an average room at an office or university.)

The high-end versions of minicomputers were called ‘superminicomputers’, not to be confused with minisupercomputers, which are another class entirely. During the 1980s there was a brief surge in this latter class of supercomputers, designed to bring solid vector computing and similar supercomputer feats down to a size and price tag that might entice departments and other customers who’d otherwise not even begin to consider such an investment.

The manufacturers of these ‘budget-sized supercomputers’ were generally not the typical big computer manufacturers, but rather smaller companies and start-ups like Floating Point Systems (later acquired by Cray), which sold array processors and similar parallel vector computing hardware.

Recently David Lovett (AKA Mr. Usagi Electric) embarked on a quest to recover and reverse-engineer as much FPS hardware as possible, with one of the goals being to build a full minisupercomputer system of the kind companies and universities might have used in the 1980s. This would involve attaching such an array processor to a PDP-11/44 system.

Speed Versus Reliability

Amidst all of these definitions, the distinction between a mainframe and a supercomputer is at least much more straightforward. A mainframe is a computer system that’s designed for bulk data processing with as much built-in reliability and redundancy as the price tag allows for. A modern example is IBM’s Z-series of mainframes, with the ‘Z’ standing for ‘zero downtime’. These kinds of systems are used by financial institutions and anywhere else where downtime is counted in millions of dollars going up in figurative flames every second.

This means hot-swappable processor modules, hot-swappable and redundant power supplies, not to mention hot spares and a strong focus on fault-tolerant computing. All of these features are less relevant for a supercomputer, where raw performance is the defining factor when running days-long simulations, and where faults can be caught and handled in software without requiring hardware-level redundancy.
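In practice, long-running simulation jobs typically deal with hardware faults in software, most commonly by periodically checkpointing their state to disk so that a crashed run can be restarted from the last save rather than from scratch. A minimal sketch of the idea follows; the SimState struct, the file name and the step counts are made up for illustration, and real codes save far more state through dedicated parallel I/O libraries.

/* Minimal checkpoint/restart sketch. The SimState struct, file name and
 * step counts are illustrative; real simulation codes save far more state.
 */
#include <stdio.h>

typedef struct {
    long   step;        /* current simulation step            */
    double field[64];   /* stand-in for the real solver state */
} SimState;

static int save_checkpoint(const SimState *s, const char *path)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    size_t written = fwrite(s, sizeof *s, 1, f);
    fclose(f);
    return written == 1 ? 0 : -1;
}

static int load_checkpoint(SimState *s, const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;               /* no checkpoint: start from scratch */
    size_t got = fread(s, sizeof *s, 1, f);
    fclose(f);
    return got == 1 ? 0 : -1;
}

int main(void)
{
    SimState s = { 0 };
    if (load_checkpoint(&s, "run.ckpt") == 0)
        printf("Resuming from step %ld\n", s.step);

    for (; s.step < 1000000; s.step++) {
        /* ... perform one simulation step here ... */
        if (s.step % 10000 == 0)
            save_checkpoint(&s, "run.ckpt");
    }
    return 0;
}

The trade-off is some extra I/O time during the run instead of extra redundant hardware in every rack.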

Considering the brief lifespan of supercomputers, currently on the order of a few years, compared to the decades of service that mainframes see and the many years that the microcomputers on our desks can last, the life of a supercomputer seems like that of a bright and very brief flame, indeed.

Top image: Marlyn Wescoff and Betty Jean Jennings configuring plugboards on the ENIAC computer (Source: US National Archives)

13 thoughts on “So What Is A Supercomputer Anyway?”

  1. The phone in my pocket is more powerful than a 1970s room-sized computer. In 50 years’ time, will a pocket-sized device be more powerful than today’s room-sized computer? Sadly I am unlikely to find out.

  2. “Ultimately there is only so far you can scale a singular vector processor, of course, while parallel computing promised much better scaling, as well as the use of commodity hardware.”

    Well, yes and no. At some point, it takes a considerable amount of processing power just to manage so many parallel processors or processes.
    Parallel processing puts a high burden on the scheduler that shouldn’t be underestimated, I mean.
    So I think that the 1024 chickens vs 2 oxen comparison wasn’t as foolish as it may seem.
    Having a few oxen under control is less troublesome than lots of chickens. ;)

    1. For years, I have been wondering how the work can be administered and split up among the various processors in a way that’s not just more work than the administrator doing the work itself. I suppose it depends on the type of work. I have a chance to do massively parallel processing on a small scale to experiment, but I have not started yet.

    1. I often think of it as “big iron”, a mainframe, a host computer.

      The old big computers of the early days distinguished themselves from ordinary modern PCs by using terminal devices and a time-sharing/multi-user concept.

      PCs, as we know them today, didn’t use this concept before MP/M, I think.
      That’s when terminals and time-sharing came into play.
      Concurrent DOS, PC-MOS/386 or Wendin DOS offered something similar in the ’80s, I think.

      In the mid-20th century there also were so-called “process computers”,
      which had the job of controlling something (machinery, other computers) or processing lots of data.

      To some degree, computers in rocket stages could be called “supercomputers” maybe.
      The amount of computing they performed “on the fly” was comparably enormous in the 1960s and 70s.
      Not unlike the powerful computer imagined on board the USS Enterprise (TOS). ;)

    1. Interesting term. Historically, a “mini” was a minicomputer, a computer that had the size of a desk instead of a cabinet.
      Likewise, a “micro” was a microcomputer, using a microprocessor, which did fit on a desk.
      The British also used to describe home computers as “micros”, I think.
      In the 1980s, at least. Again, very interesting, I think.
      The 80386 was being called “a mainframe on a chip”, too.
      Probably because of its power, but also its privilege model (rings 0 to 3), protected mode, and powerful memory-management unit.

  3. “and the many years that the microcomputers which we have on our desks can last” .
    This is really only recent history ‘performance-wise’. When we were running C64s, then x86s, then x86_64, it seemed like we were always wanting more ‘performance’ and changing out computers. Now that isn’t so for most of us. My 5900X and 5600X are still ‘screaming fast’ for the desktops and VMs. The only reason to jump to the latest is because either the current system dies, or because we just ‘want to’ for bragging rights. At least for me, I don’t have a ‘logical’ reason to justify buying a ‘new’ system (AI is not a valid reason, nor silly games) for the foreseeable future.

    I found it interesting that the RPi 1 was 4.5 times faster than the Cray-1 supercomputer in 1978. The RPi 4 was around 50 times as fast. And of course the RPi 5 is way faster than the RPi 4. So that’s a little $70 credit-card-sized computer using only 25 W full bore, compared to the Cray-1 at $7,000,000, 10,500 pounds and 115 kW of power. We’ve come a long way…
