Everyone Needs A Personal Supercomputer

When you think of supercomputers, visions of big boxes and blinkenlights filling server rooms immediately appear. Since the 90s or thereabouts, these supercomputers have been clusters of computers, all working together on a single problem. For the last twenty years, people have been building their own ‘supercomputers’ in their homes, and now we have cheap ARM single board computers to play with. What does this mean? Personal supercomputers. That’s what [Jason Gullickson] is building for his entry to the Hackaday Prize.

The goal of [Jason]’s project isn’t to break into the Top 500, and it’s doubtful it’ll be more powerful than a sufficiently modern desktop workstation. The goal for this project is to give anyone a system that has the same architecture as a large-scale cluster to facilitate learning about high-performance applications. It also has a front panel covered in LEDs.
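The scatter-and-reduce style of programming such a cluster teaches can be sketched on a single machine with Python’s standard multiprocessing module standing in for a message-passing runtime like MPI. The function names here are illustrative only, not anything from [Jason]’s project:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Work done by one 'node': sum its own slice of the range."""
    lo, hi = bounds
    return sum(range(lo, hi))

def cluster_sum(n, workers=4):
    """Scatter range(n) across workers, then reduce the partial results."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(cluster_sum(1000))  # same result as sum(range(1000)), i.e. 499500
```

On a real cluster the workers would be separate nodes exchanging messages over Ethernet rather than processes on one board, but the decompose/compute/combine shape of the program is the same.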

The design of this system is built around the PINE64 SOPINE module, basically a 64-bit quad-core CPU stuck onto a board that fits in an SODIMM socket. If that sounds like the Raspberry Pi Compute Module, you get a cookie. Unlike the Pi Compute Module, the people behind the SOPINE have created something called a ‘Clusterboard’: seven vertical SODIMM sockets tied together with a single controller, power supply, and an Ethernet jack. Yes, it’s a board meant for cluster computing.

To this, [Jason] is adding his own twist on a standard, off-the-shelf breakout board. This Clusterboard is mounted to a beautiful aluminum enclosure, and the front panel is loaded up with a whole bunch of almost vintage-looking red LEDs. These LEDs indicate the current load on each bit of the cluster, providing immediate visual feedback on how those computations are going. With the right art — perhaps something in harvest gold, brown, and avocado — this supercomputer would look like it’s right out of the era of beautiful computers. At any rate, it’s a great entry for the Hackaday Prize.
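The load-indicator idea is simple to model in software. Below is a minimal sketch of the mapping such front-panel firmware might use, from a node’s load average to a count of lit LEDs; the function name, the eight-LEDs-per-node count, and the linear scaling are assumptions for illustration, not details from [Jason]’s build:

```python
def leds_for_load(load_avg, cores=4, num_leds=8):
    """Map a 1-minute load average to how many panel LEDs to light.

    A load equal to the core count lights every LED; anything
    above that clamps to the maximum.
    """
    fraction = min(load_avg / cores, 1.0)
    return round(fraction * num_leds)

# On Linux, the load average would come from os.getloadavg()[0].
```

With a quad-core SOPINE module, an idle node (load 0.0) shows nothing, a fully busy one (load 4.0) lights the whole row.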

61 thoughts on “Everyone Needs A Personal Supercomputer”

  1. For the last time: If it doesn’t make any of the usual lists (Top500, HPCG etc.), it doesn’t even qualify for the term “supercomputer”. “Supercomputer” is a well-defined term, just stop abusing it. This is a toy cluster at best, and there have been countless systems with this exact design (cheap ARM boards) before.

    1. Where is it defined? Just saying it’s well defined is not enough. You must supply the material to support your argument. Then, I’ll consider what you’re saying.

      1. Isn’t it the other way around? Usually the person/group making the claim is expected to back it up, rather than the expectation being that the claim stands unless someone shoots it down. I mean, that happens (a lot), but it’s skeevy.

        1. Really depends. You should supply supporting evidence for a claim. I asked because when I’ve looked for a definition I could only find articles that stated their view on it without citing how they had come to this conclusion. I probably agree with Bubul, but before I do I’d like to see this “well-defined” term so I can refer to it.

        2. No, here we have a person who has no clue about the use of the term supercomputer in the history of computers, making a dumb claim with nothing to back it up.

          Supercomputer is a relatively new term generally starting use with the Cray 1; the idea of supercomputers is a lot older – the fastest computer possible at a given time. It has lately (since the 90s) changed to the largest computer possible at a given time. It changed from a balls-to-the-wall effort combating physical limits to clustered designs using message passing among more or less standard hardware. More cost effective, sure. But it limits performance for a large subset of problems, even though modern systems often have really impressive bisection bandwidth.

          I miss Seymour Cray, RIP.

          1. The term “Supercomputer” was already used for both IBM and CDC machines before Seymour Cray left to found his own company, and it still refers to the fastest computer possible at all times. The internal design is completely unimportant; the point is that it behaves as if it were a single machine. If you limited the term to a machine with a single processor (like Seymour Cray himself did until he failed with the Cray-2), then several machines which were undoubtedly Supercomputers wouldn’t fit the bill. Everything people ever came up with has been used to build Supercomputers – Symmetric Multiprocessing, Asymmetric Multiprocessing, Accelerators, Offload Engines, I/O controllers, Unified Memory, Non-Unified Memory, cache-coherent, non-cache-coherent, Message Passing, RDMA, you name it. The CDC 6600 was not even an SMP machine. Also, when MPI came into existence in the early 90s, it was mostly just a combination of other standards which had existed since the 80s, and message passing had also been used since then, including on the Cray-2 (you could couple six machines via a high-speed network).

            Yes, we would all be happier if we could just spawn millions of threads on a cache-coherent NUMA machine, but nobody can scale cache coherency up to that number of CPUs (Cray stopped trying with the T3D and T3E). So we have to do with message passing, and it has served us extremely well.

          2. Replying to myself as this forum is crap.

            Supercomputer was used before – and I wrote that “… relatively new term generally starting use with the Cray 1”.
            “Generally” has a definition too. The CDC 6600 wasn’t called a supercomputer, the IBM Stretch wasn’t called a supercomputer, the Goodyear STARAN wasn’t called a supercomputer, the Connection Machine CM-1 _was_ called a supercomputer. So it is a relatively new term to describe the fastest type of computers available, and it generally started use with the Cray 1 – that doesn’t mean it was never used before, of course; I never said that.

            (The CM-1 is absolutely catastrophically bad in LINPACK BTW)

            Yes, we have gone from single processors to multiprocessors. But we also went from the maximum performance possible to the maximum performance possible for a certain amount of money.
            Cray was no stranger to multiprocessing; one of the features of the CDC 6600 was its peripheral processors, which split off I/O handling into a barrel processor (a type of multithreading). He also went for multiple processors in a later CDC design (the 8600). The Cray X-MP had multiple processors, and the Cray 3 scaled to, IIRC, 16 processors in theory.

            The fact remains that the original supercomputer idea has almost completely died. A cluster of computers wasn’t thought of as one supercomputer in the past – it was seen as a cluster of computers. That remained true even if the system presented a single system image.

            But the whole thing just reinforces that supercomputer isn’t one thing; it’s something whose definition changes with time.
            If quantum computers take off and new algorithms allow them to crunch some set of data better than classical computers, supercomputer may become synonymous with quantum computers. And they would most likely be bad at LINPACK.

          3. “The CDC 6600 wasn’t called a supercomputer, the IBM Stretch wasn’t called a supercomputer, the Goodyear STARAN wasn’t called a supercomputer, the Connection Machine CM-1 _was_ called a supercomputer.”

            What’s your definition for “something was called a Supercomputer”? Because if the definition is “someone wrote that in the news/a magazine” or “the marketing materials said that”, then the first recorded use of the word “Supercomputer” was to describe a Hollerith tabulator shipped to Columbia University by IBM in 1929, but on the other hand (at least to my knowledge) neither the CM-1 nor the Cray-1 nor the Cray Y-MP were called “Supercomputers” in their own marketing materials or manuals.

            BTW: I have never heard anybody NOT call the CDC 6600 a Supercomputer, you are literally the first person I have ever come across to make that claim.

            “But we also went from maximum performance possible to maximum performance possible for a certain amount of money.”

            That’s a completely ridiculous claim. Do you really think e.g. development of the Cray-1 was not limited by the fact that Cray had to actually make sure he could sell one? One of these machines cost $8.86 million in 1977, which is equal to about $85 million today. That’s still about the limit of what anybody wants to pay. The Earth Simulator was about 70 million dollars, Titan cost about 100 million dollars, and the two new machines being installed in the US right now (Summit/Sierra) will cost about 150 million dollars each. Money has *always* been the limiting factor in Supercomputer development. And it’s not just the cost of the hardware; for example, where do you think the 20 megawatt limit for the Exascale machines comes from?

      2. You could have just looked at the actual word. “Super” comes from Latin “above, better than usual”. The meaning of the word itself is simply “better than the usual computer” or “above the other computers”. There are different approaches to measure the level of excellence, which is why I have referred to the several different ranking systems used in practice, but there is zero doubt that the definition of a Supercomputer is “one of the fastest computers of its time”, and there is also zero doubt this project here is not “better than the usual computer” in any regard.

    2. Did you read the article? Just curious. Apparently the maker didn’t have a cool million lying around to build a “real supercomputer”. Bottom line, I’d hire a person with curiosity and ingenuity like this any day of the week. I tend to stray from sarcastic nitpickers though. Just saying.

    3. No, it’s not a well-defined term, never has been, never will be. It has never been fixed even within a single generation of computers.

      What about computers not suitable for LINPACK runs? What about computers doing other problems much faster than any other existing computer, are they not real supercomputers because they aren’t good at executing LINPACK benchmarks?

      Bullshit.

      1. A Supercomputer is simply one of the fastest computers of its time. Period. I specifically wrote “any of the usual lists (Top500, HPCG etc.)” to cover all machines which fit this definition. This includes the Top500, HPCG, Green500, Graph500, GreenGraph500, IO500 etc. This build here can not keep up with a 5 year old standard desktop in any metric, so there is definitely nothing “Super” about it in any regard.

        You can chime in on the old “But what about all the machines which can’t run the LINPACK” etc. tune some media outlets have been repeating for years, but that’s getting really old really fast.

    4. Supercomputer is used somewhat tongue-in-cheek (combining super computer with personal computer) but I believe it’s also appropriate because the purpose of the project is to apply the *architecture* of contemporary supercomputers at a personal scale. I’ve written in-depth elsewhere about why I think this is important so I won’t belabor it here, but it’s worth a read if you want to understand why I use the term:

      https://jjg.2soc.net/2017/12/13/why-personal-supercomputers/

    1. Puzzle this: if you were to be sent back to 1965 with a machine of your choice or construction, what would you take along? Including datasheets, programming references and other material, with the understanding that other people would find use for your machine.

        1. Wouldn’t work, because of the time loop you’d create. Preventing crimes prevents them from showing up in your list, so you’d become a harbinger of doom instead of a hero.

        1. That would be just a black box to the people, who would be stuck with whatever software you cared to bring along. The point is that they’d actually be able to -use- it rather than just use it.

          Like, in a secret vault under the NSA office there’s a Cray supercomputer shell that hides your laptop from prying eyes, pretending to be the world’s most powerful computer crunching away at nuclear secrets. Doesn’t work if the people have no idea how to program it.

  2. ” For the last twenty years, people have been building their own ‘supercomputers’ in their homes, and now we have cheap ARM single board computers to play with. What does this mean? Personal supercomputers. ”

    Wouldn’t that be a GPU cluster?

  3. Altair? I had 131 vintage computers at the height of my collection. Not one blinkenlight. I need an affordable (cheap) blinkenlight, just for the aesthetic. Any suggestions?

    P.S. Would HAD please include £/$ value for builds like the above?

    Thank you and keep up the good work.

    1. If you want just blinky lights and nothing real behind them, eBay has some: 100 blinking LEDs for $3.50 USD. Just wire them to a 5 V source through resistors and put them up somewhere.
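      For the record, the resistor value for that wiring falls straight out of Ohm’s law. A quick sketch, assuming a typical red LED (roughly 2 V forward drop and a 10 mA target current; neither figure comes from the eBay listing):

      ```python
      def series_resistor(v_supply, v_forward, i_led):
          """Ohm's law: the resistor drops the voltage the LED doesn't."""
          return (v_supply - v_forward) / i_led

      print(series_resistor(5.0, 2.0, 0.010))  # about 300 ohms for a typical red LED
      ```

      In practice you’d round up to the next standard value (330 Ω) and the LED would just run slightly dimmer.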

    1. Bingo! Finally a real comment! The reason this isn’t the architecture we use is that we don’t have a parallel-processing, non-opportunistic OPERATING SYSTEM! All multi-process OSes have a primary controlling process and slave off subprocessors. To take advantage of parallelism, each processor must be a peer process. CORBA was an early attempt at this and may well deserve a revisit. It actually did provide parallel system resource and processing distribution over the net, closer than anything else in the early 90s anyway. It ran on Microsoft, IBM, Unix, DEC, etc. ORBs communicated through the internet. Cool concept.

      1. Yeah? And? Seven slots, each with 4 cores, each running bare metal or maybe a thin RTOS, streaming out I2S, and mixing the channels in the analogue domain. Distributing MIDI data to all of them in a timely fashion would be relatively trivial, and 666 MHz of modern ARM DSP is more than enough performance. With 28 cores, each could run one voice as a fixed function and still have more voices than the majority of commercial synths.

  4. I agree, not really a supercomputer unless you’re comparing it to a Babbage Difference Engine or a 1980s watch calculator or something. Still, cluster operation is not possible on stock desktops today without additional hardware and/or software to make it so, and cluster construction is a mystery to most computer operators, including me, and I’ve been working with computers from micros to mainframes since the 1970s. Kudos to the designer of this one for making it happen in a cool-looking Altair-esque case.

  5. I was just thinking: “Wow, with the interconnect so often being the limiting factor, something like this with a custom-board interconnect might do wonders.”
    And then I read that it’s just gigabit without the connectors and cables.

  6. Wow! After reading the top third of the comments section, I had to skim the rest. So let me get this straight: 95% of the comments posted had nothing to do with the project itself? And I see that about 50% of the comments were a few particular members arguing about the term “supercomputer”. What have we as an evolved species become?

    I’m thrilled none of my projects have ever made it to this forum. I can’t even begin to imagine the comments that could be made about them.

    Hey Jason! I like your project. It shows creative ingenuity. Kudos!

    Peace and blessings.
