A Pi Cluster To Hang In Your Stocking With Care

It’s that time of year again: with the holidays fast approaching, friends and family will be hounding you about what trinkets and shiny baubles they can pretend to surprise you with. Unfortunately, there’s no person harder to shop for than the maker or hacker: if we want it, we’ve probably already built the thing. Or at least gotten it out of somebody else’s trash.

But if they absolutely, positively, simply have to buy you something that’s commercially made, then you could do worse than pointing them to this very slick Raspberry Pi cluster backplane from [miniNodes]. With the ability to support up to five of the often overlooked Pi Compute Modules, this little device will let you bring a punchy little ARM cluster online without having to build something from scratch.

The Compute Module is perfectly suited for clustering applications like this thanks to its much smaller size compared to the full-size Raspberry Pi, but we don’t see it used that often because it needs to be jacked into an appropriate SODIMM connector. This makes it effectively useless for prototyping and quickly thrown-together hacks (i.e. everything most people use the Pi for), and really only suitable for finished products and industrial applications. It’s really the line in the sand between playing around with the Pi and putting it to real work.

[miniNodes] calls their handy little device the Carrier Board, and beyond the obvious five SODIMM slots for the Pis to live in, there’s also an integrated gigabit switch with an uplink port to get them all connected to the network. The board powers all of the nodes through a single barrel connector on the side opposite the Ethernet jack, leaving behind the spider’s web of USB cables we usually see with Pi clusters.

The board doesn’t come cheap at $259 USD, plus the five Pi Compute Modules, which will set you back another $150. But for the ticket price you’ll have a 20-core ARM cluster with 5 GB of RAM and 20 GB of flash storage in a 200 x 100 millimeter (roughly 8 x 4 inch) footprint, with an energy consumption of under 20 watts when running at wide open throttle. This could be an excellent choice for mobile applications, or if you just want to experiment with parallel processing on a desktop-sized device.

Amazon is ready for the coming ARM server revolution; are you? Between products like this and the many DIY ARM clusters we’ve seen over the years, it looks like we’re going to be dragging the plucky architecture kicking and screaming into the world of high performance computing.

[Thanks to Baldpower for the tip.]

42 thoughts on “A Pi Cluster To Hang In Your Stocking With Care”

  1. I’m also looking into these compute modules, but am a bit concerned about the thermal performance. A lot of normal Pis are retrofitted with a heatsink. I think I can glue a heatsink on these modules, but my intended use is automotive. Prolly it will vibrate off. Dunno if the compute module will also stay in its socket, but I guess it is locked in.

    1. If I was really worried about the thermals I’d grab a piece of aluminum heatsink and cut it to shape so I could mount it across the two mounting holes on the compute module, maybe even have it milled so that any larger components, like the SMD capacitors, have no clearance issues (or just use some Kapton tape if it can still sit flush on the CPU package).

      I might then use long enough threaded screws, washers, and whatnot that I’d have a nice, secure, and sturdy way to mount the compute module onto the board that the SODIMM connector is on. If it was a permanent thing I might also take the time to coat everything but the heatsink with some kind of suitably dielectric epoxy; cars get a bit weird with moisture buildup.

    2. Be careful with SODIMMs in an automotive environment. I have had some bad experience with spring-loaded contacts interfacing directly with the bare ENIG of a PCB in an automotive setting. The pressure the connector puts on the gold deforms it, and corrosion starts in a damp, temperature-cycling environment.
      There is a difference between ENIG gold on a PCB and hard gold on connectors. Just because they look the same doesn’t mean that they have the same properties – lesson learned.

      1. thanks!

        I wish there was a more bare-bones board with just some easier way of connecting the I/O (including the USB). The Pi Zero is a single-sided PCB, which can be directly soldered to another PCB, but the USB lines aren’t on the I/O header. It can be done with some fancy soldering to the pads, but I’d like a cleaner way of integrating a Pi (or similar) into my own PCB.

    1. That was a DB-19? Or similar. These sockets are SODIMM and extremely common, as they are used for memory modules in laptops and such. The high availability of the parts is why the Pi Foundation chose them.

      1. That was the Big Mess O Wires guy who got the D-Sub 19 (there is no official designation) Apple external floppy connectors reproduced. Before he shipped some out to the other companies that went in on the deal, he had the entire world’s supply of new Apple floppy connectors on his back porch.

        Not often that one person has ALL of a commercially produced item that’s made in a large batch.

        1. I just… I cannot get over how absurdly their site works: to see a product’s details I have to add it to my cart, then an eye icon appears so I can go view my cart, then I can click the item in my cart to finally see the product details from there. Okay…

      1. A Dell M1000e chassis, which takes up to 16 blades, each with dual multi-core Xeons, can comfortably be bought second-hand for that sort of money. Infiniband backplane. Redundant everything. It’ll be noisier, is considerably larger, and doubles as a space heater. But it’s a much better platform than a Pi cluster for dabbling in parallel processing. (I have tried both.)

      2. I think with the hardware they are using (gigabit) along with the PCB (6-10 layers) and the low volume, I bet that price is fair. It makes me wonder what it would be like to buy low-production-volume motherboards or CPUs……$$$$$$

        1. lol gigabit. on a pi. yaaaaaaayyyy

          This is just an expensive Ethernet switch and power distro board. If you’re a tinkerer, save the money and just buy a stack of Pis (and then you get the added “benefit” of USB ports and some GPIO). I can’t think of a situation where this is a good application. You can buy (USD) 8x Pi 3 B+ boards for less than the cost of the backplane alone. If it broke out all of the GPIO and USB… I could maybe see a case where having a high-density compute board would come in handy (robots with lots of subsystems, etc.)… but this misses the mark for a hobby cluster.

          Change my mind?

  2. “often overlooked Pi Compute Modules”
    “[miniNodes] calls their handy little device the Carrier Board”
    “The board doesn’t come cheap at $259 USD,”

    Gee, I wonder why.
    Meanwhile, how many Pis plus an Ethernet switch & power can you cram into a 1U chassis?

    1. There was an article here some time back where Sandia or one of the other nuke labs built a largish dev/test HPC cluster out of Pis that fit in a 4U chassis.

      Thought about doing the same where I work, but since my devops people keep turning my devtest clusters into prod clusters, that was out of the question…

  3. Curiosity and probably silly question:
    How would this go running a desktop?
    i.e. is there a Linux desktop designed to run on a cluster?
    Presumably none of the applications generally used in desktop mode are optimised for this.

  4. What are you supposed to do with this? I do a fair bit of custom industrial automation software and hardware, and I’m struggling to think of an industrial or remote-environment application that this would be useful for, let alone compatible with.

    1. Docker, LXC, Kubernetes…
      The target for stuff like this is containers or parallel processing.

      It is kind of a catch-22, though:
      x86 would give higher performance, so this is either designed to run containers for lots of small applications, or for mobile/field processing where low power and heat are the priority.
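
      (For anyone curious what the parallel-processing side might look like on a box like this, here is a minimal sketch using mpi4py; the hostfile, process count, and workload are illustrative assumptions, not anything specific to the miniNodes board.)

        # Minimal MPI sketch: each process sums its own slice of a range,
        # and rank 0 collects the grand total. Assumes an MPI implementation
        # (e.g. OpenMPI) and the mpi4py package are installed on every node.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()   # this process's index across the cluster
        size = comm.Get_size()   # total number of MPI processes

        # Each rank handles every size-th element starting at its own offset
        local_sum = sum(range(rank, 1_000_000, size))

        # Combine the partial sums onto rank 0
        total = comm.reduce(local_sum, op=MPI.SUM, root=0)

        if rank == 0:
            print(f"Total computed across {size} processes: {total}")

      It would be launched with something along the lines of: mpirun -hostfile hosts -np 20 python3 sum_demo.py, where the hosts file listing the five nodes (and the script name) are hypothetical.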

    2. I’m thinking, at least from an HPC perspective, that testing your distributed boot loader and software loadout scripts in a way that doesn’t hog up your real (i.e. expensive) compute resources would be an ideal purpose for a product like this.

        1. Not much of a test though, is it? You could do the same with a few VMs, probably closer in both performance and architecture terms. You need to find a task that is parallel and has some I/O-related requirements that work in the Pi’s favour, but isn’t so real-time that kernel jitter gets in the way. Or perhaps the low-power aspect is the killer-use-case angle?

          1. I could do (and have done) this with VMs. But again, that means I’m dropping hypervisors on equipment that I could be doing real HPC chores with. Plus, it is never just a “few VMs” that I have to demonstrate or test — part of the testing regime is to get several hundred or thousand “somethings” (whether they be pizza-boxes, VMs, or Pis) booted and answering job control within 5-10 minutes of startup.
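
             (A rough sketch of that kind of “did everything answer within the window” check, in plain Python over SSH; the node names, probe command, worker count, and the ten-minute deadline are all illustrative assumptions rather than anything from the comment.)

               # Poll a list of nodes over SSH until each answers or the deadline passes.
               import subprocess
               import time
               from concurrent.futures import ThreadPoolExecutor

               NODES = [f"node{i:03d}" for i in range(1, 501)]   # hypothetical hostnames
               DEADLINE_S = 600                                   # illustrative 10-minute window

               def node_ready(host):
                   """Return True if the node answers a trivial command over SSH."""
                   try:
                       result = subprocess.run(
                           ["ssh", "-o", "ConnectTimeout=5", host, "true"],
                           capture_output=True, timeout=10)
                       return result.returncode == 0
                   except subprocess.TimeoutExpired:
                       return False

               start = time.monotonic()
               pending = set(NODES)
               while pending and time.monotonic() - start < DEADLINE_S:
                   hosts = sorted(pending)
                   with ThreadPoolExecutor(max_workers=64) as pool:
                       results = list(pool.map(node_ready, hosts))
                   pending -= {h for h, ok in zip(hosts, results) if ok}
                   if pending:
                       time.sleep(15)   # back off a little before re-polling stragglers

               answered = len(NODES) - len(pending)
               print(f"{answered}/{len(NODES)} nodes answered within "
                     f"{time.monotonic() - start:.0f} seconds")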

        2. Those who remember Perk mining have seen an example of an application where VMs don’t work well. Basically, the mining software uses native ARM code so it either doesn’t work on x86 systems or loses a lot of performance to emulation overhead. Clusters of cheap smartphones ended up being the best way to mine Perk.

  5. NASA might be interested in this. A multi-board computer was launched to the ISS a while back. The original intention was simply to function-test it and see how well (or not) the hardware handled cosmic ray hits. Well, it’s had some parts fail from that. Some of the CPUs and SSDs have gone kaput. It was supposed to come back with the two people whom the failed Soyuz launch crew were to replace. But since it’s going to be left on the station for a while, the manufacturer and NASA have decided to employ the computer to do Real Work (TM). Monitoring how it runs while taking more hits should give them more to work with when dissecting it to see what failed and how.

  6. Cluster?! What is the bandwidth between modules?
    No more than the 80 Mb/s from the Ethernet over USB 2!
    I do not call this a cluster, but a very expensive box for a few low-functionality RPis.
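
    (To put the commenter’s numbers in perspective, a quick back-of-the-envelope; the ~80 Mb/s usable figure is the commenter’s estimate for Ethernet hung off USB 2.0, and the ~940 Mb/s figure is just a typical usable rate on a true gigabit link, included for comparison.)

      # Back-of-the-envelope: time to move one gibibyte between nodes at two link speeds.
      GIB_BITS = 1024**3 * 8      # one GiB expressed in bits

      for label, mbps in [("USB-attached Ethernet (~80 Mb/s)", 80),
                          ("Native gigabit (~940 Mb/s)", 940)]:
          seconds = GIB_BITS / (mbps * 1_000_000)
          print(f"{label}: {seconds:.1f} s per GiB transferred")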
