Building A Cheap Kubernetes Cluster From Old Laptops

Cluster computing is a popular choice for heavy duty computing applications. At the base level, there are hobby clusters often built with Raspberry Pis, while at the industrial level, data centers are crammed with servers running at full tilt. [greg] wanted something cheap, but with x86 support – so he set about building a rig his own way.

The ingenious part of [greg]’s build is his source of computers. He identified replacement laptop motherboards as a cheap source of computing power: a board packing an i7 CPU and 16 GB of RAM goes for around £100 on eBay, with i5 models even cheaper. With four laptop motherboards on hand, he set about stacking them in a case, powering them, and hooking them up with the bare minimum required to get them working. With everything wrapped up in an old server case, held together with some 3D-printed parts, he got a four-node Kubernetes cluster up and running for an absolute bargain price.

We haven’t seen spare laptop motherboards used in this way before, but we could definitely see it becoming more of a thing going forward. The possibilities of a crate full of decommissioned motherboards are enticing for those building clusters on the cheap. Of course, more nodes is more better, so check out this 120-Pi cluster to satiate your thirst for raw FLOPS.

25 thoughts on “Building A Cheap Kubernetes Cluster From Old Laptops”

    1. Pretty doubtful. Blade servers are high density and thus require good cooling, are very loud, and most don’t run on 120VAC.

      Smart people buy 2U machines for home server labs, because they are *substantially* quieter.

      1. Welcome to today’s edition of Winston Churchill quotes that don’t quite fit. Today we’re going with the double header “But in the morning I shall be sober” and “If I were your husband I’d drink it.”

  1. Very neat, a great use of otherwise rubbish tech. Though I’d have done something about the stock coolers: a larger PC heatsink could be fitted cheaply, would be quieter, and would probably allow faster sustained clock speeds – it’s not like it needs to be laptop-thin any more. I can see why you’d do it this way though: so much less effort, and if you don’t have to sit next to it, or are putting it in a rack full of noisy servers anyway, what’s the difference…

    I wonder what the lifetime cost of something like this is vs the Pi solution. The performance-per-watt calculation is very different between x86/AMD64 and ARM, so there could be a significant difference in operating cost too.

    They clearly have somewhat different best use cases, so the laptop boards could well be better for your needs. But your four laptop boards probably draw many times the power of the Pis, both at idle and maxed out. The networking gear will have a similar power budget either way, even with 4x as many Pis as laptops – small switches all seem pretty similar, and one 16-port doesn’t look to consume much more than one 4- or 8-port.

    So as a learning exercise, test environment, or for the right type of workload, playing with three times as many nodes at the same power draw because they’re Pis is probably better, and it need not be more expensive (it’s not like you actually need the 8 GB Pi 4 model in this situation – though bigger is always better…).
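To put a rough number on the operating-cost difference, here is a minimal sketch. All of the wattages and the electricity rate are assumptions for illustration, not measurements from this build:

```python
# Rough annual electricity cost: 4 laptop boards vs a 12-node Pi cluster.
# Assumed figures: ~15 W idle per laptop board, ~3 W idle per Pi, 20 p/kWh.

def annual_cost_gbp(watts, pence_per_kwh=20):
    """Cost of running a constant load for one year, in GBP."""
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * pence_per_kwh / 100

laptop_cluster = 4 * 15   # assumed idle draw, 4 laptop boards
pi_cluster = 12 * 3       # assumed idle draw, 12 Pis

print(f"laptop boards: £{annual_cost_gbp(laptop_cluster):.2f}/year")
print(f"pi cluster:    £{annual_cost_gbp(pi_cluster):.2f}/year")
```

Under these assumptions the laptop boards cost roughly £100/year at idle to the Pi cluster’s £60-odd – real figures depend heavily on the actual boards and your tariff.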

    1. For Pi vs x86, the “depends on the workload” caveat should really be “depends on whether the workload actually contains any work”, because if it doesn’t, Pis look great. Translate the whole argument to x86 and it reads like “my Athlon XP-M uses a quarter the power of your 8-core Threadripper”, without the realisation that you’d need 16 of them to match performance in a simple integer task that’s almost entirely per-tick clock reliant, or that for real work the raw number crunching is 100 times better. Sure, if you need a CPU to sit around and wait for something to happen, you want one that does that as efficiently as possible, but being capable of running a GUI as well as a 7 MHz Amiga did in 1985 does not make something a powerhouse.

      1. A Pi, even a mark 1 Pi, is vastly better at computation per watt than most x86-64, because Arm is just inherently more energy efficient (at least for now), and a Pi 4 isn’t a slouch in computational performance either. So it’s not as clear cut as “the latest Threadripper is faster, let’s have lots of them!” – you have the cooling and other support-hardware requirements, potential bottlenecks*, and overall efficiency to consider too.

        Also, what is the goal of this system? That really changes whether more nodes or fewer, higher-spec nodes is the best option – it’s highly varied as to what will be “best”, not one size fits all.

        *Bottlenecks vary a lot based on the type of workload – perhaps this workload will saturate the RAM/disk on the AMD64 box, so more nodes with similar RAM and disk speed can massively outperform it despite the lower CPU clocks, if the task subdivides well. Or perhaps the task is very CPU bound and can’t easily be split among threads, in which case you won’t see real gains running the latest, greatest Threadrippers over the previous generation – single-core performance and clock speeds work out rather damn close.

        Worth noting, though, that AMD’s offerings are astonishingly good per watt compared to Intel’s latest and greatest, and oh how I wish I could afford to replace my now rather old workstation – one CPU pulling just 65 W TDP, with more cores and faster single- and multi-core performance than my Xeons…

      2. As it seems a previous comment is stuck in a nether world, I’ll put the key salient point here.

        It’s not that simple: a Pi 4 is a computational beast for its cost and size, and all Pis, being Arm, are better at calculations per watt than almost all x86 (Arm is just more efficient). Also worth mentioning that the Broadcom graphics in a Pi are actually rather potent as well, though less convenient to utilise, I’d suspect…

        Whether the Pis will be better for your use case depends largely on what you are doing; the bottlenecks of whatever task you wish to perform can really shift what is best (for instance, RAM and disks have limited access speed), and you might need more cores over faster cores, etc.

        1. Like I said, great for waiting for something to happen. Then if you’re seeing less than 60% use while waiting for something, it’s prolly time to go down from a Pi 4 to a Pi 2, coz the 4 idles at what the 2 uses at full load.

          But performance per watt? You’ve been seeing too many thirsty-desktop-chip-vs-Pi articles: the five-generation-old 6600U used here does ~160 Linpack double-precision Mflops per watt, while the Pi 4 does about 70.

          1. If you specify some underclocking you can get the 4 down quite a bit at idle. I’ve not really pushed hard at that (yet) – just a mild tinker for curiosity’s sake while cranking the poor thing’s max clock up to at least 10.8. I’ve certainly not pushed it near its limits in either direction, but it’s well over what it ships with, and enough that the old fanless aluminium heatsink can’t keep it under 60 °C when it’s working hard any more (which seemed a good point to stop – I like my electronics to last, and the 60–70 °C range seems like a good balance), though it still takes quite a while to heat-soak up into that range…

            I’ve not seen any x86, including laptop chips, that can match the computation per watt of the Pis I’ve tested – note I didn’t say they don’t exist, just that Arm tends to be better and Pis are good. As with everything in computers, the devil is in the little details that can end up making rather larger than expected differences.

            Also, even if you are never waiting on anything else and are entirely CPU bound in your application, you might find the larger cluster of Pis performs better for the same price – more nodes means more threads, which can mean faster results even when the individual threads are slower.
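Taking the per-watt Linpack figures quoted earlier in this thread (~160 Mflops/W for the 6600U, ~70 Mflops/W for a Pi 4) at face value – they are the commenter’s numbers, not fresh benchmarks – the gap works out like this:

```python
# Back-of-envelope on the quoted Linpack-per-watt figures.
laptop_mflops_per_w = 160   # i7-6600U, as claimed above
pi_mflops_per_w = 70        # Pi 4, as claimed above

# Power needed to sustain 1 Gflops (1000 Mflops) on each:
laptop_w_per_gflops = 1000 / laptop_mflops_per_w
pi_w_per_gflops = 1000 / pi_mflops_per_w

print(f"laptop: {laptop_w_per_gflops:.1f} W per sustained Gflops")
print(f"pi 4:   {pi_w_per_gflops:.1f} W per sustained Gflops")
print(f"ratio:  {pi_w_per_gflops / laptop_w_per_gflops:.2f}x")
```

By those numbers the Pi needs roughly 2.3x the energy per unit of Linpack work, which is the crux of the disagreement: “better per watt” depends entirely on which benchmark and which chips you compare.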

  2. “Cluster computing is a popular choice for heavy duty computing applications. At the base level, there are hobby clusters often built with Raspberry Pis.”

    >heavy duty computing applications
    >Raspberry Pi

    Not that “because it’s a neat conversation piece” isn’t perfectly sufficient justification (this is Hackaday, after all), especially if you already have a bunch of ’em lying around, but I will never understand why so many people insist on going out of their way to build k8s (and other software) clusters out of Raspberry Pis.

    It would be cheaper, more powerful, and closer to an actual deployment to buy an old x86 server with plenty of cores and RAM and virtualize the cluster to learn the software and the architecture – or do something similar to this article, for that matter. Pis just seem so ridiculously unsuited to the task. Heck, for the $35 each Pi costs you can get 7+ months of VPS time on AWS, DigitalOcean, or any of the other billion hosts if you’re short on space.

    And since the Pis aren’t x86, you can’t adequately test stuff like PXE boot for provisioning like you would in a “real” cluster – but VMs are more than capable of that task as well.

    I somewhat get the physicality of it, but only barely. Setting up the Pis is basically: image an SD card, add a USB power cable and Ethernet back to a switch. I feel like there’s very little to learn there, and nothing that wouldn’t be matched by plugging an old server into the same switch as your laptop or desktop and then installing an OS on it.
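The $35-vs-VPS comparison above gets slightly less lopsided once the Pi’s own electricity is counted; a minimal sketch, with the VPS price, the Pi’s wall draw, and the electricity rate all assumed for illustration:

```python
# When does a $35 Pi pay for itself vs renting a small VPS?
pi_price = 35.0      # USD, board only (PSU and SD card not counted)
vps_monthly = 5.0    # USD/month, assumed small cloud instance
pi_watts = 5.0       # assumed average wall draw of a Pi 4
usd_per_kwh = 0.15   # assumed electricity rate

# Pi running cost per 30-day month, then break-even point.
pi_power_monthly = pi_watts * 24 * 30 / 1000 * usd_per_kwh
break_even_months = pi_price / (vps_monthly - pi_power_monthly)

print(f"Pi running cost: ${pi_power_monthly:.2f}/month")
print(f"Pi pays for itself after ~{break_even_months:.1f} months")
```

Under these assumptions the Pi breaks even against the VPS in a little under eight months, which roughly matches the “7+ months” figure quoted above – ignoring the PSU, SD card, and switch port it also needs.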

    1. The Raspberry Pi 4 does indeed support network booting.

      8 gigs of RAM and four cores is plenty of horsepower for a compute node.

      For higher-performance applications you can hook up an M.2 drive via USB and boot from it.

      Have you really explored all of the boot options in the latest Raspberry Pi 4 ROM?
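For reference, the Pi 4’s boot order lives in the bootloader EEPROM rather than on the SD card, edited with `sudo rpi-eeprom-config --edit`. A minimal sketch of the relevant setting – the hex digits are read right to left, so this tries network first, then SD card, then restarts; verify the codes against the official bootloader documentation for your EEPROM release:

```
[all]
BOOT_ORDER=0xf12
```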

  3. Fully approve of this. I personally use Lenovo X220/X230 motherboards around the house as my computing nodes (NAS, TV movie players, cameras).
    In related news: “Cryptocurrency Miners Gobble Up NVIDIA’s GeForce RTX 30 Laptops, Set up Massive Ethereum GPU Mining Farm in China” https://wccftech.com/gpu-miners-now-rushing-after-nvidia-geforce-rtx-30-laptops-ethereum-cryptocurrency-mining-farm/

    tldr: up to $10 per day mining with a RTX 3060 laptop :(

    1. Don’t forget to make good use of VAAPI on these ;) With GPU hardware acceleration for media decoding/encoding/transcoding, especially for h264, it’s rather fast, even with this old Intel GPU.
      I do my timelapse video encodes on an X230 motherboard: each month I encode a batch of over 10,000 JPGs into a 60 fps h264 MP4.
      But for my 24/7 media streamer (Jellyfin + TVheadend + Squeezebox server) I prefer to use an Intel Compute Stick with an Atom x5-Z8330 CPU. Outdated technology, a limited but quad-core CPU, very limited RAM at only 2 GB non-upgradable, but VAAPI capable and with only a 6 W TDP (compared to the 45 W TDP of the X230). Add 1 GB of swap in zram and a USB3 Ethernet adapter, and it’s good enough for a little server running some Docker containers that are frugal on RAM.
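The JPG-sequence-to-h264 encode described above can be scripted around ffmpeg’s VAAPI path. A hedged sketch – the render-node path, file pattern, and output name are assumptions, and it requires an ffmpeg build with VAAPI support on an Intel GPU:

```python
# Build (but don't run) an ffmpeg VAAPI encode command for a timelapse.
import subprocess  # needed only if you uncomment the run() call below

cmd = [
    "ffmpeg",
    "-vaapi_device", "/dev/dri/renderD128",  # assumed Intel iGPU render node
    "-framerate", "60",                      # 60 fps timelapse
    "-i", "frames/%05d.jpg",                 # assumed numbered jpg sequence
    "-vf", "format=nv12,hwupload",           # convert and upload frames to GPU
    "-c:v", "h264_vaapi",                    # hardware h264 encoder
    "timelapse.mp4",
]
print(" ".join(cmd))                # dry run: show the command
# subprocess.run(cmd, check=True)  # uncomment to actually encode
```

The `format=nv12,hwupload` filter step is what moves decoded frames into GPU memory so the `h264_vaapi` encoder can consume them.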

  4. Scavenging laptop parts for a high-intensity cluster seems on its face a flop. (A mega-flop.)
    Most if not all laptops are de-tuned due to heat/current draw. (One and the same.)
    With the surplus of salvage workstations here in Los Angeles, it’s difficult to imagine putting time and effort into laptop clusters. Hobby? Sure. My NAS is an HP Z800 with dual 6-core Xeons and 64 GB of RAM.
    Never cries at all, no matter how much I abuse it.

  5. I did something similar in 2019 for my 2nd generation HomeLab.

    I happened to be on Craigslist looking for broken laptops to salvage/repair.
    (I repair them and give them away, usually, or sell them cheap to finance more buying/repairing.)

    As usual, there were more than a few laptops that could be fixed. But in this case five of them were bad candidates for giving away after they were fixed. So I stripped them down and decided they would be perfect for learning how to properly do a Proxmox high-availability cluster.

    $200 for an eBay lot of SODIMMs, a lot of 64 GiB mSATA drives, and a 4U case was enough to get it up and running.
