Most readers of Hackaday will be well aware of the current shortages of semiconductors and especially GPUs. Whether you’re planning to build a state-of-the-art gaming PC, a mining rig to convert your kilowatt-hours into cryptocoins, or are simply experimenting with machine-learning AI, you should be prepared to shell out quite a bit more money for a proper GPU than in the good old days.
Bargains are still to be had in the second-hand market though. [Devon Bray] chanced upon a pair of Nvidia Tesla K80 cards, which are not suitable for gaming and no longer cost-effective for mining crypto, but ideal for [Devon]’s machine-learning calculations. However, he had to make a modification to enable proper thermal management, as these cards were not designed to be used in regular desktop PCs.
The reason for this is that many professional-grade GPU accelerators are installed in rack-mounted server cases, and are therefore equipped with heat sinks but no fans: the case is meant to provide a forced air flow to carry away the card’s heat. Simply installing the cards into a desktop PC case would cause them to overheat, as passive cooling will not get rid of the 300 W that each card pumps out at full load.
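A quick back-of-the-envelope calculation shows why passive cooling alone won’t cut it. If we assume (our numbers, not the article’s) that the air is allowed to warm by about 20 °C as it passes over the heat sink, removing 300 W requires a surprisingly substantial airflow:

```python
# Rough airflow estimate for removing ~300 W of GPU heat with air cooling.
# Assumptions (not from the article): a 20 degC allowable air temperature
# rise, and standard room-temperature air properties.
P = 300.0        # W, heat dissipated by one K80 at full load
dT = 20.0        # K, assumed allowable air temperature rise
cp = 1005.0      # J/(kg*K), specific heat of air
rho = 1.2        # kg/m^3, air density at room temperature

mass_flow = P / (cp * dT)          # kg/s of air needed
volume_flow = mass_flow / rho      # m^3/s
cfm = volume_flow * 2118.88        # convert to cubic feet per minute

print(f"{cfm:.0f} CFM per card")   # roughly 26 CFM per card
```

That is well within reach of a few small fans ducted tightly onto the heat sink, but far more than natural convection through an open desktop case will ever provide.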
[Devon] decided to make a proper thermal solution by 3D printing a mount that carries three fans along with an air duct that snaps onto the GPU card. In order to prevent unnecessary fan noise, he added a thermal control system consisting of a Raspberry Pi Pico, a handful of MOSFETs, and a thermistor to sense the GPU’s temperature, so the fans are only driven when the card is getting hot. The Pi Pico is of course way more powerful than needed for such a simple task, but allowed [Devon] to program it in MicroPython, using more advanced programming techniques than would be possible on, say, an Arduino.
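The control logic for such a setup is simple enough to sketch. The snippet below shows the kind of thermistor-to-temperature conversion and hysteresis loop a Pi Pico could run in MicroPython; the Beta coefficient, resistor values, and temperature thresholds are illustrative assumptions, not taken from [Devon]’s actual firmware:

```python
import math

# Illustrative fan-control logic of the sort that could run on the Pico.
# All component values and thresholds below are assumptions for the sketch.
BETA = 3950.0        # K, thermistor Beta coefficient (assumed)
R0 = 10_000.0        # ohms, thermistor resistance at 25 degC (assumed)
T0 = 298.15          # K, reference temperature (25 degC)
R_FIXED = 10_000.0   # ohms, fixed resistor in the voltage divider (assumed)
V_REF = 3.3          # V, Pico ADC reference voltage

def thermistor_celsius(v_adc):
    """Convert the voltage across the thermistor (bottom leg of a
    divider) into a temperature using the Beta equation."""
    r_therm = R_FIXED * v_adc / (V_REF - v_adc)
    t_kelvin = 1.0 / (1.0 / T0 + math.log(r_therm / R0) / BETA)
    return t_kelvin - 273.15

def fan_duty(temp_c, fan_on):
    """Hysteresis control: fans switch on above 60 degC, off below
    50 degC, ramping linearly to full speed at 80 degC (all assumed).
    Returns (PWM duty cycle 0..1, new fan state)."""
    if fan_on and temp_c < 50.0:
        return 0.0, False
    if not fan_on and temp_c < 60.0:
        return 0.0, False
    duty = min(1.0, max(0.3, (temp_c - 50.0) / 30.0))
    return duty, True

# On the Pico itself, v_adc would come from machine.ADC and the duty
# cycle would feed machine.PWM driving the MOSFET gates; here we just
# exercise the logic with a divider voltage corresponding to 25 degC.
duty, on = fan_duty(thermistor_celsius(1.65), fan_on=False)
```

The hysteresis band keeps the fans from chattering on and off around a single threshold, which is exactly the kind of refinement that MicroPython makes painless to iterate on.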
We love the elegant design of the fan duct, which enables two of these huge cards to fit onto a motherboard side-by-side. We’ve seen people working on the opposite problem of fitting large fans into small cases, as well as designs that discard the whole idea of using fans for cooling.
Considering he’s stacking 2 cards side by side, instead of each card having an individual shroud with 40 mm fans, he could probably have pushed more air by creating a single intake shroud with 2 (potentially 3?) 80 mm fans, with 2 exhaust ducts (1 for each card, obviously). It would also probably be quite a bit less noisy. Also, fewer MOSFETs = less heat, I suppose.
Or use blowers instead of fans, and get ones with opposing air intakes (so each one draws air from opposing sides).
Heatpipes would also be a relatively straightforward option: not too hard to cruft them on.
The moment you use “blower” and “noise control” in the same statement, you get laughed out of the room.
Your comment blows. It is a type of fan. Not a noise measure.
Have you heard these server fans? They run ridiculously fast to get sufficient airflow for their size and scream as a result. Literally any larger fan will do a better job at a fraction of the noise, even if it’s a blower.
Agreed. Small fans are only really useful when you can’t fit a bigger fan. More diameter means much less noise for the same airflow.
Still a really cool project, but I’d love to see V2.
I have a number of now-unused K2 video cards at the office. Not only do they not have a fan, but they were sold with left-to-right OR right-to-left airflow, so each card had 2 part numbers and slightly different fin arrangements depending on the case. The older cards like this can be used for such purposes, but it’s my understanding that the newer cards all require a software license to use. So even if you get them free, you still have to fork over $$$ to Nvidia.
(from the Nvidia grid licensing guide)
“In GPU pass-through mode on Windows, or in a bare-metal deployment on Windows or Linux, a physical GPU requires a vWS license” (right now that’s around $250 per year, or $450 perpetual).
I am not sure if there is software-level enforcement in bare-metal installs like this or if it’s still not enforced; on my VMware system it’s totally enforced, and you can’t even download the needed drivers for the card without going to the licensing portal.
So FYI
Hey! I’ve been able to use these cards no problem on a Proxmox virtual host. I’ve passed whole cards into VMs and I’ve also passed one single logical GPU in at a time (there are two per card). I’ve been using regular NVIDIA drivers and TensorFlow to work with them, no need for specific licensing. Maybe the Tesla models are old enough that that isn’t a concern?
We have a tonne of passively cooled accelerators out there in the field. The key is ensuring they have enough static pressure to really stuff the air down their long but super-skinny heat sinks. Personally not a big fan (har) of that approach, as so few “GPU”-spec server chassis really do plenums and decent fans very well. Most of the air typically finds another way out of the chassis. If you can fit a decent blower and plenum into the space you have, it’s definitely the way to go. There’s a reason those 1U server fans are so damn loud :p
Hi,
I have a GTX 980 blower-fan GPU. How do I get something like that to help cool down my GPU?