3D Printer Upcycles Computer Case To DAS

Storage technologies are a bit of an alphabet soup, with NAS, SAN, and DAS systems on offer. That’s Network Attached Storage, Storage Area Network, and Direct Attached Storage. The DAS is the simplest: just physical drives attached to a machine, usually in a separate box custom-made for the purpose. That physical box can be expensive, particularly if, like [Nicholas Sherlock], you live on an island where shipping costs can be prohibitive. So what does a resourceful hacker do, particularly one who has a 3D printer? Naturally, he designs a conversion kit and turns an available computer case into a DAS.

There’s some clever work here, starting with the baseplate that reuses the ATX screw pattern. Bolted to that plate are up to four drive racks, each holding up to four drives, so all told you can squeeze 16 drives into a handy case. The next clever bit is the Voronoi pattern, an organic structure that maximizes airflow and structural strength with minimal filament. A pair of 140 mm fans held the drives at a steady 32 °C in testing, but that’s warm enough that ABS is the way to go for the build. Keep in mind that the use of a computer case also provides a handy place to put the power supply, which uses the pin-short trick (jumpering PS_ON to ground) to provide power without a motherboard.
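
If you want to play with the Voronoi idea for a grille of your own, it is easy to rough out before heading into CAD. Below is a minimal Python sketch, assuming NumPy and SciPy are on hand; this is not how [Nicholas] generated his parts, just an illustration. It scatters seed points across a 140 mm fan opening and keeps the closed cells that would become the openings.

```python
# A rough 2D Voronoi grille generator, purely illustrative (not the kit's actual source).
# Assumes numpy and scipy are installed.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(42)

# Scatter seed points across a 140 mm x 140 mm fan opening.
points = rng.uniform(0.0, 140.0, size=(60, 2))
vor = Voronoi(points)

# Keep only the closed (finite) cells; the polygons become the open areas
# of the grille and the gaps between them become the structural ribs.
cells = []
for region_index in vor.point_region:
    region = vor.regions[region_index]
    if region and -1 not in region:            # -1 marks a vertex at infinity
        cells.append(vor.vertices[region])     # polygon vertices in mm

print(f"{len(cells)} closed cells from {len(points)} seed points")
print(np.round(cells[0], 1))                   # one example polygon
```

From there you would typically inset each polygon a millimetre or two to form the ribs and extrude the result to panel thickness in whatever CAD tool you prefer.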

Data is handled with four-to-one SATA-to-SAS breakout cables, internal-to-external SAS converters, and an external SAS cable to the host PC. Of course, you’ll need a SAS card in your host PC to handle the connections. Thankfully you can pick those up on eBay for $20 USD and up.

If this looks good, maybe check out some other takes on this concept!

29 thoughts on “3D Printer Upcycles Computer Case To DAS”

      1. It varies.

        It used to be very common for hard drives to be designed to be mounted a certain way. Usually with those drives it was OK to mount them sideways; it had to be, since people would often set their computers on their sides anyway to sort of convert between tower and desktop as space allowed. Put them upside down, however, and the bearings inside would wear out much more quickly than if they were positioned the right way up.

        I remember a lot of cheap computers coming with the hard drives mounted upside down, and if left that way they would fail in a year or two. Often this was right after the PC’s warranty was up, making me think this was no accident. I used to go through a bit of effort drilling new holes and cutting pieces out with a Dremel to get the brackets inside those crappy boxes to accept a hard drive the right way when my relatives fell victim to this.

        At some point manufacturers started building more and more of them to work in any orientation. I don’t know if they are all upside-down safe now or just most of them. And of course anything solid state should not care which way is down.

        Just to be safe… I wouldn’t design a bracket to hold them this way.

        1. Since the heads address the platters from both sides, just as the coil sits between a symmetrical permanent magnet, and the spindle is fixed from both sides, I can’t see much asymmetry, or any reason the upside-down position would be worse.

    1. In the lower left rack of the case, up to 3 PCI brackets are interleaved in the gaps between the disks for the entrance of the SAS cables, and this requirement fixes the position of the disks relative to the PCI slots.

      Disks are vertically asymmetrical; their mounting screws are much closer to their base than their top. So in order to flip the drives the right way up while keeping them in the same vertical spot, their mounting rails need to shift downwards in the case by 13.40mm (see the quick check below). But that pushes the rail beyond the bottom of the motherboard’s outline, and it ends up requiring the drive rack to extend about 5mm below that outline.

      This breaks compatibility with ATX-spec cases in general and my case in particular, since there just isn’t that much room below the motherboard tray.
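
For anyone wondering where the 13.40 mm above comes from: assuming the usual 3.5-inch drive dimensions (roughly 26.1 mm tall, with the side mounting holes 6.35 mm up from the bottom face), flipping the drive moves the hole line by the height minus twice that offset. A quick check:

```python
# Quick check of the 13.40 mm figure, assuming typical 3.5" drive geometry
# (values from the common form-factor spec, not from this specific build).
drive_height = 26.1   # mm, nominal height of a 3.5" hard drive
hole_offset = 6.35    # mm, side mounting holes above the bottom face

# Flipping the drive while keeping it at the same height moves the hole line
# from 6.35 mm above the bottom to 6.35 mm below the top of the drive:
shift = drive_height - 2 * hole_offset
print(f"rail shift: {shift:.2f} mm")  # -> rail shift: 13.40 mm
```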

    1. Hmm… I’m guessing you don’t see the use case for this? Or a NAS (or similar) in general?

      Not trying to be rude, but come on, the guy has a bunch of drives he wants access to. Who cares if some of them are ‘old’? These are the drives he has and wants to read; why’s that bad?

      Besides that point, IDE / platter drives are still in use everywhere around you even if you don’t realize it; not everything needs to be on a ~6 GB/s M.2 drive, ya know. Do you think the computer in the office of your local supermarket is an Alienware gaming PC or a 20-year-old yellowed ‘old reliable’ monster?

      If you don’t understand the reason for something, it’s better to ask why instead of being rude about it.

      1. Umm, no, the point is there is no IDE interface provided in the build, just SATA.
        I actually read the project page.

        I have no disregard for IDE; I still use a lot of 1 and 2 TB IDE drives for storage, and a lot of older ones that are kept in boxes for archival purposes. You never know when you might need some old drivers for oddball hardware that can be operated in legacy mode or revisioned for a newer OS.

    2. Yup, I only bought 5 drives to put into the DAS to start with, so for physical load testing (and for the pics) I filled up the rest of the slots with old disks I dredged out of the basement, lol

  1. You might want to use drives rated for use in an enclosure with other drives. The vibrations will cause errors. Drives will have a rating for 6, 8, 10, or unlimited drives per enclosure.

    This is neat for adding 4 drives to the cheap aquarium boutique cases that aren’t set up for hard drives, yet have the room.

    Just be aware of your disk’s multi-drive rating.

    1. I’ve been doing datacenter-adjacent jobs for a couple of decades, and this is the first time I’ve heard about multi-drive rating. In fact, this post is one of only two that come up when searching Google for “multi drive rating” (except for sales pages for some single drive external enclosure).

      Are you sure this isn’t something relevant only to ’70s and ’80s era 8″+ Winchester drives?

      1. This is something that still needs to be taken into consideration. Bryan Cantrill has a video of Brendan Gregg yelling at a server in a data center, and they show the latency spikes that occur from the extra vibrations it causes.

        https://www.youtube.com/watch?v=tDacjrSCeq4

        I think Western Digital has ratings for how many drives can go in an enclosure. Something like up to 8 for WD Red drives. When you get to SAS drives, they are usually rated for more. I’ve seen some listed as unlimited.

      2. Seagate differentiates their IronWolf and IronWolf Pro drives by how many bays in a chassis they are rated for (among a few other things). I do not know for sure if it makes a big difference, but Seagate cares enough to put it in the datasheets. Pro drives are generally rated for 24+ bay applications and non-Pro for eight or fewer. I think other vendors have similar specifications for their NAS-focused lines of drives.

      3. The first time I found that rating was when WD introduced the differentiation between WD Red and Red Pro, suggesting the first ones for home and SOHO enclosures of up to eight drives and the other ones for bigger cases. When Seagate introduced the IronWolf series they made the same differentiation between non-Pro and Pro.
        It could be argued that the vibration transfer/damping can have different profiles in a professional enclosure with steel caddies, vs amateur enclosure with plastic caddies, vs Cooler Master (or any other) system for screw-less disk mounting, vs fixing the drives with screws to a case with or without silicone grommets and/or fiber washers.
        Each of these options gives a slightly different sound profile, so I would imagine that there is also a different spectrum of vibrations.

    2. New feature!
      Piezo pickups mounted to each drive with noise cancelling circuit output to transducer.

      Oxygen free wires with gold plated, magnetic crystal shielded terminals recommended.

  2. i built a box years ago to hold all my HDDs. i guess it was “NAS”. i had about half a dozen drives between 40GB and 300GB and i thought it would be handy to access them all — all together it was a TB! my physical construction was much less impressive than this one, i just set them on dowel rods with rubber bands to keep them from moving too much.

    in the short term, it wasn’t very useful because the MB i picked for it had a bizarrely low bandwidth between its northbridge and southbridge, much less than the original SATA of the era. it couldn’t even send 100Mbit over its ethernet!

    but very quickly it came to pass that I could buy a 1TB drive for $100-$200 and i lost interest. and that’s why i’m surprised at this project…if i really needed more than like 8TB i would probably expect to spend so much money that buying a nice case wouldn’t seem such a big deal. and if i was just doing it cheap, golly gee 8TB is a lot of memory.

    1. In my case the math worked out a little bit like this. Start with a PC with 5 drive bays and fill it full of the best $/TB disks, say 1TB each. Then when you run out of room, replace all the disks with new ones twice their size. Repeat 3 times.

      Now you’ve got 40TB of disks in your computer, but you’ve got 5+10+20=35TB sitting uselessly on your shelf. If only you had more drive bays you could use those disks to nearly double your available capacity.
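
Spelling that arithmetic out, with purely illustrative numbers matching the five-bay example above:

```python
# Illustrative numbers only, matching the five-bay example above.
bays = 5
size_tb = 1              # capacity per disk in the first generation, in TB
installed = bays * size_tb
shelved = 0

for _ in range(3):       # "repeat 3 times"
    shelved += installed # the old set of disks goes on the shelf
    size_tb *= 2         # the replacements are twice the size
    installed = bays * size_tb

print(f"{installed} TB installed, {shelved} TB on the shelf")
# -> 40 TB installed, 35 TB on the shelf (5 + 10 + 20)
```

Hence the appeal of extra bays: the shelf pile ends up nearly as big as what is actually spinning.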

    2. Really doesn’t go that far if you do anything vaguely HDD-consuming, though for many people 32 GB is likely excessive – note I am not including the huge bloat that is Windoze in that allocation, just their personal local data needs…

      For instance, even off my very, very obsolete DSLR the raw photos are over 1 MB and a short video at its off-the-camera compression level is a few hundred MB; now scale that up for a modern camera with something over 4x the pixel count that is able to record 4K, maybe 8K, and at higher frame rates…

      Compiling operating system images for your phone or modding games can easily take 300 GB+ per project, even when the compressed finished result is maybe as low as 8 GB. And these days far too many games are over 100 GB as shipped…

    3. And if you need 80-100 TB of redundant storage, what do you do? There are cases where you need huge storage, and 8 TB doesn’t fit that. The disks in my storage are 14 TB each… and the next available 16 TB was double the price. Which would you prefer – a single 16 TB or 2×14 TB?
      And when you’ve got low-latency requirements (running virtual machines on them), even RAID 5/6 doesn’t play very well because of the time required to calculate and write the parities. The time required for the host to do the math is just a fraction of the problem; having to write the metadata, and having to perform another seek to get there to write it, is an absolute performance killer. So as always it is price-performance-reliability, pick as much as your (boss’s) pocket can handle :) . Ending up with RAID 10 effectively doubles the disks needed and more than doubles the cost… but you’ve got all of the above.
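
For anyone unfamiliar with the parity overhead being described here, it is the classic RAID 5 small-write penalty: updating a single block means reading the old data and the old parity, XOR-ing the change into the parity, and writing both back, which costs extra operations (and on spinning disks, extra seeks) for every small write. A toy sketch, purely for illustration:

```python
# Toy sketch of a RAID 5 small write (read-modify-write), purely illustrative.
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# One stripe: three data blocks plus their parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(xor_blocks(data[0], data[1]), data[2])

# Overwriting one block means reading the old data and old parity,
# folding the change into the parity, then writing both back:
new_block = b"ZZZZ"
parity = xor_blocks(xor_blocks(parity, data[1]), new_block)
data[1] = new_block

# Sanity check: recomputing parity from scratch gives the same answer.
assert parity == xor_blocks(xor_blocks(data[0], data[1]), data[2])
print("parity consistent after small write")
```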

    4. One of the problems is that the cases are not really available. What I mean is, sure, you can buy a rack case for 12, 16, 24, or 30 drives. It can cost a fair amount even used, but that’s not the problem either. The problem is that those cases are 19″ wide and as deep as needed to put disks in front and the motherboard behind, with noisy fans in between. Datacenter equipment is noisy because noise usually is not the problem.
      For other cases, consumer cases, you usually cannot find a case capable of holding more than ten drives. So, this is at least a nice idea.

  3. If I was going to mount my drives in a plastic cage I would at least run a ground ribbon across the screws of each of them. Give the static electricity and RFI somewhere else to go.
