10-drive Microserver Is The Clown Car Of The Computer Case World

[Image: ten-drive mini-ITX build]

[Coke Effekt] wanted to push his server’s storage to a new level by combining ten 3 TB drives. But he wasn’t interested in moving to a larger case to accommodate the extra hardware. It took only a bit of hacking to fit all of that storage into a mini-ITX case.

His first step was to make a digital model of his custom drive mount. It uses two 3D printed cages, each holding five drives mounted vertically. To keep things cool, the two cages are bolted to a 140mm fan. The connections to the motherboard also present some issues. He uses a two-port SATA card which plays nicely with port multipliers; those multiplier boards can be seen at the bottom of the image above, mounted on another 3D printed bracket. Each board breaks one of the SATA ports out into five connections for the drives.
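With ten drives packed around a single 140mm fan, temperatures are worth keeping an eye on. Below is a minimal monitoring sketch (not part of [Coke Effekt]’s build), assuming smartmontools is installed and the drives enumerate as /dev/sda through /dev/sdj; the device names and the 45 C alarm threshold are placeholder assumptions:

```python
import subprocess

DRIVES = [f"/dev/sd{c}" for c in "abcdefghij"]  # ten drives, hypothetical names

def drive_temp_c(dev: str) -> int | None:
    """Read SMART attribute 194 (Temperature_Celsius) from `smartctl -A`."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        # Attribute-table rows start with the attribute ID; the tenth
        # column is the raw value, i.e. the temperature in degrees C.
        if fields and fields[0] == "194" and len(fields) >= 10:
            return int(fields[9])
    return None

for dev in DRIVES:
    temp = drive_temp_c(dev)
    if temp is None:
        print(f"{dev}: no temperature attribute reported")
    elif temp > 45:
        print(f"{dev}: {temp} C  <- running hot, check airflow")
    else:
        print(f"{dev}: {temp} C")
```

Run something like this from cron every few minutes and you will notice a cooling problem long before a drive does.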

[Thanks Pat]

55 thoughts on “10-drive Microserver Is The Clown Car Of The Computer Case World”

  1. Why not buy a server case? I bet the $90 for an entry-level server case (which will have room for 10 drives and far superior cooling) is cheaper than his solution.

    1. Because he didn’t want to transition to a bigger case, as stated in the second sentence of the article; he wanted to stick with his mini-ITX case (which also looks MUCH better than any entry-level “server case”). It also saves him $90 and the annoyance of moving everything over to a new case, not to mention that a “server case” would need a rack to be mounted in. So how is that cheaper than his solution? This didn’t cost him a penny, as he already had all the materials needed…

      1. Except that here he had to pay for port multipliers, which are like $50 each, plus plastic for the 3D printer, lots of time, etc. And IMO one 140mm fan isn’t adequate for 10 drives at full load. Ideally, you’d also have staggered drive spin-up and hot-swap.

        Either way, I’m pretty happy with my Norco RPC-4020 and its 20 drive bays (I’ve improved the cooling, though).

          1. A lot of mid-tier server cases come with their own multipliers built in.
            The entire drive cage has a PCB mounted on it that multiplexes one SATA connection into 4 or more drives, instead of you having to order those separately.

    1. Did you even bother reading the original thread linked in the article? The case has perforations at the bottom, the HDD cage is on standoffs, and there is a 140mm fan sandwiched between the two cages blowing upwards… So cooling will be more than adequate, and probably better than what you get out of most cases out there.

    1. Personally, I thought this site was about using ingenuity to improve things, not defending something that has been thrown together without much thought, ignoring a lot of common problems with this sort of device that others have already solved and shared.

  2. I know this is a “because I can” hack, but what would you need it for?
    This would be ill-advised as a professional storage solution and seems overkill for personal use.
    Storing uncompressed Blu-ray movies? Even then it feels like hoarding (1,200 movies).

    1. Media editing, simulations, and probably many more applications I don’t know about.

      But as soon as you need to store masters as well as their component parts (i.e. every single recording or sound in its original form, and often in edited form as well), then it does begin to creep upwards. I know that 5 TB is about the least I can make do with at the moment (also what I have, counting internal, external, and my server).
      Over time I can easily imagine 30 TB being a suitable size.

      1. Which would be painfully slow across a port multiplier. This thing looks like it’s designed for bulk storage of infrequently accessed files, not for big project files or applications like video editing.
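Back-of-envelope math on that bottleneck: all five drives behind one multiplier share a single host link. The 3 Gb/s SATA II link rate below is an assumption (the article doesn’t say which generation the card negotiates):

```python
# Five drives share one host link through a SATA port multiplier.
SATA2_LINK_BPS = 3e9     # raw line rate, SATA II (assumed)
ENCODING = 0.8           # 8b/10b encoding leaves 80% of the line rate for data
DRIVES_PER_MULTIPLIER = 5

shared_mb_s = SATA2_LINK_BPS * ENCODING / 8 / 1e6
per_drive_mb_s = shared_mb_s / DRIVES_PER_MULTIPLIER
gigabit_nic_mb_s = 1e9 / 8 / 1e6

print(f"link budget, shared by 5 drives:  {shared_mb_s:.0f} MB/s")      # ~300
print(f"per drive, all streaming at once: {per_drive_mb_s:.0f} MB/s")   # ~60
print(f"gigabit Ethernet ceiling:         {gigabit_nic_mb_s:.0f} MB/s") # ~125
```

So ~60 MB/s per drive would indeed hurt as local scratch space for editing, while the shared ~300 MB/s still more than saturates a gigabit network share.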

  3. I’ve thought about doing this myself because of the compact size. I currently have a similar setup in an ugly ATX case hiding behind my TV. XBMCbuntu + Transmission + Samba + dedicated internet connection + Rii Touch = just awesome.

  4. This is a lesser performing and more costly setup than leaving the case exactly as is. The stock case holds 6×3.5″ drives, and then you could use the 5.25″ bay for 4×2.5″ drives. Still has room for an SSD OS drive underneath the 5.25″ bay. Right there, you have 10 SATA drives.

    If you want two more 3.5″ drives, there is room at the bottom of the case for them, leaving you free to put an 8-port SAS HBA into the PCI-e slot. With 8×3.5″, 4×2.5″, and 1×2.5″ SSD, you will easily have more storage than this setup. Just run Ubuntu & ZFS – you get massive performance too.

    I’m running this exact setup in this exact case. Running SABnzbd, SickBeard, CouchPotato, Plex, and standard file server stuff. I have ~36TB of raw storage, ~27TB after RAIDz configuration.
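For anyone checking those numbers: the raw-to-usable ratio depends on the vdev layout, which the commenter doesn’t state, but common raidz layouts land right on ~27 TB from 36 TB raw. A sketch of the arithmetic (the 12×3 TB configuration and both layouts are illustrative guesses):

```python
# Usable capacity of a ZFS pool built from identical raidz vdevs.
# Ignores metadata and slop-space overhead, which costs a few percent more.

def usable_tb(drives_per_vdev: int, parity: int, vdevs: int, drive_tb: float) -> float:
    # Each raidz1/2/3 vdev gives up 1/2/3 drives' worth of space to parity.
    return (drives_per_vdev - parity) * vdevs * drive_tb

print(f"raw: {12 * 3.0:.0f} TB")                                  # 36 TB
print(f"1 x 12-drive raidz3: {usable_tb(12, 3, 1, 3.0):.0f} TB")  # 27 TB
print(f"3 x 4-drive raidz1:  {usable_tb(4, 1, 3, 3.0):.0f} TB")   # 27 TB
```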

  5. Don’t know if this has already been addressed, but by using 3D printed parts he is pulling the controller boards and the drives out of the ground plane. At the very least he needs a shield running between all the drives, the controllers, and a ground point on the motherboard. Static could end up being a real issue here, especially when you consider that close to 80% of the hard drive failures that people attribute to mechanical failure are actually firmware corruption on the controller board.

    1. “close to 80% of the hard drive failures that people attribute to mechanical failure are actually firmware corruption on the controller board.”

      Citation needed, please.

      1. Agreed. I deal with faulty drives every day, and I would say the breakdown is more like 50% mechanical failure, 45% failure of components on the controller, and maybe 5% firmware. Those firmware failures are usually on a specific series of drives known to have issues.

    2. 3 of the 7 SATA pins are ground. Quick multimeter check shows that this ground is connected to the case of the drive.

      Somehow I don’t think earthing is much of a problem. Hell, if anything he’s avoided ground loops.

  6. Besides the overheating problem (you need air space between the drives for the fans to provide any kind of cooling), performance will suck due to the port multipliers.

    Just because you can cram a bunch of equipment into a certain-sized space doesn’t make it a good system design.

    1. You’re assuming that a) you need performance, and b) you need all drives running at once.

      I have a similar setup here. All the drives are off right now except for 2, they’ll spin up on demand, and somehow I don’t think the port multiplier will be a problem over even a gigabit network.

  7. What a genius idea to use a board without ECC memory to store shitloads of data. Any dead bit in those GBs of RAM can and will eventually cause the filesystem to crap out, since wrong data gets written if a memory error is left uncorrected. Please use any cheap ECC-capable mainboard for those tasks, even if it is an old Pentium III server board (there are lots of them available for cheap).

    Except maybe for computers used to calculate critical stuff like FEM structural analysis, where an error causes e.g. a bridge to fail, I can’t think of any application where ECC is more important than storing (and thereby writing) lots of data.

    It happened to me twice that a bad RAM stick caused a failure on a file system, both times on Linux. One was my laptop (Thinkpad X32), in which the memory stick had several large broken blocks, probably caused by a fried memory chip. This crashed the computer after a few minutes and completely devastated the file system when I attempted to repair it.

    The other time it happened was on an old, always-running torrent box in my apartment. I left it running for months, and one day when I checked on it, I found it had crashed and the file system was corrupted. Since I had learned from the last time, I checked the memory using Memtest86 and found a few broken bits. After I swapped the stick, I was able to repair the file system. That computer recently got replaced by a really old DL380 G2 with six 1.5 TB SATA disks hacked into it, running RAID 6.

    1. Exactly! If you are considering building something like this yourself, first do a quick Google search on bit error rates for memory and hard drives (there’s a back-of-envelope sketch after this comment). Once you get into multiple TB of storage, you absolutely NEED ECC memory –and– ideally full file system redundancy + checksumming (e.g. ZFS with raidz). Anything less is just a ticking time bomb.

      Furthermore, using a custom PHP solution like this for redundancy is ill-advised. Your script will just happily keep making copies of bad data when errors occur, and over time these errors will accumulate. The only way to avoid this is by regularly checksumming all filesystem data and metadata all the way up the tree, and using parity / backups to repair bad data when it is found. You cannot do this with a userland PHP script.

      Finally, if you want to hack together something on the cheap, look into the SGI SAS expanders being sold on eBay. They have been going for $150-$200 used for the past year or two, are rackmountable, and have 16 SATA bays. I did a little fan mod on mine last year, and have been very happy with them!
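The back-of-envelope sketch mentioned above: consumer drives are typically spec’d at one unrecoverable read error (URE) per 1e14 bits, so just reading an array this size end to end is statistically expected to hit a few bad sectors. The figures below are datasheet assumptions, not measurements:

```python
URE_PER_BIT = 1e-14   # typical consumer HDD datasheet spec (assumed)
ARRAY_TB = 30         # ten 3 TB drives

bits_read = ARRAY_TB * 1e12 * 8
expected_errors = bits_read * URE_PER_BIT
print(f"expected UREs per full-array read: {expected_errors:.1f}")  # ~2.4
```

A couple of unrecoverable sectors every time the whole pool is read (or rebuilt!) is exactly why end-to-end checksumming plus parity and regular scrubs matter at this scale.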

  8. Another area not mentioned anywhere is what kind of PSU setup he is planning on using. I would hope something pretty beefy and not a 300-watt Besttech with a ton of Y-splitters.

    1. Nah, hard drives don’t use that much power (rough numbers after this comment). Dunno if there’s enough brains in the system somewhere to stagger their startup, but even then, it’s what, 10 watts or so per drive?

      I do worry about the so-called cooling, though; jamming a fan into such an airless cavity doesn’t really seem like enough. Little pockets of still air can form, and even then a fan’s not gonna do its best work squeezed and blocked off like that.

      It’s his case and he can do what he likes, but I’d give up a few more cubic inches of my habitat for the sake of keeping the drives cool and happy. If I needed 10 HDDs. Which I don’t.
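Rough numbers behind the power claim above, using typical 3.5″ drive datasheet figures rather than measurements from this build:

```python
ACTIVE_W = 8.0    # sustained read/write, typical 3.5" drive (assumed)
SPINUP_W = 25.0   # roughly 2 A on the 12 V rail for a few seconds (assumed)
DRIVES = 10

print(f"all drives active:              {DRIVES * ACTIVE_W:.0f} W")  # ~80 W
print(f"all drives spinning up at once: {DRIVES * SPINUP_W:.0f} W")  # ~250 W
```

So ~80 W steady state backs the commenter up, but a simultaneous spin-up burst of ~250 W is why staggered spin-up (or a PSU with a generous 12 V rail) is worth having.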

      1. Why use an SSD for the operating system? It’s read once into RAM and written never, so the performance boost you get is 40 seconds at startup, and then it’s just doing nothing. What a waste of money, really.

        1. Nah, all sorts of bits of the OS are being read all the time. It doesn’t load every library in one piece at boot time; bits are paged in and out all the time as apps demand them. That, and the registry, data files, etc. need writing now and then.

  9. I once needed some extra storage in my computer. I only had a single 40GB drive at the time, and the case was crammed to the brim with everything it already had. So I held a second drive against the side of the case with brown tape. It worked like that for a good two years until I replaced the system.
