10-drive microserver is the clown car of the computer case world

[Image: the ten-drive mini-ITX build]

[Coke Effekt] wanted to push his server’s storage capacity to the next level by combining ten 3 TB drives. But he wasn’t interested in moving to a larger case just to accommodate the extra hardware. It only took a bit of hacking to fit all that storage in a mini-ITX case.

His first step was to make a digital model of his custom drive mount. It uses two 3D-printed cages, each holding five drives mounted vertically. To keep things cool, the two cages are bolted to either side of a 140mm fan. The connections to the motherboard also present some issues. He uses a two-port SATA card that plays nicely with port multipliers. Those multiplier boards can be seen at the bottom of the image above, mounted on another 3D-printed bracket. Each one breaks out a single SATA port into five connections for the drives.
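
To get a feel for what the port multipliers trade away in exchange for all those connections, here’s a quick back-of-the-envelope sketch in Python. The host-link speed and per-drive throughput are our own assumptions, since the exact card and drives aren’t called out:

    # Rough numbers for ten 3 TB drives behind a two-port SATA card with
    # port multipliers. Link speed and per-drive rate are assumptions, not
    # specs pulled from the build log.
    drives = 10
    drive_tb = 3
    host_ports = 2
    link_mb_s = 300       # usable bandwidth of a SATA 3 Gb/s host port
    drive_mb_s = 150      # ballpark sequential rate of a 7200 RPM 3 TB drive

    drives_per_port = drives // host_ports
    shared_mb_s = link_mb_s / drives_per_port

    print(f"Raw capacity: {drives * drive_tb} TB")
    print(f"Drives behind each host port: {drives_per_port}")
    print(f"One drive streaming alone: {min(link_mb_s, drive_mb_s)} MB/s")
    print(f"All five on one port at once: {shared_mb_s:.0f} MB/s each")

Plenty for serving media over gigabit Ethernet, but worth keeping in mind before planning any heavy parallel workloads.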

[Thanks Pat]

Comments

  1. Isaac says:

    Let’s also not forget he wrote his own high-level redundancy solution (in PHP). One of the many awesome hacks on good ole OCAU.

  2. adam felson says:

    Why not buy a server case? I bet $90 for an entry-level server case (which will have room for 10 drives and far superior cooling) is cheaper than his solution.

    • m1ndtr1p says:

      Because he didn’t want to transition to a bigger case, as stated in the second sentence of the article; he wanted to stick with his mini-ITX case (which also looks MUCH better than any entry-level “server case”). It also saves him $90 and the annoyance of moving everything over to a new case, not to mention that a “server case” would need a rack to mount it in. So how is that cheaper than his solution? This didn’t cost him a penny, as he already had all the materials needed…

      • meh says:

        Except that here he had to pay for port multipliers which are like $50 each, then plastic for the 3D printer, lots of time, etc. And IMO one 140mm fan isn’t adequate for 10 drives at full load. Ideally, you’d also have staggered drive spinup and hotswap.

        Either way, I’m pretty happy with my Norco RPC-4020 and its 20 drive bays (I’ve improved the cooling, though).

        • Whatnot says:

          At full load? Is he running google on the thing?

        • Garbz says:

          So how does a case get around the lack of SATA ports?

          As for *gasp* investing time … please leave hack-a-day, you won’t find anything of value for you here.

          • eldorel says:

            A lot of mid-tier server cases come with their own multipliers built in.
            The entire drive cage has a PCB mounted on it that multiplexes one SATA connection into four or more drives, instead of you having to order those separately.

  3. Bob says:

    Not to mention performance will suffer due to vibration.

  4. steve says:

    This will overheat and cause drive failure.
    Hope he has a boatload of spare drives.

    • m1ndtr1p says:

      Did you even bother reading the original thread linked in the article? The case has perforations at the bottom, the HDD cage is on standoffs and there is a 140mm fan blowing upwards sandwiched between both cages… So cooling will be more than adequate, and probably better than what you get out of most cases out there.

  5. Anonuous says:

    Damn, what the hell would you need 30 TB for?

  6. lloyd says:

    I’d be worried about cooling

    • Garbz says:

      I wouldn’t. Drives don’t generate much heat at all. One 140mm fan will easily cool the entire bay providing every drive has some form of airflow across it.

  7. Brian Shafer says:

    I’m not sure if this site is about hacking.. or saving money. Please someone inform me.

  8. baobrien says:

    Real servers are also like clown cars.

  9. Bitflusher says:

    I know this is a “because I can” hack, but what would you need it for?
    This would be ill-advised as a professional storage solution and seems overkill for personal use.
    Storing uncompressed Blu-ray movies? Even then it feels like hoarding (1200 movies).

    • oodain says:

      Media editing, simulations, and probably many more applications I don’t know about.

      But as soon as you need to store masters as well as their component parts (i.e. every single recording or sound in its original form, often in edited form as well), then it does begin to creep upwards. I know that 5 TB is about the least I can make do with at the moment (also what I have, counting internal, external, and my server).
      Over time I can easily imagine 30 TB being a suitable size.

      • AKA the A says:

        Trouble is that all of the drives are connected through a single PCIe link, which will seriously cripple performance… so media editing might be an issue :P

    • snowluck says:

      The company I intern with often uses 10TB project files, so… there is that.

      • Garbz says:

        Which would be painfully slow across a port multiplier. This thing looks like it’s designed for bulk storage of stuff that’s accessed infrequently, not massive project files or applications like video editing.

  10. Christopher says:

    I’ve thought about doing this myself because of the compact size. I currently have a similar setup in an ugly ATX case hiding behind my TV. XBMCbuntu + Transmission + Samba + dedicated internet connection + Rii Touch = just awesome.

  11. raider says:

    This is a lower-performing and more costly setup than leaving the case exactly as is. The stock case holds 6×3.5″ drives, and then you could use the 5.25″ bay for 4×2.5″ drives. It still has room for an SSD OS drive underneath the 5.25″ bay. Right there, you have 10 SATA drives.

    If you want two more 3.5″ drives, there is room at the bottom of the case for them, leaving you to put an 8-port SAS HBA into the PCIe slot. With 8×3.5″, 4×2.5″, and 1×2.5″ SSD, you will easily have more storage than this setup. Just run Ubuntu & ZFS – you get massive performance too.

    I’m running this exact setup in this exact case, with SABnzbd, SickBeard, CouchPotato, Plex, and standard file server stuff. I have ~36TB of raw storage, ~27TB after RAIDz configuration.

  12. Mono.aov says:

    *yawn

  13. John C. Reid says:

    Don’t know if this has already been addressed, but by using 3D printed parts he is pulling the controller boards and the drives out of the ground plane. At the very least he needs a shield running between all the drives, the controllers, and a ground point on the motherboard. Static could end up being a real issue here, especially when you consider that close to 80% of most hard drive failures that people attribute to mechanical failures are actually firmware corruption on the controller board.

    • Eccentric Electron says:

      “close to 80% of most hard drive failures that people attribute to mechanical failures are actually firmware corruption on the controller board.”

      Citation needed, please.

      • promethius326 says:

        Agreed. I deal with faulty drives every day, and I would say the breakdown is more like 50% mechanical failure, 45% failure of components on the controller, and maybe 5% firmware. Those firmware failures are usually on a specific series of drives known to have issues.

    • Garbz says:

      3 of the 7 SATA pins are ground. A quick multimeter check shows that this ground is connected to the case of the drive.

      Somehow I don’t think earthing is much of a problem. Hell, if anything he’s avoided ground loops.

  14. vonskippy says:

    Besides the overheating problem (you need air space between the drives for the fans to provide any kind of cooling), performance will suck due to the port multipliers.

    Just because you can cram a bunch of equipment into a certain-sized space doesn’t make it a good system design.

    • Garbz says:

      You’re assuming that a) you need performance, and b) you need all drives running at once.

      I have a similar setup here. All the drives are off right now except for two; they’ll spin up on demand, and somehow I don’t think the port multiplier will be a problem even over a gigabit network.

  15. MadTux says:

    What a genius idea to use a board without ECC memory to store shitloads of data. Any dead bit in those GBs of RAM can and will eventually cause the filesystem to crap out, since wrong data is written if a memory error is left uncorrected. Please use any cheap ECC-capable mainboard for tasks like this, even if it is an old Pentium III server board (there are lots of them available for cheap).

    Except maybe for computers used to calculate critical stuff like FEM structural analysis, where an error causes e.g. a bridge to fail, I can’t think of any application where ECC is more important than storing (and thereby writing) lots of data.

    It happened to me twice that a bad RAM stick caused a failure on a file system, both times on Linux. One was my laptop (Thinkpad X32), in which the memory stick had several large broken blocks, probably caused by a fried memory chip. This crashed the computer after a few minutes and completely devastated the file system when I attempted to repair it.

    The other time it happened was on an old, always-running torrent box in my apartment. I left it running for months, and one day when I checked on it, I found it had crashed and the file system was corrupted. Since I had learned from the last time, I checked the memory using Memtest86 and found a few broken bits. After I swapped the stick, I was able to repair the file system. This computer was recently replaced by a really old DL380 G2 with six 1.5 TB SATA disks hacked into it, running RAID 6.

    • tom says:

      Exactly! If you are considering building something like this yourself, first do a quick Google search on bit error rates for memory and hard drives. Once you get into multiple TB of storage, you absolutely NEED ECC memory –and– ideally full file system redundancy + checksumming (e.g. ZFS with RAIDZ). Anything less is just a ticking time bomb.

      Furthermore, using a custom PHP solution like this for redundancy is ill-advised. Your script will just happily keep making copies of bad data when errors occur, and over time these errors will accumulate. The only way to avoid this is by regularly checksumming all filesystem data and metadata all the way up the tree, and using parity / backups to repair bad data when it is found. You cannot do this with a userland PHP script.

      Finally, if you want to hack together something on the cheap, look into the SGI SAS expanders being sold on eBay. They have been going for $150-$200 used for the past year or two, are rackmountable, and have 16 SATA bays. I did a little fan mod on mine last year, and have been very happy with them!
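
      To make the scrub idea concrete, here is a minimal sketch in Python (the index filename is just a placeholder; unlike ZFS it can only detect rot, not repair it, and it can’t tell deliberate edits from corruption):

          # Minimal scrub sketch: remember a SHA-256 for every file, then on
          # later runs flag anything whose contents changed. Detection only --
          # repairing bad data still needs parity or backups.
          import hashlib, json, os, sys

          INDEX = "checksums.json"   # placeholder name for the known-good hashes

          def sha256(path, bufsize=1 << 20):
              h = hashlib.sha256()
              with open(path, "rb") as f:
                  for chunk in iter(lambda: f.read(bufsize), b""):
                      h.update(chunk)
              return h.hexdigest()

          def scrub(root):
              known = json.load(open(INDEX)) if os.path.exists(INDEX) else {}
              for dirpath, _, files in os.walk(root):
                  for name in files:
                      path = os.path.join(dirpath, name)
                      if os.path.abspath(path) == os.path.abspath(INDEX):
                          continue                      # don't checksum the index itself
                      digest = sha256(path)
                      if path not in known:
                          known[path] = digest          # first sighting: record it
                      elif known[path] != digest:
                          print("MISMATCH (possible silent corruption):", path)
              with open(INDEX, "w") as f:
                  json.dump(known, f, indent=2)

          if __name__ == "__main__":
              scrub(sys.argv[1] if len(sys.argv) > 1 else ".")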

  16. promethius326 says:

    Another area not mentioned anywhere is what kind of PSU setup he’s planning on using. I would hope something pretty beefy, and not a 300 watt Besttech with a ton of Y splitters.

    • Greenaum says:

      Nah, hard drives don’t use that much power. Dunno if there’s enough brains in the system somewhere to stagger their startup, but even then, it’s what, 10 watts or so per drive?

      I do worry about the so-called cooling, though; jamming a fan into such an airless cavity doesn’t really seem like enough. Little pockets of still air can form, and even then a fan’s not gonna do its best work squeezed and blocked off like that.

      It’s his case and he can do what he likes, but I’d give up a few more cubic inches of my habitat and use more space for the sake of keeping the drives cool and happy. If I needed 10 HDDs. Which I don’t.
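
      Roughing out the numbers (the per-drive figures below are typical datasheet ballparks, not measurements from this build):

          # Ballpark power draw for ten 3.5" drives; figures are typical
          # datasheet values, not measured from this build.
          drives = 10
          idle_w = 6          # per drive: platters spinning, no I/O
          active_w = 10       # per drive: seeking / reading / writing
          spinup_w = 25       # brief 12 V surge while a drive spins up

          print(f"All idle:                  {drives * idle_w} W")
          print(f"All active:                {drives * active_w} W")
          print(f"All spinning up at once:   {drives * spinup_w} W")
          print(f"Staggered spin-up (worst): {(drives - 1) * idle_w + spinup_w} W")

      So steady-state draw is modest, and it’s really the simultaneous spin-up surge that decides whether a small PSU will cope.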

  17. Jim says:

    BEASTLY! Talk about a compact storage solution! I ENVY THIS SETUP my friend!

    • Jim says:

      I had to double check that you didn’t let me down and not use an SSD for the OS…I see that you did and I am soooo digging the setup.

      • Dax says:

        Why use an SSD for the operating system? It’s read into RAM once and never written to, so the performance boost you get is 40 seconds at startup, and then it’s just doing nothing. What a waste of money, really.

        • Greenaum says:

          Nah, all sorts of bits of the OS are being read all the time. It doesn’t load every library in one piece at boot time; bits are paged in and out constantly as apps demand them. That, and the registry, data files, etc. need writing now and then.

  18. MatsSvensson says:

    Hot stuff!

  19. VeeBee says:

    I once needed some extra storage in my computer. I only had a single 40GB drive at the time. The case was crammed full to the brim with everything it already had. So I held a second drive in with brown tape against the side of the case. Worked like that for a good two years until I replaced the system.
