Why store it in the cloud when you could have a 90 Terabyte hard drive array (translated) in your house? The drives are mostly Western Digital Caviar Green EARS 2TB models, which are known for energy efficiency and quiet operation. It’s a little unclear whether this build uses one or two motherboards, but the drives are connected using PCI RAID5 and RAID5+0 controller cards. There are a total of 40 cooling fans built into the case, half on the bottom and the rest on the top. They move air up through the case, with plans to add a dust filter in the future. Heck, with that kind of air movement you could throw on a standard furnace filter. Apparently it is quiet enough to talk in “almost a whisper” while standing next to the plywood monolith, but we’re a bit skeptical of that claim.
It’s not quite as fancy looking as the 67 TB storage from last year… but it does look pretty easy to build at home.
[Thanks Henrique via EnglishRussia]
I would be interested to know what the standby power draw was on that thing. I am guessing somewhere in the region of 300+ watts… (which is a lot to have on all the time)
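Back-of-the-envelope, with every figure assumed rather than measured: WD Greens idle around 4-6 W each and small case fans draw a watt or two apiece.

```python
# Rough idle-power estimate for the array. All figures are assumptions,
# not measurements from the build.
DRIVES = 45              # ~90 TB / 2 TB per drive
IDLE_W_PER_DRIVE = 5.0   # WD Green drives idle around 4-6 W
FANS = 40
W_PER_FAN = 1.5          # small case fans draw roughly 1-2 W each

drives_w = DRIVES * IDLE_W_PER_DRIVE
fans_w = FANS * W_PER_FAN
print(f"drives: {drives_w:.0f} W + fans: {fans_w:.0f} W "
      f"= {drives_w + fans_w:.0f} W before motherboards and PSU losses")
```

That lands around 285 W before you add the motherboards and power supply inefficiency, so 300+ W on all the time sounds about right.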
there’s $1000 worth of fans on that thing!
According to Giz, it’s only 70TB, who’s right….who’s right….we’ll find out after the jump.
I have a (very) old IBM 20MB (ooooh!!!) disk drive that I use as a paperweight. It reminds me of how fast storage capacity grows, since there’s usually an 8GB USB drive sitting near it.
Any bets on the timeframe to see this lovely bit of homebrewed architecture in a monolithic device you can carry in your pocket?
Why store it on the cloud? Fire suppression systems, backup generators, the oil contract supplying them, the armed security guarding access to the NOC. There are more reasons, but those are sufficient for anyone doing anything significant with 90 TB of data.
By the way, could someone explain the odd habit of computer builders to use a bunch of tiny little fans rather than one big (cheaper/more efficient) one in a case big enough to use it?
I still want to know how he’s chaining all those drives together.
@onaclov2000: According to the everest dump on his site the total capacity is 91423Gb, which is technically 89.2Tb.
The scary thing to me is that he has 14TB free, and the rest of the space is used up. That’s a ton of stuff he is storing on there.
how much was everything??
says right in the article it’s two mobos.
The machine translation of the website is funny… Especially the comments:
“Why sink, mother dear?”
“at failure of any of the components (motherboard, controller) that at least a hand job to replace.”
;-)
Whoah… Well, that’s a lot of storage for pr0n.
@scott
Inertia aka that’s the way we’ve always done it. It’s waaaaaay more efficient to use a big fan or two at slower RPM than it is to use a bunch of little fans. But little fans are cheap and plentiful, so people use them.
Also, there’s a sort of area rule about how much air you can shovel through a tiny opening, and tiny openings are easier to find than large ones in most applications.
It’s kinda like electrical wiring – before we realized that lots of outlets and big standardized circuits were the way to go, house wiring would often have just two little fuses that were expected to cover everything… or very ornate fuse boxes that had a bunch of 5 and 10 amp screw in style or cartridge fuses.
In general, “that’s the way we’ve always done it” has a lot of benefits, even if “it’s the best way” isn’t one.
I realize this is less DIY, but it seems getting a Backblaze storage pod from Protocase would be better for this type of application. And it wouldn’t require two mobos.
I don’t work for either company.
That’ll hold a lot of pron.
*drool*
Nice build but I really dig the wire soldiers!!
Yeah the soldiers are awesome!
Hope he ran a chassis ground wire to each of those hard drives, or else the fellow lives in a very high-humidity, low-ESD environment…
Plywood isn’t exactly the best ESD-preventing (or for that matter, EMI-preventing) case.
I guess it doesn’t really matter; there are several pictures where he has hard drives just sitting out on carpet. Good thing this doesn’t have to meet FCC regs!
Does it end up as one big LUN anywhere?
If all the adapter cards are in one box he could use LVM to stripe a single logical volume over the entire array. Hey-presto, storage virtualisation.
I think that the current rate of HDD data corruption from background cosmic ionizing radiation is around 1 bit/TB/year on average (it differs at the poles and magnetic weak spots). There is also silent bit rot; consumer HDDs are running at the edge of what is currently possible. I personally would have gone with a metal case and used ZFS with RAID-Z1/RAID-Z2 for the filesystem/RAID to avoid data corruption/loss, instead of RAID5 + RAID50 and Windows 2008 R2 (I hope Windows indexing is disabled). But I am still impressed, great build
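Taking that rate at face value, the expected exposure for this build is a one-liner (the 1 bit/TB/year figure is my estimate above, not a verified spec):

```python
# Expected silent bit flips per year at the quoted rate.
# The rate itself is an estimate, not a measured spec.
RATE_BITS_PER_TB_YEAR = 1
CAPACITY_TB = 90
print(f"~{RATE_BITS_PER_TB_YEAR * CAPACITY_TB} flipped bits/year")
```

Roughly 90 silent single-bit errors a year, which is exactly why end-to-end checksumming matters at this scale.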
And the reason he didn’t use ZFS was?!?!
With ZFS, the file system and volume manager are not abstracted from each other (unlike pretty much every other hardware/software RAID setup), which brings huge benefits (rebuilding, for example, and detecting silent corruption and fixing it on the fly).
I don’t know if this is relevant to his application, but one reason not to use the cloud is bandwidth. I imagine he can get several hundred MB/s from that thing; getting an internet connection that fast (if it’s even available in his locale) is going to be very expensive.
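For a sense of scale, a quick sketch with assumed throughput numbers on both sides:

```python
# Time to move 90 TB locally vs. over a fast home connection.
# Both throughput figures are assumptions for illustration.
TOTAL_BYTES = 90e12
LOCAL_Bps = 300e6      # ~300 MB/s aggregate from the array (assumed)
NET_Bps = 12.5e6       # a 100 Mbit/s uplink = 12.5 MB/s

for label, bps in (("local array", LOCAL_Bps), ("100 Mbit uplink", NET_Bps)):
    print(f"{label}: {TOTAL_BYTES / bps / 86400:.0f} days for 90 TB")
```

Call it three and a half days locally versus almost three months over a 100 Mbit line, and that’s before you pay for the line.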
How about 90 of these, and 90 PCs for a cluster?
Running Puppy Linux from a flash drive, of course :)
It has to be said: “On a clear disk, you can seek forever.” And on this baby that probably holds true.
One thing I noticed is that, his enclosure being wood, it would therefore be naturally insulated. At first that seemed like a *really* bad idea, until it occurred to me that a vent structure could send all the heat out of the room. I didn’t see such a vent, but I assume that’s the case.
And looking at the picture of all the disks and cables, it just all looks like toys to me. I’d have an AWESOME time putting that stuff together. Very cool project.
Dammit, stuck in a Wikipedia maze again. I click a link about RAID arrays, and then another, and another, and soon I’m on theoretical particle physics. Guess I’ll just have to read my way back out.
@Gene The former Eastern Bloc as well as Russia have the best internet these days, better than the US; check out the various tables from speedtest research.
For example: http://speedtest.net/global.php
Not that a local storage solution wouldn’t still be faster, of course.
Now he needs a room full of displays to show all the images, Intersect (from ‘CHUCK’) style.
Actual source website, with MUCH more detail: http://basanovich.livejournal.com/163813.html
Everest (Russian) log of hardware internals: http://basik.ru/maxx/bas/gravicapa/report.txt
(crap, ignore previous comment. Clearly I didn’t note the original site was linked already)
I hope that builder has better luck with his WD Green drives than I did… I tried to build two 8TB servers using Green drives and ended up tossing them all due to a glitch WD built into the drives, making them unsuitable for RAID… google wd+green+tler
There are several things wrong with this setup.
First off, with that many drives it’s pure insanity not to use RAID-60. Twice the performance, and you can lose up to two drives per RAID-6 group (four in total across two groups) without any data loss.
Second, if he used multi-lane SAS controllers w/ expanders he could have saved the cost of several controllers and a motherboard while reducing power consumption and increasing performance (again).
Third, the fan array is just painful to look at. He could have just used two box fans (for redundancy :) and had better airflow with less noise and MUCH lower cost.
So yeah, it’s cute, and I hate to bag on something that somebody obviously spent a lot of time on (definite hacker cred for the plywood! :), but there are definitely some glaring deficiencies in this design. Maybe he’ll make some changes in the next version?
Plywood is just the thing for ESD protection, especially in a reasonably humid environment. From a corrosion standpoint, maybe not so hot, but when you have big fans and decent grounding (exposed wire grounding to catch stray electrons), it’s OK.
ZFS: Don’t get me started. Yes, it’s cool. You do NOT want a giant single volume, and if you don’t have a backup plan (and @90TB, you don’t) you’re just asking to get reamed. You will lose data by doing this, but go ahead, find out why all by yourself. Infinite Logical Volumes smell nice and all, but after getting nailed repeatedly by loss of 30 TB databases, I’ll leave it to the ZFS fan boys. Putting all your eggs in one basket is a bad idea, no matter how cool the ZFS feature set is.
WD Green: If it says made in China, you have a >50% chance of death by mechanical failure. Look for OEM drives from anyplace but China. There’s a reason they’re so cheap: almost no QC at all. The most recent 1TB Green I purchased failed SMART for excessive write errors within 60 minutes of uptime. Seagate seems to be back to its old tricks again, as well.
Did you know that drive manufacturers turning out product in China bid your HDs out to the lowest bidders in batches of a few thousand? And the bids are awarded on price and price alone. Not a chance in hell for Deming’s methods when there is no accountability. If a drive manufacturer (OEM) screws someone, a new shell company forms to “do a better job”, at least until the check clears.
That aside – if you’re a RAID believer, please buy your raid drives from different manufacturers and different vendors. Do not buy 8 drives from one batch of X, because MTBF is an imaginary number.
bilbao bob: you are wrong. Oracle engineers RECOMMEND creating one and only one volume.
For example on their Sun Fire X4500 server, which hosts 48 drives in 4U, they recommend one 46-drive zpool for the data (plus a 2-drive zpool for the OS). I think you are confused by the difference between a raid group (in zfs terminology: top-level vdev) and a data volume (zpool). For example you certainly don’t want one large N-drive raidz (raid5) vdev because if 2 of the N drives are lost you lose data, but in zfs one zpool is made of multiple vdevs. For example to optimize my X4500 servers, I configured them with one 46-drive data zpool composed of three 15-drive raidz2 vdevs plus one spare drive.
I meant to say:
…to optimize $/GB on my X4500 servers…
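For anyone following along, the usable capacity of that layout is simple arithmetic (drive size assumed at 1 TB here; the X4500 shipped with various sizes):

```python
# Usable space in a zpool of three 15-drive raidz2 vdevs plus a spare.
# Drive size is an assumption; scale to match your hardware.
DRIVE_TB = 1.0
VDEVS = 3
DRIVES_PER_VDEV = 15
PARITY = 2  # raidz2 spends two drives' worth of space per vdev

usable = VDEVS * (DRIVES_PER_VDEV - PARITY) * DRIVE_TB
print(f"{usable:.0f} TB usable from {VDEVS * DRIVES_PER_VDEV + 1} drives")
```

Each vdev survives two simultaneous drive failures, and the pool keeps running as long as no single vdev loses a third.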
To be honest I think the Backblaze design is cleaner, but this is cool nonetheless.
http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/
It seems to me that there are two motherboards… is that a second CPU I see on the left there? Hmm =)
But that’s one sexy RAID setup =D
Cool Setup.
I wouldn’t know what to do with all that disk space except downloading all films, software, ebooks and papers I could.
It’s probably also handy for cracking WPA passwords using a gigantic 90TB rainbow table.
ZFS fanboys?! It’s not like it comes in a shiny metal case with a fruit on it, give it some credit!
How much of his 90TB actually ends up uncorrupted he will have no way of knowing, let alone correcting, unless of course each of his files is MD5’d and checked against the stored hash every time he reads it (see the sketch below)…
I wouldn’t like to set up a storage solution, even for home use, using so many different RAID cards / differing PCI buses / multiple systems…
It’s not a contiguous space anyway, as there’s only a single gigabit link between the machines?
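A minimal sketch of that manifest idea, assuming you hash once at write time and re-verify on read; the paths and manifest format here are made up for illustration:

```python
# Build and verify an MD5 manifest to catch silent corruption.
import hashlib, json, os

def file_md5(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def build_manifest(root, manifest="manifest.json"):
    sums = {os.path.join(d, n): file_md5(os.path.join(d, n))
            for d, _, files in os.walk(root) for n in files}
    with open(manifest, "w") as f:
        json.dump(sums, f)

def verify(manifest="manifest.json"):
    with open(manifest) as f:
        for path, digest in json.load(f).items():
            if file_md5(path) != digest:
                print(f"CORRUPT: {path}")
```

Of course this only detects rot after the fact; fixing it still needs a good copy somewhere, which is the part ZFS does for you.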
Might have to post up my 1/15th-scale project of this, in smoked acrylic.
I have thought about using a furnace filter and making an enclosed cabinet for the three PCs I have in the basement. I can’t see why it wouldn’t be a great idea.
Then, I’ve also thought of using an air filter from a car for an intake and maybe some sort of small muffler for an exhaust for airflow. They’re cheap, available all over the place and made for hot environments needing lots of forced air and some noise dampening.
Can anyone school me on this being a bad idea?
Bit of a shame that the wiring and air seem to consume >50% of the total space. With so many drives you would want to make sure you can rebuild the array if a few drives fail, not just one or two. Anyone that has had a RAID5 with 4 disks knows that as soon as that one disk fails and you start rebuilding, another disk is going to fail just in time to stop you completing the rebuild… Luckily with Linux’s MD arrays you can actually access data from a partial array, so not all is lost.
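The pessimism is easy to quantify. Using the usual consumer-drive spec of one unrecoverable read error per 10^14 bits (an assumed figure; your drives may vary):

```python
# Chance of hitting at least one unrecoverable read error (URE)
# while rebuilding a 4-disk RAID5 of 2 TB drives.
# The URE rate is the commonly quoted consumer spec, assumed here.
URE_PER_BIT = 1e-14
DRIVE_BYTES = 2e12
SURVIVORS = 3            # a rebuild reads every surviving drive in full

bits_read = SURVIVORS * DRIVE_BYTES * 8
p_fail = 1 - (1 - URE_PER_BIT) ** bits_read
print(f"P(URE during rebuild) = {p_fail:.0%}")
```

That comes out around 38%, so “another disk always dies during the rebuild” is less superstition than statistics.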
The air filter idea rocks (hey use a K&N and wash it out!)
The muffler…not so much.
The point of the muffler is to reduce the noise of exhaust from an internal combustion engine while working out compromises for the flow of the gasses.
You don’t have that noise to deal with, so if you want the LOOK of a muffler you could go for something simulated, or a real one cleared out of its baffles.
A real muffler will be large and restrict air flow for your application I think.
That is a heck of a lot of fans, but an awesome thing to be sure.
I’d love to hear it spinning up.
@Jarrod: 91423 Gb (gigabit) = 91.423 Tb (terabit), which is the same as 83.15 Tib. Get your units of measure straight.
Tb = Terabit, TB = Terabyte, Tib = Tebibit, TiB = Tebibyte.
1 TB = 1,000 GB = 931.3 GiB = 0.9095 TiB
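And if Everest is actually reporting gigabytes (as it usually does), the conversions look like this:

```python
# Decimal vs. binary units for the reported capacity, assuming
# Everest reports decimal gigabytes.
reported_gb = 91_423
tb = reported_gb / 1e3            # decimal terabytes
tib = reported_gb * 1e9 / 2**40   # binary tebibytes
print(f"{reported_gb} GB = {tb:.2f} TB = {tib:.2f} TiB")
```

Which gives 91.42 TB, or 83.15 TiB, so the headline 90 TB is fair either way you squint.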
Hmmm… I’d go with a 1×1 meter fan (or two) for better flow and less noise.
@MRB – Oracle recommends this because it simplifies THEIR support of your database. Backup, data loss, all that annoying little real world stuff is your problem. And in general, until you get to RAID-10 and transactional backups, you’re not even close to protected. LVM is a grand idea, but it has real world consequences. When you move to LVM environments, your ability to recover useful data from partial drive recoveries drops to zero. But maybe all the anecdotal evidence I hear about from the IT guys at investment houses and research labs is FUD.
I’ll just say this: A lot of people have mistaken impressive conceptual promises for real world guarantees, and been bitten hard. Wanna know where all those scanned mortgage documents a certain BOA constrictor can no longer find went after the merger and data migration? I’ll give you a hint: When you don’t test your ability to restore data, you don’t actually HAVE the ability to restore data. As they say, quod erat demonstrandum.
@guy who wanted a muffler
What you want is a baffle, not a muffler. You can probably do it by trial and error. Your goal is just to silence the noise of the fans. The bad news is that doing this “right” requires a lot of calculation. However, you can fake it by leaving an open area at the end of the box, then building 3 movable dividers that take up 60-75% of the height of the box and staggering them.
If you simply move them back and forth, and try it, you can empirically get to the right positions.
I wish I could draw a sketch for you, but just google “audio baffle” and you’ll get the idea. All you need to do is make it hard for the sound waves from the fans to get out by setting up a standing wave to cancel out the frequencies of the fans. You could calculate this by using an audio spectrum analyzer to figure out the primary signals (and maybe a harmonic or two), then do a little math to come up with the wavelengths, and so forth.
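The “little math” really is just wavelength = speed of sound / frequency. A sketch with assumed fan numbers:

```python
# Wavelengths of fan-noise fundamentals, for sizing baffle spacing.
# RPM and blade count are example assumptions.
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def blade_pass_hz(rpm, blades=7):
    return rpm / 60 * blades

for rpm in (1200, 2000):
    f = blade_pass_hz(rpm)
    print(f"{rpm} RPM: {f:.0f} Hz -> {SPEED_OF_SOUND / f:.2f} m wavelength")
```

Those meter-plus wavelengths are why killing fan fundamentals with geometry alone is hard, and why the trial-and-error approach that follows is honest advice.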
This is too much work for the backyard experimenter, so just move the baffles around until it gets quieter. If you’re not comfortable with using 60-75% partitions, you can also drill small holes in the baffles and make them full height. Make sure the holes don’t line up. The hole method isn’t great for temperature reduction, because it destroys any hope for laminar flow.
You can pack way more baffles in if you limit the redirection of the airflow angle to ~16° at a time, and if you have the room to use curves, you can do a great job killing noise. This is all the rage in the door-less bathroom designs found in public places (the ones where they do it right), but… acoustic engineering is a big topic.
I’ve seen plexiglass panels used in baffles with good success if they are mounted in a frame using a flexible weatherstrip or gasket… perfect for angry screaming monkeys, or small children who are crashing from a sugar high.
Finally, you can absorb some high frequency noise (from drives, for example) by using rubber or flex compounds to provide mechanical isolation. It makes a huge difference! If you cover the baffles with acoustic foam or even neoprene (the material they use in wet suits), you can pretty much treat the unit as being silent.
Good luck!
> bilbao bob: I’ve seen plexiglass panels used
> in baffles with good success if they are
> mounted in a frame using a flexible
> weatherstrip or gasket… perfect for angry
> screaming monkeys, or small children who are
> crashing from a sugar high.
Dad, is that you?
Defrag that, bitch xD
Nice!
chkdsk /x