Rendering And Blendering In A File Cabinet

The Blender Foundation has just received a new render farm. It came in the form of a four-drawer file cabinet, something akin to the popular Ikea Helmer clusters. Each drawer holds four motherboards, power supplies, and hard drives, and the whole cabinet will eventually add up to a 16-node cluster. Join in on the geeky excitement by watching the delivery and unpacking video after the break. We love it when organizations share the details on the hardware they use.

[youtube=http://www.youtube.com/watch?v=8eWJs9pygwU]

39 thoughts on “Rendering And Blendering In A File Cabinet”

  1. Please, please, please. BE LESS CRYPTIC WITH YOUR LINKS. Yes, I am well aware that I can hover my mouse over the link and look down at the bottom of my browser to see where it is supposed to go. But that requires me to take my lazy eyes and move them from the article. This makes my lazy eyes sleepy and then I just close the browser.

    Please think of my lazy eyes :<

  2. Seems like cooling would be a major problem after the cabinet is fully populated.

    Doesn’t seem to have ANY advantage over a standard rack and 1U/2U systems.

And let’s not even get started on using desktop i7 CPUs with non-ECC RAM in an overheated server cluster.

  3. Nice setup.

    For the ones complaining about cooling: it seems to me the room is air-conditioned and the drawers lack a bottom, so if you switch on the fan at the bottom, the whole cabinet gets cooled big time.

    Do you guys actually READ/WATCH an entry before you comment?

  4. Man, that just seems like a huge waste of work for the end result. Why not just build a DIY rackmount and get some 1U or 2U chassis and do it that way? Seems it would be way easier to service/cool, and it would look better too, not to mention that looks like an EMI nightmare…

  5. @Jake/vonskippy

Rack stuff is annoying as heck, unless someone else is paying for it. Overpriced, non-standard, and generally a pain as far as I’m concerned. Stuck in a datacenter where reliability and space are paramount, it’s worth it. For something like this, though, being able to use standard components probably cost 20% of what the equivalent built with 1U/2U rack machines would have. ECC/server-level stuff has its place, but they aren’t exactly curing cancer or running Wall Street on the thing. Just making pretty pictures.

  6. Gotta say, these guys sound like my kinda people. I love to shove computers into odd places (all jokes aside) that normal people just wouldn’t even dare to. I currently have access to three “new” P4 comps that shall be mine to toy with. Nowhere near as expensive as these guys’, but I’m on a budget.

  7. @MrWazoo

    It is not only on HaD. It is everywhere.
    Commenting without reading TFA, trolling Arduinos and replying off-topic is just moronic.

    Yet, here I am replying off-topic.

  8. “I’m out for a day at the zoo for the rest of today.” And who gives a shit?

    So my initial comment was that ASRock is the worst brand on the market ever, and people who use it to build servers probably have zero sysadmin/hardware experience.

    These Dutch people think they are so el8 in computers and OSes, but they are dipshit wankers like the ones in the video.

    Any serious organisation would buy rackmount servers, but these are just stupid kids.

    Using i7s instead of high-end Xeons is also a river of fail. You keep saying what a rich country the Netherlands is, so why do you build trashy machines?

  9. @MrWazoo,

    Simply placing a fan in the bottom of the cabinet and putting the cabinet in an air conditioned room doesn’t ensure adequate cooling. When you’re talking about a stack that’s likely going to draw over a kilowatt of power (assuming a very low 250W per drawer), you have to be a lot more careful planning the airflow.

For example, if they place four drawers of equipment in this cabinet, the bottom drawer will get fresh air just fine, but it will exhaust heated air out the top. The next drawer up will be sucking heated air in, and exhausting even hotter air. By the top drawer, it will be exactly as if they had placed a 750W heater beneath their PC. (A rough sketch of the numbers follows at the end of this comment.)

    Commercial rackmount equipment addresses the issue by drawing fresh air in from the front of the cabinet and exhausting the hot air out the back like a chimney.

    They might be able to reduce some heat by sharing power supplies. There’s no reason to have four independent power supplies (each producing some amount of waste heat) when they could use one larger supply to power all four boards in a drawer.

    I think they’d be better off if they planned for water cooling. In a liquid cooled environment, they could transfer the heat to radiators located well away from the cabinet and away from each other. They would still need the airflow through the stack to cool the rest of the components, such as the RAM, the north and south bridges, and the hard drives, but the heat management problems would be much lower than they would be trying to use airflow alone.

    And while the Blender logo grille at the top is extremely cool looking, I’m thinking they’ll eventually have to replace it with an actual duct to vent the hot air out of the building, at least during the summer months.

    John
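    P.S. A minimal back-of-the-envelope sketch in Python of that drawer-by-drawer stack-up. Only the 250 W per drawer comes from the figure above; the airflow rate and room temperature are purely my guesses, not anything measured on this build:

    ```python
    # Worst case: a serial air path, each drawer inhaling the exhaust of
    # the drawer below it. Only the 250 W/drawer comes from the comment
    # above; airflow and room temperature are guessed for illustration.
    RHO_AIR = 1.2      # kg/m^3, density of air near room temperature
    CP_AIR = 1005.0    # J/(kg*K), specific heat capacity of air

    power_w = 250.0    # heat dumped into the airstream per drawer
    flow_m3s = 0.05    # guessed volumetric flow up through the stack
    room_c = 20.0      # guessed room temperature

    # Air temperature rise across one drawer: dT = P / (rho * Q * cp)
    dt = power_w / (RHO_AIR * flow_m3s * CP_AIR)

    for i in range(4):
        print(f"Drawer {i + 1} inlet: ~{room_c + i * dt:.1f} C")
    # Halve the airflow and the per-drawer rise doubles -- the top
    # drawer ends up sitting on a space heater, as argued above.
    ```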

  10. Why hi Kris; hello to you too!

    Glad to see you appreciated other people’s work so much that you found the time to put some effort into constructive criticism.

Being an open source project, funded by donations and grants, the Durian Project doesn’t have a bag of money it can just splash down to buy top-of-the-line Xeon machines; nor does it need to.

My usual job, at an ISP, is very different from what I’ve done here. When continuity is your prime concern and you have unique services that cannot be split across tons of redundant low-cost machines, then yes, you’ll want a nice Supermicro/IBM/Sun/whatever machine to run your stuff on, splurge on RAID with hot spares, go for redundant ethernet uplinks, multiple PSUs with separate power feeds; the works.

When you have a problem like rendering, where missing a node for a while only results in a frame that needs to be re-rendered ‘sometime today’, or other problems/services that can be split across multiple machines in an efficient/redundant manner, then it’s rather normal to go for a solution like the one given here; much like Google did when they started out (and still does, to a large extent). A toy sketch of that ‘re-queue the frame’ idea follows at the end of this comment.

Of course, Kris, next time we’ll make sure to ask you for a good bunch of technical consultancy first; sorry for ever having done things another way than ‘the right way’.
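    P.S. For the curious, a toy Python sketch of why a flaky node only costs render time, never finished work. The queue logic and the 20% failure rate are purely illustrative; this is not the Durian farm’s actual dispatcher:

    ```python
    # Frames wait in a queue; when a node dies mid-frame, the frame is
    # simply re-queued. The failure rate is made up for demonstration.
    from collections import deque
    import random

    frames = deque(range(1, 11))   # frame numbers still to render
    rendered = []
    attempts = 0

    while frames:
        frame = frames.popleft()
        attempts += 1
        if random.random() < 0.2:  # a node drops out mid-render
            frames.append(frame)   # re-queue: lose time, not the frame
        else:
            rendered.append(frame)

    print(f"{len(rendered)} frames rendered in {attempts} attempts")
    ```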

  11. @Doug

    I dunno, man. Have you seen how cheap the generic 1U/2U stuff has gotten? Methinks they could have built a system that would be much better in the long run for just a little more. I guess it just depends on how long they even plan to use this toaster.

    Speaking of which, this unit should make some killer toast, especially in the upper drawers! :D

  12. Vonskippy… No advantage? So you think saving several thousand dollars is not an advantage?

    You must wipe your butt with $100.00 bills to think that a lot less money spent is not an advantage.

  13. @Kris… “those” Dutch people have done more in a day than you will ever do in your lifetime.

    Oh, and Ton knows more about computers and programming than you can ever hope to.

  14. Seems my earlier reply didn’t make it (too long, most likely); cutting it up in parts:

    @Pete

It was a lot of work to do, but one of the things asked for was to ‘make it look cool and give it lots of LEDs’. So I did, and I have no regrets about doing so!

The hole in the top was required in any case; dremeling it circular or ‘circularly lobed’ wouldn’t have mattered much. I was gonna leave it entirely purple (the cabinet’s original color), but due to an accident in handling the panel it got damaged, so I decided to go ‘all the way’ and paint it in the color the logo would normally have (or at least as close as I could find).

Disclaimer: I work at an ISP; all I know is server hardware. Yes, it’s nice, but we didn’t need IPMI/RAID/triple-gbit/ECC-memory/etc. Just keep it simple; use off-the-shelf hardware. Hardware WILL fail; best have it be cheap hardware, and never have one node be critical to your entire operation. When each node is identical to the others, who cares about one breaking down? Also, the biggest reason server hardware is so expensive is the non-standard form factor. When you try to squeeze a machine into a couple of inches, require it to keep itself cool (with no standard coolers fitting) and also require a more expensive PSU (high-power, small-form-factor), then yes, you’re going to pay top dollar… err… euro.

  15. …continued from previous post:

The cooling is taken care of in the sense that the big fan dumps the heat generated by the cabinet’s processors quite efficiently out the top, into the cabinet’s environment. As long as you keep the room reasonably cool, things should be fine. Also, thermal management and monitoring are being used to mitigate problems (as are the built-in safeguards in modern motherboards/CPUs).

The cabling has been pre-done; each drawer has a total of 6 ethernet cables (4 for direct use, 2 ‘just in case’) and one power cable that leads to a 4-way socket block that connects to the PSUs. They’ve been wired and bundled in such a way that each drawer can extend its full length out of the front of the cabinet while the boards are kept powered on. The only issue then is cooling, but four boards seem to keep cool well enough on their own when exposed to open air (thermal venting).

The power consumption of 4 boards, at idle, was around 1.6 A @ 220 V with 80%+ efficiency PSUs. Under load, it moved to about 2.2 A, tops. Across all four drawers, that’s about 2000 W of heat production to remove from the cabinet and room (the arithmetic is sketched at the end of this comment); luckily we have the option of venting to/from a cool concrete hallway right next door, as well as an (as of yet) optional air conditioning unit (sadly not a split unit, but at least it dumps hot air out the back through a hose).

As far as performance goes: yes, GPU rendering would’ve been wonderful, if it had been anywhere close to production-ready at this point for the Durian project (http://durian.blender.org). As we say in Dutch, ‘Je moet roeien met de riemen die je hebt’ (you gotta row with the oars you DO have).
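    P.S. The heat figure, worked out in a few lines of Python. The amps and volts are the measured numbers above; the only assumption is that essentially all electrical input ends up as heat in the room:

    ```python
    # Per-drawer and whole-cabinet heat load from the measured figures:
    # 1.6 A idle / 2.2 A load at 220 V, per drawer of four boards.
    VOLTS = 220.0
    DRAWERS = 4

    for label, amps in (("idle", 1.6), ("load", 2.2)):
        per_drawer_w = VOLTS * amps
        print(f"{label}: {per_drawer_w:.0f} W/drawer, "
              f"~{per_drawer_w * DRAWERS:.0f} W for the whole cabinet")
    # load: 484 W/drawer, ~1936 W total -- the 'about 2000 W' above
    ```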

  16. … and the next bit (yes I did say it was long):

The triple-channel memory was chosen as a kind of trade-off. Given the financial restrictions we were facing, almost none of the dual-channel boards we had access to supported more than 2 DIMMs, making 2x4GB the required choice (2x2GB is nowhere close to what we needed, and the boards with 4 slots all seemed to be mixed DDR2/DDR3, not both usable at the same time).

The triple-channel boards all tended to have at least 6 slots, all usable, and 6GB (3x2GB) is economical and has a good chance of being ‘just enough’ for what’s required to render the scenes. Also, upgrading later with 3 slots still empty should be more cost-effective. Given that rendering is also rather memory-intensive (textures, light maps, etc.), it sure won’t hurt to have a bit more bandwidth available in that department.

Also, note that putting it in a filing cabinet was done on purpose: a hat tip to the brave person who went before us, the creator of the Helmer Cluster.

  17. …and the last bit (for now):

Cooling, btw, is mostly done with the ‘brute force’ technique: getting as much air as possible to rush through, to prevent pockets of stale hot air being re-used or re-circulated. Each drawer is gutted at the bottom, and the PSUs exhaust their heat (as well as part of the interior heat) out the back. Given the extra options available to us, we feel reasonably confident we’ll keep the heat under control.

  18. @Justa, the power usage looks a bit low. If I did the maths right, the increase is 33 watts from idle to load per PC (checked in the sketch at the end of this comment), which is not much for CPUs rated at 130 W and with Intel’s very aggressive power saving. In the reviews I found, the variation is 91 or 92 watts (hexus 207-116, bit-tech 216-124). So either at idle the systems don’t have all the power savings on, or the load isn’t that big, or there’s some other reason I’m unable to figure out.

And it’s a pity you found no socket 1156 boards and had to go with socket 1366; they should do fine with 4 sticks of 2GB, and the CPUs are even lower power with similar performance (or better when running one process, like the non-parallel parts of Blender). Whatever was saved on the mobo would cover the extra stick.
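    To show my work, a quick Python check of that 33 W figure against the review numbers (all inputs are quoted above, nothing measured by me):

    ```python
    # Idle-to-load swing per board, from the quoted 1.6 A -> 2.2 A at
    # 220 V (four boards per drawer), versus the swings in the reviews.
    VOLTS = 220.0
    BOARDS = 4

    swing_per_board = (2.2 - 1.6) * VOLTS / BOARDS
    print(f"measured swing: ~{swing_per_board:.0f} W per board")  # ~33 W

    for site, load_w, idle_w in (("hexus", 207, 116), ("bit-tech", 216, 124)):
        print(f"{site} review swing: {load_w - idle_w} W")        # 91, 92 W
    ```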

  19. @Justa,

You have made an excellent build, and I did not mean to criticize the construction. But I am concerned that, with ~500 watts per drawer, if you load up the cabinet you will bake the upper drawers in a fiery oven from below. Airflow is a trickier business than it seems; any stale pocket of air could easily build to over 100 degrees, more than enough to fry some of the weaker components, such as electrolytic capacitors.

    I’m sure that with good and attentive monitoring you’ll catch problems before they become damaging. And I’m sure you’re clever enough to come up with new ventilation designs to compensate as you need to. Good luck, and congratulations again on a very pretty build.

    John

  20. Loving the top grille; it looked amazing with the lighting on, as the maker showed at the end. I remember that as an accidental side effect with some green lighting in a case; the best part was the light coming out of a small rear mesh section of the case.

  21. I am impressed, and would like to have one too.

But since I’m saving money to build my own IKEA Helmer cluster computer, I immediately saw one great disadvantage to this scaled-up version: how to access the hardware.

Sure, you can probably do it, and they probably built in the means to get it done, BUT it seems like more work having to open up the protection of 4 mobos just to configure one of them. And if it’s not one on the ends, you’re gonna have to take out the one right next to (“on top of”) it. But they’ll probably be nice to it so they don’t have to :)
