We were delighted to see 96 MacBook Pros in a rack a couple of days ago, serving as testing hardware. It's pretty cool to see a similarly exquisitely executed hack that is actually in use as a production server. imgix is a startup that provides image resizing for major web platforms. That means they need some real image-processing horsepower, and they recently finalized a design that installs 44 Mac Pro computers in each rack. This hardware was chosen because it's more than capable of doing the heavy lifting when it comes to image processing. It also turns out to be a much better use of rack space than the 64 Mac Minis it replaces.
Racking Mac Pro for Production
Each of the 11 R2 panels like the one shown here holds four Mac Pros. Cooling was the first order of business, so each panel has a grate on its right side for cold-air intake. This is a sealed duct into which one side of each Pro is mounted. That allows the computers' built-in exhaust fans to cool them, pulling in cold air from the duct and exhausting it out the opposite side.
Port access to each is provided on the front of the panel as well. Connectors are mounted on the right side of the front plate which is out of frame in this image. Power and Ethernet run out the back of the rack.
The only downside of this method is that if one computer dies you need to pull the entire panel to replace it. Each panel represents 9% of the total rack (4 of 44 machines), so imgix designed the 44-node system to tolerate that kind of processing loss without taking the entire rack down for service.
Why This Bests the Mac Mini
Here you can see the three different racks that the company is using. On the left is common server equipment running Linux. In the middle is the R1 design, which uses 64 Mac Minis for graphics-intensive tasks. To the right is the new R2 rack, which replaces the R1 design.
Obviously each Mac Pro is more powerful than a Mac Mini, but I reached out to imgix to ask what prompted them to move away from the R1 design, which hosts eight rack panels each holding eight Mac Minis. [Simon Kuhn], the Director of Production, makes the point that the original rack design is a good one, but in the end there's just too little computing power in the space of one rack for it to make sense.
Although physically there is room for at least twice as many Mac Mini units, by mounting them two-deep in each space, this would have caused several problems. First up is heat: keeping the second row of computers within safe operating temperatures would have been challenging, if not impossible. The second is automated power control: the R1 racks used two sets of 48 controllable outlets to power the computers and cooling fans. This is important, as the outlets allow them to power-cycle misbehaving units remotely. And finally, more units means more Ethernet connections to deal with.
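As a rough illustration of that remote power-control workflow, here is a minimal sketch. The PDU hostname, outlet numbering, and REST-style outlet API below are all invented for this example; real switched PDUs expose vendor-specific SNMP or web interfaces.

```python
# Hypothetical sketch: remotely power-cycling one outlet on a switched PDU.
# The hostname and the REST-ish API here are assumptions for illustration,
# not any real vendor's interface.

def pdu_cycle_plan(pdu_host: str, outlet: int, delay_s: int = 5) -> list[str]:
    """Return the ordered steps needed to power-cycle a single outlet."""
    base = f"http://{pdu_host}/outlets/{outlet}"
    return [
        f"POST {base}/off",   # cut power to the misbehaving unit
        f"WAIT {delay_s}s",   # give it a moment to fully discharge
        f"POST {base}/on",    # bring it back up
    ]

# e.g. cycle outlet 17 on the PDU feeding a hung Mac Mini
plan = pdu_cycle_plan("pdu-r1-a.example.net", 17)
print(plan)
```

The point is only that with switched outlets, "walk to the rack and yank the plug" becomes a three-step scripted operation.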
We have a great time looking at custom server rack setups. If you have one of your own, or a favorite that someone else built, please let us know!
[Thanks to drw72 for mentioning R2 in a comment]
74 thoughts on “44 Mac Pros Racked Up To Replace Each Rack Of 64 Mac Minis”
I'm gonna get a lot of hate for this, but so be it.
What's with these people thinking Macs are powerhouses? I sincerely hope they are just using Macs for the marketing 'value' of it, because you can get a lot better performance for a lot less money.
They built up their software on OSX and this is one way to scale without having to rewrite everything.
Here’s a quote to back you up: “Parts of our technology are built using OS X’s graphics frameworks, which offer high quality output and excellent performance.”
> “Parts of our technology are built using OS X’s graphics frameworks, which offer high quality output and excellent performance.”
this BS usually means they coded the whole thing in Ruby or some other Rube Goldberg language
Except Ruby can run on any platform?
Yes, because Ruby uses the OS X graphics frameworks. Your comment is bad and you should feel bad.
Except OS X can be loaded onto non-Apple hardware, and the architectural difference is negligible. If it's only for reasons of OS, it still makes no sense to use Mac hardware.
Except that you would not be licensed to run it and would be violating copyrights. Not a big deal for you in your home but when you are a business looking at a few million in fines and potential for instability that comes with unsupported hardware…
I wasn’t aware that running OSX on non Mac hardware was non-compliant with the license. That seems…so silly; but I’m not incredibly surprised, considering their product ethos.
Running OS X on non-Mac hardware is not licensed because the OS is not sold separately from hardware.
Yes, it violates the terms, but Apple has never really tried to squash it; it's just too much effort for the few people who run it on generic PCs.
Plus it can be a pain in the butt to do. I have a Dell laptop running 10.9 and it's such a mess to get working that it's almost not worth it.
Ever heard of VMware? Since when is that a copyright violation?
“Ever heard of VMware? Since when is that a copyright violation?”
I could be wrong, but I think OS X is only licensed for use in VMWare when the VMWare host is running on Mac hardware.
True, but it’s not supported (IIRC, it’s against the license). I don’t know of many businesses that are willing to run critical operations on non-supported setups.
It can be done but it can be a pain in the butt when a new version of OS X comes out. Having to deal with racks full of hackintoshes would be a huge headache.
I guess it depends on their business plan and how much growth they expect, but I would at least start a project to investigate rewriting their code to a more portable/scale-able format. Perhaps something that would support either cloud compute, or GPU massive parallelism. It depends on the type of processing you are doing, but I think some image processing can be done with parallel algorithms.
Solutions like these indicate incompetence on the part of project management, because they most likely hired a bunch of amateurs. Otherwise they would have noticed from the start that coding on/for Macs isn't going to be the most cost-effective way to expand the business.
I think that pretty much sums it best.
Riight. as opposed to listening to “experts” on some random website.
“Riight. as opposed to listening to “experts” on some random website.”
You don’t have to trust me to understand that building your application to depend on an expensive niche OS and hardware that is highly locked in may not be a very good idea overall.
The common case when that ends up happening is when you got a bunch of “visionaries” who have more ideas than competence, who then obtain just enough general knowledge of IT and programming skills to produce something that works, and their original choice of platform pretty much ends up being because some random “expert” on a website told them Apples are easy to use when they were just starting.
After they’ve done that for a while and need to expand, they’re stuck because they don’t know how to port their application to some sensible platform, and it takes too much time and money to hire outside experts to do the work for them, so they take the easy way out and just throw more money at the problem.
(I’m the datacenter manager at imgix)
Planning for success is a central maxim of my work. My personal feeling is that most startups who are based in AWS (or similar) will be crushed by success, because AWS basically doesn’t give you any economies of scale. At a low volume, the difference in cost between $1/gflop and $0.30/gflop isn’t going to break you; at a high volume it absolutely will.
Many numbers get crunched as part of my work, and this solution is definitely the most cost effective and scalable for our needs now and in the near term. Our costs for this datacenter build vs the equivalent amount of compute capacity in AWS are literally orders of magnitude apart.
When this solution is no longer the best one for us (and it won’t be forever!), we’ll adjust course. Flexibility and nimbleness are essential.
If you thought AWS was the only alternative, no wonder you have locked yourself into Apple hardware and are now the direct opposite of nimble and flexible.
OS X needs aside, speed is one thing and reliability is another. For the price, my Apple hardware has always outlasted my PCs of equal specification. Loose analogy: a PT Cruiser is faster and cheaper than a Toyota Land Cruiser, but the components of the Land Cruiser are designed for a longer service life and so will likely be 'on the road' much longer, which can also justify the up-front price difference.
Better 'performance' may not just be raw speed. It may mean that you aren't replacing power supplies or bad disks, or reinstalling the motherboard and OS because of some 10-cent cheap-out on capacitor choice at the supplier level.
That's funny, because Macs are not statistically any better than the average computer. Apple's profit margins are roughly double those of the rest of the industry, which means dollar for dollar you are getting less hardware. They're just selling up-market.
I agree that Apple hardware generally has good reliability, although certainly not perfect. The Mac Pro has ECC RAM and uses single-socket Xeon CPUs, which is high-end relative to standard desktop equipment.
So far we haven’t had any hardware failures with the Mac Pro. I’m sure it will happen eventually, but anecdotally reliability has been very good. The Mac Minis in our previous design have also fared reasonably well, but there has been a small (and acceptable) failure rate.
The Mac Pros are certainly not in the same ballpark as the conventional server models we use in the other parts of our environment, with redundant PSUs and out-of-band management controllers, but they have been well within my parameters for reliability.
>my apple hardware has always outlasted my pc’s of equal specification
Nvidia GPUs/MCPs in Apple laptops? Apple laptops in general, with their total lack of ventilation and crucial components located in unfortunate places (charging sense connected by small vias next to the edge of the PCB, where it catches moisture from the environment and corrodes; a backlight fuse that never blows and lets the backlight inverter/LCD connector fry; to name a few)? Dying CPU boards on the Power Mac G5? And so on.
Macs are riddled with bad hardware design.
The aluminum Powerbook G4’s were a nightmare mess inside. The bottom shell is a thin stamping, to which several screw posts are welded. A die cast magnesium stiffener/frame screws to all those posts. Apple developed a process to cast metal onto plastic for RFI shielding for their old Macs, why not do the same on aluminum?
Then there’s a main board, to which *everything* connects with a short ribbon cable, even if the peripheral butts right against the edge of the board. That’s adding many more potential failure points and a lot more cost.
All the excessive amount of screws makes a pretty good handful, and there are many different sizes.
In an average non-Mac laptop the bottom shell will be a single piece plastic molding or magnesium casting, usually with some access panels for RAM, hard drive, WiFi card and sometimes the CPU. Except for the keyboard, display and pointing device, peripherals will either be integrated on the main board or will plug directly into edge mounted connectors. Most screws will be the same thread and most of them will be the same length.
My current laptop is an MPC TransPort T2500 (rebranded Samsung X65). Its housing is mostly die-cast magnesium, powder-coated black; other parts are stamped aluminum. The CPU is a Core 2 Duo x64, and it has 4 gigs of RAM and can access all of it. (Some laptops contemporary with it were hardware-limited to 3.25 gigs even with a 64-bit CPU and OS.) The only thing lacking on this laptop from 2008 is that it has a CardBus slot instead of ExpressCard. (Really, Samsung? CardBus in 2008?) I just swapped in a 500 gig hard drive in place of its original 160. Runs Windows 7 x64 Ultimate quite nicely. I may make it my experimental Win10 machine. Might also see if I can scrounge up a Blu-Ray drive for it. If I was skilled at BIOS hacking I'd see about repurposing the Turbo RAM mini PCIe slot for a USB3 card and replace the right front port with a USB3 one. Now if someone would make a WiFi/USB3 combo mini PCIe card…
Um, the Powerbook G4 was discontinued ten years ago. Do you have any relevant anecdata?
TCO can be calculated a number of ways. I won't touch some vendors after having been burned. I figure any additional cost goes to the vendor's profit and support chain. And honestly, I have not had 3+ hour arguments with Apple over failed hardware: I get a box and I get stuff fixed. Other vendors that rhyme with HELL took months and still didn't fix the 30+ failed units, got sued for the trouble, but in the end customers like me NEVER got anything out of it. If I figured that cost into the equation, even at 4x the price Apple wins every time. My time is worth something, and any company that fails to respect that doesn't deserve business.
Strange, as I've had the exact opposite experience. Apple turned me away because my equipment wasn't "damaged enough" to repair. A tech actually told me to throw it on the floor to "break it" so that they could replace it. Yeah, not going to happen. At the same time, I've never had any issues with Dell. I order a server, it's there the next day. Install the OS etc. and I'm back on my way in 5 minutes. Try getting hold of Apple for some Xserve support, you'll be on hold for days if not months.
I want to know if/how they're going to sell off the Minis that got pulled! :D
Using a stack of them to prop open doors?
lol or target practice, could also extract the gold from them
(I’m imgix’s datacenter manager)
The original design (R1) with Mac Minis is still in service for the moment. We’re able to make good use of them for a certain class of work. At some point in the next few years I anticipate all of those racks will get retired, at which point I plan to swim in a vault of Mac Minis ala Scrooge McDuck.
They are probably using them for compiling and testing.
How do you think a GPU cluster would work against this? Some custom software to handle all GPU processing… wouldn't that be cool? It's totally possible that these Mac Pros are using their GPUs and using the Thunderbolt interfaces for communication.
Not sure what this costs, but I bet it could process a lot of data.
(I’m imgix’s datacenter manager)
We don’t use the thunderbolt ports at present (although that’s how we would do 10 gig Ethernet should it become necessary), but we do make extensive use of the GPUs in these systems. There are a number of reasons why Mac Pros make sense for us, and their GPUs are towards the top of that list.
May you please elaborate on the “why Apple”?
As far as I'm informed, OS X provides a Unix-based environment which does not give any advantages over other systems, so I cannot see why it makes sense to use Apple hardware instead of any other generic computing platform.
an obviously not well enough informed user! :)
The big reason to use OS X for this kind of work is to leverage its graphics frameworks in your production pipeline. It’s pretty powerful to be able to compile a shader and perform all the operations in one pass on an image, rather than resizing, then sharpening, then adjusting contrast, etc. This lets us do a lot of work simultaneously, which I think is pretty important for an on-demand image manipulation service like imgix.
With that said, we aren’t just a light wrapper around CoreGraphics — we have a bunch of smart people working on our image rendering pipeline and we’re doing a lot of things that CoreGraphics can’t natively do. But we do stand on their shoulders to an extent — Apple also has smart people working on this, and they’ve had much longer to work on the problem.
OS X is also unix-y, which is nice insofar as we can re-use a lot of the same tools that we use in the rest of our environment (which is Linux-based). ssh, ansible, heka, etc. all run and largely work the same on OS X and Linux. If that wasn’t the case, I would be a lot less jazzed about running OS X in production.
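To make the one-pass idea concrete, here is a minimal sketch. It is not imgix's actual pipeline, and it uses plain Python over a list of gray values rather than a GPU shader, but it shows why fusing per-pixel filters into a single traversal beats chaining separate passes: N chained filters cost N walks over every pixel, while a fused kernel costs one.

```python
# Illustrative only: toy per-pixel "filters" standing in for real
# resize/sharpen/contrast shader stages.

def sharpen_px(v):
    # toy stand-in for a sharpening stage
    return min(255, int(v * 1.1))

def contrast_px(v):
    # toy contrast stretch around mid-gray
    return max(0, min(255, int((v - 128) * 1.2 + 128)))

def multi_pass(pixels):
    out = [sharpen_px(v) for v in pixels]   # traversal 1
    return [contrast_px(v) for v in out]    # traversal 2

def fused_pass(pixels):
    # One traversal with a composed kernel, analogous to compiling a
    # single shader that does every operation in one pass.
    return [contrast_px(sharpen_px(v)) for v in pixels]

img = [0, 64, 128, 200, 255]
print(multi_pass(img) == fused_pass(img))  # same result, fewer traversals
```

On a GPU the win is larger still, since each extra pass also means an extra round trip through texture memory.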
So your main workload runs on the GPU, which on OS X boils down to OpenGL anyway. Why can't you compile and run these shaders somewhere other than OS X, especially as you say you're doing a lot more than what CoreGraphics has to offer?
Why not just throw some cuda-cards into the racks? D:
I get the need for Macs in some of the previous articles (testing, etc)… but for raw image processing, I don’t see how Mac Pros can possibly be cost effective…
I don’t think this is the case any longer, but at release buying a whole Mac Pro was actually less expensive than buying the pair of GPUs used inside — plus you get a whole computer.
In this case though, their image processing is just built on core OS X components, and it was less expensive to build this than to port it.
“it was less expensive to build this than to port it.”
Until your business grows. Oops.
(I am imgix’s datacenter manager)
You’re right, at some point we will reach an inflection point where it makes sense to port rather than continue developing these type of rack designs. Right now, the math doesn’t add up, but we’re flexible and nimble. When it makes sense, we’ll adjust our roadmap accordingly.
Without diving too deeply into financial details, this rack design is very cost effective for us and we’re happy with our current scale and growth rate.
Oh. Okay well that makes perfect sense.
OS X-based cloud providers are rare, but they do exist.
I bet you can rent a lot of machine time for the price of all that apple hardware.
Nobody remembers the G4? Nobody has seen universities doing this for years and years? I thought racking Macs was a thing of the late nineties? I don't see how this is a hack.
Very cool regardless. I love seeing Macs being used for their supercomputing power. Maybe it's time to take a trip down memory lane.
(I’m imgix’s datacenter manager)
I wouldn’t personally call it a hack, as a lot of thought and effort went into making this an operationally maintainable solution. I see it as a case of devising a solution tailored specifically to our needs using general purpose components, which does relate to the hacker ethos.
*lol* In Germany we have a word for this: "Fehlplanung" (bad planning)…
And the result of "Fehlplanung" is called "Misswirtschaft" (mismanagement) ;P
And do it a few times with Mac Minis and Mac Pros and you'll get a "Bankrott" (bankruptcy).
All I have to say about that topic:
This is why you don’t let code monkeys, especially Mac code monkeys, design your infrastructure.
This company is really good at solving the port-access and cooling issues of rack-mounted Mac Pros and Minis: http://www.mk1manufacturing.com
All manufacturing and assembly is local, and design iterations make it to the production line quite fast.
(I’m imgix’s datacenter manager)
I do like MK1’s products in general, and I used their rack shelves on the R1 design with Mac Minis. However, I just wasn’t that impressed by their Mac Pro offering, so I decided to go a different route. But MK1 is great, and if it makes sense I’d be happy to use their products again in the future.
I’ll take a few of the machines that are being replaced! :)
Something that wasn’t really mentioned above is just how much processing power (CPU & GPU) you get from a Mac Pro relative to its physical size. Try finding another professional, commercial workstation that offers anywhere near the processing power of a current Mac Pro in anywhere near as small a package.
This is in a “Server” Rack. In general, workstations have no place here (except where required for licensing purposes as previously mentioned). By all means you can build your own more powerful workstation than a mac pro. But why bother with workstations if you don’t have to especially when you can get the built-in redundancy and reliability, not to mention more power, with enterprise class hardware.
It wasn’t mentioned because they’re stuffing desktops into a rack instead of using servers. I can’t imagine this solution will scale better, or that it’s a better idea than just rewriting their libraries to work on standard (GPU-accelerated) hardware.
Small is relative, and in this case they are just funky-shaped systems that are completely un-expandable, un-upgradeable, and non-user-repairable — three things that should NEVER be in a server rack (which is why they’re ditching a rack full of Minis). I’d like to see the spreadsheet that compared the pricing/performance of Mac Pros AND the time and material to custom rack-mount them versus, you know, real servers. My guess: it was designed by some Mac fanboy that didn’t even look at real options, ’cause you know, Macs.
Sorry, I’m afraid that I can’t share my actual numbers publicly. However, I can share these numbers, which are based solely on list price of the hardware:
Mac Pro config (4 systems in a 4U chassis):
– 4x Mac Pro ($4600)
– Intel E5-1650 v2
– 16GB RAM
– 256GB SSD
– 2 x D700
– Our custom chassis
Capex only: $0.70 per gflop
Linux config (4 systems in a 4U chassis):
– SuperMicro F627G2-FT+ ($4900)
– 4x Intel E5-2643 v2 – 1 CPU each ($1600)
– 8x 8GB DIMMs – 16GB each ($200)
– 8x 500GB 7200rpm (RAID1) HDD – 500GB RAID1 boot drive ($300)
– 8x AMD FirePro S9050 – dual GPU ($1650)
Capex only: $1.03/gflop
AWS config:
– g2.2xlarge @ 3 year reserved pricing discount ($7410)
Instance operating cost only: $3.23/gflop
(originally posted by me at https://news.ycombinator.com/item?id=9510610)
The chassis we designed is very cost effective — remember: it’s just sheet metal, it’s entirely passive.
Also, Mac Pros do have easily serviceable memory and storage components. The other components are accessible and serviceable, but they do require some hardware expertise (which we have).
Before I even looked at these numbers I was doing it up in my head and thinking “yep, the Macs are cheaper to operate long term”. I think you made the right choice.
Sounds more like your vendor is ripping you off if these are the numbers you get.
List prices are the prices advertised to the public before any negotiation has occurred. As you might imagine, most businesses do not pay list price.
The F627G2-FT is 2P-capable => 8 CPUs in a 4U rack, twice that of 4 Mac Pros in 4U, and the cost/gflop numbers are suddenly completely different.
Run it a few years and you'll get the unavoidable hardware failures (fans, HDDs, GPUs, RAM, and occasionally a CPU), and you'll love the Supermicros, since a hardware failure (except for RAM) in a Mac Pro => put trashcan into bigger trashcan.
We use SuperMicro FatTwins for storage, app and database servers. Not that exact model, but that’s why I used it in my example. They are pretty good overall.
Our work is not CPU bound, it’s GPU bound. So double the CPUs does not increase flops (for us).
Here is the only reason you need to know: https://angel.co/imgix
Cool 5 mil of that sweet sweet VC money to burn.
besides every graphics designer knows Macs are best at graphics, DUH!
It is really simple: if you have to use Macs to achieve performance, then what you are running is designed incorrectly!
Linux is far more performant; you only have to look at the TOP500 list of supercomputers to see that (more than 94% run Linux).
One Mac is so professional and so creative for graphics! So how about 44 Macs? Godlike :)
Resizing images is so creative !
You guys have definitely never heard of http://www.opencompute.org/ and are missing what the whole web-service business is about. So you got the proof of concept working on Macs, fine, but if you scale that up rather than rewriting the whole system for a more capable, faster, cost-effective, reliable platform, then you will fail. It may be stylish to haul garbage around with a fleet of Mini Cooper S convertibles, but it's not the brightest idea. And to be honest, nobody in the web-service business is crazy about your build, because you are not in the same business as they are. (You guys are bringing a knife to a gunfight.)
Hopefully someone still reading this blog. Is the 4-MacPro chassis a custom-built chassis? Or can it be purchased somewhere? I am looking to move the MacPros in my rack from shelves to racks. All I have seen only allow 2 MacPros per 4U.