Back in 2018, Microsoft began Project Natick, deploying a custom-designed data center to the sea floor off the coast of Scotland. The project, which aimed to determine whether the underwater environment would bring benefits in energy efficiency, reliability, and performance, was spawned during ThinkWeek in 2014, an event designed to share and explore unconventional ideas.
This week, Microsoft reported that the project had been a success. The Northern Isles data center was recently lifted from the ocean floor in a day-long operation, and teams swooped in to analyse the hardware. The results coming out of the project are surprisingly impressive.
Perhaps the most interesting statistic to come out of the project is the sheer reliability of the hardware. The underwater servers were eight times more reliable than those in a replica data center built on land. The leading hypothesis attributes this astonishing number to the dry nitrogen atmosphere within the underwater pod, which reduces corrosion, and to the absence of meddling humans who might accidentally bump or damage hardware in regular land-based operations.
The team also collected samples of the atmosphere within the tube, aiming to determine the effect of outgassing from cables and other equipment inside. With a better understanding of the factors that led to such a large improvement in reliability, the Project Natick team hopes to improve reliability in traditional data centers, too.
But you can seal racks of servers in nitrogen anywhere, so why sink them to the bottom of the ocean? Cooling costs can be much reduced, as the low temperature of the sub-surface seawater provides ample cooling capacity, something simply not available on land.
Terrestrial data centers use large amounts of water, both indirectly due to electricity generation and directly in cooling applications. The underwater concept promises to reduce this immensely, with less energy required to keep the servers cool. The concept also allows servers to be located close to coastal cities for low latency and high performance, without requiring expensive inner-city real estate – a trend we’ve seen in other utilities, like the floating power generation under way in Brooklyn.
The underwater concept does have drawbacks, too. Hands-on maintenance is simply not possible, meaning any failed servers must be left offline. Additionally, deploying an underwater data center can be more difficult than building out a facility on land. Engineers also have to consider sea currents and potential damage from marine life or shipping, problems that don’t exist on land. It’s also possible that the build-up of barnacles or other marine detritus on an underwater pod could reduce cooling efficiency over time. While Microsoft’s two-year experiment only picked up a thin layer of algae and barnacles, this will vary widely with deployment location and duration.
Fundamentally, the huge improvement in reliability and the reduced energy costs go a long way to justifying the underwater data center concept. Lessons learned will benefit data centers on land, too. Ultimately, whether we see more servers deployed below the waves remains to be seen. The availability of land versus suitable underwater sites, and whether the reliability benefits can be replicated in conventional data centers, will decide how this technology develops further.
65 thoughts on “Underwater Datacenter Proves To Be A Success”
I expect reliability is also somewhat improved by the big metal can and liquid shielding reducing single-event upsets.
That is what I was thinking, then I remembered that Microsoft writes nondeterministic code, so how can you tell what really caused the problem? ;-)
Maybe a datacenter could be built next to a dam. Water from above the dam would be directed over the pod and then fall down below the dam. The water might not be as cold as the deep ocean but it would be flowing so the heat would get carried away rather than create a local warm spot. Also, there wouldn’t be so much build up on the outside to require a power washing.
Multiple pods could be used; each pod would have its own flow that could be turned off for servicing. I suppose the dam might as well be providing the electricity to power the servers too.
Put them behind the dam. The deep water is usually quite cold.
Behind the dam you still have to have a hoist to lift them back out and probably also clean them before maintenance. I’m picturing them being submerged in channels cut around the dam with some kind of gate to block the channel when needed for maintenance. No lifting required. Just close the gate, wait a moment for the water to drain away then you can pop the capsule open and begin maintenance.
maybe that would work short term but long term you would still have problems with cavitation/erosion of the surface. that’s why deep and quiet water would be better: no strong current to damage the surface of the pods.
Long term you won’t need them at all. Technology will change and everyone will wonder why you built all that expensive concrete and steel structure.
Biggest issue with that for me isn’t the flowing water and inevitable FOD impacts or rather high flow rate required but the fact there just isn’t enough geography to build hydro dams anyway. So as an idea even though it could be made to work it just doesn’t scale up for real deployment, where mastering dumping server farms in deep calm waters does.
If a company wanted to offer such services at an existing dam site, it would cost a small fortune to effectively rebuild the darn thing… The deep water on the other side makes more sense. If you really want frequent access, don’t build a sealed single pod that needs to surface – build in an airlock or tube to the surface… But personally I don’t see the use – remote config is long established, and the odd machine failure that can’t be remotely rectified can just be ignored. Over the deployment’s useful lifespan it doesn’t look like there will be enough failures to worry about; that would be like scrapping your SSD as soon as it uses a single one of its ‘spares’, or your HDD for a single bad block…
I can see some major benefits to locating data centers on transoceanic fiber routes as well. If a standard umbilical for power and data were designed, you could swap them easily. Add in a deflated balloon and some gas canisters to lift the data center on command, and swapping out DCs that require refurbishment or maintenance becomes simpler.
Which three letter agency did you say that you work for?
Well that’s a rather interesting new kind of DoSA when an attacker remotely lifts the data center.
 Destruction of Service Attack ;-)
A limpet mine would do nicely..
As in the old Don Knotts movie?
Well it’s certainly interesting, but it would technically work on any data centre. ;-)
Just trying to figure out how big the hydrogen sack would need to be in order to lift a server room or data centre… New use for a flotilla of zeppelins…
Heck, even a single rack unit fully populated will weigh hundreds of kg…
OK, seriously, despite the hype, I totally cannot see any real engineering advantage to burying your data center in the sea.
Assuming you don’t need to be out in the middle of the ocean for logistical reasons (like where two cables come together) the only real advantage I can see is cooling.
Which is really more an issue about having cold water than the actual location.
Assuming you *need* hundreds of kilowatts of cooling it means you need hundreds of kilowatts of power, which implies being close to shore where power delivery is reliable and easy. if you’re only a little offshore, then just move the thing onshore and pump the cold ocean water instead.
Yes, you use more energy, but it doesn’t take much water to carry off megawatts. The extra expense of pumping cooling water has got to pale in comparison to all the costs of building and maintaining a marine… anything.
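A quick back-of-the-envelope supports that claim. A minimal sketch, assuming water’s specific heat (~4186 J/(kg·K)) and a tolerable 10 K coolant temperature rise – both assumed values, just to illustrate the scale:

```python
C_P_WATER = 4186.0  # specific heat of water, J/(kg*K)

def coolant_flow_kg_per_s(heat_watts, delta_t_k=10.0):
    """Mass flow of water needed to carry away heat_watts
    while the coolant warms by delta_t_k kelvin."""
    return heat_watts / (C_P_WATER * delta_t_k)

flow = coolant_flow_kg_per_s(1e6)  # 1 MW of server heat
print(f"~{flow:.0f} kg/s (~{flow:.0f} L/s) of water per megawatt")
```

Call it roughly 24 litres per second per megawatt – a big pump, but hardly an exotic one.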
The marine environment is a hideously expensive place to do any function you can otherwise pull off in a dry warehouse, plus, in this case there’s the additional issue of having no physical access for 6 months at a time.
Apparently, because Google has somehow designed servers that never need any servicing, and connectors that always stay connected.
I’m willing to be wrong, but somehow I’m just not seeing this pencil out except for places where there is no other option.
I suspect the reality is that it is not practical. However the act of doing the impractical, working out the engineering, and completing the task may yield practical information. What worked well underwater may show us things that can make things work better here in the world of practical application.
These types of crazy applications are what helps us to learn new things.
There is also this: https://archiveprogram.github.com/
Is it practical to store software as images on film in the Arctic? Are there lessons to be learned by doing it that could be valuable? I bet there are.
Do you mean the same “Arctic” that is melting and won’t be there anymore in 20 years’ time?
They could have stored it inside a mountain, there are many bunkers already. It is just marketing.
I expect it’s also a matter of land costs. It might not cost much in America, or in the Siberian wasteland, but this kind of installation would allow them to put data centres near most major cities in the world without any land costs.
And cohabit with the offshore wind farms – a ready local supply of power, with the only requirement not already there by default being rather larger data lines…
Ultimately though it’s a good place to put them for other reasons too – water is great at absorbing just about everything that can harm a computer (the notable exception being water itself). So it’s a very cheap way of gaining that kind of hardness.
Build a data centre in the middle of Tokyo, NY, London, or Amsterdam and you’ll quickly find out how ridiculous land prices in top cities are.
Well, Google, having quite large data centres, clearly goes for cheap electricity combined with political stability. Only a fool would build a data centre in the aforementioned cities.
In electronics heat = wear. Cooling is not an afterthought, cooling is your application efficiency, cooling is your TCO, cooling is your reliability level. A huge improvement in cooling that reduces costs is no joke.
You don’t do functions to it in the water, so you don’t need to add that cost to the analysis. Connectors really do stay connected, if you never disconnect them. If you disconnect them once a month to wave a rubber chicken at them in the name of maintenance, then they’re loose and they do randomly disconnect.
I predict a new crime, data centre theft.
One thing about salt water is that it shields RF amazingly well, so basically forget about RF communications if it is more than a few meters (several feet) under water. So these data centers would need an internal ultrasonic buzzer to broadcast for several days after a theft. (I was going to say broadcast their “location”, but they can’t know their location – maybe they could broadcast their depth, though a pressure reading for depth would change depending on the salinity of the water, which changes with depth and location.) Or maybe some inflatable bags to raise it to the surface to find its location (GPS) and broadcast it with short-duration high-power burst transmissions – a burglar alarm.
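The “few meters” intuition checks out against the standard skin-depth formula for a good conductor. A minimal sketch, assuming a textbook seawater conductivity of about 4 S/m (an assumed typical value – real conductivity varies with salinity and temperature):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
SIGMA_SEAWATER = 4.0      # assumed seawater conductivity, S/m

def skin_depth(freq_hz, sigma=SIGMA_SEAWATER):
    """Depth at which the field amplitude falls to 1/e in a good
    conductor: delta = sqrt(2 / (omega * mu * sigma))."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 / (omega * MU0 * sigma))

for f in (1e4, 1e6, 1e8):  # 10 kHz, 1 MHz, 100 MHz
    print(f"{f:>12.0f} Hz: skin depth ~ {skin_depth(f):.3f} m")
```

Even at 10 kHz the signal is down to 1/e in about 2.5 m of seawater, and at 100 MHz in a couple of centimetres – hence acoustic links or surfacing buoys.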
I can see a plot point in some future film where a group of UAVs detach the cables, disconnect the anchoring and weld on an upside down hydrofoil so that the tank can be dragged away by a boat, while remaining at the bottom of the ocean to block RF.
I would guess about 200 servers at 20k a pop – does 4 million sound about right? Well, they have been depreciated over 2 years, so probably worth a hell of a lot less than that by now.
I suspect that the real win is going to be some kind of tax avoidance with these “go anywhere” data centers, anchored in international waters between multiple countries.
*ponder* RATES … Redundant Array of Taxfree Expensive Servers.
One thing is certain though they will need to be fully upgraded every 2 years, just because of failed parts.
I guess an even better reason for stealing a datacenter would be the data stored there….
In the image above I’m not seeing much in the way of data storage. Normally on a storage system you would typically see something like 14 hot-swappable SAS (Serial Attached SCSI) disk drives per shelf, and then multiple disk shelves connected by SFP (Small Form-factor Pluggable) transceiver modules.
I’m not seeing anything like that in the images above, so my suspicion is that it was actually a data processing center and not a data storage center. Anything with moving parts basically fails and needs to be replaced. So a data storage center under the ocean, where you can lose all your data if there is a plumbing fault – not so useful.
>> I predict a new crime, data centre theft.
How about data center attacks that aim to break things in a way that cannot be fixed remotely, that seems like a simple route to blackmail.
Of course, I suppose if someone locked up your pod, you could try for the ultimate reboot, by just cycling power to the box. You’d probably have to let it sit there for the better part of a day, though, because any engineer worth his salt would have big battery backup in there.
Brings a whole new meaning to “data pirates!”
“No, no, you don’t understand, the pirates boarded the data center and towed your IP to an undisclosed location.”
Lends new meaning to the phrase, “whatever floats your boat”.
it’s kinda more like “Whatever sinks your servers”
They could paint them with copper paint. It does a good job on salt water vessels of keeping stuff from growing on it.
Yes, but copper coatings/paints for marine vessels are illegal.
How about a copper ship?
It is only the paint, copper cladding is fine. But monel is recommended, that’s a nickel and copper alloy.
Only in Washington state, and only on vessels under 65 feet.
« Hands-on maintenance is simply not possible, meaning any failed servers must simply be left offline. »
If given the budget, I’d feel pretty confident building a robot that can do this work, as long as all connectors are selected with this in mind too. The cost of the robot might not be low enough, though, so just leaving the servers be might be the most sensible way to go…
While possible, I don’t think it makes sense. If you have some type of robot that can swap parts of the system out, you still have to store the spares and have space for the mechanism to function. Get rid of the robot and add a “couple more plugs” so the spares can be brought online in the same slot they are being stored in…
A maintenance robot would only need a very small access port, it doesn’t need to live inside the data center.
And the robot could have a parts bay, like Bender.
so they can’t just ssh into the machine and run ‘sudo systemctl restart X’? That’s gotta suck.
Why can’t they?
I saw another article that mentioned the nitrogen environment was pressurized. I think that might make it a more efficient heat-transfer fluid. If they pressurized it to balance with external water pressure at a depth of 30 ft, that’s an extra atmosphere of absolute pressure, so the gas is twice as dense and has twice the heat capacity for a given volume. The viscosity of a gas doesn’t seem to change much with pressure.
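The density-doubling claim follows directly from the ideal gas law. A minimal sketch, assuming pure nitrogen at roughly 15 °C (both assumed values – the actual pod pressure and temperature weren’t published here):

```python
R_SPECIFIC_N2 = 296.8  # specific gas constant for N2, J/(kg*K)
TEMP_K = 288.0         # assumed gas temperature, ~15 C

def n2_density(pressure_pa, temp_k=TEMP_K):
    """Nitrogen density from the ideal gas law: rho = p / (R_s * T)."""
    return pressure_pa / (R_SPECIFIC_N2 * temp_k)

ATM = 101_325.0                 # one standard atmosphere, Pa
rho_1atm = n2_density(ATM)      # sealed at surface pressure
rho_2atm = n2_density(2 * ATM)  # balanced with seawater roughly 10 m down
print(f"1 atm: {rho_1atm:.2f} kg/m^3, 2 atm: {rho_2atm:.2f} kg/m^3")
print(f"ratio: {rho_2atm / rho_1atm:.1f}x")
```

At fixed temperature, density – and hence heat capacity per unit volume – scales linearly with absolute pressure, so doubling the pressure doubles the thermal mass the fans can push around.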
Why not just add “water cooling”?
Like, the traditional kind, but with the radiator in the ocean, rather than the air.
They get sufficient cooling without it. It’s more hardware to fail and biologicals like to grow on your radiators and clog them up.
ya, because tons of cooling fans is the best way to go because cooling fans never fail right?
You don’t need forced water or air cooling at all. There are probably heatpipes to improve the heat flow from the hottest components, but that would be it.
How do you replace a broken hard drive or a network card in that metal can?! I guess you don’t – you just wait until enough stuff breaks, then lift it out for repair. Would make me feel better knowing the servers are running Linux/BSD instead of some microsht product. Also I wonder why it doesn’t use the seawater as a form of watercooling, flowing through waterblocks installed in the servers.
“Waiting until enough stuff breaks before extracting the pod” is exactly what they would do. It’s not particularly different from other datacentres that are constantly deploying new machines – replacing an individual failed unit isn’t worth the labour it costs. Instead you just leave them in situ and replace whole racks/pods at a time.
Microsoft should move all of its data centers under water.
How would it compare to just building datacenters in very cold climates?
The problem there is one of latency.
Why don’t they just set up the servers in Canada. Free cooling.
but no infrastructure…
The real take away is not putting your data center underwater, but simply putting it in a nitrogen atmosphere. Would be pretty cool to put on a space suit or SCBA gear to swap out a 1U server.
Do you even need nitrogen? What if you seal an enclosure with a ton of silica gel? Is it the moisture that’s the problem?
Propane regulators go down to very low pressures; you could probably rig an ammo can to hold 0.5 psi, with gaskets and silicone behind the connectors, for a real nitrogen system.
Reminds me of hydrogen fuel cell powered cars. Just because you CAN do it doesn’t mean there’s a valid reason TO do it.
What was the reason for this besides marketing points? They should be able to simulate the heat transfer scenarios and even do small-scale validation tests if needed. Material properties are well known, along with their heat transfer capabilities, underwater cables, etc.
This is the new version of Geraldo Rivera’s Al Capone’s Vault show but without any drama at all.
Q: How to increase reliability of equipment?
A: Put it somewhere deep and far, where nobody can mess with it.
Can the same be done with decommissioned nuclear submarines as offshore power plants? 🤔🤔
Would be cool to see the results of the same experiment carried out by a competent company.
Microsoft’s approach would be to cram code, fire those that understood and proudly wave the banner of ineptitude.
Unless the salty water could somehow be used to produce the electricity needed to power it, I don’t see the benefits of this beyond what you could do with a freshwater install, or simply using geothermal to harness the 55 °F temperature of the earth 8 to 12 feet down in conjunction with a heat pump. Then you could still get access to the servers for physical maintenance.
If you have plenty of these data centers in the location, will they at some point warm up the water and destroy the ecosystem?
No, water circulates, in a phenomenon I think is called “currents”, so hot water rises to the surface and disperses so wide it can’t impact much, and cold water is pulled from the bottom to replace the hot water. This should very efficiently solve the issue of datacenter cooling.
If I was designing this, I would make minimalist submarines, that can automatically come back to the surface when asked to, for repairs/upgrades/inspection, then go back down to the ocean floor and start doing their work again.
If this was done off the coast of NYC, I expect it’d be profitable and bring a lot of latency advantages to people/bots like high speed traders, meaning this could be sold at a very high price to financial institutions/companies.
It’s very possible hiring scuba-diving server technicians would be a much cheaper option though. Want us to look into that option?
Anyone remember Sun Microsystems’ Project Blackbox? They were quite mobile and reliable. Just not waterproof )
Why not use the excess heat from the cooling for the hot water supply of a city? A computer is essentially an electric radiator converting electricity to heat; instead of wasting it they could have used it…