Underwater Datacenter Proves To Be A Success

Back in 2018, Microsoft's Project Natick team deployed a custom-designed data center to the sea floor off the coast of Scotland, aiming to determine whether the underwater environment would bring benefits in energy efficiency, reliability, and performance. The project was spawned during ThinkWeek in 2014, an event designed to share and explore unconventional ideas.

This week, Microsoft reported that the project had been a success. The Northern Isles data center was recently lifted from the ocean floor in a day-long operation, and teams swooped in to analyse the hardware. The results coming out of the project are surprisingly impressive.

Smooth Sailing

Deploying and retrieving the Northern Isles data center each took about a day.

Perhaps the most interesting statistic to come out of the project is the sheer reliability of the hardware. The underwater servers were eight times more reliable than those in a replica data center built on land. The leading hypothesis for this astonishing number credits the dry nitrogen atmosphere within the underwater pod, which reduces corrosion, as well as the absence of meddling humans who might accidentally bump or damage hardware in regular land-based operations.

The team also collected samples of the atmosphere within the tube, aiming to determine the effect of outgassing from cables and other equipment inside. With a better understanding of the factors that led to such a large improvement in reliability, the Project Natick team hopes to improve reliability in traditional data centers, too.

Servers in the data center are retrieved for further study to determine the beneficial effects of underwater operation.

But you can seal racks of servers in nitrogen anywhere, so why sink them to the bottom of the ocean? Cooling costs can be much reduced, as the low temperature of the sub-surface seawater provides ample cooling capacity, something simply not available on land.

Terrestrial data centers use large amounts of water, both indirectly due to electricity generation and directly in cooling applications. The underwater concept promises to reduce this immensely, with less energy required to keep the servers cool. The concept also allows servers to be located close to coastal cities for low latency and high performance, without requiring expensive inner-city real estate, a trend we’ve seen in other utilities like the floating power generation under way in Brooklyn.

The Project Natick data center was power washed to remove the layer of algae, barnacles, and other detritus that built up over its two-year stint underwater.

The underwater concept does have drawbacks, too. Hands-on maintenance is simply not possible, meaning any failed servers must simply be left offline. Additionally, deploying an underwater data center can be more difficult than building out a facility on land. Engineers also have to consider sea currents and potential damage from marine life or shipping, problems that simply don’t exist on land. It’s also possible that the build-up of barnacles or other marine detritus on an underwater pod could reduce cooling efficiency over time. While Microsoft’s two-year experiment only resulted in a thin layer of algae and barnacles, this will differ widely with deployment location and duration.

Fundamentally, the huge improvement in reliability and the reduced energy costs go a long way towards justifying the underwater data center concept, and lessons learned will benefit data centers on land, too. Ultimately, whether we see more servers deployed below the waves is yet to be decided. The availability of land versus suitable underwater sites, and whether the reliability benefits can be replicated in conventional data centers, will be the deciding factors in how this technology develops further.

65 thoughts on “Underwater Datacenter Proves To Be A Success”

  1. Maybe a datacenter could be built next to a dam. Water from above the dam would be directed over the pod and then fall down below the dam. The water might not be as cold as the deep ocean but it would be flowing so the heat would get carried away rather than create a local warm spot. Also, there wouldn’t be so much build up on the outside to require a power washing.

    Multiple pods could be used, and each pod would have its own flow that could be turned off for servicing. I suppose the dam might as well be providing the electricity to power the servers too.

      1. Behind the dam you still have to have a hoist to lift them back out and probably also clean them before maintenance. I’m picturing them being submerged in channels cut around the dam with some kind of gate to block the channel when needed for maintenance. No lifting required. Just close the gate, wait a moment for the water to drain away then you can pop the capsule open and begin maintenance.

        1. Maybe that would work short term, but long term you would still have problems with cavitation/erosion of the surface. That’s why deep and quiet water would be better: no strong current to damage the surface of the pods.

        2. Biggest issue with that for me isn’t the flowing water and inevitable FOD impacts, or the rather high flow rate required, but the fact there just isn’t enough geography to build hydro dams anyway. So as an idea, even though it could be made to work, it just doesn’t scale up for real deployment, where mastering dumping server farms in deep calm waters does.

          If a company wanted to offer such services at an existing dam site it would cost a small fortune to effectively rebuild the darn thing… The deep water on the other side makes more sense, and if you really want frequent access, don’t build a sealed single pod that needs to surface, build in an airlock or tube to the surface. But personally I don’t see the use – remote config is long established, and the odd machine failure that can’t be remotely rectified can just be ignored. Over the deployment’s useful lifespan it doesn’t look like there will be enough failures to worry about; that would be like scrapping your SSD as soon as it uses a single one of its ‘spares’, or your HDD for a single bad block.

  2. I can see some major benefits to locating data centers on transoceanic fiber routes as well. If a standard umbilical for power and data were designed, you could swap them easily. Add in a deflated balloon and some gas canisters to lift the data center on command, and swapping out DCs that require refurbishment or maintenance becomes simpler.

        1. Just trying to figure out how big the hydrogen sack would need to be in order to lift a server room or data centre… New use for a flotilla of zeppelins…
          Heck, even a single rack unit fully populated will be in the hundreds of kg…
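
          Rough numbers, just to put it in perspective: under water the lift comes from the seawater the bag displaces, not from the gas being lighter than air, so hydrogen buys you almost nothing over plain compressed air. A quick sketch, assuming seawater at roughly 1025 kg/m³; the net weights below are guesses for illustration, not Natick figures:

            # Back-of-envelope lift from gas bags submerged in seawater.
            # Illustrative values only; the net weights are guesses, not Natick specs.
            rho_seawater = 1025.0   # kg/m^3, approximate density of seawater
            rho_gas = 1.2           # kg/m^3, air at the surface (gas at depth is denser, but still negligible)

            lift_per_m3 = rho_seawater - rho_gas   # kg of lift per cubic metre of bag volume

            for net_weight_kg in (500, 5_000, 50_000):   # a rack, a small pod, a large pod (guesses)
                volume_m3 = net_weight_kg / lift_per_m3
                print(f"{net_weight_kg:>6} kg of net weight needs about {volume_m3:.1f} m^3 of gas")

          So zeppelin-scale gas volumes only come into it if you want the thing to fly afterwards.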

  3. OK, seriously, despite the hype, I totally cannot see any real engineering advantage to burying your data center in the sea.

    Assuming you don’t need to be out in the middle of the ocean for logistical reasons (like where two cables come together) the only real advantage I can see is cooling.

    Which is really more an issue about having cold water than the actual location.

    Assuming you *need* hundreds of kilowatts of cooling it means you need hundreds of kilowatts of power, which implies being close to shore where power delivery is reliable and easy. If you’re only a little offshore, then just move the thing onshore and pump the cold ocean water instead.

    Yes, you use more energy, but it doesn’t take much water to carry off megawatts (see the rough numbers at the end of this comment). The extra expense of pumping cooling water has got to pale in comparison to all the costs of building and maintaining a marine… anything.

    The marine environment is a hideously expensive place to do any function you can otherwise pull off in a dry warehouse, plus, in this case there’s the additional issue of having no physical access for 6 months at a time.

    Apparently, because Microsoft has somehow designed servers that never need any servicing, and connectors that always stay connected.

    I’m willing to be wrong, but somehow I’m just not seeing this pencil out except for places where there is no other option.
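
    To put some rough numbers on the “doesn’t take much water” point above, here’s a quick sketch using the approximate specific heat of seawater and an assumed 10 °C allowable coolant temperature rise; both figures are generic assumptions, not anything from the project:

      # Rough seawater flow needed to carry away a given heat load,
      # assuming a 10 K coolant temperature rise (illustrative only).
      cp_seawater = 3990.0    # J/(kg*K), approximate specific heat of seawater
      rho_seawater = 1025.0   # kg/m^3, approximate density of seawater
      delta_T = 10.0          # K, assumed allowable temperature rise

      for load_mw in (0.25, 1.0, 5.0):
          mass_flow = load_mw * 1e6 / (cp_seawater * delta_T)   # kg/s
          vol_flow_lps = mass_flow / rho_seawater * 1000.0      # litres per second
          print(f"{load_mw:>4} MW -> roughly {vol_flow_lps:.0f} L/s of seawater")

      # Roughly 25 L/s per megawatt at a 10 K rise -- a modest pump by
      # industrial standards, which is the point being made above.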

    1. I suspect the reality is that it is not practical. However, the act of doing the impractical, working out the engineering, and completing the task may yield practical information. What worked well underwater may show us things that can make things work better here in the world of practical application.

      These types of crazy applications are what helps us to learn new things.

      There is also this: https://archiveprogram.github.com/
      Is it practical to store software as images on film in the Arctic? Are there lessons to be learned by doing it that could be valuable? I bet there are.

      1. Do you mean the same “Arctic” that is melting and won’t be there anymore in 20 years’ time?
        They could have stored it inside a mountain, there are many bunkers already. It is just marketing.

    2. I expect it’s also a matter of land costs. It might not cost much in America, or in the Siberian wasteland, but this kind of installation would allow them to put data centres near most major cities in the world without any land costs.

      1. And cohabitate with the offshore wind farms – a ready local supply of power, with the only requirement not already there by default being rather larger data lines.

        Ultimately though it’s a good place to put them for other reasons too – water is great at absorbing just about everything that can harm a computer (the notable exception being water itself). So it’s a very cheap way of gaining that kind of hardness.

    3. In electronics heat = wear. Cooling is not an afterthought, cooling is your application efficiency, cooling is your TCO, cooling is your reliability level. A huge improvement in cooling that reduces costs is no joke.

      You don’t do functions to it in the water, so you don’t need to add that cost to the analysis. Connectors really do stay connected, if you never disconnect them. If you disconnect them once a month to wave a rubber chicken at them in the name of maintenance, then they’re loose and they do randomly disconnect.

  4. I predict a new crime, data centre theft.

    One thing about salt water is that it shields RF amazingly well, so basically forget about RF communications if it is more than a few meters (several feet) under water. So these data centers would need an internal ultrasonic buzzer to broadcast for several days after theft (I was going to say their “location”, but they can not know their location; maybe they could broadcast their depth, although a pressure reading for depth would change depending on the salinity of the water, which changes with depth and location). Or maybe some inflatable bags to raise it to the surface to find its location (GPS) and broadcast it with short duration high burst transmissions, a burglar alarm.

    I can see a plot point in some future film where a group of UAVs detach the cables, disconnect the anchoring and weld on an upside down hydrofoil so that the tank can be dragged away by a boat, while remaining at the bottom of the ocean to block RF.

    I would guess about 200 servers at 20k a pop; does 4 million sound about right? Well, they have been depreciated over 2 years, so probably worth a hell of a lot less than that by now.

    I suspect that the real win is going to be with some kind of TAX avoidance with these “Go anywhere” data centers, anchored in international waters between multiple countries.

    *ponder* RATES … Redundant Array of Taxfree Expensive Servers.

    One thing is certain though: they will need to be fully upgraded every 2 years, just because of failed parts.

      1. In the image above I’m not seeing much in the way of data storage; normally on a storage system you would typically see something like 14 hot-swappable SAS (Serial Attached SCSI) disk drives per shelf, and then multiple disk shelves connected by SFP (Small Form-factor Pluggable) transceiver modules.

        I’m not seeing anything like that in the images above. So my suspicion is that it was actually a data processing center and not a data storage center. Anything with moving parts basically fails and needs to be replaced. So a data storage center under the ocean, where you can lose all your data if there is a plumbing fault, not so useful.

    1. >> I predict a new crime, data centre theft.

      How about data center attacks that aim to break things in a way that cannot be fixed remotely? That seems like a simple route to blackmail.

      Of course, I suppose if someone locked up your pod, you could try for the ultimate reboot, by just cycling power to the box. You’d probably have to let it sit there for the better part of a day, though, because any engineer worth his salt would have big battery backup in there.

  5. « Hands-on maintenance is simply not possible, meaning any failed servers must simply be left offline. »

    If given the budget, I’d feel pretty confident building a robot that can do this work, as long as all connectors are selected with this in mind too. The cost of the robot might not be low enough to justify it, though, so just leaving the servers be might be the most sensible way to go…

    1. While possible, I don’t think it makes sense. If you have some type of robot that can swap parts of the system out, you still have to store the spares and have space for the mechanism to function. Get rid of the robot and add a “couple more plugs” so the spares can be brought online in the same slot they are being stored in…

  6. I saw another article that mentioned the nitrogen environment was pressurized. I think that might make it a more efficient heat transfer fluid. If they pressurized it to balance with external water pressure at a depth of 30ft, that’s an extra atmosphere of absolute pressure so the air is twice as dense and has twice the heat capacity for a given volume. It doesn’t seem like the viscosity of the gas changes much with pressure changes.
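
    A quick ideal-gas sanity check of that (the 2 atm figure is just the 30 ft assumption above, not a published Natick number):

      # Ideal-gas estimate of nitrogen density and volumetric heat capacity
      # at 1 atm vs 2 atm absolute (illustrative, not published Natick figures).
      R = 8.314        # J/(mol*K), universal gas constant
      M_N2 = 0.028     # kg/mol, molar mass of nitrogen
      cp_N2 = 1040.0   # J/(kg*K), specific heat of nitrogen near room temperature
      T = 293.0        # K, assumed gas temperature

      for atm in (1.0, 2.0):
          p = atm * 101_325.0        # Pa, absolute pressure
          rho = p * M_N2 / (R * T)   # kg/m^3, ideal gas law
          print(f"{atm:.0f} atm: density {rho:.2f} kg/m^3, "
                f"volumetric heat capacity {rho * cp_N2:.0f} J/(m^3*K)")

      # Doubling the absolute pressure doubles the density, and with it the heat
      # a given volume of gas carries per degree of temperature rise; viscosity
      # barely changes, as noted above.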

  7. How do you replace a broken hard drive or a network card in that metal can?! I guess you don’t just wait until enough stuff breaks, then lift it out for repair. Would make me feel better knowing the servers are running Linux/BSD instead of some microsht product. Also, I wonder why it doesn’t use the seawater as a form of water cooling, flowing through waterblocks installed in the servers.

    1. “Waiting until enough stuff breaks before extracting the pod” is exactly what they would do. It’s not particularly different from other datacentres that are constantly deploying new machines – replacing an individual failed unit isn’t worth the labour it costs. Instead you just leave them in situ and replace whole racks/pods at a time.

  8. The real takeaway is not putting your data center underwater, but simply putting it in a nitrogen atmosphere. Would be pretty cool to put on a space suit or SCBA gear to swap out a 1U server.

    1. Do you even need Nitrogen? What if you seal an enclosure with a ton of silica gel? Is it the moisture that’s the problem?

      Propane regulators go down to very low pressures, so you could probably rig an ammo can to hold 0.5 psi, with gaskets and silicone behind connectors for a real nitrogen system.

  9. Reminds me of hydrogen fuel cell powered cars. Just because you CAN do it doesn’t mean there’s a valid reason TO do it.
    What was the reason for this besides marketing points, since they should be able to simulate the heat transfer scenarios and even do small-scale validation tests if needed? Material properties are well known, along with their heat transfer capabilities, underwater cables, etc.

    This is the new version of Geraldo Rivera’s Al Capone’s Vault show but without any drama at all.

  10. Would be cool to see the results of the same experiment carried out by a competent company.
    Microsoft’s approach would be to cram code, fire those that understood and proudly wave the banner of ineptitude.

  11. Unless the salty water could somehow be used to self-produce the electricity required to power it, I don’t see the benefits of it beyond what you could do with a freshwater install, or simply using geothermal to harness the 55 °F temperature of the earth 8 to 12 feet down in conjunction with a heat pump. Then you could still get access to the servers for physical maintenance.

    1. No, water circulates, in a phenomenon I think is called “currents”, so hot water rises to the surface and disperses so wide it can’t impact much, and cold water is pulled from the bottom to replace the hot water. This should very efficiently solve the issue of datacenter cooling.

    2. If I was designing this, I would make minimalist submarines, that can automatically come back to the surface when asked to, for repairs/upgrades/inspection, then go back down to the ocean floor and start doing their work again.

      If this was done off the coast of NYC, I expect it’d be profitable and bring a lot of latency advantages to people/bots like high speed traders, meaning this could be sold at a very high price to financial institutions/companies.

      It’s very possible hiring scuba-diving server technicians would be a much cheaper option though. Want us to look into that option?

      Cheers!

  12. Why not use the excess heat from the cooling for a city’s hot water supply? A computer is essentially an electric radiator converting electricity to heat; instead of wasting it, they could have used it…
