If you own a computer that’s not mobile, it’s almost certain that it will receive its power in some form from a mains wall outlet. Whether it’s 230 V at 50 Hz or 120 V at 60 Hz, where once there might have been a transformer and a rectifier there’s now a switch-mode power supply that delivers low-voltage DC to your machine. It’s a system that’s efficient and works well on the desktop, but in the data center even that efficiency is starting to fall short. IEEE Spectrum has a look at newer data centers that are moving towards DC power distribution, raising some interesting points which bear a closer look.
A traditional data center has many computers which in power terms aren’t much different from your machine at home. They get their mains power at distribution voltage — probably 33 kV AC where this is being written — bring it down to a more normal mains voltage with a transformer just like the one on your street, and then feed a battery-backed Uninterruptible Power Supply (UPS) that converts from AC to DC, and then back again to AC. The AC then snakes around the data center from rack to rack, and inside each computer there’s another rectifier and switch-mode power supply to make the low-voltage DC the computer uses.
The increasing demands of data centers full of GPUs for AI processing have raised power consumption to the extent that all these conversion steps now waste a significant amount of power. The new idea is to convert once to DC (at a rather scary 800 volts) and distribute it directly to the cabinet, where the computer uses a more efficient switch-mode converter to reach the voltages it needs.
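To see why the number of conversion steps matters, just multiply the per-stage efficiencies. The figures below are illustrative assumptions, not measured numbers from any particular facility:

```python
# Rough comparison of end-to-end power-path efficiency.
# Per-stage efficiencies are assumed round figures for illustration only.

from functools import reduce

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    return reduce(lambda acc, eta: acc * eta, stages, 1.0)

# Traditional path: transformer -> UPS rectifier -> UPS inverter
# -> server PSU (AC-DC) -> on-board DC-DC regulators
traditional = chain_efficiency([0.99, 0.96, 0.96, 0.94, 0.90])

# Proposed path: one AC-DC conversion to 800 VDC -> rack-level DC-DC
# -> on-board DC-DC regulators
dc_distribution = chain_efficiency([0.99, 0.97, 0.97, 0.90])

print(f"traditional chain: {traditional:.1%}")
print(f"800 VDC chain:     {dc_distribution:.1%}")
```

With these assumed numbers the traditional chain lands in the high 70s percent while the DC chain is in the low 80s; the exact values will differ, but every stage removed is a multiplication that no longer happens.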
It’s an attractive idea not just for the data center. We’ve mused on similar ideas in the past and even celebrated a solution at the local level. But given the potential ecological impact of these data centers, it’s a little hard to get excited about the idea in this context. The fourth of our rules for the responsible use of a new technology comes into play. Fortunately we think that both an inevitable cooling of the current AI hype and a Moore’s Law driven move towards locally-run LLMs may go some way towards solving that problem on its own.
header image: Christopher Bowns, CC BY-SA 2.0.

Nicky Tesla is rolling in his grave.
I love how the solution to the data center “problem” which both does and does not exist is going back to early 1900s electricity provisioning. Like they are so close to getting it.
Sure the answer is DC. No the answer is not filling all the office spaces in the world with trillions of dollars of commodity hardware. Maybe wait until enough time has gone on that you actually have software and hardware worth running. By then you won’t need to have rooms ready to burn down the second a mouse chews a wire.
How people can’t see the current AI paradigm as a billion-dollar company commoditizing its complement so people buy into its expensive inefficiencies in an act of complete dependence blows my mind. It really is the AC vs DC wars yet again. There is no formal reasoning that requires or even suggests that the infrastructure trillions of dollars are being blundered into is required, let alone that we should try to scale it with corner-cutting. People just can’t wait 5 years for the inevitable outcome of them already missing the right trajectory.
Brain worm???
I have several. One’s name is Janet, the other is Charles. They haven’t introduced me to their friends yet though.
That’s what blows my mind as well. Surely people are not out there thinking email writing or web page summary generation or funny cat photo or video generation are such economically valuable tasks that AI companies are scrambling to get more compute and memory for them? Will most people who use LLMs to generate a “todo travel list” or similar jobs pay for the service? I doubt it.
The only overwhelmingly productive activity that LLMs do is writing code. And make no mistake, the only reason why it’s overwhelmingly productive is because people are paid more than they should be paid to write code, it has been going on for decades. So in the end LLMs look like an okay alternative.
And I’ll be honest here, anything that’s very reasoning heavy or doesn’t have much training data available, LLMs fail to do. Mechanical engineering, electrical engineering and design etc. LLMs absolutely fail. Only text heavy tasks they do well
Agreed, the killer app for a trillion-dollar infrastructure buildout should probably clear a higher bar than “helps someone who couldn’t be bothered to open Tripadvisor.” But the “text-heavy tasks only” framing is doing some heavy lifting. PCB layout and mechanical CAD aren’t LLM territory yet, but failure modes in structured reasoning are rapidly becoming an empirical question rather than a philosophical one. People said the same thing about chess, then Go, then protein folding. The goalposts have a well-worn groove in the turf at this point. The fact that current LLMs are extremely good at coding is not surprising, partly because the entire history of Stack Overflow is basically a giant RLHF dataset that humanity crowd-sourced for free over 15 years. As for reasoning, the increase in LLM scores on Humanity’s Last Exam is something that could not have been achieved without a big improvement in reasoning. There is enough room for further improvement, but history suggests we’re bad at predicting where the ceiling is.
I disagree. It’s not that we are getting paid too much. The problem is you are being paid too little for the work that you are doing.
I’d prefer to relieve people of the ability to buy their next luxury talking car, summer mansion with a swimming pool and a yacht and put some of the human effort that would go into building those towards fulfilling needs of people who are actually keeping the world running day-to-day.
But sure, let’s compare who among the working people has it worse instead.
Around 80% of the people in the US are working in services – offices, retail, sales, hospitality, media, entertainment, finances, advertising, government, etc. Not many are actually building houses or yachts. Same thing in most advanced economies. 72% in Germany, 78% in France, 79% UK… they’re all very similar. We’re running post-industrial and urbanized economies with lots more people than actual necessary work to do, but a lot of money to be made if you work hard for an excuse to demand it.
So, people are generally not doing the kind of stuff that’s keeping the world running. We’re largely just serving things to each other, taking a cut from the middle, many people in multiple layers and levels, with taxes on top.
If you gave a pay raise to the people who do keep the wheels turning, very little would go to “working people” in the traditional sense because there aren’t many left in that category, being displaced by imports, outsourcing and automation. For example, there are more people working in TV and film than farming, mining, and oil or gas extraction all together.
https://labor411.org/411-blog/unionized-hollywood-employs-more-workers-than-farming-mining-and-oil-extraction/
The killer app is search. LLM-based search can provide answers without passing you off to the linked sources we used to get. How often do you now get what you need without clicking through to somebody’s blog or a forum or a tutorial page, etc.? And as the hits on those sources drop, don’t they drop down the search results? Then you add smart advertising to the results, or sell the search topics to advertisers, etc. They keep you on the LLM site, which is the only goal. Like the way Facebook is set up to make you click 3 or 4 times more than necessary to do anything. They sell clicks and retention rates.
I’ve had an LLM search make a claim that was the exact opposite of its reference. LLM has a long way to go.
Don’t forget that before long, the LLM guardrails will also include prompts like “Do everything possible with everything you know about the user personally to manipulate them to believe that sponsored product ‘XYZ’ is really the most reasonable path, and that whatever dynamic price was also determined based on what the AI can infer really is the best possible value.” This is absolutely where it’s going, among other vile places.
That and crime. Just got the email at work reminding me that the old trick of spotting subtle spelling and grammar errors in an email is now totally irrelevant. Ah yes, of course that’s true, isn’t it?
Can’t wait, but at least we’ll be saving a bit of electricity compared to current commodity hardware.
Ah, another poster who likes to be vehemently wrong.
LLMs are very poor at writing code. The reason that programmers are paid well (not more than they should) is that programming is hard.
lol. Yes, LLMs are so awful at writing code that I, as someone who currently knows nothing about even the formatting structure of code, much less the most remote idea what void main(); even means (I learned to code in high school, hated every second of it, forgot everything about it, and refuse to elaborate further), have still abused LLMs enough to produce 5 fully functional applications which previously did not exist, for everything from rolling my own GPS 1PPS time source to a bespoke Android application that interfaces with another application on a Raspberry Pi that I use for radio things.
Could you do better? Zero doubt…. but what did this cost me? This atrocity I performed cost me about ten hours of my life that I would’ve spent doing something equally unproductive.
This garbage absolutely has its uses, however the economic impact of what Anthropic and the like are doing is wholesale unconscionable. Multi-billion dollar companies, tens of thousands of employees, and not a single person with any kind of authority stopped to think “Will signing a deal to procure like HALF of the world’s RAM supply have any sort of downstream consequences to anyone but us?” 12 months ago I bought a Microcenter part number 329474 on sale for $65, retail was $89 at the time I believe. Just go look at what the retail price of that piece of garbage drive is today. $400.
“If we could replace all non-executives with ‘AI’ agents we could be making ALL the money”
That’s why.
It has nothing to do with consumers.
And to a smaller extent the investor Bros who only care about putting money in and taking more money back out as quickly as possible.
It’s unfortunate that those two types of people currently hold ALL the power, and are allowed to steer the whole global economy in that direction.
Consumers are simply being placated with chat bots and regurgitated cat pictures while those in power pray at the altar of greed that if they keep pumping money into the magic box it will EVENTUALLY be capable of making their dreams work.
Note that those executives haven’t figured out who their customers will actually be.
Jobless people typically don’t have much income to be spending on consumer goods.
That’s a tomorrow problem.
“Maybe wait until enough time has gone on that you actually have software and hardware worth running.” Who builds this better hardware and software if all the potential customers are waiting for something better?
Why not 400Hz after the Mains-to-battery stage? (440Hz and everybody can stay in tune.) This is how they kept the weight down and efficiency up in aircraft forever. I vote no for DC greater than 48V due to the problem of not being able to let go of a shock hazard, and how shaking electrons back and forth is much easier than shoving them along a wire.
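The aircraft weight argument follows directly from the transformer EMF equation: for a fixed voltage, turns count, and flux density, the required core cross-section falls as 1/frequency. A quick sketch, where the winding numbers and flux density are made-up illustrative values:

```python
# Transformer EMF equation: V_rms = 4.44 * f * N * A_core * B_max.
# Solving for A_core shows core area (and hence iron mass) shrinks
# as frequency rises. N and B_max below are hypothetical round numbers.

def core_area_m2(v_rms, freq_hz, turns, b_max_t):
    """Minimum core cross-section from the transformer EMF equation."""
    return v_rms / (4.44 * freq_hz * turns * b_max_t)

V, N, B = 230.0, 100, 1.2  # illustrative winding and flux density
a50 = core_area_m2(V, 50, N, B)
a400 = core_area_m2(V, 400, N, B)
print(f"50 Hz core:  {a50 * 1e4:.1f} cm^2")
print(f"400 Hz core: {a400 * 1e4:.1f} cm^2 ({a50 / a400:.0f}x smaller)")
```

Going from 50 Hz to 400 Hz cuts the required core area by a factor of eight in this model, which is the whole reason avionics standardized on 400 Hz.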
I don’t understand your first question. I think my answer to that is, it’s already being built.
I don’t see why any large portion of a building should ever be wired for relatively high voltage DC. Violent agreement. What I mean by saying sure the future of this is DC is because, surely DC will drive compute operations. As in PSUs on racks will continue to transform AC into DC. Although an AC powered CPU would be interesting. Sorry I can be pretty cryptic unintentionally, it’s a work in progress. Most of the time I assume no one will read what I write so I don’t flesh everything out.
Wire the GPUs in series so that there’s no need to convert high voltage DC to AC to low voltage DC.
David Kirtley spoke on the Lex Fridman podcast on nuclear fusion directly producing DC and a deal with Microsoft to power a first DC on this manner.
No clue who those people are. Sounds really unusual. Almost all power generation is AC for really good reason. I am curious how they plan on doing that, but it also sounds nonideal. I feel like if I did a back-of-the-envelope calculation I’d see significant power losses in doing that with greater risk. I’m also not an expert in either field, so maybe I am just a goof.
Kirtley is CEO of “voodoo fusion” company Helion, backed by $500M from Sam Altman (OpenAI) and his buddies. Said to be worth $3B now, without having delivered a single electron.
There was talk years ago about directly capturing the high velocity alphas coming off the reactions, and producing super high voltage DC current from that. Which is sheer lunacy (megavolts DC), maybe actually impossible.
I’m totally guessing they just mean pushing “relatively” low-voltage DC from on-site fusion-heated steam turbine generators & rectifiers directly into a nearby datacenter. Though at the hundreds of megawatts they are talking about, it’s still going to be a hundred kV or so, so “conventional” HVDC gear.
It could save a bit of copper and allow smaller transformers, since they don’t have to run at 60 Hz to bring it down to the rack voltage, but it would be interesting to see the actual cost and efficiency differences.
I know about the Tesla vs Edison battle, but that was a long time ago – long before the semiconductor revolution, with power semiconductors being fairly new. Today we know ways to step voltages up and down at high currents in DC, to the point where we build HVDC transmission systems – so we rectify current for transmission and invert it at the destination. Most of our devices are DC powered, and we might be ready to go with BLDC motors where we used an inverter + motor previously. It might be time to revise some arguments from the past.
I understand we have considerably more efficient ways to step down DC than 100 years ago. For long-distance transmission I see no way DC will ever be efficient compared to AC. For low voltages at short distances, sure, it’s now cheap to efficiently step DC loads up and down. At high voltages I don’t know that that’s true. I just have a hard time believing that the risk/maintenance is even worth the cost.
Wikipedia is not the best source but let’s use it as starting point:
“A long-distance, point-to-point HVDC transmission scheme generally has lower overall investment cost and lower losses than an equivalent AC transmission scheme.”
https://en.wikipedia.org/wiki/High-voltage_direct_current
Data centers use mostly DC:
– servers use DC
– HVAC can be done fully in AC
– light can easily be done with DC
I guess the only AC loads would be regular appliances like coffee machines or desktop computers and laptops.
Would it be better? I don’t know but definitely we should at least consider that.
A DC DC.
HA!!
-48vdc power distribution has been an option for a long time in server systems and has already shown some mild benefit from using rack-scale power converters. It’s actually just an inherited standard from telcos, so the techniques have been around for a looooong time.
Will moving to an 800vdc system help bring this to dominance over AC in the data center? Maybe, but probably not enough to outweigh the increased safety hazards and maintenance costs from stressed components.
48 volts is way too low to be practical for the amount of power needed. It would take massive busbars. I’m surprised they would use 800 volts though. I would have expected around 350-400 volts since that’s what the active power factor correction circuit in a standard server power supply puts out.
800v is what new EVs use, might be related. Might not.
Resistive loss in a wire goes as the square of the current (I²R), so the higher the voltage, the fewer amps you need to deliver the same power, and the less the wire heats up.
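A quick numeric sketch of that wire-loss argument. The rack power and feeder resistance here are made-up round numbers, not from any real installation:

```python
# For a fixed load power and feeder resistance, loss is I^2 * R and
# current scales inversely with voltage. All figures are hypothetical.

def wire_loss(power_w, voltage_v, resistance_ohm):
    """I^2 * R loss in a feeder for a given load power and voltage."""
    current = power_w / voltage_v
    return current ** 2 * resistance_ohm

RACK_POWER = 100_000   # 100 kW rack, hypothetical
R_FEEDER = 0.01        # 10 milliohm round-trip feeder, hypothetical

for v in (48, 400, 800):
    loss = wire_loss(RACK_POWER, v, R_FEEDER)
    print(f"{v:4d} V: {RACK_POWER / v:7.1f} A, feeder loss {loss / 1000:.2f} kW")
```

At 48 V the same rack demands over 2000 A and the feeder loss is measured in tens of kilowatts (hence the enormous telco busbars), while at 800 V it is a few hundred watts.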
I’ve worked in large telephone exchanges that were all -48v powered and yes they had enormous copper bus bars running through, sometimes bigger than the girders the building was built with.
However I’m not convinced that’s really a major factor given the costs involved with building a data centre, and the efficiencies it allows – for example, going back almost forever, the exchange AC power comes in and is transformed & rectified down to 48v and floated across an enormous bank of batteries, which made a huge “UPS” with minimal actual hardware.
I forget which, but usually one bus bar was copper and the other was aluminium, presumably as a cost saving.
Probably the same reason it’s used in EV charging: getting as high as possible to reduce losses while staying well below limits for high voltage, which brings in additional safety regulation and costs.
“But given the potential ecological impact of these data centers, it’s a little hard to get excited about the idea in this context.”
I disagree. The proliferation of data centers is happening regardless if it should be happening or not. Anything that makes them more power efficient also reduces the environmental impact and that is a victory even if it is a small one.
I would invite you to reread the first two sentences of your second paragraph at 2 different times today. Pretend like someone else wrote them. See if anything jumps out at you.
I’m not sure if you’re being a passive-aggressive grammar pedant. All I can find fault with is Zangar’s grammar — which is perfectly understandable, if not perfect.
Sorry if you read any aggression. Definitely not the point. Similarly, nothing wrong with the grammar. Mine is terrible.
“I disagree.” – means I do not agree with the statement that I quoted
“The proliferation of data centers is happening regardless if it should be happening or not. ” – means that data centers are inevitable.
Maybe I will re-visit that a second time as you suggested but I have no idea what you want me to see.
No, the second sentence by itself does not necessarily build on the first by itself. But they were never meant to be taken as a whole thought without the following sentence.
“Anything that makes them more power efficient also reduces the environmental impact and that is a victory even if it is a small one.” – A more efficient datacenter will use less electricity and thus generate less CO2 and other pollutants.
So.. I disagree with the statement that it is hard to get excited about this because I recognize that it makes a bad situation (which was going to be bad anyway) slightly less bad. I don’t see any reason not to be excited about something being less bad or in other words “better”. Why would one not be excited about something being better? What would you like me to see that is wrong with that?
Ahhhh!! So using more power than we currently have available in generation and distribution is OK as long as it’s clean, vs just continuing the way we are without the datacenter? That is a wild theory.
Your assertion is based on the idea that ‘making them more power efficient’ means they use less power but remain the same size they would have been in the less efficient mode. Instead, the actual result is bigger or denser datacenters. The environmental impact is unchanged; power distribution efficiency just alters the limiting reagent in the equation.
Your assertion seems to be based on the idea that electricity is the only resource limiting the growth of datacenters. I am not a datacenter accountant. But… Given the way hardware prices have grown with datacenter proliferation I doubt it.
Also, the bubble can’t last forever. Eventually the amount of “datacenter” will be determined by market forces just like everything else. I can’t wait for those hardware auctions! Maybe all those buildings coming up for sale will depress commercial property prices enough that we will see hackerspaces proliferate. I doubt that though… there are too many tax breaks for just leaving a building empty. And then there is insurance…
Anyway… more efficient power uses means cheaper operation. That means more services will be purchased so yes, at that point given more efficient power use there will be more “datacenter”. But that additional demand that is getting satisfied is the lower priority demand that otherwise would have gone without. That means it won’t completely make up the difference. The power usage should level off somewhere less than it would have been given the less efficient use but more than it would be given the naive assumption that the efficiency gain goes 100% into power savings rather than additional buildout.
Are you aware that virtually nothing in “An Inconvenient Truth” has come true? There is a lot of concern over the environmental impact of data centers in HaD responses. Where does this come from? What if the center has a solar and wind farm associated with it? And what effect will the new 18 Angstrom chips have on this?
My local small server runs on a MicroPSU and a 90% brick for years now.
It makes sense to me, specifically because data centers are at a scale where small gains are big, and also because data centers are often very bespoke. In my house i appreciate that almost everything i might buy wants to plug into a standard 120VAC socket but at datacenters they often order thousands of units of weirdo computers all at once. If big players like Google and Amazon are using bespoke CPUs, i don’t see why they wouldn’t also use bespoke PSUs and distribution.
Only question on my mind is what’s the most efficient way to deal with the multitude of DC voltages required by a modern computer… I think 5V is still indispensable, but maybe you can get away with 3.3V instead… do you even want 12V anymore? You certainly need some sort of converter for like 1.2V or whatever (dynamic?) voltage that CPU cores run at these days… I don’t know. But a data center is the one place you might meaningfully make a millions-of-dollars architecture decision around an assumption like whether every computer in the space has an SSD or an HDD or whatever. Any other context, such a decision would seem woefully short-sighted.
12v is in some ways the most useful of the three because it’s the predominant “high” voltage in computers, so you can move the same amount of current along smaller traces/wires, then convert it down to the required lower voltages right at the CPU/GPU/etc. Especially for AI, those modern components need an incredible amount of current at a very low (and indeed dynamic) voltage, so it wouldn’t make sense to have the voltage regulators farther away.
In fact, there is a movement away from 12v in datacenter servers, but it’s generally to higher voltages, like 48v/54v, so they can reliably deliver even more power to the latest generation of hungry processors.
also perhaps worth mentioning that server power supplies often just deliver a single voltage (12v / 48v) and let the motherboard handle any general conversion needed to 5v/3.3v/etc.
I’m not sure what extent this affects energy efficiency for better or worse – my guess is it’s more about design & manufacturing efficiency (and end-user convenience) to make a whole lineup of servers take the same series of hot-swappable PSUs.
“they feed a battery-backed uninterruptible Power Supply (UPS) that converts from AC to DC, and then back again to AC.”
Both the Hackaday article and the IEEE Spectrum article say this, but I would still assume they just pass the input AC directly through to the servers unless there’s a power outage or other power quality problem. If not, that seems like a huge waste that could be avoided whether or not they switch to DC.
I like your article regarding DC in the data center. I have written an article regarding DC mixing issues in the substation: https://www.substationfaults.com/dc-mixing-issues-in-electrical-substation/
Many server environments prefer to take the efficiency hit and always run on the inverter, just for power stability. I’ve got two rather old server-room-style UPSs that each contain 50 standard 12 Ah lead-acid batteries, which gives the option of running either way.
It depends – the UPSs at my work have a “bypass” mode for maintenance. When “online” they go through all the conversions. I guess the time required to switch is sometimes critical. I’m not sure, but I think there are also UPSs that actively discharge and charge their batteries to keep them “alive” longer.
Vertiv and Eaton have always been present. We have had 380VDC server proposals for many years. Consider that a modern switch-mode power supply can accept either AC or DC input. It doesn’t care! But the issue has been around connector selection and safety. Pulling apart a live and loaded 380VDC connector will have an arc issue, as there is no zero crossing to extinguish it. What is possible from a regulatory/safety perspective in China, the EU, and North America is very different, with North America being the most restrictive and litigious!
-54V (“48V” typical) has always been the preference of telcos for their minor server requirements, but it quickly runs into unreasonable distribution losses. Let’s expand on this… The I²R cable losses feeding cellular radios up a 350′ (107 m) tower from a 54 V source can be 10% to 20%, even with #6 or #4 AWG. 5G radios are power hungry, each drawing up to 2000 W peak. This DC over thick copper is Edison-era power distribution! Imagine AC/DC switch-mode power supplies on these radios that could function at either 277 VAC or 390 VDC. However, as for servers, a plug ’n’ play connectorized distribution would be required. It must be safe. An NEC Class 4 Fault Managed Power System is a possible solution.
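That 10% to 20% figure checks out on the back of an envelope, using standard copper AWG resistance values and the simple estimate I = P/V (which ignores the extra current needed to make up the drop itself):

```python
# Rough check of the tower-feed loss claim: 2000 W radio, 107 m run,
# 54 V supply. AWG resistances are standard copper values at 20 C.

AWG_OHM_PER_KM = {6: 1.296, 4: 0.8152}  # copper, ohms per km

def feeder_loss_fraction(load_w, supply_v, run_m, awg):
    """Approximate I^2*R loss as a fraction of load power."""
    r = AWG_OHM_PER_KM[awg] * (2 * run_m) / 1000.0  # out-and-back run
    i = load_w / supply_v                            # simple estimate
    return (i ** 2 * r) / load_w

for awg in (6, 4):
    frac = feeder_loss_fraction(2000, 54, 107, awg)
    print(f"#{awg} AWG: ~{frac:.0%} of load power lost in the cable")
```

This lands around 19% for #6 AWG and 12% for #4 AWG, squarely in the commenter’s quoted range; running the same radio at 390 VDC would cut the current (and the fractional loss) by roughly a factor of fifty.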
In all these cases, the conversion electronics’ inefficiencies are the issue. Running straight off the battery is the most efficient. If the battery is 800 VDC (or 380 VDC), how many layers of conversion does it take to get down to 3.3 V in a server?
Voltage above 100 V is highly unsafe, let alone the 420, 415, or 440 V that we use currently. 800 V will be a safety hazard.
Safety procedures enforced by hardware interlocks will keep you safe. I worked with 6 MW drives operating at 3.5 kV, and it really would take deliberate intervention and tampering with interlocks to open cabinets when not isolated (physical key), discharged (time interlock), and grounded (physical key). Plus a formal procedure to test everything with an HV probe on a 2 m long rod before installing additional grounding wires.
Below 1000 V I worked with crane drives – 800 V multidrives. Restrictions were less strict, but you still will not open a running 800 V cabinet (no matter whether PSU, drive, or rectifier) without a strong will to do so.
I guess that data centers have their own electrical teams to deal with switchboards, so they will have to train them in HV – usually everything above 1000 V is considered high voltage in terms of safety.
I thought Google did this already, including putting batteries on the motherboards for UPS.