ADATA SSD Gets Liquid Cooling, But Not Everyone’s Convinced

Solid-state drives (SSDs) were a step change in performance when it came to computer storage. They offered incredibly fast seek times by virtue of dispensing with spinning rust in favor of silicon. Now, some companies have started pushing the limits to the extent that their drives supposedly need liquid cooling, as reported by The Register.

The device in question is the ADATA Project NeonStorm, which pairs a PCIe 5.0 SSD with RGB LEDs, a liquid cooling reservoir and radiator, and a cooling fan. The company is light on details, but it’s clearly excited about its storage products becoming the latest piece of high-end gamer jewelry.

Notably though, not everyone’s jumping on the bandwagon. Speaking to The Register, Jon Tanguy from Crucial indicated that while the company has noted modern SSDs running hotter, it doesn’t yet see a need for active cooling. In Crucial’s case, heatsinks have proven sufficient. He notes that the NAND flash used in SSDs actually operates best at 60 to 70 C. However, going beyond 80 C risks damage, and most drives will shut down or throttle access at that point.
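
If you’re curious where your own drive sits relative to those figures, here’s a minimal sketch that reads NVMe temperatures on Linux. It assumes a kernel recent enough that the NVMe driver exposes a hwmon node under /sys/class/hwmon, and the 70 C and 80 C thresholds simply mirror the numbers quoted above rather than any vendor specification.

```python
#!/usr/bin/env python3
"""Report NVMe SSD temperatures and compare them with the figures quoted above.

Assumes a Linux kernel whose NVMe driver registers a hwmon device, i.e.
temperatures appear under /sys/class/hwmon/*/temp*_input in millidegrees C.
"""
from pathlib import Path

COMFORT_MAX_C = 70    # upper end of the "operates best at 60 to 70 C" band
THROTTLE_RISK_C = 80  # beyond this, drives reportedly throttle or shut down

def nvme_temps_c():
    """Yield (hwmon name, temperature in deg C) for every NVMe hwmon node."""
    for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
        name = (hwmon / "name").read_text().strip()
        if not name.startswith("nvme"):
            continue
        for temp_file in sorted(hwmon.glob("temp*_input")):
            yield name, int(temp_file.read_text()) / 1000.0

if __name__ == "__main__":
    for dev, temp in nvme_temps_c():
        if temp >= THROTTLE_RISK_C:
            status = "throttle/shutdown territory"
        elif temp >= COMFORT_MAX_C:
            status = "warm, at the top of the quoted band"
        else:
            status = "fine"
        print(f"{dev}: {temp:.1f} C ({status})")
```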

Realistically, you probably don’t need to liquid cool your SSDs, even if you’ve got the latest and greatest models. However, if you want the most tricked out gaming machine on Twitch, there’s plenty of products out there that will happily separate you from your money.

19 thoughts on “ADATA SSD Gets Liquid Cooling, But Not Everyone’s Convinced”

  1. Color me skeptical.

    The M.2 slot just doesn’t provide all that much bus power. Numbers seem to vary slightly, but the highest I’ve found is in the OCP M.2 carrier card specs, which are aimed at least in part at fancy hyperscaler gear running other PCIe peripherals in the M.2 form factor rather than M.2 storage, and even there the requirement is 8.5-14.85 watts per module. That suggests vendors who don’t want the RMAs from people whose motherboards went a bit cheap on M.2 power delivery are probably well advised not to push their luck too close to that maximum value.

    15 watts over the surface area of a 2280 certainly isn’t ‘just stick a product label on it and give it no airflow’ material, but it’s well within the range of deeply unexceptional heatpipe-and-fins cooling if there’s even modest airflow at a reasonable ambient temperature; a rough estimate is sketched below.
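
    To put rough numbers on that, here is a back-of-envelope Python sketch of the steady-state temperature rise at 15 W under a few cooling scenarios. The thermal resistance figures are illustrative assumptions, not measured values, and a real drive would throttle long before reaching the worst of them.

    ```python
    # Back-of-envelope steady-state estimate: delta_T = P * R_theta.
    # All thermal resistance values below are assumptions for illustration only.

    POWER_W = 15.0     # roughly the worst-case module power discussed above
    AMBIENT_C = 30.0   # assumed in-case ambient temperature

    # Assumed effective device-to-air thermal resistances (K/W).
    scenarios = {
        "bare 2280, product label, no airflow": 25.0,
        "bare 2280, modest airflow":             8.0,
        "finned heatsink, modest airflow":       2.5,
    }

    for name, r_theta in scenarios.items():
        steady_state_c = AMBIENT_C + POWER_W * r_theta
        print(f"{name:40s} ~{steady_state_c:.0f} C (before any throttling)")
    ```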

    1. True, although as a general rule water-cooled builds are less likely to have good airflow, since many of the fans are relocated onto radiators. Most will still have decent airflow in a good case, but they won’t get the same direct fan-driven flow over the board that you get as a side effect of a good air CPU cooler, so in those cases active cooling of some description could make sense. And if you’re already running a custom loop, you might as well use a water block even if you don’t *have* to.

      Self-contained AIO loops just for an SSD don’t make any sense even under those circumstances, though.

  2. This thing has RGB LEDs; it is clearly for case modders and/or gamers who will throw money at almost anything they think is “cool”. No, thanks, really.

    1. PC modding has become a meme at this point. It’s the SFF enthusiasts who are really pushing the boundaries, by removing superfluous components and trying to use less material in their builds. Obviously the components meant for big, gaudy builds are a detriment, including oversized GPUs and mobos plastered with plastic moldings and pretty CNC-machined heat sinks. You’re not going to see any of that behind my EMI-shielded mesh side panels.

      1. I largely agree. I don’t mind if a few things are overbuilt if it means the parts last longer without needing a bunch of tiny fast-spinning fans or awkward compromises. I don’t like the wasted space spent on plastic covers and other things that make a part worse at its function. Ideally the system would be silent and emit no light whatsoever while having plenty of expansion and running cool for a long life.

        The removal of various kinds of expansion in the name of minimalism, or sometimes space savings, annoys me. I do not need to spend all my IO just to give every lane to one GPU and a couple of M.2 drives, even though that’s fine for gamers. Realistically, it’s much more useful for me to have more high-speed USB, networking, and whatever other hardware made me use a full-size desktop computer in the first place, probably including several SATA drives, because they give you much more capacity for the price when you don’t need to load a game 0.3 seconds faster.

        And I’d like to not have to buy an expansion card for the basic stuff the motherboard could have included much more cheaply, like front-panel IO, POST debug info, or BIOS flash functions and buttons.

  3. “Realistically, you probably don’t need to liquid cool your SSDs, even if you’ve got the latest and greatest models.”

    This ain’t for realism. Did you know you can overclock SSDs? This is for competitions.

      1. There’s no legitimate gaming use for liquid nitrogen cooling, either. But people still do it anyway for competitions.

        https://www.pcgamer.com/overclocking-a-cpu-to-7-ghz-with-the-science-of-liquid-nitrogen/

        It’s not just about bragging rights, though. Sometimes it’s good to know the physical limits of hardware so you can make improvements for the next generation, which is why you’ll often see PC hardware makers sponsoring the competitions.

          1. Those competitions are nice and all, but most people just want to build gaming machines, and you really don’t need to go to such extreme lengths as water cooling to do it. My SFF build manages with air cooling in a 12 liter case: 4K@144Hz in e-sports titles. The place where water cooling really makes sense is the data center, where you can recover the waste heat for other applications; you could provide heat and hot water for a high-rise building with a few data center floors or, at larger scales, hot water for industrial applications. It doesn’t make sense if you’re just going to dump the heat into the environment; at that point, cut out the middleman and air cool. My cat approves when she curls up next to the GPU during a gaming session.

          1. Waste heat is an interesting idea in some applications, but data centers aren’t a good fit. These are high security industrial environments with tight environmental controls.

            There is an application for water cooling in data centers, though. The demand for more power is ever growing and the silicon is not keeping up; companies like Nvidia have turned to pushing the silicon harder, requiring bigger, heavier heatsinks. I could see a future where data center hardware, especially anything AI-related, goes to water cooling just to keep equipment smaller and lighter.

    1. “Ramsinks” have been pure cosmetics for years; the sole instance where they actually had any utility was in the days of DDR2 FB-DIMMs. If you’re buying DIMMs or SSDs with heatsinks, you’re throwing away money on bling rather than performance.

      1. Actually, IIRC, if you run Samsung B-die at higher clock and power levels, heatsinks are needed. If you run at JEDEC speeds and voltages in a well-ventilated computer, maybe not. I’m not sure at what XMP speeds and airflow conditions it starts to become an issue. Part of how servers get away without them is that the airflow is greater and the speeds are lower.

  4. Hmm, what’s the provenance of this idea:
    “He notes that NAND flash used in SSDs actually operates best at 60 to 70 C”

    How was this determined? What was the experimental method and, of course, the metrology?

    Since an SSD stores data as trapped charge in a gate, and that charge is subject to more thermal noise at higher temperatures, one would expect reliability to improve at a suitably lower temperature, though not so low that it affects connections (solder embrittlement, for example).

    In other words, SSDs aren’t that good for long-term storage reliability, as the charge can degrade over time or even be “knocked right out” (a bit flipped by radiation, likely only at high energies, or even by decay from radioactive materials in the device itself). Hence SSDs benefit from being read, which I understand automatically refreshes the gate charge. One wonders, therefore, whether a regular scan check is sufficient, or whether one should use a program to read every bit of the whole drive as a confidence measure, and how often, for comfort :-)
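
    For what it’s worth, a full read pass is easy enough to script; a minimal sketch is below, though any whole-device read (dd to /dev/null, or a read-only badblocks pass) does the same job. Whether the controller actually refreshes marginal cells when it reads them is firmware-dependent, so treat it as a confidence check rather than a guaranteed refresh.

    ```python
    #!/usr/bin/env python3
    """Sequentially read every byte of a block device as a confidence check.

    Read-only; run as root, e.g.:  python3 read_scan.py /dev/nvme0n1
    """
    import sys
    import time

    CHUNK = 8 * 1024 * 1024  # 8 MiB reads keep the device busy without much RAM

    def scan(path: str) -> None:
        read_bytes = 0
        start = time.monotonic()
        with open(path, "rb", buffering=0) as dev:
            while True:
                try:
                    chunk = dev.read(CHUNK)
                except OSError as err:
                    # An uncorrectable read error is exactly what this check is for.
                    print(f"read error at byte offset {read_bytes}: {err}")
                    raise
                if not chunk:
                    break
                read_bytes += len(chunk)
        elapsed = time.monotonic() - start
        print(f"read {read_bytes / 1e9:.1f} GB in {elapsed:.0f} s with no read errors")

    if __name__ == "__main__":
        scan(sys.argv[1])
    ```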

    1. I suspect that in practice 60-70 C is more of a ceiling; that is, SSD performance presumably decays exponentially with temperature towards total failure at some point, with 60-70 C representing the inflection point where performance starts falling off fast enough to actually matter day to day (i.e. a 30 C SSD will run better than a 60 C SSD, but the difference won’t be big enough to matter).

    2. “How was this determined, what was the Experimental Method & of course his metrology ?”

      Check out JEDEC. The documentation and test regimes for NAND flash are extensive and exhaustive.

      In layman’s terms: NAND operations require a set energy input to ‘flip a bit’. For ‘cold’ NAND, the entirety of that energy has to come from the electrical current fed into the cell. For ‘hot’ NAND, the cell is already at a raised energy state and a smaller current is required to ‘flip’ it. By reducing the current through the cell, cell wear is reduced.
