None of this is particularly new. Horizontal submersion has been an area of research for at least 20 years. Spinny disks were a pain, because they need a certain amount of air inside to avoid head crashes. (Sealed HDDs existed, I'm sure.)
The problem is that it's not that practical. Firstly, it's much less dense than upright racks; secondly, it's almost certainly heavier.
What is happening now is direct water cooling to the rack. Currently in widespread use is what is effectively a back-of-rack "car" radiator (i.e. it looks like a rack-sized car radiator), with the coolant piped to the back of the rack.
Another approach, newer but needed for high-density GPUs, is coolant-block direct-to-component cooling: a coolant manifold on the back of the rack, with each server plumbed directly into the coolant loop.
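For a sense of scale, here is a minimal sketch (Python; the 100 kW rack and 10 °C temperature rise are illustrative assumptions, not figures from the article) of how much water a direct-to-chip loop has to move:

    # Rough back-of-envelope: water flow needed to carry heat off a rack.
    # All numbers are illustrative assumptions, not vendor specs.

    def required_flow_lpm(heat_kw: float, delta_t_c: float) -> float:
        """Litres per minute of water needed to absorb heat_kw with a
        coolant temperature rise of delta_t_c (Q = m_dot * c_p * dT)."""
        cp_water = 4.186   # kJ/(kg*K), specific heat of water
        density = 1.0      # kg/L, close enough for warm water
        kg_per_s = heat_kw / (cp_water * delta_t_c)
        return kg_per_s / density * 60.0

    # e.g. a hypothetical 100 kW GPU rack with a 10 C rise across the loop:
    print(round(required_flow_lpm(100, 10), 1), "L/min")   # ~143 L/min

Flow scales inversely with the allowed temperature rise, so loops designed for a small delta-T need correspondingly bigger pumps and pipes.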
However, that doesn't solve the problem of what to do with the heat once you have it.
> However, that doesn't solve the problem of what to do with the heat once you have it.
Unfortunately datacenters are not usually in great locations for this, since you need other buildings nearby, but if there is an ambient-temperature district heating and cooling system for the buildings around them (sometimes called a 5th-gen DHC system [1], mainly characterized by the notion of prosumers, i.e. producer/consumers of heat connected to a single-loop system, rather than the more traditional supply-loop/return-loop topology), then datacenters can very easily fit into the mesh as producers. Temps are typically 10-20°C for ambient loops. The main insight is that this operating range can be used quite well for both extracting heat and dumping heat simultaneously at different locations in the network.
You can also, of course, just add the heat directly to a hot-water supply loop, e.g. [2].
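To make the prosumer point concrete, here is a minimal sketch (Python; the 45 °C sink temperature, the 50%-of-Carnot assumption and the source temperatures are mine, not taken from [1] or [2]) of why buildings would rather pull heat from a datacenter-warmed ambient loop than from cold outdoor air:

    # Illustrative only: why a 10-20 C ambient loop is a good heat-pump source.
    # Carnot COP is an upper bound; real units reach very roughly half of it.

    def heating_cop(source_c: float, sink_c: float, carnot_fraction: float = 0.5) -> float:
        """Approximate heating COP of a heat pump lifting heat from source_c
        to sink_c, as a fraction of the Carnot limit T_hot / (T_hot - T_cold)."""
        t_hot = sink_c + 273.15
        t_cold = source_c + 273.15
        return carnot_fraction * t_hot / (t_hot - t_cold)

    # Lifting to 45 C radiators: loop water warmed by a datacenter vs. winter air.
    print(round(heating_cop(15, 45), 1))   # ~5.3 from a 15 C ambient loop
    print(round(heating_cop(-5, 45), 1))   # ~3.2 from -5 C outdoor air

The same arithmetic runs the other way for buildings dumping heat into the loop in summer, which is the simultaneous extract-and-dump property mentioned above.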
Less that it's not practical, more that it's too revolutionary to be accepted. You don't want to take too many risks with your new billion-dollar datacenter construction.
These make much more sense when you look at the energy contracts available where the weather is too hot for "traditional" cooling. Whether that will overcome the natural hesitancy is anyone's guess, but it's probably a nice little gap for someone to disrupt.
> However, that doesn't solve the problem of what to do with the heat once you have it.
Wouldn’t we next see a change to the design of the data centers themselves (buildings, cooling systems, etc.) that house these heat-dense racks, to provide more capacity for removing heat?
The article doesn't even mention the Cray-2, which used Fluorinert for cooling.
Fluorinert turned out to be really bad for the environment, so I wonder if the point of the article was meant to be that they have a new cooling fluid that is safer?
Ah, Fluorinert. I remember first seeing that on Beyond 2000, where they demonstrated its properties by immersing a running Macintosh into an aquarium tank full of the stuff.
The predecessor to today's NUCs and small-form-factor PCs was the Ergo Brick, which was available in 386 and 486 models in the early 90s. Inside the Brick was a bag of Fluorinert, pressed to the mainboard, to facilitate cooling of the components within its constrained space.
This was used as a plot point in The Abyss (1989). The rats in that movie were actually breathing Fluorinert, but they used special effects for the human actors.
The Abyss also may have inspired the use of LCL as a breathing fluid in Evangelion.
They are using Submer's SmartCoolant: https://submer.com/smart-coolant-liquid/. In short, this appears to be a test of an existing commercially-available product.
I’m guessing it has to do with concentration. Like d-Limonene is a fantastic, natural, biodegradable degreaser… but you need PPE to work with it in any prolonged way.
And a massive pain when you wanted to repair something! From what I was told by an old tech, you had to drain the affected area, do the replacement, fill it back up, power on, and test.
If you forgot something, or it didn't work, rinse and repeat.
Maybe I'm missing something, but submerging computers in a non-conductive and non-corrosive liquid has been tried many times before.
I'm not sure what the current state of this approach is, but it's not widespread. What makes this particular attempt special compared to the other attempts?
1) having liquid instead of air as the first heat transporting medium
2) having water evaporation chillers instead of air heat exchangers and maybe heat pumps to chill the (last) heat transportation medium.
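For point 2, a minimal sketch (Python; the latent-heat figure is a round number and the 10 MW load is hypothetical) of the trade evaporative chillers make: less electricity for compressors, but continuous water consumption:

    # Rough lower bound on evaporative-cooling water use: every kg of water
    # evaporated carries away its latent heat. Blowdown/drift losses ignored.

    def evaporation_m3_per_hour(heat_mw: float) -> float:
        """Cubic metres of water evaporated per hour to reject heat_mw."""
        latent_heat = 2400.0   # kJ/kg, latent heat of vaporization near ambient
        kg_per_s = heat_mw * 1000.0 / latent_heat
        return kg_per_s * 3600.0 / 1000.0   # kg/s -> m^3/h (1000 kg per m^3)

    # A hypothetical 10 MW datacenter rejecting all of its heat evaporatively:
    print(round(evaporation_m3_per_hour(10), 1), "m^3/h")   # ~15 m^3/h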
I like the idea of entering a datacenter without hearing protection, but I would fear a meltdown if a loss of fluid occurred somewhere.