TOKYO, Oct 31 (News On Japan) - As generative AI rapidly expands, new cooling technologies are becoming essential. The AI servers that run these models are packed with GPUs that generate immense heat, and attention is now turning to liquid cooling technology.
Nvidia CEO Jensen Huang has called liquid cooling "a major transformation for the AI industry." To harness this technology, innovations such as container-type server rooms have emerged, while researchers at the University of Tokyo have developed one of the world’s most efficient compact cooling systems.
Traditional air cooling, used by more than 90% of Japan's data centers, is reaching its limits, handling up to about 45 kilowatts per server rack. The latest AI servers, by contrast, may consume over 100 kilowatts, several times more than conventional cloud servers. This has spurred a shift toward liquid-based systems, including water and oil cooling, which are dramatically more efficient: water conducts heat about 23 times better than air, can improve cooling efficiency by more than 3,500 times, and can reduce data center power consumption by up to 40%.
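As a rough back-of-the-envelope check on those figures (the property values below are textbook room-temperature numbers assumed for illustration, not taken from the article), the roughly 23-fold figure matches the ratio of the thermal conductivities of water and air, and the 3,500-fold figure is consistent with water's far larger volumetric heat capacity, that is, how much heat a given volume of coolant can carry per degree of temperature rise:

```python
# Back-of-the-envelope check of the article's figures using textbook
# room-temperature property values (assumed, not taken from the article).

K_AIR = 0.026      # thermal conductivity of air, W/(m*K)
K_WATER = 0.60     # thermal conductivity of water, W/(m*K)

RHO_AIR, CP_AIR = 1.2, 1005        # density kg/m^3, specific heat J/(kg*K)
RHO_WATER, CP_WATER = 1000, 4186   # density kg/m^3, specific heat J/(kg*K)

# Ratio of thermal conductivities: how much better water conducts heat.
conductivity_ratio = K_WATER / K_AIR  # ~23x

# Ratio of volumetric heat capacities: how much more heat a given volume
# of water can carry away per degree of temperature rise.
heat_capacity_ratio = (RHO_WATER * CP_WATER) / (RHO_AIR * CP_AIR)  # ~3,500x

print(f"Water vs air, thermal conductivity: ~{conductivity_ratio:.0f}x")
print(f"Water vs air, heat per unit volume: ~{heat_capacity_ratio:,.0f}x")
```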
There are two main types of liquid cooling: direct water cooling and immersion cooling. In direct systems, a cooling plate is attached to heat-generating components such as CPUs, and coolant is circulated through it by a coolant distribution unit (CDU). Immersion cooling, meanwhile, submerges entire servers in an electrically insulating fluid. While immersion offers higher efficiency, it presents maintenance and warranty challenges, such as the need to remove fans and encase the hardware in specialized tanks, modifications that may void warranties. Building design also poses risks, since a leak in a traditional multi-story data center could flood the floors below.
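To give a sense of the scale a direct system's CDU has to handle, the required coolant flow follows from the basic heat balance Q = m_dot * c_p * delta_T. The sketch below is illustrative only; the 100-kilowatt rack load echoes the figure above, while the 10-degree coolant temperature rise is an assumption, not a specification from the article:

```python
# Illustrative sizing of a direct liquid-cooling loop. The rack load and
# coolant temperature rise are assumed values for illustration only.

CP_WATER = 4186.0    # specific heat of water, J/(kg*K)
RHO_WATER = 1000.0   # density of water, kg/m^3

def coolant_flow_lpm(heat_load_w: float, delta_t_k: float) -> float:
    """Litres per minute of water needed to absorb heat_load_w watts
    with a coolant temperature rise of delta_t_k kelvin (Q = m*cp*dT)."""
    mass_flow = heat_load_w / (CP_WATER * delta_t_k)   # kg/s
    return mass_flow / RHO_WATER * 1000 * 60           # L/min

# A 100 kW AI rack with an assumed 10 K coolant temperature rise:
print(f"{coolant_flow_lpm(100_000, 10):.0f} L/min")    # roughly 140 L/min
```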
To overcome these issues, companies are exploring container-type data centers, which are better suited for liquid systems and can be placed directly on the ground without major waterproofing measures. NTT Japan has announced its entry into this sector, while Gontam Mesh has developed a container-based immersion cooling server. Nvidia is also incorporating its servers into such units, signaling strong confidence in this approach. As Huang emphasized, the move toward liquid cooling represents a fundamental transformation for the AI industry. With efficiency, scalability, and sustainability all at stake, the “heated” battle to cool AI infrastructure is now one of the most critical races in global technology.