Why Liquid Cooling Won’t Take Off Until the Industry Fixes One Big Problem

A Silent Aisle, A Loud Truth

The data centre aisle was anything but quiet. High-density AI servers hummed with an intensity that pushed the limits of what traditional airflow could sustain. In the middle of this familiar landscape sat a single liquid-cooled node: compact, efficient, near-silent and completely alone. It looked like a glimpse of the future dropped into an environment still shaped by the past.
And there the truth became clear. Liquid cooling isn't being held back by a lack of technical readiness. It's being held back by the fact that nothing in the ecosystem fits together.
Air cooling might be ageing, noisy and increasingly inefficient, but it remains the only cooling method that keeps the data centre vendor-agnostic.
The Promise and the Paradox
Modern AI hardware operates at thermal levels that push air cooling to its limits. GPUs, accelerators and next-generation CPUs run hotter than ever, and liquid cooling provides a clear technical advantage by offering lower temperatures, reduced energy use and more stable performance.
Yet despite this, air cooling remains dominant because it allows true interoperability. Any vendor’s server can sit next to any other vendor’s server without engineering changes or risk. Liquid cooling removes that flexibility because each OEM has its own coolant chemistry, connector design, manifold layout and operational requirements. Once liquid enters the rack, everything becomes proprietary.
Liquid cooling delivers efficiency but removes choice.
The Multi-Vendor Reality the Industry Overlooks
Real data centres are built on variety. Enterprises buy from multiple OEMs depending on performance needs, availability, price, supply chain timing and workload requirements. Air cooling supports that reality effortlessly because airflow doesn’t discriminate between vendors.
Liquid cooling does not behave this way. The introduction of liquid lines, manifolds and CDUs locks you into a single mechanical ecosystem. Bringing in hardware from another vendor means redesigning connectors, aligning pressure and flow requirements, matching coolant types and validating new service procedures. The rack becomes tied to decisions made on day one, which is the opposite of how enterprise environments typically grow.
Mechanical Lock-In: The Hidden Barrier
The industry has decades of experience standardising electrical and logical interfaces. Rack formats, Ethernet, PCIe, OCP designs and drive form factors all benefit from well-understood universal standards. Liquid cooling has no equivalent level of alignment.

Every OEM takes a different approach. They choose different liquids, different quick-disconnect fittings, different inlet and outlet placements, different cold-plate characteristics and completely different service procedures. As a result, liquid-cooled racks cannot be shared between hardware vendors without custom engineering that most operators don’t want to take on.
This is the single biggest reason liquid cooling isn’t scaling across the enterprise.
Immersion Cooling: Elegant Concept, Difficult Execution

Immersion cooling is often positioned as the alternative that avoids many of these problems. Instead of routing coolant through pipes and connectors, the entire server is submerged in a dielectric fluid. It can be highly efficient, but it introduces a new set of challenges that significantly complicate adoption.
Many OEMs view immersion as an unsupported operating environment unless the hardware, fluid and tank are all certified. Warranty terms can become unclear or be voided altogether. Routine servicing becomes far more complex because components must be removed, drained, cleaned, dried and resealed before being returned to the tank. Material compatibility is also a concern because plastics, gaskets, adhesives and thermal compounds may degrade differently depending on the fluid used. Operators must also manage the fluid itself, keeping it clean and uncontaminated throughout its lifecycle.
Immersion is powerful, but it requires a completely different operational model that most enterprises are not yet prepared to adopt.
Air Cooling Still Works, But Its Limitations Are Increasing
Air cooling is still officially supported by all major vendors. Systems built around Xeon 6, Gaudi 3 and even Nvidia’s B300 can be purchased in air-cooled configurations. That matters because many data centres are not ready to make the jump to liquid.
However, the trade-offs of air cooling are becoming more severe. To cool modern accelerators, servers now rely on extremely high-RPM fan modules. These fans operate like miniature turbines, spinning at tens of thousands of RPM and drawing significant power. It is increasingly common for a high-density AI server to allocate ten to twenty percent of its total power budget to fans alone. Some systems dedicate between 500 and 1000 watts purely to airflow, before any compute work begins.

This overhead directly impacts rack-level power planning, cooling efficiency, available compute capacity and the overall cost of operation. When multiplied across dozens of servers per rack, airflow begins to consume a substantial portion of the data centre’s power envelope. Energy that should be powering compute is instead used just to move air.
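As a rough illustration, the sketch below works through that overhead for a hypothetical rack. The per-server budget, fan draw and rack density are assumed figures chosen to sit within the ranges described above, not measurements from any specific system.

```python
# Back-of-the-envelope fan overhead for an air-cooled AI rack.
# All figures are illustrative assumptions, not vendor specifications.

SERVER_POWER_W = 5_000    # assumed total power budget of one high-density AI server
FAN_POWER_W = 750         # assumed fan draw, within the 500-1000 W range cited above
SERVERS_PER_RACK = 8      # assumed rack density

fan_fraction = FAN_POWER_W / SERVER_POWER_W
rack_fan_power_w = FAN_POWER_W * SERVERS_PER_RACK
rack_total_power_w = SERVER_POWER_W * SERVERS_PER_RACK

# Note: this counts only server-internal fans; room-level air handling
# (CRAC/CRAH units) adds further overhead on top of these numbers.
print(f"Fan share of server budget: {fan_fraction:.0%}")              # ~15%
print(f"Fan power per rack:         {rack_fan_power_w / 1000:.1f} kW")
print(f"Total rack power:           {rack_total_power_w / 1000:.1f} kW")
```

Under these assumed numbers, roughly 6 kW of a 40 kW rack does nothing but move air, which is the kind of overhead that forces the trade-offs described next.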
Air cooling still functions, but it does so with rising energy costs, tighter thermal margins, louder operational environments and significantly reduced density. It remains the default only because liquid cooling is too fragmented to replace it.
The Core Bottleneck: No Shared Liquid, No Shared Method
Ultimately, liquid cooling’s biggest challenge is the absence of a universal standard. Without shared coolant chemistry, shared connectors, agreed-upon pressure and flow requirements, standardised cold-plate specifications and unified service procedures, liquid cooling cannot offer the interoperability that air cooling provides by default.
Air cooling works everywhere. Liquid cooling works only within a vendor-specific ecosystem. That incompatibility is the reason adoption remains slower than the technology itself would suggest.
Looking Ahead: The Standard That Unlocks the Future
Liquid cooling is essential for the next generation of AI-ready data centres. But its widespread adoption depends on shared standards that allow multi-vendor racks to become the norm again. Once coolant types, connectors, manifolds, temperature ranges, service models and safety protocols become consistent across the industry, liquid cooling will no longer lock customers into a single path.
At that point, the benefits of liquid cooling will be realised without compromising the flexibility operators rely on. Until then, air cooling continues to dominate simply because it integrates easily into every environment.
Liquid cooling provides superior thermals. Air cooling provides universal compatibility.
The industry must bridge that gap before liquid cooling can truly scale.
