Air cooling hits a hard limit at 41.3 kW per rack. Above this point, the volume of air required to remove the heat exceeds what any practical system can deliver, producing noise problems and temperature instability that no amount of engineering can fix. Liquid cooling offers far better thermodynamics, but the price is steep: retrofitting costs $2–3 million per megawatt. The choice between air and liquid cooling affects not only infrastructure budgets but also competitiveness in AI markets where even milliseconds matter.

December 2025 update: This year, liquid cooling moved from a cutting-edge option to a standard practice. The data center liquid cooling market reached $5.52 billion in 2025 and is expected to grow to $15.75 billion by 2030. Now, 22% of data centers use liquid cooling, making it a core part of infrastructure. Direct-to-chip cooling leads with 47% market share. Microsoft started rolling out liquid cooling across Azure campuses in July 2025 and is testing microfluids for future use. Colovore opened a $925 million facility that supports up to 200 kW per rack. New AI chips like NVIDIA H100/H200 and AMD MI300X produce over 700 W per GPU, which air cooling cannot handle. As a result, hybrid systems that use both air and liquid cooling are becoming the norm.  

Data centers worldwide consume 460 terawatt-hours of energy each year, and cooling accounts for 40% of that in traditional setups. NVIDIA’s latest GPU roadmap shows power use doubling every two years, reaching 1,500 watts per chip by 2026. Organizations now face a turning point: small improvements to air cooling cannot keep up with the rapid rise in heat. The choices made now will set operational costs for the next ten years.  
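
If that roadmap holds, per-chip power compounds quickly into per-rack heat. The sketch below simply extrapolates the doubling trend; the 2026 baseline comes from the roadmap figure above, while the eight-accelerator, four-node rack layout is an assumption for illustration only.

```python
# Rough extrapolation of the per-chip power trend quoted above: doubling
# every two years from an assumed ~1,500 W baseline in 2026. Roadmaps change,
# so treat this purely as a way to visualize how per-rack heat compounds.

def chip_watts(year: int, base_year: int = 2026, base_w: float = 1_500.0,
               doubling_years: float = 2.0) -> float:
    """Projected per-accelerator power if the doubling trend holds."""
    return base_w * 2 ** ((year - base_year) / doubling_years)

for y in range(2024, 2031, 2):
    # Assumed layout for illustration: 8 accelerators per node, 4 nodes per rack.
    rack_kw = chip_watts(y) * 8 * 4 / 1_000
    print(f"{y}: {chip_watts(y):,.0f} W per chip, ~{rack_kw:.0f} kW per rack (accelerators only)")
```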

Microsoft invested $1 billion to retrofit its facilities for liquid cooling after finding that air-cooled systems could not handle GPT training workloads. Amazon Web Services uses both methods, relying on air cooling for storage and CPU tasks and liquid cooling for GPU clusters. These different strategies show that no single cooling technology fits every need, and making the wrong choice can leave companies with costly unused equipment.  

The Physics Behind It All 

Air holds roughly 3,300 times less heat per unit volume than water under normal conditions. This fact shapes every cooling choice in today’s data centers. To move one kilowatt of heat with air, you need roughly 160 cubic feet per minute (CFM) of airflow at a 20-degree Fahrenheit temperature rise. For a 40-kilowatt rack, that means more than 6,000 CFM, driving cold-aisle air velocities toward those of a Category 2 hurricane.

Water’s specific heat capacity is 4.186 kJ/kg·K, so a single gallon absorbs as much heat per degree as roughly 440 cubic feet of air. At a flow rate of about 35 gallons per minute, water can carry away a 100-kilowatt heat load with only a 20-degree Fahrenheit temperature rise. Doing the same with air takes nearly 16,000 CFM, which is extremely noisy at around 95 decibels and burns roughly 25 kilowatts in fan power alone. As equipment gets denser, water’s advantage only grows.
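
For readers who want to check the arithmetic, the standard sensible-heat rules of thumb (roughly 1.08 BTU/hr per CFM per degree Fahrenheit for sea-level air, and roughly 500 BTU/hr per GPM per degree for water) reproduce the figures above. The short Python sketch below is a back-of-envelope check only, not a substitute for proper thermal modeling.

```python
# Back-of-envelope flow requirements using the standard sensible-heat rules
# of thumb at sea level:
#   air:   Q [BTU/hr] = 1.08 * CFM * dT_F
#   water: Q [BTU/hr] = 500  * GPM * dT_F
# The constants assume standard air density and plain water; real designs
# should use measured conditions and manufacturer data.

BTU_PER_HR_PER_KW = 3412.0

def air_cfm(load_kw: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to carry load_kw at the given temperature rise."""
    return load_kw * BTU_PER_HR_PER_KW / (1.08 * delta_t_f)

def water_gpm(load_kw: float, delta_t_f: float) -> float:
    """Water flow (GPM) needed to carry load_kw at the given temperature rise."""
    return load_kw * BTU_PER_HR_PER_KW / (500.0 * delta_t_f)

print(f"40 kW rack, 20 F air rise:    {air_cfm(40, 20):,.0f} CFM")
print(f"100 kW load, 20 F air rise:   {air_cfm(100, 20):,.0f} CFM")
print(f"100 kW load, 20 F water rise: {water_gpm(100, 20):,.1f} GPM")
```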

Heat transfer coefficients clearly show the difference. Air-to-surface convection ranges from 25 to 250 W/m²·K depending on the air velocity. Water-to-surface convection is much higher, from 3,000 to 15,000 W/m²·K, which is about 60 times better and allows for much smaller heat exchangers. When liquid contacts the chip directly through cold plates, the rate exceeds 50,000 W/m²·K, approaching the best possible conductive heat transfer.
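
Newton’s law of cooling, Q = h·A·ΔT, shows what those coefficients mean for hardware size. The sketch below plugs in illustrative mid-range h values from the ranges above; actual heat exchanger performance depends on geometry, flow rate, and fin design.

```python
# Newton's law of cooling, Q = h * A * dT, rearranged to estimate how much
# contact area each approach needs for the same load. The h values are
# illustrative mid-range figures from the ranges above; real performance
# depends on geometry, flow rate, and fin efficiency.

def required_area_m2(load_w: float, h_w_per_m2k: float, delta_t_k: float) -> float:
    """Heat-transfer area (m^2) needed to reject load_w at the given h and dT."""
    return load_w / (h_w_per_m2k * delta_t_k)

load_w, delta_t_k = 1_000.0, 10.0   # a 1 kW device, 10 K surface-to-coolant difference
for label, h in [("forced air", 100.0), ("water loop", 10_000.0), ("cold plate", 50_000.0)]:
    print(f"{label:>10}: {required_area_m2(load_w, h, delta_t_k):.4f} m^2")
```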

Temperature differences amplify these benefits. Air cooling requires a 30 to 40 degree Fahrenheit gap between the incoming air and the component to move enough heat. Liquid cooling works with just a 10 to 15 degree Fahrenheit difference, which keeps the components cooler, reduces leakage current, and makes them more reliable. According to Arrhenius equation modeling, lowering the operating temperature by 10 degrees Celsius can double the component’s lifespan.  
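
The lifespan claim follows from the Arrhenius acceleration factor. The snippet below assumes a typical activation energy of 0.7 eV for silicon wear-out mechanisms; real values vary by failure mode, so treat the result as a rule of thumb rather than a guarantee.

```python
import math

# Arrhenius acceleration factor: how much faster a thermally driven failure
# mechanism runs at T_hot than at T_cool. The 0.7 eV activation energy is a
# commonly assumed value for silicon wear-out; real values are mechanism-
# specific, so treat the result as a rule of thumb.

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_cool_c: float, t_hot_c: float, ea_ev: float = 0.7) -> float:
    t_cool_k, t_hot_k = t_cool_c + 273.15, t_hot_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_cool_k - 1.0 / t_hot_k))

# Running 10 C cooler (75 C vs 85 C junction) roughly halves the failure rate.
print(f"Acceleration factor, 85 C vs 75 C: {acceleration_factor(75, 85):.2f}")
```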

Altitude and humidity also limit the effectiveness of air cooling. For example, Denver’s high elevation lowers air density by about 17%, so you need more airflow to get the same cooling. In high-humidity environments, condensation can form when warm, moist air meets surfaces cooled below its dew point, causing serious damage to equipment. Liquid cooling doesn’t depend on the surrounding air, so it works reliably anywhere from Death Valley to the Himalayas.
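
The altitude penalty is easy to estimate. The sketch below uses an isothermal barometric formula at 15 °C and the 40 kW rack airflow from the earlier example; site-specific pressure and temperature data will give better numbers.

```python
import math

# Thinner air carries less heat per CFM, so required airflow scales with the
# inverse of the density ratio. This uses an isothermal barometric formula at
# 15 C as a rough approximation; use measured site pressure for real sizing.

def density_ratio(altitude_m: float, temp_k: float = 288.15) -> float:
    """Air density at altitude relative to sea level (isothermal approximation)."""
    g, molar_mass, r_gas = 9.80665, 0.0289644, 8.3145
    return math.exp(-g * molar_mass * altitude_m / (r_gas * temp_k))

denver_ratio = density_ratio(1609)           # Denver sits at roughly 1,609 m
sea_level_cfm = 6_300                         # the 40 kW rack from the earlier example
print(f"Density ratio at Denver: {denver_ratio:.2f}")
print(f"Airflow needed for the same cooling: {sea_level_cfm / denver_ratio:,.0f} CFM")
```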

Air Cooling Technologies and Their Limits 

For 40 years, traditional raised-floor air cooling was the standard in data centers because it was simple and reliable. Computer room air conditioning (CRAC) units push cold air into the raised-floor plenum, and it rises through perforated tiles into the cold aisles. Servers pull in this air and exhaust it into the hot aisles. This setup works well at three to five kilowatts per rack, but once loads exceed 15 kilowatts, hot-air recirculation overwhelms the system and cooling fails.

Hot-aisle and cold-aisle containment makes cooling more efficient by preventing hot and cold air from mixing. Using plastic curtains or solid panels to separate these zones helps maintain temperature differences, which boosts cooling performance. When done right, containment can cut cooling energy use by 20 to 30 percent and increase cooling capacity by 40 percent. Google’s data centers have achieved a PUE of 1.10 with advanced air-cooling and full containment, demonstrating what’s possible when technology is used effectively.  
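
Containment savings show up directly in PUE, which is simply total facility power divided by IT power. The sketch below applies the midpoint of the 20 to 30 percent cooling savings to a hypothetical 1 MW facility; the overhead split is illustrative, not measured data.

```python
# PUE is total facility power divided by IT power. This applies the midpoint
# of the 20-30% containment savings quoted above to a hypothetical facility;
# the starting overhead split is illustrative, not measured data.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """Power usage effectiveness for a simple three-bucket power model."""
    return (it_kw + cooling_kw + other_kw) / it_kw

it_kw, cooling_kw, other_kw = 1_000.0, 400.0, 100.0   # assumed 1 MW IT load
print(f"Before containment:        PUE = {pue(it_kw, cooling_kw, other_kw):.2f}")
print(f"After 25% cooling savings: PUE = {pue(it_kw, cooling_kw * 0.75, other_kw):.2f}")
```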

In-row cooling places refrigeration units closer to the servers, shortening the air path and reducing fan energy use. Vertiv’s CRV series mounts cooling units between server racks and can handle up to 55 kW per unit. Schneider Electric’s in-row coolers offer similar capacity and use variable-speed fans that adjust to the heat load. This method works well for medium-density setups, but it needs one cooling unit for every two or three server racks, which takes up floor space.

Rear-door heat exchangers are among the best air-cooling options for higher server densities. These units, which can be passive or active, attach to the back of server racks and cool the hot exhaust before it enters the room. Motivair’s ChilledDoor can handle up to 75 kW per rack by circulating chilled water. This technology keeps the usual airflow path while removing heat at the source. However, installation requires careful alignment, and the extra door weight can be a problem for older racks.

Direct expansion (DX) cooling removes the need for chilled-water systems by sending refrigerant straight to the cooling units, which simplifies the design and improves efficiency for smaller data centers. However, the risk of refrigerant leaks and limited scalability have slowed its use. Facebook stopped using DX cooling after leaks led to several facility evacuations and switched to water-based systems instead.

Liquid Cooling’s Expanding Taxonomy 

Single-phase direct-to-chip cooling is the most common liquid-cooling method today because it is reliable and relatively simple. Cold plates attached to CPUs and GPUs circulate coolant at 15 to 30 degrees Celsius, removing 70 to 80 percent of the server’s heat, while fans remove the rest. Modern in-rack coolant distribution units (CDUs) can support 120 kW per rack and include redundant pumps and leak detection. This technology needs only minor changes to servers, so it can be added to existing setups without replacing hardware.
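
Because the liquid loop captures only part of the heat, sizing a direct-to-chip deployment still involves an air-side budget. The sketch below splits a hypothetical 120 kW rack using the 70 to 80 percent capture range above and sizes the leftover airflow with the same sensible-heat rule of thumb used earlier.

```python
# Direct-to-chip loops capture only part of the rack's heat; the rest (DIMMs,
# VRMs, drives, NICs) still leaves as hot air. This splits a hypothetical
# 120 kW rack using the 70-80% capture range above and sizes the leftover
# airflow with the same sensible-heat rule of thumb used earlier.

def heat_split(rack_kw: float, liquid_fraction: float) -> tuple[float, float]:
    """Return (liquid_kw, air_kw) for a given liquid capture fraction."""
    liquid_kw = rack_kw * liquid_fraction
    return liquid_kw, rack_kw - liquid_kw

rack_kw = 120.0
for fraction in (0.70, 0.80):
    liquid_kw, air_kw = heat_split(rack_kw, fraction)
    residual_cfm = air_kw * 3412 / (1.08 * 20)   # 20 F rise on the air side
    print(f"{fraction:.0%} capture: {liquid_kw:.0f} kW to liquid, "
          f"{air_kw:.0f} kW to air (~{residual_cfm:,.0f} CFM)")
```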

Two-phase direct-to-chip cooling uses refrigerant phase changes to remove more heat. The coolant boils at about 50 degrees Celsius on the chip’s surface, and the vapor carries away the heat. ZutaCore’s waterless DLC cools up to 900 watts per GPU using low-pressure refrigerant R-1234ze. Because boiling is self-regulating, it maintains steady temperatures even when heat loads change. However, the system is complex and refrigerant costs are high, limiting its use.  
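
In two-phase systems the heat leaves as latent heat rather than as a temperature rise, so the relevant sizing number is refrigerant mass flow. The sketch below assumes a latent heat of vaporization on the order of 150 kJ/kg, a rough figure for low-pressure refrigerants; use the fluid’s datasheet value at the actual operating pressure.

```python
# In a two-phase cold plate the heat leaves as latent heat of vaporization,
# so the sizing number is refrigerant mass flow: m_dot = Q / h_fg. The
# 150 kJ/kg latent heat below is an assumed order-of-magnitude figure for a
# low-pressure refrigerant; use the datasheet value at the real operating
# pressure.

def vapor_mass_flow_g_per_s(load_w: float, h_fg_j_per_kg: float) -> float:
    """Refrigerant boil-off rate (g/s) needed to absorb load_w."""
    return load_w / h_fg_j_per_kg * 1_000.0

gpu_watts = 900.0   # the per-GPU load quoted above
print(f"~{vapor_mass_flow_g_per_s(gpu_watts, 150_000.0):.1f} g/s of vapor per GPU")
```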

Single-phase immersion cooling submerges servers completely in a dielectric fluid, eliminating the need for air cooling. GRC’s ICEraQ systems use synthetic oil to maintain an inlet temperature of 40 to 50 degrees Celsius. Submer’s SmartPod uses a similar method with biodegradable fluids and can handle 100 kW in just 60 square feet. Immersion cooling eliminates fans, reduces failure rates, and enables very high server density. However, the fluids cost $50 to $100 per gallon, and servicing the equipment can be difficult, which slows adoption.

Two-phase immersion is the most advanced cooling technology available. 3M’s Novec fluids boil at carefully controlled temperatures between 34 and 56 degrees Celsius, keeping component temperatures steady. Microsoft’s Project Natick demonstrated that two-phase immersion can handle heat fluxes of 250 W/cm², which is 10 times higher than air cooling can manage. BitFury uses 160 megawatts of two-phase immersion cooling for cryptocurrency mining, demonstrating that the method can scale up despite the fluids costing $200 per gallon.  

Hybrid approaches combine technologies for optimized cooling. Liquid cooling handles high-power components, while air cooling manages memory, storage, and networking equipment. HPE’s Apollo systems use this approach, with direct-to-chip cooling for processors and traditional air cooling for everything else. The strategy balances performance and cost, but it requires managing two parallel cooling infrastructures.

Moving Ahead Calls for Careful Planning

Choosing the right cooling technology is a key decision that touches every part of data center operations. The choice shapes how you design your facility, select equipment, run daily operations, and stay competitive for years to come. It’s important to consider not only what you need now, but also how your workloads, regulations, and technology might change in the future.

Air cooling still works well in certain situations, such as enterprise data centers with moderate power needs, edge sites with limited space, and locations that only occasionally require high power. Because air cooling is a mature technology, costs are predictable and expertise is readily available. New advances in containment, airflow management, and heat recovery keep air cooling useful even within its physical limits.

Liquid cooling is now essential for AI systems, high-performance computing, and any setup with more than 40 kW per rack. Its efficiency becomes even more valuable as energy costs and carbon taxes go up. Companies that switch early benefit from higher density, better reliability, and lower operating costs, which can offset the higher upfront investment.  
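
One way to frame “offsetting the investment” is a simple payback calculation against the retrofit costs quoted at the top of this article. Every input in the sketch below is an illustrative assumption (load, tariff, before-and-after PUE), and it ignores the density and reliability gains, so substitute your own figures before drawing conclusions.

```python
# A deliberately crude payback sketch: retrofit capex (the $2-3M per MW range
# quoted at the top of the article) against annual cooling-energy savings.
# Every input is an illustrative assumption, and density and reliability
# benefits are ignored, so substitute real figures before deciding anything.

HOURS_PER_YEAR = 8_760

def simple_payback_years(capex_usd: float, it_mw: float, pue_before: float,
                         pue_after: float, usd_per_kwh: float) -> float:
    """Years for energy savings alone to repay the retrofit cost."""
    saved_kw = it_mw * 1_000 * (pue_before - pue_after)
    annual_savings_usd = saved_kw * HOURS_PER_YEAR * usd_per_kwh
    return capex_usd / annual_savings_usd

years = simple_payback_years(capex_usd=2_500_000, it_mw=1.0,
                             pue_before=1.5, pue_after=1.2,
                             usd_per_kwh=0.10)
print(f"Illustrative payback on energy alone: {years:.1f} years")
```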

Introl guides organizations through cooling technology choices with full assessment, design, and implementation services. Our engineers review your current setup, plan for future needs, and create migration strategies that minimize disruptions. Whether you want to improve air cooling or move to liquid cooling, we offer solutions that balance performance, cost, and risk for your global operations.  

The real question is not if you should use liquid cooling, but when and how to make the switch. Companies that stick with air cooling will see higher costs and lose their edge as workloads grow. Those who adopt liquid cooling now will be ready for a future where high computing power sets leaders apart. The science is clear, so the decision is up to you.

Source: Liquid Cooling vs Air Cooling for AI Data Centers: 2025 Analysis 
