Data Center Cooling in 2026: Technology Options, Site Constraints, and the AI Advantage

GPU rack densities from AI compute workloads have outpaced air cooling's limits, forcing data center developers to choose between direct liquid cooling, immersion, and evaporative systems before a building is designed. This post covers the capabilities and site constraints of each approach, explains the water availability issue reshaping market selection, and shows where AI-assisted feasibility modeling changes the economics of the decision.

by Build Team · April 11, 2026

As AI compute drives rack densities beyond what air can handle, cooling has moved from an operational detail to a development-level constraint.

For most of data center history, cooling was an operational concern, not a development one. Sites were selected for power and connectivity; cooling was engineered in afterward. That assumption broke around 2023, and it has not returned.

The cause is GPU density. An Nvidia H100 server draws 10 to 14 kW, meaning a standard 42U rack packed with them can hit 60 to 80 kW of heat load. The newer Blackwell B200 generation pushes higher still. Traditional air cooling maxes out at roughly 20 to 25 kW per rack under the best conditions. The gap between what AI compute demands and what air cooling delivers is now the defining constraint in data center engineering.

The downstream effects reach the developer before a single server is installed. Cooling technology choice drives site requirements for water, land area, and utility infrastructure. Getting it wrong at the design stage means either stranded capacity or a building that cannot serve the tenant mix it was built for.


The Cooling Technology Stack

Air Cooling: Still the Default, But Narrowing Fast

Conventional air cooling, built on computer room air conditioners (CRAC) and computer room air handlers (CRAH) with cold aisle/hot aisle containment, remains the baseline for general-purpose and enterprise workloads.

Where it works:

  • Rack densities below 20 kW

  • Enterprise and cloud storage workloads where GPU density is not the constraint

  • Markets where water costs or regulations make evaporative cooling expensive

Where it breaks down:

  • Any AI training or inference workload above 30 kW per rack

  • High-density deployments where floor area for airflow management is limited

  • Markets with high ambient temperatures where free cooling hours are limited

Power usage effectiveness (PUE, total facility power divided by IT power) for well-designed air-cooled facilities typically runs 1.25 to 1.45, depending on climate and operating efficiency.
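
The PUE spread translates directly into energy spend. A minimal sketch of that arithmetic, assuming a hypothetical 50 MW IT load and $0.07/kWh power price (both placeholder figures, not data from this post):

```python
# Sketch: how PUE translates into annual facility energy cost.
# IT load and power price are illustrative assumptions.

def annual_energy_cost(it_load_mw, pue, price_per_kwh=0.07):
    """Total facility energy cost per year for a given IT load and PUE."""
    facility_kw = it_load_mw * 1000 * pue  # IT load scaled up by PUE
    return facility_kw * 8760 * price_per_kwh  # 8760 hours per year

it_mw = 50  # assumed IT load
for pue in (1.25, 1.45):
    print(f"PUE {pue:.2f}: ${annual_energy_cost(it_mw, pue) / 1e6:.1f}M/yr")
```

At these assumptions, the gap between the two ends of the air-cooled PUE range is several million dollars per year on energy alone.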

Rear-Door Heat Exchangers

Rear-door heat exchangers (RDHx) are a transitional technology: water-cooled panels mounted to the back of standard racks that capture heat at the source before it enters the room airspace.

Capabilities:

  • Handles 20 to 40 kW per rack reliably

  • Can be retrofitted into existing raised-floor environments

  • Lower capital cost than full liquid cooling infrastructure

Limitations:

  • Not sufficient for current-generation GPU clusters at full density

  • Requires chilled water infrastructure in the room

  • Maintenance complexity increases compared to air-only environments

Direct Liquid Cooling

Direct liquid cooling (DLC) routes chilled water or other coolants directly to server components via manifolds and cold plates mounted to CPUs and GPUs. Heat is transferred to the coolant at the chip level and carried out of the rack entirely.

Capabilities:

  • Handles 40 to 100+ kW per rack

  • PUE as low as 1.10 to 1.15 with high-efficiency cooling plants

  • Required by most hyperscalers for AI training clusters as of 2025

Site implications:

  • Requires cooling distribution units (CDUs) at the row or room level

  • Needs higher-capacity chilled water plant infrastructure

  • Increases mechanical complexity and specialized maintenance requirements

Most new hyperscale builds announced in 2025 and 2026 specify DLC-ready infrastructure as a base requirement. Developers who are not designing for this are building legacy product.

Immersion Cooling

Immersion cooling submerges servers directly in dielectric fluid, either single-phase (the fluid stays liquid) or two-phase (the fluid boils and recondenses). It is the highest-density cooling approach currently in production deployment.

Capabilities:

  • Handles 100 kW per rack and above

  • PUE of 1.03 to 1.05 in optimized deployments

  • Significant reduction in water use compared to evaporative cooling

Limitations:

  • High capital cost: immersion tank infrastructure adds $1 to $2 million per MW compared to DLC

  • Limited vendor ecosystem for maintenance and fluid management

  • Not all server hardware is validated for immersion deployment (this is improving but still a constraint)

  • Primarily viable for dedicated AI training clusters with stable, long-term workloads

Evaporative and Hybrid Cooling

Many large facilities use evaporative cooling (wet cooling towers) for heat rejection, either in combination with mechanical refrigeration or in climate-appropriate direct or indirect evaporative systems.

The water constraint:
A 100 MW data center using evaporative cooling can consume 200 to 400 million gallons of water annually, roughly the annual residential use of 2,000 to 4,000 households. This is now a live regulatory issue.
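
That household comparison is easy to sanity-check. A quick sketch, assuming roughly 300 gallons per day per US household (a common planning figure, used here as an assumption):

```python
# Back-of-envelope check on the water comparison above.
# Per-household usage (~300 gal/day) is an assumed planning figure.

gal_per_home_per_year = 300 * 365  # 109,500 gallons per household per year

for annual_gallons in (200e6, 400e6):
    homes = annual_gallons / gal_per_home_per_year
    print(f"{annual_gallons / 1e6:.0f}M gal/yr ≈ {homes:,.0f} households")
```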

Phoenix, in Maricopa County, has imposed water use restrictions on new data center developments. Singapore's national moratorium on new data centers from 2019 to 2022 was partly driven by water consumption concerns. Several California water districts have flagged data center expansion as a competing demand on limited groundwater.

Sites in water-stressed markets (much of the Southwest, parts of Texas, and increasingly the Southeast) need a credible cooling water plan before they clear the site screening stage.


Site Constraints by Cooling Type

| Cooling Approach | Water Demand | Land Premium | CapEx Premium | Viable Rack Density |
|---|---|---|---|---|
| Air cooling | Minimal | None | Baseline | Up to 20 kW |
| Rear-door heat exchangers | Low | None | Low | Up to 40 kW |
| Direct liquid cooling | Moderate | Low | Moderate | Up to 100 kW |
| Immersion cooling | Low | Moderate | High | 100 kW+ |
| Evaporative (wet cooling) | High | Moderate | Low-moderate | Variable |

The right cooling approach depends on the tenant mix, the market's water position, and the target power density. There is no universal answer in 2026, which is exactly why modeling it early matters.


Where AI Fits in Cooling Analysis

Cooling decisions made at the feasibility stage are difficult to reverse. A building designed for 20 kW per rack average density cannot easily be converted to serve 80 kW AI compute clusters without major mechanical plant replacement.

AI-assisted feasibility modeling helps developers make these decisions with better data:

Thermal load modeling: AI can simulate heat loads across different rack density assumptions and cooling configurations, identifying the crossover points where technology choices change the economics.
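
One way to picture a crossover point is a toy per-rack cost model. The fixed costs, per-kW slopes, and the ~25 kW air-cooling cutoff below are entirely hypothetical placeholders; the point is the structure of the comparison, not the numbers:

```python
# Toy crossover model: cooling cost per rack vs. target density.
# All cost figures are hypothetical; air cooling is treated as
# infeasible past ~25 kW per rack, per the limits discussed above.

def cost_air(kw):
    """Air cooling: low fixed cost, steep slope, hard density ceiling."""
    return 50_000 + 4_000 * kw if kw <= 25 else float("inf")

def cost_dlc(kw):
    """Direct liquid cooling: higher fixed cost (CDUs, plant), flatter slope."""
    return 180_000 + 1_200 * kw

for kw in (10, 20, 25, 40, 80):
    a, d = cost_air(kw), cost_dlc(kw)
    winner = "air" if a < d else "DLC"
    print(f"{kw:>3} kW/rack: air ${a:,.0f}  DLC ${d:,.0f}  -> {winner}")
```

Under these assumptions air wins at low density on cost alone, and DLC wins past the air ceiling, which is the kind of crossover the modeling surfaces before design begins.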

Water budget analysis: Given site water availability data and planned cooling technology, AI can model annual water consumption across operational scenarios and flag sites where water constraints would bind.

Life cycle cost comparison: Upfront capital cost is not the only variable. Energy efficiency (PUE), maintenance cost, water cost, and operational complexity all feed into a 10-year NPV model. AI can run these comparisons across cooling configurations at the early design stage, before engineers are engaged.
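
A stripped-down version of that comparison fits in a few lines. All capex and opex figures below are hypothetical placeholders, and the 8% discount rate is an assumption:

```python
# Minimal 10-year cost NPV comparison across cooling configurations.
# Every dollar figure and the discount rate are illustrative assumptions.

def npv_of_costs(capex, annual_opex, years=10, discount_rate=0.08):
    """NPV of total cost: upfront capex plus discounted annual opex."""
    return capex + sum(annual_opex / (1 + discount_rate) ** t
                       for t in range(1, years + 1))

configs = {
    # name: (capex $M, annual energy + water + maintenance $M)
    "air (PUE 1.35)":       (100, 42),
    "DLC (PUE 1.12)":       (115, 36),
    "immersion (PUE 1.04)": (130, 33),
}

for name, (capex, opex) in configs.items():
    print(f"{name}: 10-yr cost NPV ${npv_of_costs(capex, opex):.0f}M")
```

The pattern the sketch illustrates: a higher-capex configuration can still win over a 10-year hold once lower energy and water opex is discounted back.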

Climate sensitivity analysis: Free cooling hours vary by market. A facility in Seattle can run on free cooling for 6,000 or more hours per year; the same facility in Phoenix may get fewer than 1,000. AI can adjust PUE assumptions and energy cost models by location, improving the accuracy of feasibility underwriting.
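
The adjustment can be as simple as weighting PUE by hours spent in each cooling mode. The PUE endpoints below are illustrative assumptions, not measured values; the hour counts come from the Seattle and Phoenix figures above:

```python
# Sketch: blending PUE by free-cooling hours per market.
# PUE endpoints (1.10 on free cooling, 1.45 on mechanical) are assumed.

HOURS_PER_YEAR = 8760

def blended_pue(free_cooling_hours, pue_free=1.10, pue_mechanical=1.45):
    """Hours-weighted average of free-cooling and mechanical-cooling PUE."""
    frac_free = free_cooling_hours / HOURS_PER_YEAR
    return frac_free * pue_free + (1 - frac_free) * pue_mechanical

for market, hours in {"Seattle": 6000, "Phoenix": 1000}.items():
    print(f"{market}: blended PUE ≈ {blended_pue(hours):.2f}")
```

Feeding the blended PUE back into an energy cost model is what makes the feasibility underwriting location-specific.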


The Developer Takeaway

Cooling technology selection is no longer an engineering afterthought. It is a development decision that shapes site requirements, capital structure, and tenant eligibility before the first design document is produced.

Developers building for the AI compute market in 2026 need to answer three questions before they select a cooling approach: What rack density does the target tenant base require? What water constraints does the site impose? What is the all-in cost difference between cooling configurations over a 10-year hold?

AI makes those questions answerable at feasibility speed. The alternative is designing to assumptions that may not match the market by the time the building delivers.