High-Density · 9 min read
In-Row vs Perimeter CRAC for High-Density Racks
Once rack density crosses 10 kW per cabinet, perimeter CRAC efficiency drops sharply. In-row cooling, rear-door heat exchangers, and direct-to-chip become the practical alternatives. Here's the decision tree.
The 10 kW per rack problem
Perimeter CRAC (cooling units around the room edge, blowing air through a raised access floor or overhead duct) is the default architecture for traditional data halls. It works well up to about 8-10 kW per rack — beyond that, the air path from the CRAC to the rack intake becomes too long, recirculation between hot and cold aisles increases, and you can't deliver enough cold air to the rack without unrealistic floor pressures.
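To put numbers on that limit, the airflow a rack needs scales linearly with its load. A minimal sketch, assuming a 12 K server ΔT and standard air properties (illustrative figures, not from a specific site):

```python
# Rough airflow needed to carry rack heat in air at a given delta-T.
# Assumed figures (illustrative): 12 K server delta-T, air density 1.2 kg/m^3,
# specific heat 1005 J/(kg.K).
AIR_DENSITY = 1.2   # kg/m^3
AIR_CP = 1005.0     # J/(kg.K)

def airflow_m3_per_s(rack_kw: float, delta_t_k: float = 12.0) -> float:
    """Volumetric airflow (m^3/s) required to remove rack_kw of heat."""
    return rack_kw * 1000.0 / (AIR_DENSITY * AIR_CP * delta_t_k)

for kw in (5, 10, 30, 50):
    v = airflow_m3_per_s(kw)
    print(f"{kw:>3} kW rack -> {v:.2f} m^3/s (~{v * 2118:.0f} CFM)")
```

A 10 kW rack already needs roughly 0.7 m³/s of supply air; at 30-50 kW the figure is several cubic metres per second per rack, which floor tiles and long room-level air paths simply cannot deliver.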
AI training, GPU rendering, and HPC have pushed rack densities to 30-100 kW. At those densities, perimeter CRAC simply can't do the job — you need cooling closer to the heat source.
The four high-density topologies
1. In-row cooling — narrow vertical cooling units installed within the rack row, alongside the server racks they serve. Uses chilled water (CW) or direct expansion (DX). Cooling capacity 10-50 kW per unit. Air path: ~30-50 cm from cooling unit to the nearest rack intake.
2. Rear-door heat exchangers (RDHx) — liquid-cooled coils mounted in place of the rear door of each server rack. Hot exhaust air from the servers passes through the heat exchanger before exiting the rack — neutralising the heat before it enters the room. Capacity 20-80 kW per door.
3. Direct-to-chip liquid cooling (D2C) — cold plates mounted directly on the CPU and GPU packages, with coolant pumped through the plates. Removes heat at the chip level rather than via air. Capacity 50-150 kW per rack typical for AI/GPU workloads.
4. Immersion cooling — entire servers submerged in dielectric coolant. Single-phase or two-phase. Capacity unlimited (effectively). Requires major facility change but enables the highest densities.
In-row cooling: the workhorse for high-density air cooling
In-row cooling is the practical air-cooled answer for rack densities of 10-50 kW. The cooling unit is in the same row as the racks it serves, so the air path is short.
Key characteristics:
- Capacity per unit: 10-50 kW (e.g. the Vertiv Liebert CRV is a common choice in this segment)
- Topology: DX or CW. CW is more common for larger deployments due to part-load efficiency.
- Width: typically 30-60 cm — narrower than a server rack, slotted between racks
- Airflow: front-to-back through the row, matching the cold-aisle / hot-aisle layout
- Hot-aisle containment: essentially mandatory for in-row cooling to work; without containment, the cold supply air recirculates and efficiency collapses
In-row cooling is the right answer for retrofits where you need to handle a few high-density rows within an otherwise low-density data hall. Install in-row units in the AI / GPU rows, leave perimeter cooling for the rest.
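As a rough sizing sketch for such a retrofit row, assuming 40 kW in-row units and an N+1 spare (both figures illustrative, not tied to a particular product):

```python
import math

def in_row_units_needed(racks: int, kw_per_rack: float,
                        unit_kw: float = 40.0, redundancy: int = 1) -> int:
    """Number of in-row cooling units for one contained row.

    unit_kw is an assumed per-unit capacity (40 kW here, illustrative);
    redundancy adds spare units on top of the N required for the load.
    """
    row_load_kw = racks * kw_per_rack
    return math.ceil(row_load_kw / unit_kw) + redundancy

# Example: 10 racks at 25 kW each -> 250 kW row load -> 7 duty units + 1 spare
print(in_row_units_needed(10, 25))   # 8
```

The spare unit matters: with hot-aisle containment, the row has no fallback to room-level cooling if a single in-row unit fails.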
Rear-door heat exchangers: easiest retrofit
RDHx is the lowest-friction retrofit for taking an existing data hall and adding high-density capacity. The heat exchanger door bolts onto an existing rack with no facility-wide change. Connect chilled water and you're done.
Key characteristics:
- Capacity per door: 20-80 kW (depending on water temperature and flow rate)
- Topology: chilled water only (CW)
- Effect on room temperature: RDHx removes ~80-90% of rack heat at the door — meaning the surrounding room sees only the residual 10-20% from each high-density rack
- Mixing with conventional cooling: ideal — you can run a high-density row with RDHx while the rest of the data hall runs perimeter CRAC at much lower density
RDHx is increasingly common in data centres being retrofitted for AI / GPU workloads. The economics are compelling: it is far cheaper than D2C or immersion, and it removes the high-density heat at the source.
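On the water side, door capacity comes down to chilled-water flow and ΔT. A minimal sketch, assuming a 6 K water-side ΔT and ~85% heat capture at the door (illustrative mid-range figures, not manufacturer data):

```python
WATER_CP = 4186.0   # J/(kg.K)

def rdhx_flow_l_per_s(door_kw: float, delta_t_k: float = 6.0) -> float:
    """Chilled-water flow (L/s, ~kg/s) needed to absorb door_kw at delta_t_k."""
    return door_kw * 1000.0 / (WATER_CP * delta_t_k)

def residual_room_load_kw(rack_kw: float, capture_fraction: float = 0.85) -> float:
    """Heat left for the room's air system after the rear door (85% capture assumed)."""
    return rack_kw * (1.0 - capture_fraction)

print(f"{rdhx_flow_l_per_s(50):.2f} L/s for a 50 kW door")          # ~2.0 L/s
print(f"{residual_room_load_kw(50):.1f} kW residual to the room")    # 7.5 kW
```

The residual figure is what the existing perimeter CRAC plant still has to handle, which is why RDHx mixes so well with a conventional data hall.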
Direct-to-chip: AI training's default
For 50-100 kW per rack and above (NVIDIA H100 / H200 racks, AMD MI300X), direct-to-chip is the prevailing architecture. Cold plates mounted on the CPUs and GPUs carry coolant directly over the hottest silicon, removing the heat before it ever reaches the room air.
Key characteristics:
- Capacity per rack: 50-150 kW typical for AI / GPU workloads
- CDU (Coolant Distribution Unit): required to interface between facility chilled water and the per-rack cold-plate loops
- Coolant: typically water with corrosion inhibitor, or specialty dielectric fluids
- Air cooling still needed: for non-D2C components (memory, NICs, PSUs) — typically 20-30% of rack heat remains air-cooled
- Facility complexity: higher than RDHx — requires per-rack manifolds, leak detection, and CDU integration with chiller plant
D2C is now the default for hyperscale AI infrastructure; NVIDIA's reference designs assume D2C above ~30 kW per rack.
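Because cold plates only capture part of the rack load, both the CDU loop and the residual air path have to be sized. A minimal sketch, assuming a 75% liquid-capture fraction (an illustrative mid-point of the 70-80% range implied above):

```python
def d2c_heat_split(rack_kw: float, liquid_fraction: float = 0.75):
    """Split a D2C rack's load into cold-plate (CDU) and residual air shares.

    liquid_fraction of 0.75 is an assumed mid-point of the 70-80% typically
    captured by cold plates; the remainder stays air-cooled (memory, NICs, PSUs).
    """
    liquid_kw = rack_kw * liquid_fraction
    air_kw = rack_kw - liquid_kw
    return liquid_kw, air_kw

liquid, air = d2c_heat_split(80.0)
print(f"80 kW rack -> {liquid:.0f} kW to the CDU loop, {air:.0f} kW air-cooled")
```

Even a "liquid-cooled" 80 kW rack still leaves roughly 20 kW of air-side load, which is why D2C rows are often paired with rear doors or in-row units.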
Immersion cooling: niche but growing
Immersion cooling submerges entire servers in a dielectric fluid (single-phase oil-based, or two-phase fluorocarbon). The fluid cools the servers directly with no air involvement.
Key characteristics:
- Capacity: effectively unlimited (heat removal limited only by external heat exchanger)
- PUE: typically 1.05-1.10 — among the best achievable
- Server form factor: must be immersion-compatible (some server vendors offer immersion-rated SKUs)
- Facility change: substantial — replacement of racking infrastructure with immersion tanks
- Servicing: servers must be lifted out of the tank and drained before any work, which makes maintenance a specialist process
Immersion is best suited to greenfield builds and very-high-density specialty deployments. Adoption in Australian commercial data centres is limited; the major use cases are research / HPC and some hyperscale builds.
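To put the PUE figure in context, the overhead a PUE implies scales directly with IT load. A minimal sketch; the 1.3 and 1.6 comparison points are assumed typical air-cooled values, not figures from this article:

```python
def facility_overhead_kw(it_load_kw: float, pue: float) -> float:
    """Cooling and distribution overhead implied by a PUE figure."""
    return it_load_kw * (pue - 1.0)

for pue in (1.06, 1.3, 1.6):
    overhead = facility_overhead_kw(1000.0, pue)
    print(f"PUE {pue}: {overhead:.0f} kW overhead per MW of IT load")
```

At megawatt scale, the gap between a 1.06 and a 1.3-1.6 facility is hundreds of kilowatts of continuous overhead, which is the core of the immersion energy case.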
Decision matrix
- < 10 kW per rack: perimeter CRAC + hot-aisle containment. Standard architecture, well understood.
- 10-30 kW per rack: perimeter CRAC + hot-aisle containment, OR in-row cooling for the high-density rows.
- 30-50 kW per rack: in-row cooling, OR rear-door heat exchangers. RDHx is the easier retrofit.
- 50-100 kW per rack: rear-door heat exchangers, OR direct-to-chip. D2C is increasingly the default for AI.
- 100+ kW per rack: direct-to-chip + rear-door for residual air cooling, OR immersion. Direct-to-chip dominates current AI deployments.
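The same matrix, reduced to a rule-of-thumb lookup (a sketch only; real selection also depends on retrofit constraints, water availability, and containment, as discussed above):

```python
def cooling_options(kw_per_rack: float) -> list[str]:
    """Rule-of-thumb cooling topologies for a given rack density (kW).

    Mirrors the decision matrix above; boundaries are indicative, not hard limits.
    """
    if kw_per_rack < 10:
        return ["perimeter CRAC + hot-aisle containment"]
    if kw_per_rack < 30:
        return ["perimeter CRAC + containment", "in-row cooling for high-density rows"]
    if kw_per_rack < 50:
        return ["in-row cooling", "rear-door heat exchangers"]
    if kw_per_rack < 100:
        return ["rear-door heat exchangers", "direct-to-chip"]
    return ["direct-to-chip + rear-door for residual air", "immersion"]

print(cooling_options(72))   # -> ['rear-door heat exchangers', 'direct-to-chip']
```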
When to call us
We consult on high-density cooling strategy — including air-vs-liquid crossover analysis, retrofit feasibility, CDU specification, and chiller plant integration. For AI / GPU buildouts in Australia, contact us for a workload-specific assessment.
[Request a Quote](/contact#quick-quote).
References
- ASHRAE TC 9.9 — Thermal Guidelines for Data Processing Environments
- Open Compute Project — Liquid Cooling Sub-Project specifications
- NVIDIA reference designs (GB200 NVL72, HGX H100 platforms)
- Field experience from Australian high-density data hall deployments