1. Why air cooling stops working above 30 kW per rack
Air cooling capacity is bounded by airflow. To remove 30 kW of heat from a single rack at a 12°C delta-T (typical cold-aisle to hot-aisle), you need roughly 7,500 CFM (3.5 m³/s) of air through that rack. At that flow rate, air starts to bypass server fans, hot spots emerge in the middle of the rack, and noise levels rise into OSHA-relevant ranges.
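As a sanity check, the sensible-heat relation (heat = air density × volumetric flow × specific heat × temperature rise) puts the theoretical minimum airflow for 30 kW at a 12 K rise at roughly 2.1 m³/s; practical designs add a large allowance for bypass and recirculation, which is where figures in the 7,500 CFM range come from. A minimal sketch, assuming standard air properties and an illustrative 60% allowance:

```python
# Back-of-envelope sensible-heat airflow: Q = rho * V_dot * cp * dT
RHO_AIR = 1.2         # kg/m^3 -- assumed air density at typical supply temperature
CP_AIR = 1005.0       # J/(kg*K) -- assumed specific heat of air
M3S_TO_CFM = 2118.88  # 1 m^3/s expressed in cubic feet per minute

def min_airflow_m3s(heat_kw: float, delta_t_k: float) -> float:
    """Theoretical minimum airflow to remove heat_kw at a delta_t_k air temperature rise."""
    return (heat_kw * 1000.0) / (RHO_AIR * CP_AIR * delta_t_k)

theoretical = min_airflow_m3s(30.0, 12.0)   # ~2.1 m^3/s (~4,400 CFM)
practical = theoretical * 1.6               # assumed ~60% allowance for bypass and mixing
print(f"theoretical minimum: {theoretical:.1f} m3/s ({theoretical * M3S_TO_CFM:,.0f} CFM)")
print(f"with bypass allowance: {practical:.1f} m3/s ({practical * M3S_TO_CFM:,.0f} CFM)")
```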
Above 30 kW per rack, even aggressive in-row cooling and full hot-aisle containment cannot move enough air to keep the rack-face supply temperature within ASHRAE TC 9.9 limits. Server fans throttle, performance drops, and equipment lifespan shortens.
This is not a future problem. An eight-GPU NVIDIA H100 SXM server pulls roughly 10-12 kW at full load. A GB200 NVL72 rack (72 Blackwell GPUs across 18 compute trays, plus NVLink switch trays) is rated at around 130 kW. These systems are shipping in volume in 2026.
2. Rear-door heat exchangers (RDHx)
RDHx are passive water-cooled coils that bolt to the back of a rack. Server fans push hot exhaust air through the coil, where it transfers heat to a chilled-water (or condenser-water) loop. The air leaves the rack at room-neutral or even sub-ambient temperatures.
RDHx capacity ranges from 25 to 75 kW per rack depending on water temperature, flow rate and coil depth. Lower water temperatures (10-14°C) can support higher capacities; warmer water (18-24°C) reduces capacity but improves chiller efficiency.
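The water side follows the same sensible-heat arithmetic, which is why supply temperature and flow rate dominate the capacity figures above. A minimal sketch, assuming ~1 kg/L water density and folding coil effectiveness into the achievable water temperature rise:

```python
CP_WATER = 4186.0  # J/(kg*K) -- specific heat of water; ~1 kg/L density assumed

def rdhx_capacity_kw(flow_l_per_s: float, water_rise_k: float) -> float:
    """Heat a rear-door coil can reject for a given water flow and temperature rise.
    Coil effectiveness is folded into the achievable water_rise_k."""
    return flow_l_per_s * CP_WATER * water_rise_k / 1000.0

# Colder supply water permits a larger rise across the coil before the leaving-air
# temperature suffers, which is why 10-14 degC supply supports more kW per rack.
print(rdhx_capacity_kw(1.0, 10.0))   # ~42 kW at 1 L/s and a 10 K rise
print(rdhx_capacity_kw(1.5, 12.0))   # ~75 kW at 1.5 L/s and a 12 K rise
```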
RDHx is the lowest-disruption upgrade for an existing data hall facing GPU densification. Standard 19" racks become high-density-capable by bolting on the rear door, plumbing a water connection, and connecting to the existing chilled-water plant. No server-side changes required.
Capex for RDHx is roughly A$15-30k per rack including the coil, water-distribution piping, and isolation valves. For 30-50 racks, total capex is in the A$500k-1.5M range — significantly cheaper than a full direct-to-chip retrofit.
Note
RDHx requires per-rack water connections. Water-distribution piping running through the room adds complexity. Plan for leak detection at every drop and accept that adding a water connection at every rack is a real engineering project, not a plug-and-play upgrade.
3. In-row chilled water — high-density variant
High-capacity in-row chilled-water units (60-80 kW per unit) can support racks up to ~25-30 kW each within a fully contained pod. The cooling is close-coupled (units between racks, not at the room perimeter), so airflow paths are short and air management is tight.
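As a rough sizing illustration, assuming the 60-80 kW-per-unit figure above and the usual N+1 unit redundancy within a contained pod:

```python
import math

def inrow_units_required(racks: int, kw_per_rack: float,
                         unit_capacity_kw: float = 70.0,
                         redundant_units: int = 1) -> int:
    """In-row units for a contained pod: duty units sized on total row load, plus redundancy."""
    row_load_kw = racks * kw_per_rack
    return math.ceil(row_load_kw / unit_capacity_kw) + redundant_units

print(inrow_units_required(10, 25))   # 10 racks at 25 kW -> 250 kW -> 4 duty + 1 = 5 units
print(inrow_units_required(10, 40))   # at 40 kW per rack the unit count and row length grow
```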
For racks above 30 kW, in-row alone struggles. The unit cannot move enough air through a single rack at that density. Some operators address this with vertical exhaust ducts that channel hot exhaust above the rack rather than into the hot aisle — but this is an unusual and expensive design.
In-row makes sense as a complement to RDHx in mixed-density halls. The in-row units handle the average load of standard racks (5-15 kW), while RDHx handles the high-density GPU racks (30+ kW). Both share the same chilled-water plant.
4. Direct-to-chip liquid cooling
Direct-to-chip cooling (D2C) runs water or dielectric fluid through cold plates mounted directly on CPUs and GPUs. The heat is captured at the source — the silicon — and transferred via a coolant distribution unit (CDU) to a building chilled-water loop, dry cooler, or external cooling tower.
For most builds, D2C is the practical choice above ~50 kW per rack. The cold plate sits within millimetres of the silicon, removes heat at much higher temperatures than air can (typical cold-plate inlet 30-45°C), and supports rack densities above 100 kW.
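The warm-water operating point also sets the secondary-loop flow the CDU must deliver per rack. A rough sketch, assuming the cold plates capture around three-quarters of the rack heat (the split is discussed in section 6) and a 12 K rise across the plates:

```python
CP_WATER = 4186.0  # J/(kg*K) -- specific heat of water

def d2c_loop_flow_lps(rack_kw: float, capture_fraction: float, rise_k: float) -> float:
    """Secondary-loop flow (L/s) for the heat captured by cold plates in one rack."""
    captured_w = rack_kw * 1000.0 * capture_fraction
    return captured_w / (CP_WATER * rise_k)

# ~100 kW rack, ~75% of heat captured at the silicon, 12 K rise across the cold plates
print(f"{d2c_loop_flow_lps(100.0, 0.75, 12.0):.1f} L/s per rack")   # ~1.5 L/s
```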
D2C requires server-side modifications. The server must come from the manufacturer with cold-plate-equipped CPUs/GPUs, manifold connections, and quick-disconnect fittings. NVIDIA's GB200 reference designs are D2C-native; B200 and H100 SXM platforms are also available from some OEMs in D2C variants.
Capex for D2C is significant. Beyond the per-server cost premium (typically A$5-15k per server vs air-cooled equivalents), the building infrastructure includes: CDU(s) for the entire pod, primary/secondary water loops, manifold distribution at every rack, leak-detection at every drop, secondary containment, and specialist commissioning.
5. Immersion cooling
Immersion cooling submerges entire servers in a dielectric fluid (typically a mineral oil or fluorocarbon). The fluid absorbs heat from the silicon and other components, circulates through a heat exchanger, and rejects heat to a building water loop or external cooler.
Immersion offers the highest density of any topology — 100+ kW per tank is routine, with some specialist designs reaching 250 kW. It also offers near-perfect heat capture (no air-cooling losses) and tolerates very warm-water heat rejection (40-50°C return).
The trade-off is operational complexity. Servers must be specified for immersion (sealed connectors, no spinning hard disks, conformal coating). Server maintenance involves draining, lifting, and re-immersing — significantly different from rack-and-stack workflows. Floor loading is high (immersion tanks are heavy).
For Australian builds, immersion is appropriate for AI training pods, defence intelligence, and specialist HPC. Most enterprise environments stay with D2C, the highest-density option short of immersion.
6. The hybrid approach — D2C + RDHx
Modern AI server architectures generate ~70-80% of their heat at the GPU/CPU silicon and ~20-30% at the rest of the server (PSUs, memory, networking). D2C captures the silicon heat directly, but the remaining heat still needs to be removed from the rack.
The hybrid pattern: D2C for the silicon, RDHx for the residual rack-level heat. The chilled-water plant serves both: cold water (~12°C) to the RDHx coils, warm water (~30-45°C) to the D2C cold plates. This separates the two heat loads at the optimum operating temperature for each.
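A worked split for a single high-density rack, using the ~75% silicon-heat assumption above (figures are illustrative):

```python
def hybrid_split_kw(rack_kw: float, silicon_fraction: float = 0.75):
    """Split rack heat between D2C cold plates and the residual RDHx/air path."""
    d2c_kw = rack_kw * silicon_fraction
    return d2c_kw, rack_kw - d2c_kw

d2c, rdhx = hybrid_split_kw(130.0)   # e.g. an NVL72-class rack
print(f"D2C cold plates (warm water, ~30-45 degC): {d2c:.0f} kW")   # ~98 kW
print(f"RDHx coil (cold water, ~12 degC):          {rdhx:.0f} kW")  # ~32 kW
```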
NVIDIA reference architectures for high-density GB200 deployments include this hybrid as the default. We have designed Australian deployments using Vertiv CoolLoop RDHx + Vertiv Liebert XDU CDUs for the D2C side, all on a common warm-water-capable chiller plant.
7. Capex bands by topology
Approximate Australian capex bands for a 50-rack, 5 MW GPU pod build (mid-2026 pricing):
| Topology | Capex band (AUD) | Density supported per rack |
|---|---|---|
| Perimeter CHW + containment | $3-4M | 8-12 kW |
| In-row CHW + containment | $5-6M | 15-25 kW |
| RDHx + in-row CHW hybrid | $7-9M | 30-50 kW |
| Direct-to-chip + RDHx hybrid | $10-13M | 50-100 kW |
| Immersion (specialist build) | $13-18M | 100-250 kW |
These are infrastructure-only numbers: chiller plant, distribution piping, CDUs, CRAC/CRAH units, and controls. Server capex (the GPU servers themselves) is separate and at least an order of magnitude larger.
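For like-for-like comparison it can help to normalise these bands to cost per kW of supported IT load. A rough sketch using assumed mid-points of the table (illustrative readings only, not quotations):

```python
# Normalise the capex bands to A$ per kW of supported rack density (50-rack pod).
topologies = {
    "Perimeter CHW + containment":  (3.5e6, 10),    # (capex AUD, kW per rack mid-point)
    "In-row CHW + containment":     (5.5e6, 20),
    "RDHx + in-row CHW hybrid":     (8.0e6, 40),
    "Direct-to-chip + RDHx hybrid": (11.5e6, 75),
    "Immersion (specialist build)": (15.5e6, 175),
}

RACKS = 50
for name, (capex_aud, kw_per_rack) in topologies.items():
    print(f"{name:31s} ~A${capex_aud / (RACKS * kw_per_rack):,.0f}/kW supported")
```

On this rough per-kW view the denser topologies tend to come out cheaper per kW supported, even though the absolute spend is higher.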
8. Operational implications
Each topology has operational implications beyond the capex.
- RDHx: leak detection at every rack. Quarterly water-loop inspection. Coil cleaning every 2-3 years.
- In-row CHW: standard CRAC maintenance schedule. Containment doors and curtains need a quarterly check.
- Direct-to-chip: specialist commissioning. CDU maintenance is fluid-loop work, not refrigerant work. Fluid quality testing every 6 months.
- Immersion: dielectric fluid is consumable (replenishment over time). Specialist staff training for tank maintenance. Fluid disposal at end-of-life.
For Australian operators, the labour pool for D2C and immersion is still small — ARC-licensed refrigeration technicians outnumber liquid-cooling specialists by orders of magnitude. Specialist consultancy and vendor support are currently essential for these builds.
9. Heat reuse opportunities
High-temperature liquid cooling (D2C at 40-45°C, immersion at 45-50°C) opens up heat-reuse opportunities that air cooling cannot match. The exhaust water is hot enough to feed building heating systems, domestic hot water, swimming pool heating, or district heating.
In Northern Hemisphere markets (Sweden, Germany, Netherlands), heat reuse is a major selling point for hyperscale builds — local district heating networks pay the data centre for the rejected heat. Australian climates are less favourable for traditional district heating, but specific opportunities exist (hospital domestic hot water, university campus heating in southern states).
Heat reuse is rarely a primary justification for D2C / immersion in Australia, but it is a meaningful tertiary benefit and worth modelling for sites with adjacent heat consumers.
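As a starting point for that modelling, a minimal sketch of the annual heat available for reuse, assuming roughly 75% of pod load is captured in the warm loop and 80% average utilisation:

```python
def recoverable_heat_mwh_per_year(pod_mw: float, capture_fraction: float = 0.75,
                                  utilisation: float = 0.8) -> float:
    """Annual thermal energy (MWh_th) available in the warm-water loop for reuse."""
    return pod_mw * capture_fraction * utilisation * 8760  # hours per year

print(f"{recoverable_heat_mwh_per_year(5.0):,.0f} MWh_th per year")  # ~26,000 for a 5 MW pod
```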
10. Specification checklist
When specifying high-density cooling, document the following (a minimal structured example follows the checklist):
- Per-rack max load and average load (kW)
- Server form-factor and air/liquid cooling readiness
- Topology selected and rationale
- Cooling capacity per rack and per pod
- Water temperatures (supply/return) at every loop
- Redundancy class on cooling plant and water distribution
- Leak detection at every drop
- Specialist commissioning scope
- Operational training plan for the in-house team
- End-of-life disposal pathway for fluids and specialist equipment
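The same checklist can be carried as a structured record so it travels with the project from design through commissioning. A minimal sketch; the field names are illustrative rather than any industry standard:

```python
from dataclasses import dataclass, field

@dataclass
class CoolingSpec:
    """Illustrative structure for the specification checklist above."""
    rack_max_kw: float
    rack_avg_kw: float
    server_cooling_readiness: str     # form factor and air/liquid readiness
    topology: str
    topology_rationale: str
    pod_capacity_kw: float
    loop_temps_c: dict = field(default_factory=dict)  # supply/return per loop
    redundancy_class: str = "N+1"
    leak_detection_per_drop: bool = True
    commissioning_scope: str = ""
    training_plan: str = ""
    fluid_disposal_pathway: str = ""

spec = CoolingSpec(
    rack_max_kw=130, rack_avg_kw=95,
    server_cooling_readiness="NVL72-class, D2C-native",
    topology="Direct-to-chip + RDHx hybrid",
    topology_rationale="Rack density above 50 kW; existing CHW plant reused for RDHx",
    pod_capacity_kw=5000,
    loop_temps_c={"RDHx": (12, 18), "D2C secondary": (32, 45)},
)
```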