Architecture

CRAC Topology Explained: DX, CDW, CHW, In-Row & RDHx

A working translation of CRAC topology choices — DX, CDW, CHW, in-row, RDHx — into the load profiles, plant footprints and uptime targets that drive the decision.

Updated 3 May 2026 · 12 min read · 10 chapters

Summary

CRAC topology choice is the most consequential design decision in a data centre cooling system. The five common topologies — direct-expansion (DX), condenser water (CDW), chilled water (CHW), in-row and rear-door heat exchangers (RDHx) — each suit a different combination of load size, rack density, plant infrastructure and uptime class.

This guide walks through each topology in detail, explains where each one wins and where each one breaks, and gives you a working framework to pick the right answer for your site. It is the reference our engineers use on commissioned design work, grounded in ASHRAE TC 9.9 and AS/NZS 1668 / 5149.

1. The five topologies at a glance

CRAC (Computer Room Air Conditioning) topologies differ in how they move heat from the IT load to outside the building. The five topologies you will encounter in Australian data centres are direct-expansion (DX), condenser water (CDW), chilled water (CHW), in-row, and rear-door heat exchangers (RDHx). Each can be combined with the others; large data halls often run a mixed fleet.

Topology   Heat carrier                  Best load size   Rack density limit
DX         Refrigerant (R-32, R-454B)    < 50 kW          5-8 kW per rack
CDW        Condenser water + tower       50-300 kW        8-15 kW per rack
CHW        Chilled water + chiller       > 100 kW         8-15 kW per rack (perimeter)
In-row     CDW or CHW close-coupled      Density-driven   15-30 kW per rack
RDHx       Passive water at rack rear    Density top-up   15-30 kW per rack

Topology summary

2. DX (direct expansion) — the default for small sites

In a DX system, refrigerant runs directly from the indoor cooling unit to an outdoor condenser. The compressor sits in the indoor unit (or sometimes outdoors). Refrigerant absorbs heat from the IT room, transports it through the indoor coil, and rejects it at the outdoor condenser.

DX is the most cost-effective topology for sites under 50 kW total IT load. There is no chilled-water plant to build, no cooling tower, no water treatment program. You install the indoor unit, run refrigerant pipework to the outdoor condenser, and commission. Capex is low and lead times are short.

The major Australian DX product families are Vertiv Liebert PEX4 and PDX, Schneider Uniflair LE and TD, and Stulz CyberAir. Capacity ranges from ~8 kW for small comms-room units up to ~150 kW for large perimeter floor-mounts. Above 150 kW per unit, refrigerant pipework becomes impractical and CDW or CHW become more efficient.

Note

Australia is moving from R-410A to lower-GWP refrigerants under the HFC phase-down. New DX installations should specify R-32 or R-454B. Older R-410A systems remain serviceable but new equipment will be R-32 / R-454B from 2025 onwards.

DX has three practical limits: rack density above ~8 kW, total load above ~50 kW, and sites with no outdoor space for condensers. Beyond those, the topology starts to break: refrigerant runs become long, compressor maintenance becomes more frequent, and the lifecycle cost compared with CDW or CHW becomes unfavourable.

3. CDW (condenser water) — middle scale

CDW topologies use a closed condenser-water loop to carry heat from the indoor cooling units to a cooling tower. The indoor units have integrated compressors (like DX) but reject heat to water rather than directly to outdoor air via refrigerant. The cooling tower handles the final heat rejection.

CDW is more efficient than DX at scale because cooling towers reject heat at the wet-bulb temperature, which is below dry-bulb temperature for most of the year. In Brisbane the difference can be 5-8°C; in Sydney 6-10°C. That delta translates directly to compressor energy savings.
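A common rule of thumb puts compressor energy savings at roughly 2-3% per degree of reduced condensing temperature. A minimal sketch of that arithmetic (the per-degree figure is an assumed rule-of-thumb value, not vendor data):

```python
def compressor_saving_fraction(delta_c: float, pct_per_degree: float = 0.025) -> float:
    """Rule-of-thumb compressor energy saving from a lower condensing temperature.

    delta_c: reduction in condensing temperature (degrees C), e.g. the
    wet-bulb vs dry-bulb advantage of a cooling tower over an air-cooled condenser.
    """
    return delta_c * pct_per_degree

# Sydney's 6-10 C wet-bulb advantage suggests very roughly 15-25% compressor savings.
print(compressor_saving_fraction(6), compressor_saving_fraction(10))
```

Treat the output as a first-order estimate only; real savings depend on part-load behaviour and the specific compressor map.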

CDW suits 50-300 kW total IT load, especially when paired with N+1 redundancy across multiple indoor units sharing a single tower. Major products include Vertiv Liebert PEX-W, Stulz CyberAir CW, and Climaveneta CRAC-W ranges.

CDW comes with operational overhead. The cooling tower needs water-treatment to prevent Legionella under AS/NZS 3666. Water consumption is real — a 100 kW CDW system can use 200-400 L/hour of make-up water in summer. And the plant room needs space for primary/secondary pumps, expansion tanks and chemical-dosing equipment.
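The make-up figure can be sanity-checked from first principles: evaporative heat rejection consumes roughly the latent heat of vaporisation of water, plus blowdown to control dissolved solids. A minimal sketch (cycles-of-concentration is an assumed value; drift losses are ignored):

```python
LATENT_HEAT_KJ_PER_KG = 2260.0  # approx. latent heat of vaporisation of water

def makeup_water_lph(heat_kw: float, cycles_of_concentration: float = 3.0) -> float:
    """Order-of-magnitude cooling-tower make-up water estimate, litres per hour.

    Evaporation carries the rejected heat; blowdown is modelled from an
    assumed cycles-of-concentration. Drift is ignored.
    """
    evap_lph = heat_kw / LATENT_HEAT_KJ_PER_KG * 3600.0  # kg/h ~= L/h for water
    blowdown_lph = evap_lph / (cycles_of_concentration - 1.0)
    return evap_lph + blowdown_lph

# A 100 kW system lands inside the 200-400 L/h summer range quoted above.
print(round(makeup_water_lph(100)))
```

Higher cycles of concentration reduce blowdown but demand a tighter water-treatment program.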

4. CHW (chilled water) — large data halls

CHW topologies use a central chilled-water plant to produce chilled water (typically 7°C supply / 12°C return), which is distributed via insulated piping to perimeter or in-row CRAC units. The CRAC units have no compressors of their own — they are essentially fan-coil units.

At scale (>100 kW IT load) CHW is the most energy-efficient topology because: (a) one large central chiller is more efficient than many small distributed compressors, (b) free-cooling economisers can produce chilled water from outdoor air during cool weather without running the chiller at all, (c) variable-flow pumping matches plant load to IT load smoothly.

For Australian sites, free-cooling hours per year vary by location: Hobart and Canberra have ~3,000-4,000 hours of free-cooling potential; Melbourne and Sydney ~2,000-2,500; Brisbane ~600-1,000; Darwin and Cairns negligible. These hours translate directly to chiller off-time and PUE improvement.
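To see how those hours translate to energy, a rough annual model helps (the chiller COP and the economiser overhead fraction are assumed illustrative values, not plant data):

```python
HOURS_PER_YEAR = 8760

def annual_cooling_kwh(it_kw: float, free_hours: int,
                       chiller_cop: float = 4.0,
                       free_overhead: float = 0.08) -> float:
    """Very rough annual cooling energy for a CHW plant with an economiser.

    chiller_cop: assumed mechanical-cooling coefficient of performance.
    free_overhead: pump/fan power as a fraction of IT load during economiser hours.
    """
    mech_hours = HOURS_PER_YEAR - free_hours
    mech_kwh = it_kw / chiller_cop * mech_hours      # compressor energy
    free_kwh = it_kw * free_overhead * free_hours    # pumps and fans only
    return mech_kwh + free_kwh

# 500 kW IT load: Hobart-like 3,500 free hours vs Brisbane-like 800.
print(f"{annual_cooling_kwh(500, 3500):,.0f} vs {annual_cooling_kwh(500, 800):,.0f} kWh/yr")
```

The gap between the two sites is driven almost entirely by chiller off-time, which is why the same plant design yields different PUE in different climates.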

Major CHW products include Vertiv Liebert PCW, Schneider Uniflair AGRD, Stulz CyberAir CW, and the broader chilled-water portfolio from Daikin Applied and Climaveneta. CRAC capacity is essentially unlimited at this scale — multi-MW chiller plants are routine for hyperscale data halls.

5. In-row cooling — high density

In-row cooling places the cooling unit between IT racks rather than at the room perimeter. The unit takes hot exhaust air directly from the hot aisle, removes the heat, and delivers cold air directly to the cold aisle — without the air mixing through the rest of the room.

In-row is mandatory above 10-15 kW per rack. At those densities, perimeter CRAC cannot move enough air across the room to keep the rack face supplied — the airflow path is too long and the rack-face temperature rises beyond ASHRAE's allowable envelope.
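The airflow problem is easy to quantify with the sensible-heat relation Q = m·cp·ΔT. A quick sketch, assuming standard air properties and a typical air-side delta-T:

```python
def rack_airflow_m3h(heat_kw: float, delta_t_c: float = 12.0,
                     cp_kj_per_kg_k: float = 1.005,
                     rho_kg_m3: float = 1.2) -> float:
    """Volumetric airflow needed to remove heat_kw at the given air-side delta-T."""
    mass_flow_kg_s = heat_kw / (cp_kj_per_kg_k * delta_t_c)
    return mass_flow_kg_s / rho_kg_m3 * 3600.0  # m^3/h

# A 15 kW rack at a 12 C delta-T needs roughly three times the airflow of a 5 kW rack.
print(round(rack_airflow_m3h(15)), round(rack_airflow_m3h(5)))
```

At ~3,700 m³/h for a single 15 kW rack, pushing that air across a room from perimeter units becomes the bottleneck, which is exactly why close-coupled cooling takes over.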

In-row units run on either CDW or CHW (rarely DX). Chilled water is preferred at scale because the variable-flow pumping handles the spiky load profile of high-density racks better than DX compressor staging.

The architectural pattern with in-row is hot-aisle / cold-aisle containment. Curtains or doors enclose the hot aisle, preventing hot exhaust air from re-circulating into the cold aisle. Containment is mandatory above ~12 kW per rack — without it, the in-row efficiency advantage collapses.

Practical tip

In-row runs at higher water temperatures than perimeter (typically 18°C supply / 24°C return), which gives more free-cooling hours and better chiller efficiency. ASHRAE TC 9.9 W4/W5 envelopes were specifically defined to enable warm-water in-row designs.

6. RDHx (rear-door heat exchangers)

Rear-door heat exchangers are passive coils that bolt to the back of an IT rack. They do not actively move air — they cool the air the rack itself exhausts as it passes through the coil before entering the room. RDHx are typically chilled-water or condenser-water cooled, with no compressor or fan in the coil itself.

RDHx solves the high-density problem at the rack level. A 25 kW rack with RDHx exhausts air at room-neutral temperature — the heat goes into the cooling water loop, not the room. This means the rest of the room's cooling infrastructure does not need to handle the high-density rack's heat at all.

Major products: Vertiv Liebert CoolLoop, Schneider Uniflair Active Rear Door (AGRD), and various third-party RDHx from Coolcentric and others. Capacity is typically 25-50 kW per rack.

RDHx adds rack depth (typically 100-150mm at the back) and requires a chilled-water or condenser-water connection at every rack. For greenfield builds with high-density expectations from day one, RDHx is the cleanest answer. For brownfield retrofits, RDHx is harder than in-row because of the water-distribution overhead.

7. Liquid cooling — beyond CRAC

Above 30 kW per rack, even RDHx struggles. The current generation of GPUs (NVIDIA H100, B200, GB200) routinely pulls 10-15 kW per server, and 8-server racks easily hit 80-120 kW. At those densities, air cooling cannot remove the heat at any reasonable airflow.

Direct-to-chip liquid cooling runs water or dielectric fluid through cold plates mounted directly on CPUs and GPUs. The cold plates absorb heat at the source and reject it via a Cooling Distribution Unit (CDU) into a building chilled-water loop or external dry cooler.

Australian deployments at this density are still rare but growing. The major hyperscalers, defence research, and AI startups are the early adopters. CRAC Services partners with specialist liquid-cooling consultancies for these deployments — it is a different engineering discipline to traditional CRAC and warrants specialist sign-off.

Immersion cooling (submerging entire servers in dielectric fluid) is the next step beyond direct-to-chip. It is even more density-tolerant but requires specialist server hardware (immersion-rated PSUs, sealed connectors). Niche but real for the highest-density AI training environments.

8. Mixing topologies

Real data halls rarely use a single topology. A typical mixed deployment looks like: perimeter CHW for the bulk of standard-density racks (~5-8 kW each); in-row CHW for a designated high-density zone (10-15 kW per rack); RDHx on individual racks running GPUs (15-25 kW per rack). The chiller plant feeds all three at different supply-water temperatures.

This mixing is intentional. It lets you size the bulk cooling cost-effectively for standard density, then add density only where it is needed without rebuilding the whole hall. Hyperscalers do this routinely — a single hall might have low-density storage rows alongside in-row GPU pods.

The design discipline is: pick the lowest-cost topology that meets each zone's density requirement, and design the chilled-water plant to support all of them simultaneously. The topology selector tool walks through this decision for your specific load profile.
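That zone-by-zone discipline can be sketched as a simple density lookup (thresholds taken from the density bands above; a real selection also weighs total site load, plant space, water availability and tier target):

```python
def zone_topology(rack_kw: float) -> str:
    """Map per-rack density to a candidate topology, per the bands in this guide."""
    if rack_kw > 30:
        return "direct-to-chip liquid cooling"
    if rack_kw > 15:
        return "RDHx (with perimeter or in-row backup)"
    if rack_kw > 10:
        return "in-row with hot-aisle containment"
    return "perimeter CRAC (DX/CDW/CHW by total site load)"

for kw in (6, 12, 22, 90):
    print(f"{kw} kW/rack -> {zone_topology(kw)}")
```

The point of the sketch is the shape of the decision, not the exact thresholds: each zone gets the cheapest topology that meets its density, and only the shared chilled-water plant is sized for all of them.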

9. Topology and redundancy

Each topology has a different concurrent-maintainability profile:

  • DX: redundancy is achieved with multiple parallel indoor/outdoor units. Concurrent maintainability requires N+1 across both indoor and outdoor.
  • CDW: redundancy with multiple indoor units sharing a tower is straightforward. Concurrent maintainability of the tower itself requires dual towers, or a single tower with sectionalised cells.
  • CHW: redundancy is at the chiller plant level. Tier III requires concurrently maintainable chilled-water loops — typically dual-loop with valve isolation.
  • In-row: per-row redundancy requires N+1 in-row units per containment zone. The water distribution to each unit must also be concurrently maintainable.
  • RDHx: passive at the rack — failure modes are different (coil leak, valve failure). Per-rack redundancy is hard; usually paired with perimeter or in-row for backup capacity.

Tier IV (fully fault-tolerant) requires 2N at the topology level, which is expensive in CHW (two chiller plants) and impractical in DX (twice the indoor and outdoor units). Tier IV CRAC builds are almost always CHW with 2N chiller plants.
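The cost gap between N+1 and 2N is ultimately a unit count; a toy calculation makes it concrete (the load and unit sizes below are illustrative, not a real design):

```python
import math

def units_required(load_kw: float, unit_kw: float, scheme: str = "N+1") -> int:
    """Indoor cooling units needed under common redundancy schemes."""
    n = math.ceil(load_kw / unit_kw)  # units needed just to carry the load
    return {"N": n, "N+1": n + 1, "2N": 2 * n}[scheme]

# A 300 kW hall on 100 kW units: N+1 needs 4 units, 2N needs 6.
print(units_required(300, 100, "N+1"), units_required(300, 100, "2N"))
```

The same multiplier applies to the plant behind the units, which is why 2N chiller plants dominate the capital cost of a Tier IV CHW build.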

10. Specification checklist

Use this checklist when receiving a CRAC topology proposal:

  1. Topology stated explicitly and matched to the load profile
  2. kW capacity per unit, total capacity, redundancy class
  3. For DX: refrigerant type (R-32, R-454B), GWP, phase-down compliance
  4. For CDW: cooling tower water treatment program, Legionella mitigation under AS/NZS 3666
  5. For CHW: chiller efficiency (kW/kWr), free-cooling economiser hours expected, supply/return water temperatures
  6. For in-row: containment design and air-management strategy
  7. For RDHx: per-rack water-distribution plan and leak-detection
  8. AS/NZS 5149 compliance documentation (refrigerant safety)
  9. AS/NZS 1668.2 ventilation compliance
  10. ASHRAE TC 9.9 envelope target (A1, A2, W3, W4)
  11. Maintenance plan and access requirements