Converge Digest

Google Pushes 1 MW AI Rack Power Architecture and Liquid Cooling Blueprint

At the OCP 2025 EMEA Summit, Google unveiled major infrastructure innovations to power the next wave of AI workloads, including a shift to ±400 VDC power delivery capable of supporting up to 1 megawatt per IT rack and the open-sourcing of its next-generation liquid cooling solution, Project Deschutes. These advances aim to support the explosive growth of AI compute demands, which are expected to surpass 500 kW per rack before the end of the decade. Google is collaborating with Meta and Microsoft under the Mt. Diablo project to standardize this new high-voltage power architecture, leveraging the mature EV supply chain for scale and efficiency.
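A quick back-of-the-envelope check illustrates why higher bus voltage matters at these power levels. Assuming the ±400 VDC rails are used differentially as an 800 V bus (an illustrative assumption; the article does not detail the Mt. Diablo distribution topology), the current required at rack level works out as:

```python
def rack_bus_current(power_w: float, bus_voltage_v: float) -> float:
    """Steady-state DC bus current (A) for a given rack power draw: I = P / V."""
    return power_w / bus_voltage_v

# Assumed 800 V differential bus from +/-400 VDC rails (illustrative,
# not a confirmed detail of the Mt. Diablo architecture).
BUS_V = 800.0

print(rack_bus_current(1_000_000, BUS_V))  # 1 MW rack  -> 1250.0 A
print(rack_bus_current(500_000, BUS_V))    # 500 kW rack -> 625.0 A
```

Even at 800 V, a 1 MW rack draws on the order of 1.25 kA, which is why low-voltage (48 V-class) distribution becomes impractical at this density and a higher-voltage architecture is needed.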

In parallel, Google announced plans to contribute its fifth-generation cooling distribution unit (CDU) to the Open Compute Project. Drawing on nearly a decade of deployment experience with TPU liquid cooling, the Deschutes CDU design enables extremely high availability—99.999% uptime across more than 2,000 TPU pods. The disaggregated cooling approach isolates rack and facility loops and uses cold plates, manifolds, and flexible hoses to manage thermal loads from chips now exceeding 1,000 W. Together, these technologies represent the next frontier in AI infrastructure design, driving greater power density, thermal performance, and serviceability across hyperscale deployments.
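For context on the five-nines figure, a standard availability calculation (not Google-specific data) shows the downtime budget it implies:

```python
def downtime_minutes_per_year(availability: float) -> float:
    """Allowed downtime (minutes/year) at a given availability fraction."""
    minutes_per_year = 365 * 24 * 60  # non-leap year
    return (1.0 - availability) * minutes_per_year

print(round(downtime_minutes_per_year(0.99999), 2))  # ~5.26 minutes/year
```

In other words, 99.999% uptime leaves a budget of roughly five minutes of unplanned cooling outage per pod per year.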

“With the accelerating pace of AI hardware development, we must collectively quicken our pace to prepare data centers for what’s next… The most impactful innovations are still ahead.” — Madhusudan Iyengar & Amber Huffman, Principal Engineers, Google
