Microsoft has unveiled Fairwater, its most advanced AI datacenter to date, located in Mount Pleasant, Wisconsin. Spanning 315 acres and 1.2 million square feet (111,484 m²), the facility comprises three massive buildings and operates as a single AI supercomputer interconnected by a flat networking architecture. The site is powered by hundreds of thousands of NVIDIA GB200 GPUs configured in tightly coupled racks, which Microsoft says will deliver 10 times the performance of today's fastest supercomputer. Construction required 46.6 miles (75 km) of foundation piles, 26.5 million pounds (12 million kg) of structural steel, 120 miles (193 km) of underground cable, and 72.6 miles (117 km) of mechanical piping.
The Fairwater datacenter is optimized for both training and inference of trillion-parameter AI models. Each rack holds 72 NVIDIA Blackwell GPUs linked by NVLink and NVSwitch, pooling memory and bandwidth at terabytes-per-second scale. Racks are arranged in a two-story configuration to shorten cable runs and minimize latency, and the overall system functions as a unified accelerator cluster capable of processing 865,000 tokens per second. Complementing the compute infrastructure, storage systems stretch the length of five football fields, with re-architected Azure Blob Storage sustaining over 2 million transactions per second per account.
Environmental considerations are central to the design. Fairwater uses a closed-loop liquid cooling system—the second largest water-cooled chiller plant globally—recycling water continuously without evaporation losses. Over 90% of workloads run on liquid-cooled systems, while the rest use air cooling with water only on peak-temperature days. Microsoft also announced parallel AI datacenter projects in Narvik, Norway, and Loughton, UK, built with partners nScale and Aker JV. These sites will feature NVIDIA’s next-generation GB300 GPUs and integrate into Microsoft’s AI WAN, linking more than 400 datacenters across 70 global regions.
• Fairwater covers 315 acres and 1.2 million square feet (111,484 m²) across three buildings
• Houses hundreds of thousands of NVIDIA GB200 GPUs with NVLink interconnects
• Delivers 10X the performance of today’s fastest supercomputer
• Storage spans five football fields with exabyte-scale Azure Blob infrastructure
• Closed-loop liquid cooling system eliminates operational water use
• New AI datacenters also planned in Norway and the UK with partners nScale and Aker JV
“We’re unleashing a new era of cloud-powered intelligence: secure, adaptive and ready for what’s next,” said Scott Guthrie, Executive Vice President of Cloud + AI at Microsoft, in a blog post today.
🌐 Analysis: Microsoft’s Fairwater datacenter demonstrates the shift toward purpose-built AI factories that tightly couple compute, storage, and networking at supercomputing scale. By integrating NVIDIA’s Blackwell GPU clusters with custom cooling and global AI WAN interconnects, Microsoft positions Azure as a leading platform for frontier AI training. Competitors including Amazon, Google, and Oracle are also scaling AI-optimized facilities, but Microsoft’s early adoption of GB200 and plans for GB300 integration signal an aggressive strategy to anchor OpenAI and enterprise AI workloads in its infrastructure.
🌐 We’re tracking the latest developments in AI infrastructure. Follow our ongoing coverage at: https://convergedigest.com/category/ai-infrastructure/