Microsoft has activated a new class of AI datacenter in Atlanta — the second site in its “Fairwater” family — that operates as part of a connected network of AI supercomputing facilities stretching across the United States. Together with the recently announced Fairwater facility in Mount Pleasant, Wisconsin, the Atlanta site forms the foundation of what Microsoft calls its first AI superfactory — a unified, geographically distributed compute fabric engineered for frontier-scale model training.
Each Fairwater site is optimized for high-density GPU clusters and high-speed data synchronization between sites. The new Atlanta datacenter integrates NVIDIA GB200 NVL72 rack-scale systems built around Blackwell GPUs, AI-optimized network protocols, and a closed-loop liquid cooling system that nearly eliminates water use. Microsoft has built a dedicated AI Wide Area Network (AI WAN), currently spanning more than 120,000 miles (≈193,000 km) of owned and repurposed fiber, to interconnect these supercomputing zones and provide congestion-free data transport for distributed AI workloads.

Fairwater’s architecture is a departure from traditional cloud datacenters, which host millions of small, unrelated workloads. Instead, these facilities run one job at hyperscale — training AI models that require synchronized updates across hundreds of thousands of GPUs. Every rack, cluster, and region acts as part of a coherent system that shares gradients, checkpoints, and training data in real time.
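To make that synchronous pattern concrete, here is a minimal sketch in Python/NumPy of data-parallel training: several simulated "workers" each compute a gradient on a local data shard, an all-reduce averages those gradients, and every replica applies the identical update. The toy linear model, worker count, and all names are illustrative assumptions, not Microsoft's actual training stack.

```python
import numpy as np

# Toy sketch of synchronous data-parallel training. Each "worker" holds an
# identical copy of the weights, computes a gradient on its own data shard,
# and an all-reduce averages the gradients before every worker applies the
# same update. Hypothetical example; not Microsoft code.

rng = np.random.default_rng(0)
NUM_WORKERS = 4   # stands in for hundreds of thousands of GPUs
DIM = 8
LR = 0.1

true_w = rng.normal(size=DIM)
weights = np.zeros(DIM)  # weight copy replicated on every worker

# Each worker gets its own shard of the training data.
shards = []
for _ in range(NUM_WORKERS):
    x = rng.normal(size=(32, DIM))
    shards.append((x, x @ true_w))

for step in range(100):
    # 1. Local compute: each worker derives a gradient from its shard.
    local_grads = []
    for x, y in shards:
        err = x @ weights - y
        local_grads.append(x.T @ err / len(y))  # mean-squared-error gradient

    # 2. All-reduce: average the gradients so every replica sees the same
    #    result. In a real cluster this collective runs over the GPU
    #    interconnect within a site and over the AI WAN between sites.
    global_grad = np.mean(local_grads, axis=0)

    # 3. Synchronized update: all weight copies stay identical step to step.
    weights -= LR * global_grad

print("distance to true weights:", np.linalg.norm(weights - true_w))
```

The all-reduce in step 2 is the barrier the article alludes to: no replica can begin the next step until every gradient has arrived, which is why latency anywhere in the fabric slows the whole job.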
Fairwater AI Datacenter Architecture
| Component | Technical Details | Purpose / Advantage |
|---|---|---|
| Compute Fabric | NVIDIA GB200 NVL72 rack systems with 72 interconnected Blackwell GPUs per rack; supports scaling to hundreds of thousands of GPUs | High-density, low-latency AI training clusters |
| CPU Integration | Millions of x86 and Arm cores for orchestration, scheduling, and non-AI compute tasks | Operational flexibility and workload balancing |
| Networking (Intra-site) | Intelligent GPU interconnect with advanced switching and optical networking optimized for ultra-low latency | Maintains synchronization across GPU clusters |
| Networking (Inter-site) | Dedicated AI WAN over 120,000 miles of private fiber; uses optimized protocols to minimize congestion | Enables multi-region distributed training with minimal added latency |
| Storage | Exabyte-scale distributed storage; multi-tier NVMe architecture for checkpointing and data pipelines | Sustains massive training datasets and high I/O throughput |
| Datacenter Layout | Two-story building design; dense GPU stacking with vertical cabling and dual-loop coolant routing | Reduces latency and improves cooling efficiency |
| Cooling System | Closed-loop liquid cooling; external chillers circulate coolant with minimal water replacement | Maintains thermal stability with near-zero water use |
| Power Delivery | Modular high-density power blocks; likely >100 MW per site; redundant substations and grid interconnects | Ensures reliability under extreme load |
| Software Layer | Custom orchestration and scheduling for synchronous training; real-time model parameter exchange | Keeps GPUs fully utilized and synchronized |
| Sustainability | Nearly zero-water cooling and optimized electrical efficiency per GPU | Reduces environmental footprint under high compute density |
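The storage row is easier to appreciate with a back-of-envelope calculation. The sketch below estimates how large a single checkpoint becomes for a trillion-parameter model trained with bf16 weights and fp32 Adam optimizer state, and what aggregate write bandwidth the NVMe tier must sustain to drain it within a given stall budget. Every input is an illustrative assumption; Microsoft has not published Fairwater's actual figures.

```python
# Back-of-envelope checkpoint sizing. All inputs are assumptions chosen
# for illustration, not Microsoft-published numbers.

PARAMS = 1e12                  # assume a 1-trillion-parameter model
BYTES_WEIGHTS = 2              # bf16 weights
BYTES_OPTIMIZER = 4 + 4 + 4    # fp32 master weights + Adam m and v moments

checkpoint_bytes = PARAMS * (BYTES_WEIGHTS + BYTES_OPTIMIZER)
checkpoint_tb = checkpoint_bytes / 1e12
print(f"checkpoint size: ~{checkpoint_tb:.0f} TB")        # ~14 TB

# If training may stall at most 60 s while a checkpoint drains to storage,
# the NVMe tier must sustain this aggregate write bandwidth:
STALL_BUDGET_S = 60
required_tb_per_s = checkpoint_tb / STALL_BUDGET_S
print(f"required write bandwidth: ~{required_tb_per_s * 8:.1f} Tb/s")  # ~1.9 Tb/s
```

Scale the model up toward the "hundreds of trillions of parameters" the article cites and the checkpoint grows proportionally, which is why the table pairs exabyte-scale capacity with a multi-tier NVMe pipeline.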
“These sites are designed to function as one virtual supercomputer,” said Alistair Speirs, Microsoft’s general manager for Azure infrastructure. “A traditional datacenter runs millions of small workloads. An AI superfactory runs one enormous workload across millions of processors — all synchronized through purpose-built networking.”
The Fairwater network allows geographically separated datacenters to train AI models with hundreds of trillions of parameters, with each GPU performing local computation before synchronizing gradients and weights with all others. Latency reduction, both within and across sites, is central to maintaining efficiency: even microsecond-level delays can stall the entire training job.
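That latency constraint can be quantified from first principles. Light in optical fiber travels at roughly two-thirds of its vacuum speed, so physics alone sets a floor on cross-site delay that no protocol optimization can remove. The sketch below computes that floor for an assumed 1,000 km of fiber between sites; the distance is illustrative, not the actual route length between Atlanta and Mount Pleasant.

```python
# Propagation-delay floor between two sites linked by optical fiber.
# The 1,000 km path length is an illustrative assumption, not the
# measured Fairwater route, which Microsoft has not published.

C_VACUUM_KM_S = 299_792                     # speed of light in vacuum, km/s
FIBER_INDEX = 1.47                          # typical refractive index of silica fiber
FIBER_KM_S = C_VACUUM_KM_S / FIBER_INDEX    # ~204,000 km/s in glass

path_km = 1_000                             # assumed one-way fiber distance
one_way_ms = path_km / FIBER_KM_S * 1_000
round_trip_ms = 2 * one_way_ms

print(f"one-way delay: {one_way_ms:.2f} ms")      # ~4.9 ms
print(f"round trip:    {round_trip_ms:.2f} ms")   # ~9.8 ms

# Intra-site GPU hops are measured in microseconds, so a cross-site
# exchange is thousands of times slower. This is why the orchestration
# layer must overlap or batch inter-site traffic rather than putting it
# on the critical path of every training step.
```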
🌐 Analysis
The Wisconsin Fairwater project, first detailed in early 2025, set the design blueprint for Microsoft’s AI superfactory architecture. Located in Mount Pleasant, it represents one of Microsoft’s largest U.S. infrastructure investments, reportedly exceeding $3 billion for initial phases. The site’s design incorporates renewable-heavy power sourcing from the regional grid, a 120 MW–plus electrical envelope, and fiber trunk connectivity to Chicago and Minneapolis — now extended through Microsoft’s AI WAN to the Atlanta region.
Together, Wisconsin and Atlanta define the first operational segment of Microsoft’s distributed AI backbone — functionally merging compute, power, and data transport into a single AI-scale fabric. By geographically distributing compute resources, Microsoft mitigates local energy constraints and improves resilience while maintaining single-job coherence. This approach also reflects a shift from monolithic AI “megacampus” design toward federated AI compute regions, each optimized for power availability and latency thresholds.
As more Fairwater sites come online, likely in the Midwest, Southwest, and Pacific regions, Microsoft’s distributed AI WAN will form one of the world’s most powerful private networks. The Wisconsin facility anchors this network in the heart of the U.S. grid, while Atlanta provides regional balance, renewable integration, and proximity to the Southeast’s fiber routes and new power corridors.
🌐 We’re tracking the latest developments in AI infrastructure and data center design. Follow our ongoing coverage at: https://convergedigest.com/category/ai-data-center/
🌐 We’re launching the “Data Center Networking for AI” series on NextGenInfra.io and inviting companies building real solutions—silicon, optics, fabrics, switches, software, orchestration—to share their views on video and in our expert report. To get involved, send a note to jcarroll@convergedigest.com or info@nextgeninfra.io.