NVIDIA and Microsoft Expand AI Superfactory Collaboration With Blackwell, Spectrum-X and New Azure NCv6 VMs
NVIDIA is deepening its partnership with Microsoft as construction accelerates on the Fairwater AI superfactory — a two-site cluster spanning Wisconsin and Atlanta designed to interconnect hundreds of thousands of NVIDIA Blackwell GPUs. Microsoft is now deploying next-generation NVIDIA Spectrum-X Ethernet switches at Fairwater to support large-scale training for OpenAI, the Microsoft AI Superintelligence Team, Microsoft 365 Copilot and Microsoft Foundry workloads.
The companies are also bringing new NVIDIA integrations across Microsoft 365 Copilot and Azure, including the public preview of Azure NCv6 Series VMs powered by RTX PRO 6000 Blackwell Server Edition GPUs. These right-sized accelerators target multimodal agentic AI, digital twin workflows via NVIDIA Omniverse, industrial simulation and visual computing, with deployment options extending to Azure Local for sovereign and edge AI use cases.
At global scale, Microsoft is rolling out more than 100,000 Blackwell Ultra GPUs inside GB300 NVL72 systems for inference, complementing the training cluster at Fairwater. The collaboration also introduces new software optimizations that create a fungible AI fleet across Blackwell and Hopper architectures on Azure, driving continuous throughput gains for models such as MAI-1-preview, MAI-Voice-1 and MAI-Image-1. Microsoft reports that these co-engineered optimizations contributed to a more than 90% reduction in end-user GPT model pricing on Azure over the past two years.
NVIDIA and Microsoft are also extending the partnership deeper into SQL Server 2025, cybersecurity and robotics. New integrations bring Nemotron open models and NIM microservices directly into SQL Server for GPU-accelerated retrieval-augmented generation (RAG) workloads. The NeMo Agent Toolkit now connects with Microsoft Agent 365 to onboard enterprise AI agents across Outlook, Teams, Word and SharePoint. Joint cybersecurity research uses the Dynamo-Triton framework and TensorRT tools to achieve adversarial detection up to 160× faster than CPU-based methods. In physical AI, Azure hosts NVIDIA Omniverse libraries, Isaac robotics tools and standardized OpenUSD workflows used by partners including Synopsys, Sight Machine, SymphonyAI, Hexagon and Wandelbots.
• Microsoft is deploying next-generation NVIDIA Spectrum-X Ethernet switches at the Fairwater AI superfactory.
• The Fairwater sites in Wisconsin and Atlanta integrate hundreds of thousands of Blackwell GPUs for large-scale AI training.
• Microsoft is rolling out more than 100,000 Blackwell Ultra GPUs in GB300 NVL72 systems globally for inference.
• Public preview of Azure NCv6 VMs powered by NVIDIA RTX PRO 6000 Blackwell GPUs is now available.
• NVIDIA–Microsoft software tuning achieves compounding gains across Blackwell and Hopper GPUs on Azure.
• Joint optimization contributed to a >90% drop in Azure GPT model pricing over two years.
• Nemotron and NIM microservices integrate directly with SQL Server 2025 for GPU-accelerated RAG.
• The NeMo Agent Toolkit now connects to Microsoft Agent 365 for AI agent onboarding across the Microsoft 365 ecosystem.
• Joint NVIDIA–Microsoft cybersecurity research delivers adversarial detection up to 160× faster than CPU-based methods.
• Azure hosts NVIDIA Omniverse, Isaac Sim, Isaac Lab and OpenUSD workflows for digital twins and robotics.
“Our collaboration with NVIDIA is built on driving innovation across the entire system and full stack, from silicon to services,” said Nidhi Chappell, corporate vice president of product management at Microsoft. “By coupling Microsoft Azure’s unmatched data center scale with NVIDIA’s accelerated computing, we are maximizing AI data center performance and efficiency, which is of paramount importance for our customers leading the new AI era.”
🌐 Analysis
This expansion shows Microsoft standardizing across NVIDIA’s full AI stack — Spectrum-X networking, Blackwell GPUs, NVL72 racks, Nemotron models and NIM microservices — to optimize both large-scale training and global inference. It also highlights Microsoft’s ongoing multibillion-dollar GPU procurement program, which includes GB200 and Blackwell Ultra systems across Azure regions. Competing clouds such as AWS and Google Cloud are taking similar approaches with custom accelerators (Trainium2, TPU v6p), but are likewise integrating NVIDIA systems at scale to meet enterprise demand for multimodal agentic AI.
🌐 We’re tracking the latest developments in AI infrastructure. Follow our ongoing coverage at: https://convergedigest.com/category/ai-infrastructure/
🌐 We’re launching the “Data Center Networking for AI” series on NextGenInfra.io and inviting companies building real solutions—silicon, optics, fabrics, switches, software, orchestration—to share their views on video and in our expert report. To get involved, send a note to jcarroll@convergedigest.com or info@nextgeninfra.io