Amazon Web Services (AWS) and OpenAI have signed a seven-year, $38 billion infrastructure partnership under which OpenAI will run and scale its advanced generative AI workloads on AWS. The deal gives OpenAI immediate access to AWS’s EC2 UltraServers—featuring hundreds of thousands of NVIDIA GB200 and GB300 GPUs—and the ability to scale to tens of millions of CPUs for training and inference. Full deployment is targeted before the end of 2026, with the ability to expand further in 2027 and beyond.
AWS’s UltraServer clusters combine dense GPU and CPU resources on the same low-latency network fabric, optimized for large-scale model training and real-time inference. The design aims to maximize performance efficiency for OpenAI’s agentic AI workloads, including ChatGPT and future frontier models. AWS, which already operates clusters exceeding 500,000 chips, brings deep operational expertise in running secure and reliable hyperscale AI infrastructure.
Both companies framed the partnership as a step toward accelerating global access to advanced AI. The collaboration builds on their prior engagement through Amazon Bedrock, where OpenAI’s open-weight foundation models are already available to AWS customers for applications such as coding, scientific research, and enterprise automation.
• Multi-year agreement valued at $38 billion between OpenAI and AWS
• Deployment includes EC2 UltraServers powered by NVIDIA GB200 and GB300 GPUs
• Infrastructure can scale to tens of millions of CPUs, with full deployment targeted by end of 2026
• Partnership strengthens AWS’s position in large-scale AI infrastructure
• Builds on prior collaboration via Amazon Bedrock and OpenAI model availability
“Scaling frontier AI requires massive, reliable compute,” said Sam Altman, CEO of OpenAI. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
🌐 Analysis: This partnership signals OpenAI’s diversification beyond Microsoft Azure, where most of its workloads currently reside, and marks one of AWS’s largest AI infrastructure commitments to date. It also underscores the deepening competition among hyperscalers—AWS, Microsoft, and Google Cloud—to host next-generation AI training workloads and secure multi-billion-dollar, long-term compute contracts.
🌐 We’re tracking the latest developments in AI infrastructure. Follow our ongoing coverage at: https://convergedigest.com/category/ai-infrastructure/