CoreWeave expanded its agreement with OpenAI to supply compute for training next-generation models in a deal valued up to $6.5 billion, the company said on September 25. The latest award follows contracts announced in March ($11.9B) and May ($4B), bringing OpenAI’s total 2025 commitments to CoreWeave to about $22.4B.
The announcement lands the same week OpenAI and NVIDIA unveiled a strategic partnership and letter of intent targeting at least 10 GW of NVIDIA systems for OpenAI’s future infrastructure—part of what NVIDIA called “the biggest AI infrastructure deployment in history.”
CoreWeave’s expanding role with OpenAI tracks the lab’s broader “Stargate” build-out, which this week added Oracle and SoftBank sites across multiple U.S. regions toward ~7 GW of planned capacity. Shares of CoreWeave rose following the news, reflecting investor expectations for sustained AI compute demand.
- New contract value: up to $6.5B; cumulative OpenAI contracts in 2025: ~$22.4B (March $11.9B; May $4B; September $6.5B).
- NVIDIA–OpenAI: LOI to deploy at least 10 GW of NVIDIA systems for OpenAI’s next-gen models; announcement dated Sept. 22, 2025.
- CoreWeave–NVIDIA supply: $6.3B initial order; NVIDIA agreed to purchase unsold CoreWeave capacity through April 2032.
- Infrastructure footprint: 28 data centers online by end-2024; 10 more planned in 2025; UK now a focus with a new £1.5B ($1.9B) commitment.
- Platforms: first cloud to GA NVIDIA GB200 NVL72 (Feb. 4) and first to deploy GB300 NVL72 (July 3); GB300 NVL72 instances launched Aug. 19.
- Networking & ops: NVIDIA Quantum-2 InfiniBand for GPU fabric; Spectrum Ethernet/Cumulus for storage/management; liquid cooling for high density.
- Expansion projects: plans include large European builds and a Pennsylvania site targeting 100–300 MW (134–402 MVA).
- M&A/platform moves: acquisitions of Weights & Biases (May 5) and OpenPipe (Sept. 3); launch of CoreWeave Ventures this month.
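The headline figures above can be sanity-checked quickly. A minimal sketch, using only the numbers cited in this article; note that the implied power factor for the Pennsylvania site is my inference from the MW/MVA pairing, not something stated in the announcements:

```python
# Values taken from the article's bullet points.
contracts_busd = {"March": 11.9, "May": 4.0, "September": 6.5}
total = sum(contracts_busd.values())
print(f"Cumulative 2025 OpenAI commitments: ~${total:.1f}B")  # ~$22.4B

# Pennsylvania site: real power (MW) vs. apparent power (MVA).
# Power factor = MW / MVA; both range endpoints imply the same assumed factor.
for mw, mva in [(100, 134), (300, 402)]:
    print(f"{mw} MW / {mva} MVA -> implied power factor ≈ {mw / mva:.2f}")
```

Both endpoints of the 100–300 MW (134–402 MVA) range work out to roughly the same ratio (~0.75), consistent with a single power-factor assumption behind the published figures.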
“We are proud to expand our relationship with OpenAI… This milestone affirms the trust that world-leading innovators have in CoreWeave’s ability to power the most demanding inference and training workloads at an unmatched pace,” said Michael Intrator, CoreWeave co-founder and CEO.
🌐 Analysis: CoreWeave has positioned its cloud squarely around NVIDIA’s most advanced platforms with early GA of GB200 NVL72 and first deployments of GB300 NVL72, underpinned by Quantum-2 InfiniBand and liquid-cooled, high-density designs—an architecture optimized for multi-trillion-parameter training and latency-sensitive inference at scale. The company reported 28 sites online by end-2024 with 2025 additions underway, UK expansion of £1.5B, and large U.S. builds (e.g., Pennsylvania up to 300 MW), signaling a rapidly scaling, distributed footprint aligned to power and fiber availability.
CoreWeave’s deep alignment with NVIDIA is two-way: beyond preferred access to Blackwell systems, NVIDIA committed a $6.3B initial order with a capacity take-or-pay backstop, while OpenAI–NVIDIA’s 10-GW LOI underscores sustained demand that could flow to CoreWeave as an execution partner. Against competitors building similar “AI factories,” CoreWeave’s strategy adds vertical moves (Weights & Biases, OpenPipe) and a venture arm, potentially tightening integration from training stacks to ops, as OpenAI ramps multi-partner deployments under Stargate.
🌐 We’re tracking the latest developments in AI infrastructure. Follow our ongoing coverage at: https://convergedigest.com/category/ai-infrastructure/