Converge Digest
Friday, April 10, 2026

CoreWeave Becomes First Cloud Provider to Offer NVIDIA GB200 NVL72 AI Instances

February 9, 2025
in Data Centers

CoreWeave has announced the general availability of NVIDIA GB200 NVL72-based instances, making it the first cloud provider to offer the latest NVIDIA Blackwell-powered infrastructure. Built on the NVIDIA GB200 Grace Blackwell Superchip, these instances deliver up to 30 times faster AI inference performance, with optimized energy efficiency and cost savings. Featuring rack-scale NVIDIA NVLink connectivity and NVIDIA Quantum-2 InfiniBand networking, CoreWeave’s GB200 NVL72-powered clusters scale up to 110,000 GPUs, enabling enterprises to train, deploy, and scale complex AI models with unprecedented speed and efficiency.

CoreWeave’s cloud services are designed to maximize the potential of NVIDIA Blackwell architecture, integrating managed solutions like CoreWeave Kubernetes Service and Slurm on Kubernetes (SUNK) for optimized workload orchestration. The company’s Observability Platform provides real-time monitoring of GPU performance, ensuring seamless AI operations. These capabilities make CoreWeave’s cloud an ideal platform for developing advanced AI reasoning models, AI agents, and real-time large language model (LLM) inference. The partnership with NVIDIA underscores a commitment to pushing the boundaries of AI infrastructure and accelerating next-generation computing.
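To make the Kubernetes-based orchestration described above concrete, here is a minimal sketch of how a workload might request Blackwell GPUs on a managed Kubernetes service such as CoreWeave Kubernetes Service. The node-selector label and its value are hypothetical placeholders, not CoreWeave's actual selectors; `nvidia.com/gpu` is the standard resource name exposed by the NVIDIA Kubernetes device plugin.

```python
import json


def gb200_pod_spec(name: str, image: str, gpus: int) -> dict:
    """Build a minimal Kubernetes Pod manifest (as a dict) that requests GPUs."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            # Hypothetical label/value; real instance selectors are provider-defined.
            "nodeSelector": {"gpu.product": "gb200-nvl72"},
            "containers": [
                {
                    "name": "trainer",
                    "image": image,
                    # Standard NVIDIA device-plugin resource for GPU scheduling.
                    "resources": {"limits": {"nvidia.com/gpu": gpus}},
                }
            ],
        },
    }


if __name__ == "__main__":
    # Serialize to JSON; this could equally be emitted as YAML for kubectl.
    print(json.dumps(gb200_pod_spec("llm-train", "nvcr.io/nvidia/pytorch:24.12-py3", 8), indent=2))
```

A manifest like this would be applied with `kubectl apply -f`, leaving GPU placement to the scheduler rather than to per-node configuration.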

  • First-to-Market: CoreWeave is the first cloud provider to offer NVIDIA GB200 NVL72 instances.
  • Performance Boost: Up to 30x faster LLM inference and 4x faster training compared to previous generations.
  • Energy Efficiency: 25x lower total cost of ownership and power consumption for real-time inference.
  • Scalability: Supports up to 110,000 GPUs with NVIDIA NVLink and Quantum-2 InfiniBand.
  • Managed Cloud Services: Includes Kubernetes-based workload management and real-time observability tools.
  • Enterprise AI Enablement: Integrates NVIDIA AI software stack for developing AI agents and reasoning models.
  • Partnerships: Collaboration with IBM and NVIDIA to deliver AI supercomputing capabilities.
  • Deployment: Available now in CoreWeave’s US-WEST-01 region for enterprise-scale AI applications.
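For teams using the Slurm on Kubernetes (SUNK) path mentioned above, job submission follows ordinary Slurm conventions. The sketch below is a generic batch-script fragment; the partition name, GRES count, and training command are illustrative assumptions, not CoreWeave-documented values.

```shell
#!/bin/bash
# Hypothetical Slurm batch script for a multi-node GPU training job via SUNK.
#SBATCH --job-name=llm-train
#SBATCH --partition=gb200        # placeholder partition name
#SBATCH --nodes=2                # scale out across NVLink-connected nodes
#SBATCH --gres=gpu:4             # GPUs requested per node
#SBATCH --time=04:00:00

# Launch one task per node; train.py is a stand-in for the user's workload.
srun python train.py
```

Submitted with `sbatch`, the script lets Slurm handle node allocation while Kubernetes manages the underlying cluster lifecycle.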
Tags: CoreWeave, NVIDIA

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley

© 2025 Converge Digest - A private dossier for networking and telecoms.