Converge Digest

CoreWeave Becomes First Cloud Provider to Offer NVIDIA GB200 NVL72 AI Instances

CoreWeave has announced the general availability of NVIDIA GB200 NVL72-based instances, making it the first cloud provider to offer the latest NVIDIA Blackwell-powered infrastructure. Built on the NVIDIA GB200 Grace Blackwell Superchip, these instances deliver up to 30 times faster real-time large language model inference than the same number of prior-generation NVIDIA H100 GPUs, according to NVIDIA, along with improved energy efficiency and lower cost per inference. Featuring rack-scale NVIDIA NVLink connectivity and NVIDIA Quantum-2 InfiniBand networking, CoreWeave’s GB200 NVL72-powered clusters scale up to 110,000 GPUs, enabling enterprises to train, deploy, and scale complex AI models with unprecedented speed and efficiency.

CoreWeave’s cloud services are designed to maximize the potential of NVIDIA Blackwell architecture, integrating managed solutions like CoreWeave Kubernetes Service and Slurm on Kubernetes (SUNK) for optimized workload orchestration. The company’s Observability Platform provides real-time monitoring of GPU performance, ensuring seamless AI operations. These capabilities make CoreWeave’s cloud an ideal platform for developing advanced AI reasoning models, AI agents, and real-time large language model (LLM) inference. The partnership with NVIDIA underscores a commitment to pushing the boundaries of AI infrastructure and accelerating next-generation computing.
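For readers unfamiliar with the Slurm on Kubernetes (SUNK) orchestration mentioned above: Slurm jobs are typically submitted as batch scripts, and a multi-node GPU training job of the kind described might look like the following sketch. The node count, GPU count, partition setup, and `train.py` entry point here are illustrative placeholders, not CoreWeave-specific values.

```shell
#!/bin/bash
#SBATCH --job-name=llm-train       # descriptive job name
#SBATCH --nodes=4                  # number of GPU nodes to allocate (illustrative)
#SBATCH --gpus-per-node=4          # GPUs requested on each node (illustrative)
#SBATCH --ntasks-per-node=4        # launch one task per GPU
#SBATCH --time=12:00:00            # wall-clock limit for the job

# srun starts one training process per task across all allocated nodes;
# train.py is a placeholder for the user's own training entry point.
srun python train.py
```

Submitted with `sbatch`, Slurm allocates the requested nodes and launches the tasks; under a Slurm-on-Kubernetes setup such as SUNK, the same familiar interface is preserved while Kubernetes manages the underlying cluster resources.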
