CoreWeave has announced that it is the first cloud provider to bring NVIDIA H200 Tensor Core GPUs to market. The H200 is designed to accelerate generative AI workloads, delivering up to 1.9x higher inference performance than the previous-generation H100. CoreWeave’s H200 instances pair the GPUs with 5th Gen Intel Xeon CPUs and NVIDIA Quantum-2 InfiniBand networking, enabling customers to train large AI models faster and at lower cost.
CoreWeave’s Mission Control platform manages the underlying AI infrastructure with automation and extensive monitoring tools, delivering high system reliability and resiliency. This helps customers reduce downtime and shorten time to solution for their AI projects. The company continues to scale its operations and expects 28 data centers to be operational by the end of 2024.
- First to market with NVIDIA H200 Tensor Core GPUs.
- H200 GPUs offer up to 1.9x higher inference performance than the H100.
- Instances integrate 5th Gen Intel Xeon CPUs and NVIDIA Quantum-2 InfiniBand networking.
- Mission Control platform enhances system reliability and efficiency.
- CoreWeave is scaling operations to 28 data centers by the end of 2024.
“CoreWeave is dedicated to pushing the boundaries of AI development and, through our long-standing collaboration with NVIDIA, is now first to market with high-performance, scalable, and resilient infrastructure with NVIDIA H200 GPUs,” said Michael Intrator, CEO and co-founder of CoreWeave.