Converge Digest

NVIDIA Contributes Blackwell Platform Design to OCP

October 15, 2024

NVIDIA has announced a significant contribution to the Open Compute Project (OCP) by sharing foundational design elements of its Blackwell accelerated computing platform, aimed at driving open, scalable, and efficient data center technologies for AI infrastructure. Unveiled at the OCP Global Summit, the shared designs include the electro-mechanical architecture of the NVIDIA GB200 NVL72 system, featuring components such as rack architecture, compute and switch tray mechanicals, liquid-cooling specifications, and NVIDIA NVLink™ cable cartridge volumetrics. This contribution is intended to increase compute density and networking bandwidth, allowing data centers to better manage the growing demands of AI workloads.

The NVIDIA GB200 NVL72 system is built on NVIDIA’s MGX modular architecture, which enables rapid and cost-effective assembly of custom data center infrastructure. This modular approach supports a variety of configurations to meet different AI use cases. The GB200 NVL72 design connects 36 NVIDIA Grace™ CPUs and 72 NVIDIA Blackwell GPUs in a rack-scale system that integrates seamlessly with high-performance liquid-cooling solutions. This configuration forms a 72-GPU NVIDIA NVLink domain that functions as a unified GPU, delivering up to 30 times faster real-time inference on trillion-parameter large language models compared to the NVIDIA H100 Tensor Core GPU. This boost in performance is critical for AI factories that require immense processing power for generative AI, natural language processing, and machine learning tasks.


Complementing this is NVIDIA’s expanded alignment of its Spectrum-X Ethernet networking platform with OCP standards. Spectrum-X now fully supports the Switch Abstraction Interface (SAI) and Software for Open Networking in the Cloud (SONiC), allowing organizations to leverage open networking software while benefiting from NVIDIA’s advanced hardware capabilities. The platform’s adaptive routing and telemetry-based congestion control mechanisms optimize Ethernet performance, enhancing throughput and reducing latency for scale-out AI infrastructures. The next-generation NVIDIA ConnectX-8 SuperNICs, which are part of the Spectrum-X platform, are optimized for massive AI workloads with programmable packet processing engines. These SuperNICs support accelerated networking at speeds of up to 800Gbps, offering significant improvements in network flexibility and performance for AI data centers. ConnectX-8 SuperNICs will be available in the OCP 3.0 form factor starting next year, enabling companies to build high-performance networks while maintaining software consistency.


NVIDIA said its commitment to open standards is further evidenced by its collaborations with major technology players like Meta. Meta is set to contribute its Catalina AI rack architecture, based on NVIDIA’s GB200 NVL72 system, to the OCP. This integration allows computer makers to build high compute density systems tailored to the increasing performance and energy efficiency demands of modern data centers. Meta’s partnership with NVIDIA highlights the flexibility of the Blackwell platform, which enables tech companies to innovate on top of open designs and deploy customized AI infrastructure solutions.


Beyond Meta, NVIDIA’s work on OCP standards builds on over a decade of collaboration. Previous contributions, such as the NVIDIA HGX™ H100 baseboard design, have paved the way for a broader selection of offerings from hardware manufacturers. This is especially significant as the industry transitions from general-purpose computing to AI-accelerated infrastructure, where hardware, software, and networking need to work in unison to handle the complexities of AI-driven workloads.


• Contributed the GB200 NVL72 system design to OCP, covering rack architecture, liquid cooling, and networking components.
• The modular MGX™ architecture supports flexible data center configurations, connecting 36 Grace™ CPUs and 72 Blackwell GPUs.
• The system forms a 72-GPU NVIDIA NVLink domain, offering 30x faster real-time trillion-parameter inference compared to the H100 Tensor Core GPU.
• The Spectrum-X platform aligns with OCP standards, supporting SAI and SONiC for open networking.
• Next-generation ConnectX-8 SuperNICs deliver up to 800Gbps of accelerated networking for large AI workloads.
• Meta will adopt the GB200 NVL72 design in its Catalina AI rack and contribute it to OCP.
• NVIDIA's prior OCP contributions include the HGX H100 baseboard design, expanding AI adoption across the industry.

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley

© 2025 Converge Digest - A private dossier for networking and telecoms.
