The Open Compute Project Foundation (OCP) announced an expansion of its Open Systems for AI Strategic Initiative with key contributions from NVIDIA and Meta. The initiative, originally launched in January 2024, aims to establish open standards for AI clusters and data center infrastructure, with a focus on efficiency and sustainability. NVIDIA is contributing its MGX-based GB200-NVL72 platform, while Meta is contributing its Catalina AI Rack architecture; together, the contributions work toward a multi-vendor AI cluster supply chain.
NVIDIA’s hardware contributions focus on enhancing compute density and liquid cooling for AI clusters, including reinforced rack architectures and 1RU liquid-cooled compute trays. These components are designed to support a variety of vendors while maintaining interoperability across power, cooling, and mechanical interfaces. Meta’s Catalina AI Rack architecture aims to offer a scalable solution for high-density AI systems. Both companies, along with other OCP partners, are driving efforts to address challenges such as power density, advanced cooling, and low-latency interconnects for AI deployments.
The OCP Community, with participation from companies like Intel, Microsoft, and Google, continues to push open hardware solutions for AI, helping to streamline data center deployments and reduce costs. “NVIDIA’s contributions ensure high compute density racks and trays are interoperable across vendors, accelerating innovation in the open hardware ecosystem,” said Robert Ober, Chief Platform Architect at NVIDIA.
Key Points:
• The OCP expands its Open Systems for AI Strategic Initiative with contributions from NVIDIA and Meta.
• NVIDIA introduces MGX-based GB200-NVL72 rack designs and liquid-cooled compute trays.
• Meta contributes the Catalina AI Rack architecture for high-density AI clusters.
• OCP Community addresses AI deployment challenges such as power density, advanced cooling, and low-latency interconnects.
• Focus on building a multi-vendor AI cluster supply chain with open standards.
“We strongly welcome the efforts of the entire OCP Community and the Meta and NVIDIA contributions at a time when AI is becoming the dominant use case driving the next wave of data center build-outs. It expands the OCP Community’s collaboration to deliver large-scale high-performance computing clusters tuned for AI. The OCP, with its Open Systems for AI Strategic Initiative, will impact the entire market with a multi-vendor open AI cluster supply chain that has been vetted by hyperscale deployments and optimized by the OCP Community. This significantly reduces the risk and costs for other market segments to follow, removes the silos, and is very much aligned with OCP’s mission to build collaborative communities that will streamline deployment of new hardware and reduce time-to-market for adoption at scale,” said George Tchaparian, CEO at the Open Compute Project Foundation.