At the Open Compute Project (OCP) Global Summit 2025, Ihab Tarazi, Chief Technology Officer and SVP of Dell Technologies, outlined how Dell is using open standards to power the largest AI and high-performance computing (HPC) deployments worldwide. Over a decade of OCP collaboration has reshaped Dell’s compute roadmap, culminating in fully modular, open-spec architectures optimized for GPU-dense systems and liquid-cooled racks capable of reaching 480 kW—and scaling to 1 MW.
Tarazi described Dell’s evolution from early OCP contributions in NICs, storage, and networking to leading designs for the DCMH (Data Center Modular Hardware) and AxScale initiatives. The company has deployed hundreds of thousands of GPUs in OCP-based clusters; Tarazi cited the jump from 4,000-GPU clusters just a year ago to 100,000-GPU data halls today. Dell’s advanced Open Rack v3 systems support massive AI workloads, with innovations in cold plates, thermal interface materials, coolant distribution units (CDUs), and quick-disconnect manifolds enabling rapid six-week deployment cycles for 50,000-GPU builds.
Dell also unveiled progress in its OCP-based HPC systems, including a 480 kW modular compute rack co-developed with Berkeley Lab under the U.S. Department of Energy. Each rack houses 27,000 CPU cores and 144 NVIDIA GB200 GPUs in a fully open, upgradeable design. Future systems will integrate NVIDIA’s Vera Rubin 2000 and AMD’s MI450 accelerators. The same architecture, Tarazi said, is being adopted for the University of Texas at Austin supercomputing program and sovereign cloud deployments.
• Dell has deployed hundreds of thousands of GPUs globally using OCP Open Rack v3 architecture, supporting some of the largest AI clusters ever built
• OCP-based racks deliver 480 kW of capacity per rack, with design headroom to scale to 1 MW for extreme AI and HPC loads
• Modular OCP DCMH design allows compute, storage, and networking to be upgraded independently, reducing refresh time from years to weeks
• Door heat exchanger enclosures lower power consumption by 60% and operate with non-chilled water, enabling high-density deployment in non-purpose-built facilities
• Dell’s open liquid cooling system includes advanced manifolds, high-flow cold plates, optimized thermal interface materials, and both in-house and partner-built CDUs
• Quick-disconnect innovations improve reliability and serviceability in high-flow GPU clusters, supporting continuous operation during component swaps
• Enhanced Open Rack v3 bus bars, CDU connectivity, and cable trays improve performance and simplify maintenance for large-scale AI installations
• Dell achieved 50,000-GPU deployments in just six weeks, and 100,000 in a few weeks more—demonstrating operational speed at hyperscale levels
• Collaboration with Berkeley Lab and the Department of Energy produced a 480 kW OCP HPC rack housing 27,000 CPU cores and 144 NVIDIA GB200 GPUs
• University of Texas at Austin and other sovereign HPC initiatives are adopting Dell’s OCP-based architecture for national research infrastructure
• Dell’s modular systems support both NVIDIA (B300, GB200, GB300, Vera Rubin 2000) and AMD (MI355X, MI450) accelerators, reflecting a multivendor open approach
• The systems are designed for future AI workloads with integrated optics, liquid-cooled networking, and a roadmap toward 1 MW rack power densities
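The rack figures quoted above invite a quick sanity check. The snippet below is a rough back-of-envelope sketch in Python, not anything Dell presented: it simply divides the stated 480 kW rack envelope by the 144-GPU count, and does the same for the 1 MW headroom. The helper name `per_accelerator_kw` is hypothetical, and a real power budget would also cover CPUs, networking, pumps, and power-conversion losses.

```python
# Back-of-envelope power budgeting from the figures quoted in the article.
# Assumes the 480 kW rack envelope and the 144-GPU count are directly
# comparable; real budgets also account for CPUs, fans, CDU pumps, and
# power-conversion losses.

def per_accelerator_kw(rack_kw: float, gpu_count: int) -> float:
    """Naive per-GPU share of a rack's power envelope, in kW."""
    return rack_kw / gpu_count

current = per_accelerator_kw(480, 144)    # ~3.33 kW per GPU slot today
headroom = per_accelerator_kw(1000, 144)  # ~6.94 kW if 1 MW is used at the same density

print(f"current: {current:.2f} kW/GPU, 1 MW headroom: {headroom:.2f} kW/GPU")
```

Even this crude split shows why the 1 MW headroom matters: it roughly doubles the per-slot power budget without changing the rack footprint.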
“OCP is definitely the place to be,” said Ihab Tarazi. “We couldn’t have achieved this speed and scale without the open collaboration and innovation from the OCP community.”
🌐 Analysis: Dell’s 2025 OCP strategy underscores how open standards are driving hyperscale AI and HPC innovation. By marrying modular OCP compute, liquid cooling, and 480 kW-plus rack densities, Dell joins Meta, AMD, and others pushing open hardware beyond hyperscaler boundaries. The company’s work with DOE and UT Austin suggests that open infrastructure is now shaping the future of both public-sector supercomputing and private AI clusters.

Join our expert series on fabrics, optics, and systems powering AI-scale data centers
We’re producing interviews, explainers, and an expert report on the technologies and architectures enabling next-gen AI clusters—across silicon, optics, switching, software, orchestration, and operations.
1. Who: Companies building real solutions: switches, NICs/DPUs, optics (pluggables, LPO, CPO), fabrics (RoCE, UCF, UALink, ESUN), telemetry, and orchestration platforms.
2. What: 10–12 minute video spotlights, technical deep dives, and a curated industry report distributed to decision-makers and architects shaping AI infrastructure.
3. Why: Reach hyperscaler and enterprise teams deploying AI fabrics, 800G/1.6T optics, liquid cooling, rack-scale compute, and network architectures for AI at scale.