AMD introduced “Helios,” its new rack-scale reference system for AI infrastructure, at the Open Compute Project (OCP) Global Summit in San Jose. Built on Meta’s newly submitted Open Rack Wide (ORW) specification, “Helios” represents AMD’s first fully open rack-scale platform and a major step in advancing interoperability and scalability across AI data centers. The ORW defines an open, double-wide rack optimized for power, cooling, and serviceability—addressing the growing needs of hyperscale AI facilities.
At the core of “Helios” are AMD’s Instinct MI450 GPUs, each featuring 432 GB of HBM4 memory and 19.6 TB/s of memory bandwidth. A full “Helios” rack with 72 GPUs delivers up to 1.4 exaFLOPS of FP8 and 2.9 exaFLOPS of FP4 compute, with 31 TB of total HBM memory and 1.4 PB/s of aggregate bandwidth. The system supports 260 TB/s of scale-up and 43 TB/s of Ethernet scale-out interconnect bandwidth, enabling high-performance, low-latency communication across clusters. AMD projects that “Helios” will deliver 36× the performance of its prior generation and 50% more memory capacity than NVIDIA’s Vera Rubin system.
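The rack-level totals follow from simple linear scaling of the per-GPU figures. A quick sanity check (the constants are the numbers quoted above; this is an illustrative back-of-envelope calculation, not AMD-published data):

```python
# Back-of-envelope check of the quoted "Helios" rack aggregates,
# assuming straight linear scaling across 72 MI450 GPUs.
GPUS_PER_RACK = 72
HBM_PER_GPU_GB = 432      # GB of HBM4 per MI450 (quoted above)
BW_PER_GPU_TBS = 19.6     # TB/s of memory bandwidth per MI450 (quoted above)

total_hbm_tb = GPUS_PER_RACK * HBM_PER_GPU_GB / 1000   # GB -> TB
total_bw_pbs = GPUS_PER_RACK * BW_PER_GPU_TBS / 1000   # TB/s -> PB/s

print(f"Total HBM: {total_hbm_tb:.1f} TB")             # ~31 TB, matching the quoted figure
print(f"Aggregate bandwidth: {total_bw_pbs:.2f} PB/s") # ~1.4 PB/s, matching the quoted figure
```

Both aggregates round to the published 31 TB and 1.4 PB/s, confirming the rack figures are per-GPU specs multiplied out.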
AMD designed “Helios” to serve as a blueprint for OEM and ODM partners to build interoperable AI infrastructure aligned with OCP standards. Key innovations include backside quick-disconnect liquid cooling, a double-wide rack layout for better serviceability, and Ethernet-based scale-out networking for multipath resiliency. Volume deployment is expected in 2026, marking AMD’s deepening collaboration with Meta, OCP, and industry partners through initiatives like UALink and the Ultra Ethernet Consortium.
“‘Helios’ extends AMD’s open hardware philosophy from chip to cluster, enabling an industry-wide push toward standardized, energy-efficient AI infrastructure,” AMD said.
🌐 Analysis: AMD’s “Helios” marks a clear escalation in the open-hardware race against NVIDIA’s proprietary rack systems. By embracing Meta’s ORW and leading roles in UALink and UEC, AMD is positioning itself as the champion of interoperability for exascale AI infrastructure. This launch also aligns with Meta’s open data center strategy and the industry’s broader shift toward standardized, liquid-cooled AI rack architectures.
🌐 We’re tracking the latest developments in semiconductors. Follow our ongoing coverage at: https://convergedigest.com/category/semiconductors/