Arm expanded its partnership with NVIDIA to bring full NVLink Fusion support to the Neoverse compute platform, enabling ecosystem partners to integrate Arm-based CPUs with a wide range of AI accelerators using a coherent, high-bandwidth interface. The collaboration extends the CPU–GPU co-design approach first used in Grace Hopper and Grace Blackwell to all Neoverse licensees, as AI data-center demand pushes providers to build systems optimized around energy efficiency rather than peak performance alone. Arm said Neoverse now spans more than one billion deployed cores and is on track to reach 50% market share among major hyperscalers in 2025, with AI-focused builds — including next-generation super-clusters such as OpenAI’s Stargate project — anchoring on Arm architectures.
By aligning NVLink Fusion with Arm’s updated AMBA CHI C2C protocol, the companies aim to eliminate memory and bandwidth bottlenecks that constrain today’s large-scale AI training and inference systems. Ecosystem partners will be able to attach their preferred accelerators while maintaining cache coherency and rack-scale bandwidth, reducing integration time and enabling differentiated designs for AI, HPC, and cloud platforms. The move also gives third-party silicon developers access to the coherent interconnect used by NVIDIA’s own GB200-class products.
Partners such as AWS, Google, Microsoft, Oracle, and Meta already rely on Neoverse across their cloud platforms, and the companies said momentum behind Grace Blackwell-class architectures is accelerating demand for broader NVLink Fusion adoption. Arm positioned the expansion as part of a long-term shift toward “intelligence per watt” as the defining metric of AI data-center efficiency.
• Neoverse now exceeds one billion deployed cores and is projected to reach 50% hyperscaler market share in 2025
• NVLink Fusion integrates with Arm’s updated AMBA CHI C2C for coherent CPU-accelerator connectivity
• Grace Hopper and Grace Blackwell architectures serve as reference designs for ecosystem adoption
• Partners can attach custom or third-party accelerators with full coherency and high bandwidth
• Focus areas include removing memory bottlenecks, reducing system power, and accelerating time-to-market
“Arm and NVIDIA are working together to set a new standard for AI infrastructure,” said Rene Haas, CEO of Arm. “Extending the Arm Neoverse platform with NVIDIA NVLink Fusion brings Grace Blackwell-class performance to every partner building on Arm.”
🌐 Analysis
Arm’s move to natively support NVIDIA NVLink Fusion across the Neoverse ecosystem signals a broader industry shift toward coherent, rack-scale AI architectures where CPU, GPU, and custom accelerators operate as a unified memory and bandwidth domain. NVLink Fusion effectively brings the Grace Hopper/Grace Blackwell co-design model to third-party silicon partners, enabling heterogeneous accelerators to plug into NVIDIA’s high-speed interconnect while maintaining full coherency. This aligns with hyperscaler requirements for lower energy per token and minimized data movement—now one of the largest cost drivers in AI training clusters.
The partner landscape for NVLink Fusion includes server OEMs, accelerator vendors, and cloud infrastructure providers pursuing Arm-based and mixed-architecture platforms. Partners working with NVLink Fusion or aligned interfaces include:
• Ayar Labs, Marvell, Broadcom, and Synopsys (PHY, SerDes, and IP integration)
• Lenovo, Supermicro, ASUS, Gigabyte, Foxconn, and Wiwynn (system and server platforms)
• Ampere Computing, Fujitsu, SiPearl, Tenstorrent, and Ventana Micro Systems (Arm-based compute platforms and custom SoCs)
• MemVerge, Panmnesia, Astera Labs, and Rivos (accelerator adjacency, memory controllers, and CXL-based subsystems)
• Hyperscalers including AWS, Google, Microsoft, Oracle, Meta, and Tencent integrating Arm CPUs with NVIDIA GPUs and/or NVLink fabric in various internal platforms
• Telecom/edge partners such as Nokia, Ericsson, Samsung, and NEC leveraging Arm-NVIDIA architectures for vRAN and AI edge nodes
While NVIDIA does not publish a full NVLink Fusion partner list, integrations around AMBA CHI C2C, LP-Granite, and GB200-class systems indicate growing adoption among both silicon IP vendors and full-rack solution providers.
NVIDIA’s strategy centers on expanding NVLink Fusion from a GPU-to-GPU and CPU-to-GPU technology into a full rack-scale fabric that defines the architecture of future AI factories. This creates a standardized, coherent topology in which every participating device—CPU, GPU, memory expansion unit, optical I/O module, or third-party accelerator—connects through a common bandwidth and coherency model. The goal is to position NVLink Fusion as the default alternative to PCIe/CXL for high-performance, tightly coupled AI workloads that require large shared memory pools and near-zero overhead in CPU–accelerator switching. NVIDIA gains wider lock-in to its rack architecture, irrespective of whether customers choose Grace, Neoverse, or custom Arm-based CPUs.
🌐 We’re tracking the latest developments in semiconductors. Follow our ongoing coverage at: https://convergedigest.com/category/semiconductors/