Converge Digest

Arm Extends Neoverse With NVIDIA NVLink Fusion

November 17, 2025
in Semiconductors

Arm expanded its partnership with NVIDIA to bring full NVLink Fusion support to the Neoverse compute platform, enabling ecosystem partners to integrate Arm-based CPUs with a wide range of AI accelerators using a coherent, high-bandwidth interface. The collaboration extends the CPU–GPU co-design approach first used in Grace Hopper and Grace Blackwell to all Neoverse licensees, as AI data-center demand pushes providers to build systems optimized around energy efficiency rather than peak performance alone. Arm said Neoverse now spans more than one billion deployed cores and is on track to reach 50% market share among major hyperscalers in 2025, with AI-focused builds — including next-generation super-clusters such as OpenAI’s Stargate project — anchoring on Arm architectures.

By aligning NVLink Fusion with Arm’s updated AMBA CHI C2C protocol, the companies aim to eliminate memory and bandwidth bottlenecks that constrain today’s large-scale AI training and inference systems. Ecosystem partners will be able to attach their preferred accelerators while maintaining cache coherency and rack-scale bandwidth, reducing integration time and enabling differentiated designs for AI, HPC, and cloud platforms. The move also gives third-party silicon developers access to the coherent interconnect used by NVIDIA’s own GB200-class products.
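
To make the bandwidth argument concrete, here is a rough, illustrative comparison of moving a large payload over an NVLink-C2C-class coherent link versus a PCIe Gen5 x16 link. The bandwidth figures (~900 GB/s for Grace-class chip-to-chip, ~64 GB/s per direction for PCIe 5.0 x16) are commonly cited public numbers, not figures from this announcement, and should be treated as assumptions:

```python
# Illustrative only: transfer time for a large payload (e.g. model
# weights) over a coherent NVLink-C2C-class link vs. PCIe Gen5 x16.
# Bandwidth figures are rough public numbers, not from this article.

def transfer_time_s(bytes_moved: float, bw_gbps: float) -> float:
    """Time in seconds to move `bytes_moved` at `bw_gbps` GB/s."""
    return bytes_moved / (bw_gbps * 1e9)

payload = 80e9  # 80 GB payload, e.g. a large model's weights

nvlink_c2c = transfer_time_s(payload, 900)  # ~900 GB/s, Grace-class C2C
pcie_gen5  = transfer_time_s(payload, 64)   # ~64 GB/s, PCIe 5.0 x16

print(f"NVLink-C2C-class: {nvlink_c2c:.2f} s")
print(f"PCIe Gen5 x16:    {pcie_gen5:.2f} s")
print(f"speedup:          {pcie_gen5 / nvlink_c2c:.1f}x")
```

Under these assumptions the coherent link moves the same data roughly an order of magnitude faster, which is the gap the companies say constrains large-scale training and inference today.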

Partners such as AWS, Google, Microsoft, Oracle, and Meta already rely on Neoverse across their cloud platforms, and the companies said momentum for Grace Blackwell-class architectures is accelerating demand for broader NVLink Fusion adoption. Arm positioned the expansion as part of a long-term shift toward “intelligence per watt” as the defining metric of AI-data-center efficiency.

• Neoverse now exceeds one billion deployed cores and is projected to reach 50% hyperscaler market share in 2025
• NVLink Fusion integrates with Arm’s updated AMBA CHI C2C for coherent CPU–accelerator connectivity
• Grace Hopper and Grace Blackwell architectures serve as reference designs for ecosystem adoption
• Partners can attach custom or third-party accelerators with full coherency and high bandwidth
• Focus areas include removing memory bottlenecks, reducing system power, and accelerating time-to-market

“Arm and NVIDIA are working together to set a new standard for AI infrastructure,” said Rene Haas, CEO of Arm. “Extending the Arm Neoverse platform with NVIDIA NVLink Fusion brings Grace Blackwell-class performance to every partner building on Arm.”

🌐 Analysis

Arm’s move to natively support NVIDIA NVLink Fusion across the Neoverse ecosystem signals a broader industry shift toward coherent, rack-scale AI architectures where CPU, GPU, and custom accelerators operate as a unified memory and bandwidth domain. NVLink Fusion effectively brings the Grace Hopper/Grace Blackwell co-design model to third-party silicon partners, enabling heterogeneous accelerators to plug into NVIDIA’s high-speed interconnect while maintaining full coherency. This aligns with hyperscaler requirements for lower energy per token and minimized data movement—now one of the largest cost drivers in AI training clusters.
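
The "energy per token" framing can be made concrete with back-of-envelope accounting that splits a token's cost into compute energy and data-movement energy. All of the numbers below (FLOPs per token, joules per FLOP, bytes moved, joules per byte) are illustrative assumptions, not figures from Arm or NVIDIA:

```python
# Back-of-envelope "energy per token" split into compute vs. data
# movement, to show why minimizing movement dominates cluster cost.
# Every constant here is an illustrative assumption.

def energy_per_token_j(flops_per_token: float, j_per_flop: float,
                       bytes_moved_per_token: float, j_per_byte: float):
    """Return (compute_joules, movement_joules) per generated token."""
    compute = flops_per_token * j_per_flop
    movement = bytes_moved_per_token * j_per_byte
    return compute, movement

compute_j, move_j = energy_per_token_j(
    flops_per_token=2e9,        # ~2 GFLOPs/token, mid-size model (assumed)
    j_per_flop=1e-12,           # ~1 pJ/FLOP (assumed)
    bytes_moved_per_token=1e8,  # 100 MB shuffled across the fabric (assumed)
    j_per_byte=5e-11,           # ~50 pJ/byte off-chip movement (assumed)
)
print(f"compute:  {compute_j * 1e3:.2f} mJ/token")
print(f"movement: {move_j * 1e3:.2f} mJ/token")
```

Even with generous compute assumptions, off-chip data movement can dominate the per-token energy budget, which is why a coherent fabric that keeps accelerators in one memory domain maps directly onto the "intelligence per watt" metric Arm is promoting.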

The partner landscape for NVLink Fusion includes server OEMs, accelerator vendors, and cloud infrastructure providers pursuing Arm-based and mixed-architecture platforms. Partners working with NVLink Fusion or aligned interfaces include:

• Ayar Labs, Marvell, Broadcom, and Synopsys (PHY, SerDes, and IP integration)
• Lenovo, Supermicro, ASUS, Gigabyte, Foxconn, and WiWynn (system and server platforms)
• Ampere Computing, Fujitsu, SiPearl, Tenstorrent, and Ventana Micro Systems (Arm-based compute platforms and custom SoCs)
• MemVerge, Panmnesia, Astera Labs, and Rivos (accelerator adjacency, memory controllers, and CXL-based subsystems)
• Hyperscalers including AWS, Google, Microsoft, Oracle, Meta, and Tencent integrating Arm CPUs with NVIDIA GPUs and/or NVLink fabric in various internal platforms
• Telecom/edge partners such as Nokia, Ericsson, Samsung, and NEC leveraging Arm-NVIDIA architectures for vRAN and AI edge nodes

While NVIDIA does not publicly publish the full NVLink Fusion partner list, integrations around AMBA CHI C2C, LP-Granite, and GB200-class systems indicate growing adoption among both silicon IP vendors and full-rack solution providers.

NVIDIA’s strategy centers on expanding NVLink Fusion from a GPU-to-GPU and CPU-to-GPU technology into a full rack-scale fabric that defines the architecture of future AI factories. This creates a standardized, coherent topology in which every participating device—CPU, GPU, memory expansion unit, optical I/O module, or third-party accelerator—connects through a common bandwidth and coherency model. The goal is to position NVLink Fusion as the default alternative to PCIe/CXL for high-performance, tightly coupled AI workloads that require large shared memory pools and near-zero overhead in CPU–accelerator switching. NVIDIA gains wider lock-in to its rack architecture, irrespective of whether customers choose Grace, Neoverse, or custom Arm-based CPUs.
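
The difference between a single coherent domain and per-device private memory can be sketched with a toy model: in the coherent case, a value written by one device is visible to all attached devices; in the private-memory case, each consumer triggers an explicit copy. This is purely illustrative and bears no relation to the real NVLink Fusion or CHI C2C protocol details:

```python
# Toy model contrasting one coherent shared address space with
# per-device private memory requiring explicit copies.
# Purely illustrative; not the real NVLink Fusion protocol.

class CoherentFabric:
    """All attached devices read and write one shared pool directly."""
    def __init__(self):
        self.pool = {}   # address -> value, visible to every device
        self.copies = 0  # no explicit copies ever needed

    def write(self, device, addr, value):
        self.pool[addr] = value

    def read(self, device, addr):
        return self.pool[addr]

class PrivateMemory:
    """Each device has private memory; sharing requires a copy."""
    def __init__(self):
        self.mem = {}    # (device, addr) -> value
        self.copies = 0  # explicit copies performed

    def write(self, device, addr, value):
        self.mem[(device, addr)] = value

    def read(self, device, addr):
        if (device, addr) not in self.mem:
            # Fetch from whichever device holds it: an explicit copy.
            src = next(d for (d, a) in self.mem if a == addr)
            self.mem[(device, addr)] = self.mem[(src, addr)]
            self.copies += 1
        return self.mem[(device, addr)]

fabric, private = CoherentFabric(), PrivateMemory()
for bus in (fabric, private):
    bus.write("cpu", 0x10, "weights")   # CPU publishes model weights
    bus.read("gpu", 0x10)               # GPU consumes them
    bus.read("accel", 0x10)             # third-party accelerator too
print(fabric.copies, private.copies)    # coherent: 0, private: 2
```

Every additional consumer in the private-memory model costs another copy (and the energy to move it), whereas the coherent fabric amortizes a single placement across all devices — the essence of the shared-memory-pool argument above.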

🌐 We’re tracking the latest developments in semiconductors. Follow our ongoing coverage at: https://convergedigest.com/category/semiconductors/

Tags: ARM, Nvidia

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley


Converge Digest

A private dossier for networking and telecoms


© 2025 Converge Digest - A private dossier for networking and telecoms.
