Converge Digest

Marvell Partners with NVIDIA on NVLink Fusion

May 19, 2025
in Semiconductors

Marvell Technology has announced a collaboration with NVIDIA to integrate NVLink Fusion into its custom cloud platform silicon, offering hyperscalers greater flexibility to build advanced AI infrastructure. NVLink Fusion, launched earlier this week by NVIDIA, is a chiplet-based interconnect technology enabling custom processors to interface with NVIDIA’s full-stack AI platform, including GPUs, rack-scale hardware, and networking components. The Marvell-NVIDIA partnership aims to reduce time to deployment for AI factories by supporting seamless scale-up and scale-out architectures tailored to specific customer requirements.

Marvell brings a portfolio of design capabilities to the table, including advanced SerDes, 2D/3D die-to-die interconnects, silicon photonics, co-packaged optics, HBM integration, and PCIe Gen7 interfaces. These features, combined with NVIDIA’s 1.8 TB/s bidirectional NVLink chiplet, give hyperscalers a high-bandwidth, low-latency path to interconnect custom XPUs with NVIDIA GPUs in next-gen AI data centers. The collaboration targets demanding AI workloads, including large-scale model training and agentic inference, with an emphasis on energy-efficient, scalable deployments.
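To put the 1.8 TB/s figure in perspective, a short back-of-envelope sketch: the snippet below estimates how long it would take to stream a set of model weights across a link at that rate. The model size and link-efficiency factor are illustrative assumptions, not figures from the announcement.

```python
# Illustrative arithmetic only: transfer time for a payload over an
# interconnect at NVLink Fusion's stated 1.8 TB/s bidirectional bandwidth.
# The 70B-parameter model and 80% efficiency derating below are hypothetical.

def transfer_time_seconds(payload_bytes: float,
                          link_bytes_per_s: float,
                          efficiency: float = 0.8) -> float:
    """Seconds to move payload_bytes over a link, derated by an efficiency factor."""
    return payload_bytes / (link_bytes_per_s * efficiency)

NVLINK_FUSION_BW = 1.8e12  # 1.8 TB/s, per the announcement

# Hypothetical example: 70B parameters in FP16 (2 bytes each) -> 140 GB of weights.
weights_bytes = 70e9 * 2

t = transfer_time_seconds(weights_bytes, NVLINK_FUSION_BW)
print(f"~{t * 1000:.0f} ms to stream 140 GB at 80% link efficiency")
```

At these assumed numbers the link moves an entire 140 GB weight set in well under a second, which is the kind of headroom scale-up architectures rely on when shuttling activations and parameters between custom XPUs and GPUs.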

The joint solution positions NVLink Fusion as a catalyst for heterogeneous AI architectures. Cloud providers can now combine their proprietary accelerators with NVIDIA’s ecosystem, including Spectrum-X Ethernet and Quantum-X800 InfiniBand switches, within a unified infrastructure model. This marks a significant milestone in the evolution of AI factory integration by enabling greater customization while preserving compatibility with NVIDIA’s AI orchestration software and rack-scale systems.

  • Marvell partners with NVIDIA to deploy NVLink Fusion in custom AI silicon.
  • NVLink Fusion delivers 1.8 TB/s bidirectional bandwidth for chiplet-based interconnects.
  • Supports hyperscaler-specific scale-up and scale-out architectures for model training and agentic inference.
  • Marvell’s portfolio includes SerDes, advanced packaging, silicon photonics, HBM, PCIe Gen7, and SoC fabrics.
  • Enables tighter integration of proprietary XPUs with NVIDIA GPUs and networking stack.

“Through this collaboration, we offer customers the flexibility to rapidly deploy scalable AI infrastructure with the bandwidth, performance and reliability required to support advanced AI models,” said Nick Kucharewski, SVP and GM of Marvell’s Cloud Platform Business Unit.


Tags: Marvell

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley

© 2025 Converge Digest - A private dossier for networking and telecoms.
