Marvell Technology has announced a collaboration with NVIDIA to integrate NVLink Fusion into its custom cloud platform silicon, giving hyperscalers greater flexibility to build advanced AI infrastructure. NVLink Fusion, launched earlier this week by NVIDIA, is a chiplet-based interconnect technology that enables custom processors to interface with NVIDIA’s full-stack AI platform, including GPUs, rack-scale hardware, and networking components. The Marvell-NVIDIA partnership aims to shorten time-to-deployment for AI factories by supporting seamless scale-up and scale-out architectures tailored to specific customer requirements.
Marvell brings a portfolio of design capabilities to the table, including advanced SerDes, 2D/3D die-to-die interconnects, silicon photonics, co-packaged optics, HBM integration, and PCIe Gen7 interfaces. These features, combined with NVIDIA’s 1.8 TB/s bidirectional NVLink chiplet, give hyperscalers a high-bandwidth, low-latency path to interconnect custom XPUs with NVIDIA GPUs in next-gen AI data centers. The collaboration targets demanding AI workloads, including large-scale model training and agentic inference, with an emphasis on energy-efficient, scalable deployments.
The joint solution positions NVLink Fusion as a catalyst for heterogeneous AI architectures. Cloud providers can now combine their proprietary accelerators with NVIDIA’s ecosystem, including Spectrum-X Ethernet and Quantum-X800 InfiniBand switches, within a unified infrastructure model. This marks a significant milestone in the evolution of AI factory integration by enabling greater customization while preserving compatibility with NVIDIA’s AI orchestration software and rack-scale systems.
Key points from the announcement:

- Marvell partners with NVIDIA to deploy NVLink Fusion in custom AI silicon.
- NVLink Fusion delivers 1.8 TB/s bidirectional bandwidth for chiplet-based interconnects.
- Supports hyperscaler-specific scale-up and scale-out architectures for model training and agentic inference.
- Marvell’s portfolio includes SerDes, advanced packaging, silicon photonics, HBM, PCIe Gen7, and SoC fabrics.
- Enables tighter integration of proprietary XPUs with NVIDIA GPUs and networking stack.
“Through this collaboration, we offer customers the flexibility to rapidly deploy scalable AI infrastructure with the bandwidth, performance and reliability required to support advanced AI models,” said Nick Kucharewski, Senior Vice President and General Manager of Marvell’s Cloud Platform Business Unit.
