Converge Digest

Marvell Debuts Custom UALink Interconnect for Rack-Scale AI Systems

June 11, 2025
in Semiconductors

Marvell has introduced a custom UALink scale-up solution aimed at enabling rack-scale AI infrastructure with high compute utilization, low latency, and power efficiency. The announcement expands Marvell’s custom compute platform portfolio, targeting hyperscalers seeking open-standard, scale-up interconnects between thousands of AI accelerators and switches. The new solution leverages Marvell’s portfolio of 224G SerDes, UALink physical and controller IP, scalable low-latency fabric cores, and advanced co-packaging options.

Designed to support open standards, the UALink architecture allows compute vendors to integrate UALink controllers into custom accelerators and switches, optimizing AI system designs for latency and power. Marvell’s offering also supports flexible topologies, giving customers a toolkit approach to scaling AI workloads at the rack level and beyond. The move aligns with growing demand for tightly coupled, efficient interconnects as hyperscalers push toward next-generation large-scale AI training and inference environments.

Marvell is a founding member of the UALink Consortium, an industry group formed to establish open specifications for direct accelerator-to-accelerator connectivity. The new custom UALink product suite is positioned to complement AMD and other partners’ efforts to build standards-driven, high-performance AI systems.

• Marvell unveils custom UALink scale-up offering for rack-scale AI infrastructure

• Solution includes 224G SerDes, UALink controller IP, switch core and co-packaged optics

• Supports open standards-based, low-latency accelerator interconnects

• Enables scalable deployment of hundreds to thousands of AI accelerators

• Builds on Marvell’s custom silicon capabilities and packaging innovations

“We are pleased to introduce our new custom UALink offering to enable the next generation of AI scale-up systems,” said Nick Kucharewski, SVP and GM of Marvell’s Cloud Platform Business Unit.

  • UALink is an open-standard interconnect introduced in 2024 to enable high-bandwidth, low-latency communication between AI accelerators, such as GPUs and custom ASICs, within a server or across a rack. Designed for scale-up AI infrastructure, it complements scale-out solutions like Ethernet or InfiniBand by optimizing intra-rack connectivity. Spearheaded by the UALink Consortium—including AMD, Intel, Marvell, Broadcom, Cisco, and Meta—UALink promotes interoperability through an open ecosystem, offering an alternative to proprietary solutions like NVIDIA’s NVLink. The UALink 1.0 specification, released in early 2025, supports memory coherence, reduced data movement overhead, and simplified software integration, leveraging compatibility with PCIe and CXL. Multi-vendor interoperability testing and reference designs are underway, with initial product deployments expected in late 2025 or early 2026 to support next-generation AI training clusters.
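The 224G SerDes figure above invites a quick back-of-envelope check on per-port bandwidth. The sketch below is purely illustrative arithmetic: the 4-lane port width and the encoding-efficiency factor are assumptions for illustration, not figures from Marvell’s announcement or the UALink 1.0 specification.

```python
# Rough, illustrative bandwidth arithmetic for a 224G SerDes-based link.
# Lane count and encoding efficiency are assumed values, not from the
# Marvell announcement or the UALink specification.

def port_bandwidth_gbps(lane_rate_gbps: float, lanes: int,
                        encoding_efficiency: float) -> float:
    """Effective per-port bandwidth after line-encoding overhead."""
    return lane_rate_gbps * lanes * encoding_efficiency

# Example: a hypothetical 4-lane port at 224 Gb/s per lane,
# with ~97% efficient line encoding (assumed).
effective = port_bandwidth_gbps(224, 4, 0.97)
print(f"{effective:.0f} Gb/s effective per port")  # prints: 869 Gb/s effective per port
```

Aggregating such ports across a rack of accelerators is how scale-up fabrics of this kind reach the multi-terabit per-device bandwidths that tightly coupled AI training demands.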
Tags: Marvell, UALink

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley

© 2025 Converge Digest - A private dossier for networking and telecoms.