Marvell has introduced a custom UALink scale-up solution aimed at enabling rack-scale AI infrastructure with high compute utilization, low latency, and power efficiency. The announcement expands Marvell’s custom compute platform portfolio, targeting hyperscalers that want open-standard scale-up interconnects linking thousands of AI accelerators through switches. The new solution draws on Marvell’s portfolio of 224G SerDes, UALink physical-layer and controller IP, scalable low-latency fabric cores, and advanced co-packaging options.
Designed to support open standards, the UALink architecture allows compute vendors to integrate UALink controllers into custom accelerators and switches, optimizing AI system designs for latency and power. Marvell’s offering also supports flexible topologies through a toolkit approach that lets customers scale AI workloads at the rack level and beyond. The move aligns with growing demand for tightly coupled, efficient interconnects as hyperscalers push toward next-generation large-scale AI training and inference environments.
Marvell is a founding member of the UALink Consortium, an industry group formed to establish open specifications for direct accelerator-to-accelerator connectivity. The new custom UALink product suite is positioned to complement efforts by AMD and other partners to build standards-driven, high-performance AI systems.
• Marvell unveils custom UALink scale-up offering for rack-scale AI infrastructure
• Solution includes 224G SerDes, UALink controller IP, switch core, and co-packaged optics
• Supports open standards-based, low-latency accelerator interconnects
• Enables scalable deployment of hundreds to thousands of AI accelerators
• Builds on Marvell’s custom silicon capabilities and packaging innovations
“We are pleased to introduce our new custom UALink offering to enable the next generation of AI scale-up systems,” said Nick Kucharewski, SVP and GM of Marvell’s Cloud Platform Business Unit.
UALink is an open-standard interconnect introduced in 2024 to enable high-bandwidth, low-latency communication between AI accelerators, such as GPUs and custom ASICs, within a server or across a rack. Designed for scale-up AI infrastructure, it complements scale-out fabrics such as Ethernet or InfiniBand by optimizing intra-rack connectivity. Spearheaded by the UALink Consortium (whose members include AMD, Intel, Marvell, Broadcom, Cisco, and Meta), UALink promotes interoperability through an open ecosystem, offering an alternative to proprietary solutions such as NVIDIA’s NVLink. The UALink 1.0 specification, released in early 2025, supports memory coherence, reduced data-movement overhead, and simplified software integration, leveraging compatibility with PCIe and CXL. Multi-vendor interoperability testing and reference designs are underway, with initial product deployments expected in late 2025 or early 2026 to support next-generation AI training clusters.
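For a rough sense of what these scale-up numbers imply, here is a minimal back-of-the-envelope sketch in Python. Only the 224G SerDes rate comes from the announcement; the lane, port, and pod counts are illustrative assumptions, not published UALink parameters.

```python
# Back-of-the-envelope sizing for a UALink-style scale-up pod.
# The 224 Gb/s SerDes rate is from the article; everything else below
# is an illustrative assumption, not a published UALink parameter.

SERDES_GBPS = 224          # per-lane signaling rate (from the article)
LANES_PER_PORT = 4         # assumption: SerDes lanes bonded into one link
PORTS_PER_ACCELERATOR = 2  # assumption: scale-up ports per accelerator

def accelerator_bandwidth_tbps(lanes_per_port: int, ports: int) -> float:
    """Raw scale-up bandwidth per accelerator, in Tb/s (before encoding overhead)."""
    return SERDES_GBPS * lanes_per_port * ports / 1000

def pod_bisection_bound_tbps(accelerators: int, per_accel_tbps: float) -> float:
    """Crude upper bound on pod bisection bandwidth: at best, half the
    endpoints' aggregate injection bandwidth crosses the bisection."""
    return accelerators * per_accel_tbps / 2

if __name__ == "__main__":
    per_accel = accelerator_bandwidth_tbps(LANES_PER_PORT, PORTS_PER_ACCELERATOR)
    for pod in (64, 256, 1024):  # "hundreds to thousands" of accelerators
        bound = pod_bisection_bound_tbps(pod, per_accel)
        print(f"{pod:5d} accelerators: {per_accel:.3f} Tb/s each, "
              f"~{bound:.0f} Tb/s bisection bound")
```

The point of the exercise is the scaling behavior, not the absolute figures: per-accelerator bandwidth is fixed by lane and port counts, while the pod-level bound grows linearly with accelerator count, which is why switch fabric and topology choices dominate rack-scale design.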
