• Home
  • Events Calendar
  • Blueprint Guidelines
  • Privacy Policy
  • Subscribe to Daily Newsletter
  • NextGenInfra.io
Converge Digest
Friday, April 17, 2026


UALink 1.0 Released for Low-Latency Scale-Up AI Accelerators

April 8, 2025
in Semiconductors

The UALink Consortium officially released the UALink™ 200G 1.0 Specification, establishing an open industry standard for low-latency, high-bandwidth interconnects designed to scale AI accelerator performance in next-generation data center clusters. This specification enables 200G per lane communication between up to 1,024 accelerators within a single AI computing pod, providing the foundational building block for future high-performance AI and HPC architectures.

UALink™ is a memory-semantic interconnect optimized for scale-up workloads, delivering deterministic performance at 93% effective peak bandwidth while reducing latency, power, and total cost of ownership. It supports direct accelerator-to-accelerator communication using load/store, atomic, and memory-access operations across multiple system nodes. Unlike traditional interconnects, UALink is designed to minimize complexity and maximize bandwidth utilization through a smaller die area and a simplified switch design.
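To make "memory-semantic" concrete: it means an accelerator reaches another accelerator's memory with ordinary load, store, and atomic operations rather than by exchanging messages. The toy Python class below is a conceptual illustration of that programming model only; it is a hypothetical stand-in, not the UALink protocol or any vendor's API.

```python
class PodMemory:
    """Toy model of a pod-wide address space shared by accelerators.

    Illustrative only: real memory-semantic interconnects expose remote
    memory through hardware load/store paths, not a Python object.
    """

    def __init__(self, num_accelerators, words_per_accelerator):
        # One flat word array per accelerator, addressable pod-wide.
        self.mem = {a: [0] * words_per_accelerator
                    for a in range(num_accelerators)}

    def load(self, accel, addr):
        # Read a word from a (possibly remote) accelerator's memory.
        return self.mem[accel][addr]

    def store(self, accel, addr, value):
        # Write a word directly into a (possibly remote) accelerator's memory.
        self.mem[accel][addr] = value

    def atomic_add(self, accel, addr, delta):
        # Read-modify-write completed as one indivisible step,
        # as an atomic operation would guarantee in hardware.
        self.mem[accel][addr] += delta
        return self.mem[accel][addr]


pod = PodMemory(num_accelerators=4, words_per_accelerator=8)
pod.store(accel=2, addr=0, value=41)       # write into accelerator 2's memory
pod.atomic_add(accel=2, addr=0, delta=1)   # atomic update on remote memory
print(pod.load(accel=2, addr=0))           # -> 42
```

The point of the sketch is the absence of any send/receive call: every remote access looks like a local memory operation, which is what lets accelerators share data without the software overhead of a message-passing stack.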

Formed in October 2024, the UALink Consortium represents over 85 companies, including founding members Alibaba, AMD, Apple, Astera Labs, AWS, Cisco, Google, HPE, Intel, Meta, Microsoft, and Synopsys. The consortium’s mission is to build an open ecosystem that accelerates AI infrastructure innovation through standardized, interoperable technologies. With the ratification of the 1.0 specification, the UALink Consortium opens the door for vendors to build compatible accelerators, switches, and pods optimized for emerging AI scaling demands.

Technical Specifications of UALink 200G 1.0:

  • 200G per lane interconnect supporting up to 1,024 accelerators per AI pod
  • Memory-semantic load/store protocol with support for read, write, and atomic operations
  • Achieves 93% effective peak bandwidth
  • Offers latency comparable to PCIe with raw speed matching Ethernet

Design Benefits:

  • Low-power architecture through efficient switch design and minimal die area
  • Lower total cost of ownership via reduced complexity and improved bandwidth utilization
  • Supports multi-node systems with deterministic performance scaling

Ecosystem and Industry Support:

  • Over 85 member companies contributing to the UALink Consortium
  • Founding board includes Alibaba, AMD, Apple, AWS, Cisco, Google, HPE, Intel, Meta, Microsoft, and Synopsys
  • Publicly available specification encourages open innovation and multi-vendor interoperability
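As a rough illustration of what the quoted figures imply, the back-of-the-envelope sketch below combines them in Python. The lane rate, efficiency, and pod size come from the announcement; the lanes-per-link value is an assumption for illustration only, since the article does not say how many lanes make up a link.

```python
# Figures from the announcement:
LANE_RATE_GBPS = 200        # 200G per lane
EFFECTIVE_FRACTION = 0.93   # 93% effective peak bandwidth
MAX_ACCELERATORS = 1024     # up to 1,024 accelerators per AI pod


def effective_lane_gbps():
    """Usable bandwidth a single lane delivers at the quoted efficiency."""
    return LANE_RATE_GBPS * EFFECTIVE_FRACTION


def link_gbps(lanes_per_link):
    """Effective bandwidth of a link ganging several lanes together.
    The lane count per link is hypothetical, not taken from the spec."""
    return lanes_per_link * effective_lane_gbps()


print(f"Per lane: ~{effective_lane_gbps():.0f} Gb/s usable")
print(f"Hypothetical 4-lane link: ~{link_gbps(4):.0f} Gb/s usable")
print(f"Pod scale: up to {MAX_ACCELERATORS} accelerators")
```

At 93% efficiency, each 200G lane delivers roughly 186 Gb/s of usable bandwidth, which is the sense in which the consortium's "effective peak bandwidth" claim should be read.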

“UALink is the only memory semantic solution for scale-up AI optimized for lower power, latency and cost while increasing effective bandwidth,” said Kurtis Bowman, Board Chair of the UALink Consortium.


Tags: UALink

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley


© 2025 Converge Digest - A private dossier for networking and telecoms.
