Converge Digest

Hot Interconnects: UALink for Rack-Scale AI Interconnects

August 25, 2025

At the IEEE Hot Interconnects conference, Sharada Yeluri of Astera Labs outlined why UALink (Ultra Accelerator Link) is emerging as the open protocol standard for rack-scale AI infrastructure. Designed specifically for connecting XPUs within a rack, UALink promises ultra-low latency, deterministic performance, and bandwidth efficiency far beyond what Ethernet can deliver in this domain.

Yeluri emphasized that AI workloads require 12x more bandwidth than scale-out networks because collective communication between XPUs dominates during training and inference. With load/store semantics, memory across all accelerators must appear unified, making low jitter and nanosecond-level latency essential. UALink achieves this with a single-stage, non-blocking fabric that provides lossless flow control, memory consistency, and greater than 95% bandwidth efficiency.
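
The bandwidth pressure from collectives can be seen with a quick model. The sketch below uses the standard traffic formula for a ring all-reduce (a common collective in distributed training); the device count and buffer size are illustrative assumptions, not figures from the talk.

```python
# Why collectives dominate: a quick model of ring all-reduce traffic.
# For N accelerators reducing a buffer of S bytes, each device sends
# (and receives) 2*(N-1)/N * S bytes per all-reduce -- nearly twice
# the buffer size, repeated every training step.

def ring_allreduce_bytes_per_device(n_devices: int, buffer_bytes: int) -> float:
    """Bytes each device transmits in one ring all-reduce."""
    return 2 * (n_devices - 1) / n_devices * buffer_bytes

# e.g. 8 XPUs in a rack sharing a 1 GiB gradient buffer (illustrative):
gib = 1 << 30
sent = ring_allreduce_bytes_per_device(8, gib)
print(f"{sent / gib:.2f} GiB sent per device per all-reduce")  # 1.75 GiB
```

With this traffic repeating every step, sustained per-link utilization, not peak bandwidth, becomes the limiting factor, which is why jitter and efficiency matter so much here.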

The UALink protocol stack is structured in four layers:

  • UPLI (Protocol Layer) – Defines read/write and atomic operations with security built in.
  • Transaction Layer – Breaks down commands into 64B flits, adds address compression.
  • Data Link Layer – Packs multiple flits efficiently, supports retries and flow control.
  • Physical Layer – Built on Ethernet PHY/SerDes, leveraging IEEE P802.3dj for 200 GT/s signaling.
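
The layering above can be sketched as follows. The 64 B transaction-layer flit and 640 B data-link flit sizes come from the talk; the packing and padding scheme in this sketch is an assumption for illustration, not the UALink specification's actual framing.

```python
# Illustrative sketch: the transaction layer emits fixed 64 B flits,
# and the data link layer packs several into a larger fixed-size
# 640 B DL flit. Header/CRC layout is omitted (an assumption here;
# real DL flits carry control and check bytes).

TL_FLIT_BYTES = 64    # transaction-layer flit size (from the talk)
DL_FLIT_BYTES = 640   # data-link flit size (from the talk)

def pack_dl_flit(tl_flits: list[bytes]) -> bytes:
    """Pack up to ten 64 B TL flits into one 640 B DL flit,
    zero-padding unused slots (padding behavior is assumed)."""
    assert all(len(f) == TL_FLIT_BYTES for f in tl_flits)
    assert len(tl_flits) * TL_FLIT_BYTES <= DL_FLIT_BYTES
    payload = b"".join(tl_flits)
    return payload.ljust(DL_FLIT_BYTES, b"\x00")

dl = pack_dl_flit([bytes([i]) * TL_FLIT_BYTES for i in range(10)])
assert len(dl) == DL_FLIT_BYTES
```

Fixed flit sizes are what make the latency deterministic: the switch never has to parse variable-length packets, so forwarding decisions take constant time.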

Compared to Ethernet, UALink demonstrates clear advantages for rack-scale AI:

  • Efficiency: >95% link utilization versus Ethernet’s variable packet overhead.
  • Latency: Fixed 640B flits and simplified switching reduce jitter and response time.
  • Power: Switch power is expected to be 20–30% lower.
  • Ordering: Native strict and relaxed ordering enables flexibility not easily achieved with Ethernet.
  • Reliability: Built-in request/response isolation, hop-by-hop flow control, and FEC.
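
The efficiency claim can be illustrated with standard Ethernet framing arithmetic. The sketch below is a back-of-the-envelope comparison, not a measured result; it shows why Ethernet's efficiency varies with payload size, which is the "variable packet overhead" the comparison refers to.

```python
# Back-of-the-envelope: Ethernet per-frame overhead vs payload size.
# Overhead per frame: preamble/SFD (8 B) + MAC header (14 B) +
# FCS (4 B) + interpacket gap (12 B) = 38 B.

def ethernet_efficiency(payload_bytes: int) -> float:
    """Useful-payload fraction of an Ethernet frame on the wire."""
    overhead = 8 + 14 + 4 + 12
    return payload_bytes / (payload_bytes + overhead)

# Small payloads -- typical of fine-grained load/store traffic --
# pay heavily, while large frames approach full efficiency:
print(f"Ethernet, 64 B payload:   {ethernet_efficiency(64):.1%}")    # 62.7%
print(f"Ethernet, 1500 B payload: {ethernet_efficiency(1500):.1%}")  # 97.5%
```

A fixed-flit fabric sidesteps this payload-size dependence, which is how UALink can hold utilization above 95% even for the small memory transactions that dominate load/store traffic.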

Yeluri positioned UALink as essential for a multi-vendor accelerator ecosystem, calling on developers to join the consortium and help shape the forthcoming UALink 2.0 standard. “There is a need for an open accelerator interface protocol with security built in natively,” she said. “UALink is the one.”

🌐 Analysis: UALink’s push at HOTI underscores a critical shift in AI infrastructure design. While Ethernet and InfiniBand dominate scale-out networking, rack-scale AI training places unique requirements on latency, determinism, and efficiency. UALink is carving out this space by providing a purpose-built interconnect that avoids Ethernet’s overhead while maintaining compatibility at the PHY level. The initiative, backed by Astera Labs, AMD, Broadcom, Google, Intel, Meta, and Microsoft, positions UALink as the open alternative to proprietary GPU interconnects such as NVIDIA’s NVLink or AMD’s Infinity Fabric. If the consortium delivers on roadmap promises, UALink could become the de facto standard for rack-scale AI fabrics.

🌐 We’re tracking the latest developments in AI infrastructure interconnects. Follow our ongoing coverage at: https://convergedigest.com/category/ai-infrastructure/

Tags: HOTI, UALink

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley

© 2025 Converge Digest - A private dossier for networking and telecoms.
