Converge Digest
Wednesday, April 15, 2026


GPU Networking Evolution: From 200G to 400G

June 27, 2025
in Video

Check out OFC Conference and Exposition 2025 videos here: https://ngi.fyi/ofc25yt

How is the industry accelerating GPU connectivity for AI applications?

Kurtis Bowman, Chairman of the UALink Consortium, explains:

– Industry collaboration is driving unprecedented speed in advancing from 200G to 400G connectivity standards
– UALink’s new specification enables multiple GPUs to function as one unified system through high-bandwidth, low-latency connections
– Standardization allows interoperability between GPUs and switches from different vendors, supported by compliance programs
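To see why per-link bandwidth matters when "multiple GPUs function as one unified system," a back-of-envelope sketch helps. The formula below is the standard ring all-reduce communication bound, not something from the video; the model size and GPU count are hypothetical examples.

```python
# Illustrative sketch only: per-GPU traffic for a ring all-reduce,
# the collective that dominates data-parallel gradient exchange.
# Classic bound: each GPU moves 2*(N-1)/N * payload bytes.

def ring_allreduce_traffic_gb(payload_gb: float, n_gpus: int) -> float:
    """Per-GPU gigabytes moved by one ring all-reduce over n_gpus."""
    return 2 * (n_gpus - 1) / n_gpus * payload_gb

# Hypothetical example: fp16 gradients for a 70B-parameter model (~140 GB)
# exchanged across an 8-accelerator group.
traffic = ring_allreduce_traffic_gb(140.0, n_gpus=8)
print(f"~{traffic:.0f} GB through each GPU per all-reduce step")
```

At these volumes, every step of gradient exchange moves hundreds of gigabytes through each accelerator's links, which is why the jump from 200G to 400G signaling translates directly into training throughput.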

  • UALink (Ultra Accelerator Link) is a new open industry standard designed to enable high-speed, low-latency interconnects between AI accelerators such as GPUs, custom silicon, and XPUs. UALink 1.0 supports 1.5 Tbps of bidirectional bandwidth per link and can scale to connect up to 1,024 accelerators within a single system. The standard is built on an electrical signaling foundation similar to PCIe 6.0 and leverages short-reach copper connections using linear equalization to reduce power consumption. It emphasizes simplicity and determinism in the fabric, using a switch-based architecture for low-latency message-passing and memory semantics. The protocol supports cache-coherent and non-coherent operations, offering flexibility for different accelerator architectures.
  • The primary use case for UALink is to support the massive inter-GPU communication requirements of AI training clusters and inference platforms. Large-scale AI workloads such as LLM training and multi-modal generative models demand low-latency, high-bandwidth communication across thousands of accelerators. UALink’s direct communication and reduced overhead enable higher efficiency compared to traditional Ethernet or InfiniBand setups. Beyond AI training, UALink is poised to support high-performance computing (HPC) and advanced simulation workloads where tightly coupled accelerators are critical. The fabric enables disaggregated architectures, rack-level pooling, and seamless accelerator memory sharing, all essential for next-generation data center topologies.
  • The UALink Consortium was formed in 2024 by AMD, Intel, Microsoft, Google, Meta, and several others, with the goal of fostering an open, royalty-free standard for accelerator interconnects. The group is governed by the UALink Promoter Group and plans to release UALink 1.1 in 2025 with support for multi-host communication and enhanced routing capabilities. The consortium aims to ensure broad ecosystem support across silicon, systems, and software stacks, offering an alternative to NVIDIA’s NVLink and NVSwitch technologies. The open nature of UALink is intended to drive innovation and reduce vendor lock-in, especially as hyperscalers and enterprises build increasingly heterogeneous AI infrastructure.
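The headline figures quoted above (1.5 Tbps bidirectional per link, up to 1,024 accelerators per pod) can be turned into more familiar units with simple arithmetic. This is a minimal sketch using only those quoted numbers; the aggregate figure assumes one such link per accelerator, which is a simplification.

```python
# Back-of-envelope conversion of the UALink 1.0 figures quoted above.
TBPS_PER_LINK = 1.5        # bidirectional bandwidth per link (Tbps)
MAX_ACCELERATORS = 1024    # maximum accelerators per pod

def tbps_to_gbytes_per_s(tbps: float) -> float:
    """Convert terabits/s to gigabytes/s (decimal units)."""
    return tbps * 1000 / 8

per_link_gbs = tbps_to_gbytes_per_s(TBPS_PER_LINK)       # 187.5 GB/s
# Simplifying assumption: one link per accelerator across the full pod.
pod_aggregate_tbps = TBPS_PER_LINK * MAX_ACCELERATORS    # 1536 Tbps

print(f"Per-link: {per_link_gbs:.1f} GB/s bidirectional")
print(f"1,024-accelerator pod: {pod_aggregate_tbps / 1000:.3f} Pbps aggregate")
```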

Want to be involved in our video series? Contact info@nextgeninfra.io
https://ngi.fyi/oif448-ualink-kurtis


Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley


© 2025 Converge Digest - A private dossier for networking and telecoms.