Check out OFC Conference and Exposition 2025 videos here: https://ngi.fyi/ofc25yt
How is the industry accelerating GPU connectivity for AI applications?

Kurtis Bowman, Chairman of the UALink Consortium, explains:
– Industry collaboration is driving unprecedented speed in advancing from 200G to 400G connectivity standards
– UALink’s new specification enables multiple GPUs to function as one unified system through high-bandwidth, low-latency connections
– Standardization allows interoperability between GPUs and switches from different vendors, supported by compliance programs
– UALink (Ultra Accelerator Link) is a new open industry standard designed to enable high-speed, low-latency interconnects between AI accelerators such as GPUs, custom silicon, and XPUs. UALink 1.0 supports 1.5 Tbps of bidirectional bandwidth per link and can scale to connect up to 1,024 accelerators within a single system. The standard is built on an electrical signaling foundation similar to PCIe 6.0 and leverages short-reach copper connections using linear equalization to reduce power consumption. It emphasizes simplicity and determinism in the fabric, using a switch-based architecture for low-latency message passing and memory semantics. The protocol supports both cache-coherent and non-coherent operations, offering flexibility for different accelerator architectures.
– The primary use case for UALink is to support the massive inter-GPU communication requirements of AI training clusters and inference platforms. Large-scale AI workloads such as LLM training and multi-modal generative models demand low-latency, high-bandwidth communication across thousands of accelerators. UALink’s direct communication and reduced overhead enable higher efficiency compared to traditional Ethernet or InfiniBand setups. Beyond AI training, UALink is poised to support high-performance computing (HPC) and advanced simulation workloads where tightly coupled accelerators are critical. The fabric enables disaggregated architectures, rack-level pooling, and seamless accelerator memory sharing, all essential for next-generation data center topologies.
– The UALink Consortium was formed in 2024 by AMD, Intel, Microsoft, Google, Meta, and several others, with the goal of fostering an open, royalty-free standard for accelerator interconnects. The group is governed by the UALink Promoter Group and plans to release UALink 1.1 in 2025 with support for multi-host communication and enhanced routing capabilities. The consortium aims to ensure broad ecosystem support across silicon, systems, and software stacks, offering an alternative to NVIDIA’s NVLink and NVSwitch technologies. The open nature of UALink is intended to drive innovation and reduce vendor lock-in, especially as hyperscalers and enterprises build increasingly heterogeneous AI infrastructure.
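As a rough illustration of the scale figures quoted above, here is a back-of-the-envelope sketch. The per-link bandwidth (1.5 Tbps bidirectional) and maximum pod size (1,024 accelerators) come from the summary; the links-per-accelerator count is an illustrative assumption, not part of the specification:

```python
# Back-of-the-envelope aggregate bandwidth for a UALink pod.
# From the summary: 1.5 Tbps bidirectional per link, up to 1,024 accelerators.
# Assumption (illustrative only): each accelerator attaches via a fixed
# number of links; real topologies and link counts are vendor-defined.

LINK_BW_TBPS = 1.5          # bidirectional bandwidth per link (from the text)
MAX_ACCELERATORS = 1024     # maximum accelerators per pod (from the text)

def aggregate_bandwidth_tbps(accelerators: int, links_per_accelerator: int) -> float:
    """Total bidirectional link bandwidth across the pod, assuming uniform links."""
    if accelerators > MAX_ACCELERATORS:
        raise ValueError("UALink 1.0 scales to at most 1,024 accelerators")
    return accelerators * links_per_accelerator * LINK_BW_TBPS

# A full 1,024-accelerator pod with 2 links per accelerator (hypothetical):
print(aggregate_bandwidth_tbps(1024, 2))  # 3072.0 Tbps of aggregate link bandwidth
```

This is only simple arithmetic on the headline numbers; actual achievable throughput depends on topology, switch radix, and workload traffic patterns.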
Want to be involved in our video series? Contact info@nextgeninfra.io
https://ngi.fyi/oif448-ualink-kurtis
