Converge Digest

AI Infrastructure Summit: Broadcom’s Ethernet for Scale Up, Out & Across

September 10, 2025
in AI Infrastructure, Semiconductors

In a keynote at the AI Infrastructure Summit, Broadcom’s Ram Velaga argued that Ethernet is the only viable foundation for AI systems that must scale across racks and data centers. With demand for 70+ million GPUs over the next five years—equivalent to 124 GW of new compute capacity—he said the industry must pivot to accelerators (XPUs) and networking architectures that prioritize bandwidth, efficiency, and reliability.
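Taken together, the two headline figures imply a striking per-device power budget. A quick back-of-envelope check (the GPU count and capacity are from the talk; the per-GPU result is simply derived from them):

```python
# Back-of-envelope check of the keynote's scale figures.
gpus = 70e6          # 70+ million GPUs over five years (from the talk)
capacity_w = 124e9   # 124 GW of new compute capacity (from the talk)

watts_per_gpu = capacity_w / gpus
print(f"{watts_per_gpu:.0f} W per GPU")  # prints "1771 W per GPU"
```

Roughly 1.8 kW per accelerator, consistent with a next-generation XPU plus its share of networking, memory, and cooling overhead.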

Velaga outlined three essential requirements for XPU scale-up: high bandwidth, efficient data transfer, and reliable connectivity (Slide 1). Current XPUs already deliver ~40 Tbps of HBM bandwidth, with next-generation designs expected to hit 100 Tbps. Connecting such devices efficiently requires networks that can handle tens of Tbps of I/O per accelerator—two orders of magnitude beyond today’s 100 Gbps CPU I/O.
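The "two orders of magnitude" gap can be checked directly from the figures in the talk:

```python
# Bandwidth gap between today's CPU network I/O and the per-XPU target
# cited in the keynote (figures from the article).
cpu_io_gbps = 100        # today's per-CPU network I/O
xpu_io_gbps = 10_000     # ~10 Tbps per-XPU I/O target

ratio = xpu_io_gbps / cpu_io_gbps
print(f"{ratio:.0f}x")   # prints "100x" -- two orders of magnitude
```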

Ethernet, Velaga stressed, uniquely enables scale-out AI systems because it decouples accelerator design from the transport layer. Using open IEEE 802.3 standards and Ultra Ethernet Consortium enhancements such as link-layer retry (LLR), credit-based flow control (CBFC), and priority-based flow control (PFC), XPUs can communicate at terabit speeds while leaving room for vendor-specific innovation in memory access, scheduling, and load balancing (Slide 2). This clean separation of concerns ensures that AI clusters—whether confined to a rack, spanning rows of racks, or linking multiple data centers—can operate as a single distributed supercomputer (Slide 3).
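To see why credit-based flow control matters for lossless AI fabrics, consider a minimal toy model: the sender may only transmit while it holds credits, and the receiver returns a credit each time it drains a frame. This is an illustrative sketch of the general CBFC idea, not any vendor's or the UEC's actual implementation:

```python
# Toy model of credit-based flow control (CBFC). Illustrative only --
# not Broadcom's or the Ultra Ethernet Consortium's implementation.

class CreditedLink:
    """Sender transmits only while it holds credits; the receiver
    returns one credit per frame drained from its buffer, so the
    buffer can never overflow and frames are never dropped."""

    def __init__(self, receiver_buffer_frames: int):
        self.credits = receiver_buffer_frames  # initial credits = buffer depth
        self.receiver_queue = []

    def send(self, frame) -> bool:
        if self.credits == 0:
            return False                # back-pressure: no credit, no transmit
        self.credits -= 1
        self.receiver_queue.append(frame)
        return True

    def receiver_drain(self) -> None:
        if self.receiver_queue:
            self.receiver_queue.pop(0)
            self.credits += 1           # credit returned to the sender

link = CreditedLink(receiver_buffer_frames=2)
sent = [link.send(f"frame{i}") for i in range(3)]
print(sent)                  # prints "[True, True, False]" -- third frame held back
link.receiver_drain()
print(link.send("frame2"))   # prints "True" -- credit returned, sending resumes
```

The key property is that congestion produces back-pressure rather than packet loss, which is what allows retransmission-sensitive AI collectives to run over the fabric.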

Key takeaways from Velaga’s talk:

  • AI infrastructure must scale to 124 GW of compute, or ~70M GPUs, in five years
  • Next-gen XPUs expected to reach 100 Tbps HBM bandwidth per device
  • Networking must leap from 100 Gbps CPU I/O to >10 Tbps XPU I/O
  • Ethernet delivers scalability with open IEEE standards and UEC enhancements
  • Scale-up (in-rack) and scale-out (across racks/data centers) rely on Ethernet fabrics
  • Proprietary interconnects impose vertical lock-in and cannot match ecosystem breadth

“Ethernet will play a very, very important role in what we view as democratization of accelerators,” Velaga concluded, pointing to Broadcom’s roadmap of ultra-high-bandwidth, low-latency switches optimized for AI.

🌐 Analysis: Broadcom is betting that Ethernet’s openness and scale will win against InfiniBand in the race to build AI supercomputers. With Tomahawk 6 and Tomahawk Ultra for scale-out and scale-up, and Jericho for inter-data-center fabrics, Broadcom is positioning Ethernet as the de facto transport for distributed AI clusters. Hyperscalers’ growing preference for Ethernet-based AI networking suggests Broadcom’s strategy is in step with industry momentum.

🌐 We’re tracking the latest developments in AI infrastructure. Follow our ongoing coverage at: https://convergedigest.com/category/ai-infrastructure/

Tags: Broadcom

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley


© 2025 Converge Digest - A private dossier for networking and telecoms.