Converge Digest

PECC Summit: NVIDIA’s Ashkan Seyedi on AI Networking

October 23, 2025

Speaking at the Photonic Enabled Cloud Computing (PECC) Summit, co-hosted by Optica and the Advanced Photonics Coalition at the HPE + Juniper Networks Aspiration Dome, Ashkan Seyedi, Senior Director at NVIDIA, presented a detailed look at how network architecture defines the scalability, performance, and economics of artificial intelligence infrastructure.

  • AI inference revenue depends directly on moving data efficiently across GPUs.
  • NVIDIA’s Ethernet-based AI networking delivers up to 35% higher performance than standard implementations.
  • Spectrum-X and SuperNIC platforms reduce jitter, latency, and congestion across hyperscale clusters.
  • Co-packaged optics (CPO) integration cuts power and improves system reliability.
  • Balance between scale-up and scale-out bandwidth is essential to maintain power and thermal efficiency.
  • NVIDIA’s AI factories now scale toward 500,000 GPUs per campus, drawing over 600 megawatts of power.
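The campus-scale figures above imply a simple all-in power budget per GPU; a back-of-the-envelope sketch using only the numbers quoted:

```python
# Back-of-the-envelope budget for the campus figures quoted above.
gpus = 500_000
campus_power_w = 600e6  # 600 MW total campus draw

per_gpu_w = campus_power_w / gpus  # all-in watts per GPU (compute + network + cooling)
print(f"All-in power budget per GPU: {per_gpu_w:.0f} W")  # -> 1200 W
```

Every watt spent on the network path comes out of that same 1.2 kW envelope, which is why networking power keeps recurring through the talk.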

Networking as a Source of Revenue

Seyedi reframed networking as the true engine of AI revenue. “KV caches don’t make money when they sit still—they make money when they move,” he said. In NVIDIA’s large-scale inference deployments, the movement of model data between accelerators determines utilization and ultimately profitability.

He emphasized that AI networks are not merely plumbing for compute but are now tied directly to revenue generation and total cost of ownership. Every transfer of cache between GPUs represents a productive cycle, much like a financial transaction in motion.
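A toy model of that point, with entirely made-up numbers: if each compute burst must wait on a serial (non-overlapped) KV-cache transfer, interconnect bandwidth converts directly into GPU utilization, and utilization into revenue.

```python
def utilization(compute_s: float, transfer_s: float) -> float:
    """Fraction of wall-clock time spent computing when every burst
    waits on a serial (non-overlapped) cache transfer."""
    return compute_s / (compute_s + transfer_s)

cache_gb = 10.0    # hypothetical KV-cache slice moved between GPUs
compute_s = 0.05   # hypothetical compute burst, seconds

for bw_gb_s in (100, 400, 800):  # hypothetical link bandwidth, GB/s
    t = cache_gb / bw_gb_s
    print(f"{bw_gb_s:4d} GB/s link -> GPU {utilization(compute_s, t):.0%} busy")
```

In this sketch, going from a 100 GB/s to an 800 GB/s link raises GPU busy time from roughly a third to four fifths; real pipelines overlap transfer and compute, but the direction of the effect is the same.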

Ethernet Tuned for AI

NVIDIA’s next-generation Ethernet architecture, derived from InfiniBand principles, has been deployed by Meta, Oracle, and xAI to support massive AI training clusters. These systems feature congestion control, traffic shaping, and flow isolation tailored to GPU workloads.

The company reports 30–35% better performance for multi-tenant AI workloads, with reduced jitter and improved latency consistency. Spectrum-X switches and SuperNICs work as a tightly integrated system to maintain deterministic performance across thousands of nodes.
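Why jitter dominates here: a collective operation finishes only when its slowest flow lands, so the tail of the latency distribution, not the mean, sets step time. A simulation sketch with hypothetical numbers:

```python
import random

random.seed(0)

def collective_ms(n_flows: int, base_ms: float, jitter_ms: float) -> float:
    """A collective completes only when its slowest flow arrives, so
    the tail of the jitter distribution sets step time."""
    return max(base_ms + random.uniform(0.0, jitter_ms) for _ in range(n_flows))

trials = 200
noisy = sum(collective_ms(512, 10.0, 5.0) for _ in range(trials)) / trials
quiet = sum(collective_ms(512, 10.0, 0.5) for _ in range(trials)) / trials
print(f"high-jitter fabric: {noisy:.1f} ms per collective step")
print(f"low-jitter fabric:  {quiet:.1f} ms per collective step")
```

With 512 parallel flows, the high-jitter fabric pays nearly the full 5 ms jitter bound on every step, which is the kind of tail loss that congestion control and flow isolation are meant to claw back.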

“Just because you call it Ethernet doesn’t mean it behaves like Ethernet,” Seyedi said. “If it only works with one vendor’s NICs, switches, and software, that’s not really open.”

Balancing Scale-Up and Scale-Out

Seyedi discussed the need to maintain balance between scale-up and scale-out network bandwidths. GPUs must manage three interconnected fabrics—HBM on-package memory, NVLink for intra-node communication, and Ethernet or InfiniBand for cluster-scale communication.
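Order-of-magnitude per-GPU bandwidths for those three fabrics (illustrative figures, not NVIDIA specifications) make the hierarchy concrete:

```python
# Illustrative per-GPU bandwidths; not vendor specifications.
fabrics = {
    "HBM (on-package memory)":          8000,  # GB/s
    "NVLink (scale-up, intra-node)":     900,  # GB/s
    "Ethernet/InfiniBand (scale-out)":   100,  # GB/s
}
scale_out = fabrics["Ethernet/InfiniBand (scale-out)"]
for name, bw in fabrics.items():
    print(f"{name:34s} {bw:5d} GB/s  ({bw // scale_out}x scale-out)")
```

Each tier in this sketch is roughly an order of magnitude slower than the one above it, which is why workload placement across the three fabrics matters so much.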

“If your interconnect isn’t ten times lower in power and smaller in size, it’s going to be a deployment headache at AI scale,” he said. Even small inefficiencies multiply across hundreds of thousands of GPUs.
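The multiplication he describes is easy to make concrete (all figures hypothetical):

```python
gpus = 500_000
links_per_gpu = 8        # hypothetical optical links per GPU
extra_w_per_link = 1.0   # an interconnect that burns 1 W more per link

extra_mw = gpus * links_per_gpu * extra_w_per_link / 1e6
print(f"A 1 W-per-link inefficiency costs {extra_mw:.0f} MW campus-wide")  # -> 4 MW
```

A single watt per link, invisible on a lab bench, becomes megawatts of continuous draw at campus scale.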

Co-Packaged Optics: From Concept to Production

To meet power and reliability targets at hyperscale, NVIDIA is deploying co-packaged optics across both InfiniBand and Ethernet lines. By placing optical engines adjacent to the switch ASIC, NVIDIA removes several electrical conversion steps, reducing loss, voltage regulation overhead, and failure points.

“The best component in your system is the one you don’t use,” Seyedi said, noting that CPO designs simplify thermal management while increasing bandwidth density. Each module integrates lasers, drivers, and photonics in a compact package to support multi-terabit connectivity per switch.
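A rough sense of why removing electrical conversion steps matters, using assumed energy-per-bit figures for illustration (not NVIDIA numbers):

```python
bw_tbps = 51.2             # hypothetical switch capacity, Tb/s
bits_per_s = bw_tbps * 1e12

# Assumed energy-per-bit for the optical path, for illustration only:
profiles = {"DSP-based pluggable": 15.0, "co-packaged optics": 5.0}  # pJ/bit
for name, pj_per_bit in profiles.items():
    watts = bits_per_s * pj_per_bit * 1e-12
    print(f"{name:20s} ~{watts:.0f} W of I/O power per switch")
```

Under these assumed figures the optical I/O power drops by roughly two thirds per switch, before counting the regulators and retimers that CPO eliminates outright.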

Advances in Photonics

Seyedi noted renewed industry confidence in microwave photonics and micro-ring modulators, technologies that had once been considered too unstable for large-scale deployment. “We made microrings cool again,” he joked.

He also addressed the growing interest in micro-LEDs for photonic interconnects but urged caution. “The LED might be perfect—zero picojoules per bit—but if you can’t connect it, it’s useless.” He reminded the audience that the connector and fiber ecosystem, built over decades of telco standardization, remains the economic foundation of optical networking.

Design Realism and System Thinking

Seyedi cautioned against unrealistic assumptions in chip and package design. “You don’t put the kitchen at the front door,” he said, referring to impractical claims of placing I/O anywhere on a die. Practical layout constraints, thermal budgets, and manufacturability still govern system architecture.

He urged photonics and networking startups to evaluate innovation at the system level rather than in isolation. “Keep your eye on how the solution scales—thermally, electrically, and economically. That’s where innovation really counts.”

Tags: Nvidia, PECC25
Jim Carroll


Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley

© 2025 Converge Digest - A private dossier for networking and telecoms.
