• Home
  • Events Calendar
  • Blueprint Guidelines
  • Privacy Policy
  • Subscribe to Daily Newsletter
  • NextGenInfra.io
Converge Digest
Saturday, April 11, 2026

HOTI25: Nubis Sees Linear Optics and CPX Critical for AI Cluster Scaling

August 20, 2025
in AI Infrastructure, Optical

At today’s Hot Interconnects online event, Nubis Communications CTO Peter Winzer warned that AI and HPC clusters are increasingly I/O bound, with interconnect performance unable to keep pace with compute. Winzer, a former Bell Labs researcher with over 100 patents and 500 papers, introduced Co-Packaged Optics Extensions (CPX) as a pragmatic path to bring co-packaged optics into mainstream deployment.

Winzer outlined the widening gap: switch capacity is growing 40% per year, but SerDes I/O per lane only 20% per year. The result is GPUs starved for bandwidth, even as AI cluster sizes surge to hundreds of thousands of accelerators. “Every AI system could use several orders of magnitude more I/O,” Winzer said, framing the issue with a roofline model that showed systems increasingly limited by data movement rather than compute.
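The compounding effect of that growth mismatch is easy to underestimate. A minimal sketch (assuming both curves start from the same normalized baseline, which the talk did not specify) shows how quickly the gap widens:

```python
# Back-of-the-envelope: how the switch-capacity vs per-lane SerDes
# growth gap compounds over time (annual rates as reported in the talk).
switch_growth = 1.40   # switch capacity: +40% per year
serdes_growth = 1.20   # SerDes I/O per lane: +20% per year

for year in range(0, 11, 2):
    gap = switch_growth**year / serdes_growth**year
    print(f"year {year:2d}: switch capacity outpaces per-lane I/O by {gap:.1f}x")
# After 10 years the ratio is ~4.7x.
```

Even a 20-point difference in annual growth compounds to nearly a 5x per-lane shortfall within a decade, which is the dynamic behind Winzer's roofline argument.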

Nubis proposes CPX as a connectorized co-packaged copper-and-optics solution that extends the proven pluggable ecosystem inside the package. Each CPX module delivers 6.4 Tbps at 5 pJ/bit, is compatible with existing CPC sockets, and supports multi-vendor interoperability. By relying on retimer-free linear optics, CPX avoids the 30W-per-module power hit of DSP-based retimed optics, enabling hyperscale operators to cut interconnect power by more than 1 GW in 600,000-GPU clusters. Winzer also stressed that single-wavelength I/O with fiber shuffles outperforms WDM for AI: lower loss, lower cost, and simpler full fan-out connectivity.

  • Switch capacity grows 40%/year vs SerDes I/O at 20%/year → bandwidth starvation
  • Retimer-free linear optics reduce interconnect power from 1.7 GW to 600 MW in large clusters
  • CPX delivers 6.4 Tbps per connector at 5 pJ/bit, aligning with SerDes I/O roadmaps
  • Connectorized co-packaging supports multi-vendor interoperability
  • Single-wavelength I/O with fiber shuffles preferred over WDM for AI workloads
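The headline figures above are internally consistent. A quick unit check (my own arithmetic from the quoted numbers, not calculations presented in the talk) converts the per-bit energy into module power and the cluster-level savings into a per-GPU figure:

```python
# Unit arithmetic behind the headline numbers. Module counts per GPU
# were not given, so only per-module and whole-cluster conversions
# are shown here.
tbps = 6.4e12          # CPX module throughput, bits/s
pj_per_bit = 5e-12     # energy per bit, joules
module_power_w = tbps * pj_per_bit
print(f"CPX module: {module_power_w:.0f} W at 5 pJ/bit")  # 32 W

gpus = 600_000
savings_w = 1.7e9 - 0.6e9   # 1.7 GW retimed -> 600 MW linear optics
print(f"cluster saving: {savings_w/1e9:.1f} GW, "
      f"or {savings_w/gpus/1e3:.2f} kW per GPU")
```

Note that 6.4 Tbps at 5 pJ/bit works out to roughly 32 W per module, comparable in magnitude to the ~30 W retimer penalty the linear-optics design avoids, which is why dropping the DSP roughly halves the interconnect power at cluster scale.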

“Our CPX paradigm brings the proven pluggable optics ecosystem inside the package, while maintaining interoperability and low power,” said Peter Winzer, CTO of Nubis Communications.

🌐 Analysis: Nubis is carving out a pragmatic middle path between copper and full photonic interposers. By emphasizing linear optics, CPX aligns with existing SerDes roadmaps while sidestepping the power and cost penalties of retimers. With hyperscalers racing to deploy gigawatt-scale AI clusters, operators may favor CPX’s incremental approach over riskier, all-optical bets. Competitors like Lightmatter and Ayar Labs push more radical 3D photonic designs, but Nubis’ strategy may resonate with operators demanding compatibility and proven deployment models.

Tags: HOTI, Nubis

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley


© 2025 Converge Digest - A private dossier for networking and telecoms.
