Converge Digest
Friday, April 10, 2026
HOTI25: Cornelis Presents its 576-Port Director Switch and Sub-µs Latency

August 20, 2025
in AI Infrastructure, Clouds and Carriers, Data Centers
AI and HPC clusters are no longer limited by processors—they’re constrained by the network fabric tying them together. At today’s online Hot Interconnects, Cornelis Networks’ Field CTO Matt Williams argued that without faster, more efficient interconnects, the next generation of scientific computing and AI training workloads will stall. Cornelis is betting that its Omni-Path architecture, purpose-built for scale-out performance, can outpace InfiniBand with lower latency, higher message rates, and consistent bandwidth at massive scale.

Williams described Cornelis as the “inventors of Omni-Path,” an architecture originally developed at Intel and later spun out into an independent company. Today, Cornelis ships an end-to-end solution: Omni-Path SuperNICs, custom-designed switches, and an open-source OPX libfabric software stack integrated into the Linux kernel. The hardware portfolio includes a 48-port top-of-rack switch and a director-class system that scales to 576 ports in just 17 RU at half the power draw of comparable InfiniBand platforms. On the software side, Omni-Path supports MPI, PyTorch, TensorFlow, CUDA, and ROCm, making it a drop-in fabric for both HPC and AI environments.

Cornelis highlighted architectural features that set Omni-Path apart: sub-microsecond MPI latency, 2.5x the message rate of InfiniBand NDR, and fine-grained adaptive routing (FGAR) that spreads traffic across multiple paths while sharing congestion telemetry between switches. Its link-level retry mechanism retransmits errored packets locally on the affected hop, avoiding the application-level slowdowns caused by end-to-end retransmissions. In early benchmarks, Omni-Path delivered 23–34% lower latency and up to 45% higher performance on latency-sensitive HPC workloads.

  • Custom SuperNICs and switches designed for HPC + AI
  • 576-port director switch in 17RU with 50% less power than rivals
  • Sub-µs MPI latency and up to 2.5x InfiniBand NDR message rate
  • FGAR ensures consistent bandwidth under congestion
  • Link-level retry prevents packet loss from bit errors
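The link-level retry claim is the easiest of these to reason about quantitatively: retransmitting an errored packet only on the hop where the error occurred wastes one hop's worth of work, while an end-to-end scheme discards and resends across the whole path. The following back-of-the-envelope simulation is not Cornelis code; the hop count and per-link error rate are illustrative assumptions chosen to make the contrast visible:

```python
import random

def end_to_end_delivery(hops, p_err, rng):
    """End-to-end retry: an error on any link forces a full-path resend.

    Returns the total number of hop-traversals spent delivering one packet.
    """
    traversals = 0
    while True:
        traversals += hops
        if all(rng.random() >= p_err for _ in range(hops)):
            return traversals

def link_level_delivery(hops, p_err, rng):
    """Link-level retry: an errored packet is resent only on that hop."""
    traversals = 0
    for _ in range(hops):
        while True:
            traversals += 1
            if rng.random() >= p_err:  # hop succeeded, move to next link
                break
    return traversals

rng = random.Random(0)
trials = 10_000
hops, p_err = 8, 0.05  # illustrative values, not Omni-Path specifics

e2e = sum(end_to_end_delivery(hops, p_err, rng) for _ in range(trials)) / trials
lnk = sum(link_level_delivery(hops, p_err, rng) for _ in range(trials)) / trials
print(f"end-to-end: {e2e:.2f} hop-traversals, link-level: {lnk:.2f}")
```

Under these assumptions the end-to-end scheme averages roughly hops/(1-p)^hops hop-traversals per packet versus hops/(1-p) for link-level retry, and the gap widens sharply with path length and error rate. That is consistent with the article's point that local retransmission shields applications from transient bit errors.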

“We designed Omni-Path from the silicon up to deliver the lowest latency, highest message rate, and most efficient bandwidth utilization for HPC and AI applications,” said Matt Williams, Field CTO at Cornelis Networks.

🌐 Analysis: Cornelis is positioning Omni-Path as the fabric of choice for AI and HPC clusters that demand more than incremental gains. By controlling the silicon, system design, and software stack, Cornelis can optimize end-to-end performance in ways that competitors built around NVIDIA's InfiniBand ecosystem struggle to match. With cluster sizes expanding into hundreds of thousands of nodes, Cornelis' strategy of low latency, high message rate, and local retry resilience could help HPC sites and hyperscalers reduce both training times and energy costs.

Tags: Cornelis, HOTI
Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley


© 2025 Converge Digest - A private dossier for networking and telecoms.