Converge Digest

Cornelis Debuts CN5000 with Lossless Fabric and Adaptive Routing for AI

June 4, 2025
in AI Infrastructure

Cornelis Networks has introduced the CN5000 product family, a 400G end-to-end networking platform designed to address performance bottlenecks in AI and high-performance computing (HPC) workloads. The launch marks a significant milestone for the company, which spun out of Intel’s Omni-Path division and is now positioning itself as a key provider of intelligent fabric solutions for massively scaled GPU clusters.

At the heart of the CN5000 platform is the next-generation Omni-Path architecture, which delivers lossless data transmission and congestion avoidance using credit-based flow control and adaptive routing. According to Cornelis, the CN5000 delivers 2X greater message rates, 35% lower latency, and up to 30% higher HPC application performance than InfiniBand NDR. For AI environments, the CN5000 is optimized for collective operations, delivering 6X faster performance than RoCE-based systems.
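The lossless property rests on credit-based flow control: a sender only transmits while it holds credits corresponding to free buffer slots at the receiver, so the link applies back-pressure instead of dropping packets. A minimal illustrative sketch of that idea (not Cornelis's implementation; the `CreditLink` class and its names are hypothetical):

```python
from collections import deque

class CreditLink:
    """Toy model of credit-based flow control: the sender may only
    transmit while it holds credits, so the receiver's buffer can
    never overflow and no packet is ever dropped (lossless)."""

    def __init__(self, buffer_slots):
        self.credits = buffer_slots   # one credit per free buffer slot
        self.rx_buffer = deque()
        self.dropped = 0              # stays 0 by construction

    def send(self, packet):
        if self.credits == 0:
            return False              # back-pressure: sender must wait
        self.credits -= 1
        self.rx_buffer.append(packet)
        return True

    def consume(self):
        """Receiver drains one packet and returns a credit upstream."""
        packet = self.rx_buffer.popleft()
        self.credits += 1
        return packet

link = CreditLink(buffer_slots=2)
assert link.send("p1") and link.send("p2")
assert not link.send("p3")   # no credits left: blocked, not dropped
link.consume()               # credit flows back upstream
assert link.send("p3")       # transmission resumes losslessly
```

The key contrast with lossy Ethernet is that congestion surfaces as a blocked `send`, never as a drop, which is what makes retransmission-free transport possible.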

“Networking should do more than just move data quickly — it should unlock the full potential of every compute cycle,” said Lisa Spelman, CEO of Cornelis Networks. “That’s the performance we are offering customers with the CN5000 — a new breed of network-led application acceleration for AI and HPC applications where our scale-out network becomes a force-multiplier for performance at any scale.”

Key Technical Features of CN5000:

  • 400G Scale-Out Fabric supporting deployments of up to 500,000 endpoints
  • SuperNICs: Single and dual-port options with air and liquid cooling
  • Switches: 48-port units and modular Director-class configurations with up to 576 ports
  • Congestion Management: Lossless transport with credit-based flow control and adaptive routing
  • OPX Software Suite: Built on open-source frameworks for vendor-neutral deployment
  • Universal Interoperability: Supports AMD, Intel, NVIDIA GPUs and CPUs
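The adaptive-routing feature above pairs with congestion management: rather than pinning a flow to one path, the switch can steer each packet toward the least-loaded viable output port. A hedged sketch of one common decision rule (illustrative only; Cornelis has not published this exact algorithm, and `adaptive_route` is a hypothetical name):

```python
import random

def adaptive_route(candidate_ports, queue_depth):
    """Toy adaptive routing decision: among the output ports that can
    reach the destination, prefer the one with the shallowest output
    queue, breaking ties randomly to spread load."""
    least = min(queue_depth[p] for p in candidate_ports)
    best = [p for p in candidate_ports if queue_depth[p] == least]
    return random.choice(best)

# Four ports reach the destination; ports 1 and 2 are least congested.
depths = {0: 7, 1: 2, 2: 2, 3: 9}
port = adaptive_route([0, 1, 2, 3], depths)
assert port in (1, 2)
```

Real fabrics fold in richer telemetry (downstream credits, historical load), but the principle is the same: routing reacts to observed congestion per packet rather than per flow.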

Looking ahead, Cornelis is already preparing the CN6000 series, an 800G offering that will integrate Omni-Path capabilities with RoCE-enabled Ethernet to target broader cloud and enterprise markets. The company also plans to launch the CN7000 (1.6T) platform aligned with Ultra Ethernet Consortium standards to meet the escalating demands of AI and exascale systems.

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley

© 2025 Converge Digest - A private dossier for networking and telecoms.
