Edgecore Networks is previewing a portfolio of open infrastructure solutions aimed at scaling AI workloads, featuring next-generation data center switches, Co-Packaged Optics (CPO), and turnkey AI compute platforms. The company’s demonstrations center on high-performance switching and composable compute architectures that simplify large-scale AI cluster deployments across hyperscale and enterprise environments.
At the core of Edgecore’s showcase is a new family of Broadcom Tomahawk 6-based switches delivering more than 100 Tbps of throughput. These platforms incorporate Cognitive Routing and a flexible two-tier scale-up and scale-out architecture for low-latency AI fabrics. Edgecore’s early Co-Packaged Optics proof-of-concept integrates optical engines directly with the switch silicon, a move expected to cut power consumption and improve port density for future 1.6T AI data center designs.
The company is also expanding its Nexvec™ turnkey AI infrastructure platform, combining open-networking switches with GPU-based compute nodes. Its Nous infrastructure controller now supports dynamic provisioning of GPUs and memory resources, enabling composable compute across AI clusters. Together, Nexvec and Nous aim to accelerate deployment from Day-0 setup to Day-2 operations while maximizing efficiency and scalability.
• Tomahawk 6 switches deliver >100 Tbps bandwidth with Cognitive Routing and AI-optimized scale-out design
• CPO proof-of-concept demonstrates optical integration with switch silicon for power and density gains
• Nexvec™ platform offers pre-validated AI networking and compute infrastructure
• Nous controller introduces composable compute management across GPU and memory pools
“Our goal is to simplify how AI infrastructure is built and scaled,” said Mingshou Liu, President of Edgecore Networks. “By combining open hardware with intelligent control software, we’re enabling faster and more efficient AI deployment at every layer of the stack.”
🌐 Analysis: Edgecore’s focus on open, disaggregated AI infrastructure underscores the growing convergence of network silicon, optics, and compute orchestration. By integrating Tomahawk 6-based switching and advancing CPO research, Edgecore aligns closely with the Open Compute Project’s direction toward open, energy-efficient architectures. It also places the company in direct competition with vendors such as Arista, Dell, and Celestica, which are pursuing similar AI cluster reference designs for hyperscalers.
