Edgecore Networks introduced its Nexvec solution for Enterprise AI, combining disaggregated networking with software-defined composable infrastructure to address the performance and resource challenges of AI workloads. Targeted at inference, agentic AI, and reasoning tasks, Nexvec pools GPUs and memory over PCIe and CXL fabrics, cutting the power and rack-space overhead of conventional high-end server clusters. The solution supports orchestration frameworks such as VMware, SLURM, and Kubernetes.
Nexvec integrates Liqid’s Matrix software to dynamically allocate GPUs, memory, and storage across workloads, while leveraging Edgecore’s open Ethernet switch portfolio. The solution supports both scale-up and scale-out AI architectures with Broadcom Tomahawk and Jericho chipsets and includes the new Nous fabric controller for full lifecycle automation. Edgecore also emphasized its support for SONiC and third-party NOS integration through a certification program.
Limited availability for Nexvec begins immediately, with general availability expected by year-end. Edgecore’s approach builds on its long-standing open networking strategy while expanding into full-stack AI infrastructure for enterprise environments.
Key Points:
- Composable compute powered by Liqid with GPU, memory, and storage resource pooling
- PCIe and CXL-based fabric supports dynamic multi-tenant allocation
- Integrated orchestration with VMware, Kubernetes, and SLURM
- Ethernet AI fabrics using Broadcom Tomahawk, Trident, and Jericho platforms
- Includes Nous fabric controller for Day 0–Day 2 lifecycle management
- SONiC-first strategy with third-party NOS certification program
- General availability targeted by end of 2025
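The pooling model summarized in the points above can be illustrated with a toy allocator: resources sit in a shared pool and are composed onto workloads on demand, then released back. This is a conceptual sketch only; the class and method names are hypothetical and do not represent the Liqid Matrix or Nous controller APIs.

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """Toy model of a composable GPU/memory pool (illustrative only)."""
    gpus: int
    memory_gb: int
    allocations: dict = field(default_factory=dict)

    def compose(self, workload: str, gpus: int, memory_gb: int) -> bool:
        """Attach pooled GPUs and memory to a workload if capacity allows."""
        if gpus <= self.gpus and memory_gb <= self.memory_gb:
            self.gpus -= gpus
            self.memory_gb -= memory_gb
            self.allocations[workload] = (gpus, memory_gb)
            return True
        return False

    def release(self, workload: str) -> None:
        """Return a workload's resources to the shared pool."""
        gpus, memory_gb = self.allocations.pop(workload)
        self.gpus += gpus
        self.memory_gb += memory_gb

# A pool of 8 GPUs and 1 TB of memory shared across tenants
pool = ResourcePool(gpus=8, memory_gb=1024)
pool.compose("inference-svc", gpus=2, memory_gb=256)   # succeeds
pool.compose("agentic-batch", gpus=4, memory_gb=512)   # succeeds
pool.release("inference-svc")                          # resources return to pool
print(pool.gpus, pool.memory_gb)                       # 4 512
```

In a real deployment this bookkeeping would be done by the fabric-management software over PCIe/CXL, with the orchestrator (Kubernetes, SLURM, or VMware) requesting resources rather than a Python script.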
“We’re moving beyond networking to deliver a full-stack solution, integrating disaggregated networking and composable compute to simplify Enterprise AI adoption,” said Jun Shi, CEO of Accton Technology Group.