Baya Systems, a start-up based in Santa Clara, California, introduced NeuraScale, a new switch fabric technology designed to address critical scalability and data movement challenges in AI infrastructure. Traditional crossbar switching architectures struggle to keep pace with increasing AI workload demands, limiting performance and node density. NeuraScale provides a non-blocking, high-throughput alternative, enabling a 100x increase in node density and scale compared to current solutions. The technology is built to support next-generation AI systems using high-density switching and modular chiplet-based designs.
NeuraScale supports 256 ports per chiplet with 1 terabit per second (Tbps) throughput, operating at over 2 GHz in 4nm technology with less than 20 nanoseconds of latency. Its fully modular architecture simplifies integration for AI system-on-chips (SoCs) and chiplet-based designs, ensuring seamless scaling across multiple processors. Designed for industry-standard protocols, NeuraScale is compatible with AMBA, UALink, UCIe, and Ultra Ethernet, aligning with emerging AI interconnect standards. The switch fabric is complemented by Baya Systems’ WeaverPro, a software-driven platform for design, analysis, and optimization, reducing development cycles and simplifying large-scale interconnect implementation.
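The headline figures above invite a quick back-of-the-envelope calculation. The sketch below, in Python, computes the aggregate switching capacity of one chiplet under the assumption (not stated explicitly in the release) that the 1 Tbps figure is per port; if it is instead the total per-chiplet throughput, the aggregate is simply 1 Tbps.

```python
# Back-of-the-envelope aggregate capacity for one NeuraScale chiplet.
# Assumption (ours, not the release's): 1 Tbps is per-port throughput.
PORTS_PER_CHIPLET = 256
PER_PORT_TBPS = 1.0  # assumed per-port figure

# Aggregate bit rate across all ports, in Tbps
aggregate_tbps = PORTS_PER_CHIPLET * PER_PORT_TBPS

# Convert to bytes per second (using 1 Tbps = 1e12 bits/s)
aggregate_bytes_per_s = aggregate_tbps * 1e12 / 8

print(f"Aggregate: {aggregate_tbps:.0f} Tbps "
      f"({aggregate_bytes_per_s:.1e} bytes/s)")
```

Either reading still has to be sustained within the quoted sub-20 ns port-to-port latency, which is where the non-blocking fabric claim matters.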
The UALink Consortium, which is working to create an open ecosystem for large-scale AI acceleration, welcomed Baya Systems as a key innovator in AI scaling. NeuraScale is already in use by leading partners developing next-generation scale-up and scale-out solutions, with broader availability expected in Q2 2025. The technology will be showcased at industry events, including upcoming AI and high-performance computing (HPC) conferences.
• Baya Systems introduces NeuraScale™, a switch fabric for AI scale-up and scale-out
• Enables 100x increase in node density and scale for next-generation AI infrastructure
• Supports 256 ports per chiplet with 1 Tbps throughput and sub-20 ns latency
• Modular design simplifies implementation across AI SoCs and chiplets
• Designed for UALink, UCIe, and Ultra Ethernet compliance
• Integrated with WeaverPro software for AI interconnect optimization
• Available to ecosystem partners in Q2 2025
“NeuraScale empowers a radical growth in node density, with highly energy-efficient and area-efficient switching solutions for next-gen AI – the key challenges to scale,” said Dr. Sailesh Kumar, CEO of Baya Systems.
How is data movement becoming a critical bottleneck in AI compute architectures?
Sailesh Kumar, CEO of Baya Systems, explains:
– Data movement between compute, I/O, memory, and caches is emerging as a fundamental challenge in scaling AI systems efficiently
– On-chip network protocols like UALink and Arm's AMBA are becoming essential for enabling compute elements to communicate effectively
– Their chiplet-aware solution creates unique fabric architectures that optimize data movement while maintaining low power and silicon costs

Want to be involved in our video series? Contact info@nextgeninfra.io
Check out the full showcase at https://ngi.fyi/25DCNetworkAIyt to learn more about data center networking for AI and cloud workloads