The AI Infrastructure Summit 2025 in Santa Clara gathered leaders across compute, networking, and semiconductors to tackle the core challenge of our era: scaling infrastructure for AI. Alongside our on-camera interviews, we reported on major moves from Google, Meta, AWS, and Cerebras — underscoring shared themes around bandwidth density, photonic/electrical interconnects, open fabrics, and energy-efficient architectures.
Watch the full series on our YouTube channel: @NextGenInfra
Dudy Cohen — DriveNets
Dudy Cohen explains how DriveNets is rethinking the design of AI data center fabrics by borrowing principles from cloud-native architectures. Instead of rigid, chassis-based systems, DriveNets promotes disaggregated clusters of white-box switches. Cohen highlights how this approach allows operators to scale out rapidly and manage networks with software-driven flexibility. He argues that as AI workloads explode, the industry must adopt networking models that mirror the agility of hyperscalers.
Lisa Spelman — Cornelis Networks
Lisa Spelman, CEO of Cornelis Networks, discusses how her company is addressing the need for high-performance fabrics in AI and HPC environments. Building on its Intel heritage, Cornelis is positioning its fabric as a direct alternative to InfiniBand in large-scale clusters. Spelman emphasizes the importance of scalability, low latency, and energy efficiency as models grow ever larger. She also stresses the value of choice in interconnect technologies, arguing that healthy competition will accelerate innovation across the ecosystem.
Vishal Shukla — Aviz Networks
Vishal Shukla explains how Aviz Networks is enabling open networking at the heart of AI infrastructure. By leveraging SONiC and supporting multi-vendor fabrics, Aviz gives operators the ability to avoid lock-in and optimize networks for their specific needs. Shukla highlights how AI training demands are pushing operators toward faster integration cycles and more adaptable architectures. He believes that open networking software will be a decisive factor in lowering costs and improving flexibility in hyperscale environments.
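To make the interoperability point concrete, here is a minimal sketch of SONiC's declarative configuration style: the same PORT schema describes an interface regardless of which vendor's hardware it runs on. The port name and values below are illustrative placeholders, not taken from Aviz.

```python
import json

# Illustrative fragment of a SONiC config_db PORT table. The schema is
# vendor-neutral: the same keys configure a port on any supported switch.
config_db = {
    "PORT": {
        "Ethernet0": {
            "alias": "etp1",                 # hypothetical front-panel name
            "lanes": "0,1,2,3,4,5,6,7",      # 8 x 100G SerDes lanes
            "speed": "800000",               # port speed in Mb/s (800G)
            "fec": "rs",                     # Reed-Solomon FEC
            "mtu": "9100",
            "admin_status": "up",
        }
    }
}

print(json.dumps(config_db, indent=2))
```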
Sid Sheth — d-Matrix
Sid Sheth provides an update on d-Matrix’s chiplet-based inference solutions, including its Corsair and JetStream platforms. He describes how these architectures achieve greater performance per watt than traditional GPU-based approaches. Sheth also emphasizes the economic advantages of the solution, which reduces both power consumption and hardware footprint. For enterprises scaling AI workloads, d-Matrix aims to deliver efficiency and performance that challenge incumbent solutions.
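To make the performance-per-watt framing concrete, the sketch below compares two hypothetical systems that deliver the same throughput at different power draws. Every number is an illustrative placeholder, not a d-Matrix or GPU specification.

```python
# Back-of-envelope performance-per-watt comparison.
# All figures are illustrative placeholders, not vendor specs.
def perf_per_watt(tokens_per_sec: float, watts: float) -> float:
    """Throughput delivered per watt of board power."""
    return tokens_per_sec / watts

gpu_baseline   = perf_per_watt(tokens_per_sec=10_000, watts=700)
inference_asic = perf_per_watt(tokens_per_sec=10_000, watts=250)

print(f"GPU baseline  : {gpu_baseline:.1f} tokens/s/W")
print(f"Inference ASIC: {inference_asic:.1f} tokens/s/W")
print(f"Advantage     : {inference_asic / gpu_baseline:.1f}x")
```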
Ram Velaga — Broadcom
Ram Velaga outlines Broadcom’s roadmap for Ethernet in AI infrastructure, emphasizing its scalability and ecosystem support. He points to advances in 51.2 Tb/s switching, large-radix systems, and energy-efficient designs as proof that Ethernet can meet the demands of AI clusters. Velaga also addresses the industry debate over Ethernet versus InfiniBand, noting that Ethernet’s openness and rapid innovation cycles provide significant long-term advantages. His remarks reinforce Broadcom’s central role in shaping the backbone of AI data centers.
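The port math behind a 51.2 Tb/s ASIC helps explain why radix matters: 512 SerDes lanes at 100 Gb/s can be grouped into 64 ports of 800GbE (or 128 of 400GbE), and a larger radix means fewer switch tiers for a given cluster size. A quick sketch:

```python
# Port arithmetic for a 51.2 Tb/s switch ASIC:
# 512 SerDes lanes at 100 Gb/s, grouped into Ethernet ports.
ASIC_GBPS = 51_200   # aggregate switching capacity, Gb/s
LANE_GBPS = 100      # per-lane SerDes rate, Gb/s

print(f"SerDes lanes: {ASIC_GBPS // LANE_GBPS}")
for port_speed in (800, 400, 200, 100):
    print(f"{port_speed}GbE: {ASIC_GBPS // port_speed} ports "
          f"({port_speed // LANE_GBPS} lane(s) each)")

# Radix drives fabric scale: a two-tier leaf/spine of radix-64 switches
# reaches 64 * 64 / 2 endpoints at full bisection bandwidth.
print(f"Two-tier endpoints at 800GbE: {64 * 64 // 2}")
```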
Steve Klinger — Lightmatter
Steve Klinger introduces Lightmatter’s Passage photonic interposer, which uses silicon photonics to connect chiplets at unprecedented speeds. He explains how this optical approach reduces latency and delivers bandwidth density beyond what electrical interconnects can achieve. Klinger underscores that as AI models expand, conventional scaling techniques are insufficient without photonics. Lightmatter’s innovations highlight how optical fabrics are poised to become essential for next-generation compute systems.
John Simpson — SiFive
John Simpson explains how SiFive is advancing RISC-V technology for AI applications. Because the RISC-V instruction set is open and extensible, SiFive enables developers to tailor processors specifically for AI workloads. Simpson highlights how this flexibility reduces costs and accelerates innovation across the semiconductor ecosystem. His remarks underscore the growing role of RISC-V in reshaping the economics of AI compute.
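As a concrete illustration of that extensibility, RISC-V reserves opcode space (custom-0 through custom-3) for vendor-defined instructions. The sketch below packs a hypothetical R-type multiply-accumulate instruction into the custom-0 opcode; the mnemonic and function selectors are invented for illustration and are not a SiFive extension.

```python
# Encode a hypothetical R-type instruction ("vmac rd, rs1, rs2") in
# RISC-V's reserved custom-0 opcode space (opcode = 0b0001011).
CUSTOM_0 = 0b0001011

def encode_r_type(funct7: int, rs2: int, rs1: int, funct3: int, rd: int,
                  opcode: int = CUSTOM_0) -> int:
    """Pack fields into the standard 32-bit R-type layout."""
    return ((funct7 & 0x7F) << 25 | (rs2 & 0x1F) << 20 | (rs1 & 0x1F) << 15
            | (funct3 & 0x7) << 12 | (rd & 0x1F) << 7 | (opcode & 0x7F))

# vmac x10, x11, x12 with made-up funct7/funct3 selectors
word = encode_r_type(funct7=0b0000001, rs2=12, rs1=11, funct3=0b000, rd=10)
print(f"0x{word:08x}")  # 0x02c5850b
```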
Vladimir Stojanovic — Ayar Labs
Vladimir Stojanovic discusses Ayar Labs’ progress in optical I/O and co-packaged optics. He explains how moving data with photons instead of electrons eliminates key bandwidth and power bottlenecks. Stojanovic also touches on Ayar’s partnerships with major chipmakers to bring optical I/O into commercial AI systems. His vision illustrates why photonic technologies are critical to the long-term scalability of data centers.
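A rough energy-per-bit calculation shows why moving photons instead of electrons matters at scale: power equals bandwidth in Tb/s times energy in pJ/bit. The pJ/bit figures below are illustrative ballparks, not Ayar Labs specifications.

```python
# I/O power at a given aggregate bandwidth and energy cost per bit.
# 1 Tb/s = 1e12 bit/s and 1 pJ = 1e-12 J, so W = (Tb/s) * (pJ/bit).
def io_power_watts(bandwidth_tbps: float, pj_per_bit: float) -> float:
    return bandwidth_tbps * pj_per_bit

AGG_BW_TBPS = 100  # assumed aggregate off-chip I/O for a large accelerator
for label, pj in [("electrical SerDes (~5 pJ/b)", 5.0),
                  ("optical I/O target (~1 pJ/b)", 1.0)]:
    print(f"{label}: {io_power_watts(AGG_BW_TBPS, pj):.0f} W "
          f"at {AGG_BW_TBPS} Tb/s")
```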
Kurtis Bowman — UALink Consortium
Kurtis Bowman introduces the UALink Consortium, a new industry initiative to develop an open, high-speed interconnect for AI accelerators. He explains how the founding members are collaborating to ensure interoperability and prevent vendor lock-in. Bowman highlights milestones achieved to date and the roadmap for standardization. His comments reflect a broader industry push toward openness and shared innovation in AI hardware ecosystems.
Moshe Tanach — NeuReality
Moshe Tanach describes how NeuReality is tackling inefficiencies in AI inference by rethinking the system architecture. Instead of focusing only on compute engines, NeuReality offloads the non-compute tasks that bottleneck end-to-end performance. Tanach argues this approach dramatically reduces latency and cost for enterprise AI deployments. His perspective shows how holistic system design can accelerate AI adoption in real-world data centers.
Vivek Raghunathan — Xscape Photonics
Vivek Raghunathan introduces Xscape Photonics’ work in integrating multi-wavelength lasers directly onto silicon photonics. This breakthrough enables higher-bandwidth optical interconnects for hyperscale training clusters. Raghunathan explains how their approach improves efficiency and lowers system cost compared to discrete laser assemblies. His insights highlight how optical integration is unlocking the next phase of AI infrastructure.
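The bandwidth arithmetic behind multi-wavelength integration is simple but powerful: each added wavelength multiplies the capacity of a single fiber. Wavelength counts and line rates below are illustrative assumptions, not Xscape Photonics specifications.

```python
# WDM scaling: aggregate bandwidth per fiber grows linearly with the
# number of wavelengths carried. All values are illustrative.
def fiber_bandwidth_gbps(num_wavelengths: int, gbps_per_lambda: int) -> int:
    return num_wavelengths * gbps_per_lambda

for n in (4, 8, 16):
    print(f"{n} wavelengths x 100G = {fiber_bandwidth_gbps(n, 100)} Gb/s per fiber")
```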
Mark Kuemerle — Marvell (Segment 1)
Mark Kuemerle discusses Marvell’s custom ASIC portfolio and how advanced packaging techniques are reshaping AI silicon. He highlights a new die-to-die interconnect IP block that delivers over three times the bandwidth density of standards-based solutions. Kuemerle emphasizes how Marvell’s expertise enables hyperscalers to design silicon tailored for their unique AI workloads. His remarks illustrate the growing importance of co-design between silicon and packaging in meeting performance demands.
Mark Kuemerle — Marvell (Segment 2)
In a deeper dive, Kuemerle elaborates on the unique die-to-die interface Marvell has developed for next-generation AI systems. He explains how the design delivers both higher bandwidth density and significantly lower power consumption. Kuemerle notes that this technology is already enabling customers to push the boundaries of data-center-scale AI compute. His comments reinforce Marvell’s position at the forefront of custom silicon innovation.
Marc Austin — Hedgehog
Marc Austin explains how Hedgehog is simplifying AI infrastructure deployment with open and automated networking solutions. He emphasizes the importance of making hyperscale-grade capabilities accessible to smaller organizations and enterprises. Austin describes how Hedgehog’s platform reduces complexity, accelerates rollout, and lowers the barrier to entry for advanced AI deployments. His perspective highlights the democratization of infrastructure as a key enabler of broader AI adoption.
📣 Participate in our next series: We’re curating the Data Center Networking for AI & Cloud Workloads 2025 video showcase starting in November. If your company is advancing AI fabrics, photonics, interconnects, or power/cooling for AI data centers, contact us to be featured on NextGenInfra.io.
