Converge Digest
Friday, April 10, 2026


Cerebras + Ranovus = Wafer-Scale Compute + Co-Packaged Optics

April 1, 2025
in All, Semiconductors

Cerebras Systems has secured a new contract from DARPA to develop a next-generation real-time compute platform combining its wafer-scale engine with Ranovus’ co-packaged optics. The system is designed to deliver significantly higher performance for AI and high-performance computing (HPC) workloads, addressing longstanding bottlenecks in memory and interconnect bandwidth while reducing power consumption.

The platform will integrate Cerebras’ Wafer-Scale Engine—which offers 7,000 times more memory bandwidth than GPUs—with Ranovus’ wafer-scale photonic interconnects to overcome communication constraints that hinder conventional systems. This architecture targets real-time simulations and AI inference tasks at a scale and speed not previously possible. It builds upon Cerebras’ experience with DARPA’s Digital RF Battlespace Emulator (DRBE) and introduces a new class of compute optimized for energy efficiency and latency-sensitive environments.

Beyond defense applications, the Cerebras-Ranovus system has dual-use potential for commercial sectors such as robotics, real-time sensor analytics, and complex digital twin simulations. The joint solution is expected to outperform today’s supercomputing clusters while consuming only a fraction of the power, setting a new bar for AI infrastructure across both public and private sectors.

• DARPA selected Cerebras to build a real-time AI/HPC platform using wafer-scale compute and co-packaged optics.

• Cerebras’ Wafer-Scale Engine delivers 7,000x GPU memory bandwidth, enabling extreme-scale inference and simulation.

• Ranovus’ co-packaged optics interconnect offers a 100x improvement in data capacity over current solutions.

• The integrated system reduces power draw compared with traditional GPU clusters that rely on discrete switches and pluggable optics.

• Applications include real-time battlefield simulations, AI sensor processing, and advanced commercial robotics.
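To put the claimed multipliers in perspective, here is a back-of-envelope sketch. The ~3 TB/s GPU memory-bandwidth baseline is an illustrative assumption for the calculation, not a figure from the article or either vendor.

```python
def scaled_bandwidth(baseline_tb_s: float, multiplier: float) -> float:
    """Aggregate bandwidth implied by applying a claimed multiplier to a baseline."""
    return baseline_tb_s * multiplier

# Hypothetical baseline (an assumption, not a published spec): roughly
# 3 TB/s of HBM bandwidth for a single modern GPU.
gpu_hbm_tb_s = 3.0

# The article's claim: the Wafer-Scale Engine offers 7,000x GPU memory bandwidth.
wse_tb_s = scaled_bandwidth(gpu_hbm_tb_s, 7_000)
print(f"Implied wafer-scale aggregate: {wse_tb_s:,.0f} TB/s (~{wse_tb_s / 1000:.0f} PB/s)")
```

Under that assumed baseline, the 7,000x claim implies on the order of 21 PB/s of aggregate on-wafer memory bandwidth, which is the scale the article argues conventional interconnects cannot feed, and why co-packaged optics matter here.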

“By solving these fundamental problems of compute bandwidth, communication IO and power per unit compute through Cerebras’ wafer scale technology plus optical integration with Ranovus, we will unlock solutions to some of the most complex problems in the realm of real-time AI and physical simulations,” said Andrew Feldman, co-founder and CEO of Cerebras.

Tags: Cerebras, OFC25, Ranovus

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley


© 2025 Converge Digest - A private dossier for networking and telecoms.
