Converge Digest
Sunday, April 12, 2026

Credo Targets AI Memory Bottlenecks with Weaver Fanout Gearbox

November 3, 2025
in Semiconductors

Credo Technology Group has launched Weaver, a memory fanout gearbox engineered to break through the performance barriers of AI inference workloads. Weaver is the first product in Credo’s new OmniConnect family, addressing the growing need for scalable, energy-efficient memory architectures in data centers. The solution targets xPUs and AI accelerators constrained by traditional LPDDR5X and GDDR memory systems, delivering up to 6.4TB of memory capacity and 16TB/s of bandwidth using advanced 112G very short reach (VSR) SerDes technology.

AI inference workloads are increasingly limited by memory density and throughput rather than compute capability. Weaver’s fanout gearbox architecture delivers 10x higher I/O density than conventional designs, combining high performance with lower cost and energy efficiency compared to HBM-based systems. It also supports flexible DRAM packaging and late binding, enabling AI system builders to tune configurations as model requirements evolve. Designed for long-term compatibility, Weaver supports migration to next-generation memory protocols and includes telemetry and diagnostics for enhanced reliability.

Weaver is available for design-in now, with full product availability expected in the second half of 2026. The OmniConnect 112G VSR interface enables seamless integration into high-bandwidth AI infrastructure. Credo will detail the new technology during a November 10 webinar titled “Breaking the Memory Wall: Scaling AI Inference with Innovative Memory Fanout Architecture.”

  • Weaver enables up to 6.4TB of LPDDR5X memory and 16TB/s bandwidth
  • Leverages 112G VSR SerDes for up to 10x higher I/O density
  • Overcomes limitations of HBM cost, power, and supply constraints
  • Supports flexible DRAM packaging and telemetry for reliability
  • General availability expected in 2H 2026
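The headline figures imply a substantial SerDes lane count. A rough sanity check of the arithmetic, assuming raw line rate, a single direction of traffic, and no encoding or protocol overhead (none of which Credo has publicly specified):

```python
# Back-of-envelope check of Weaver's headline bandwidth claim.
# Hypothetical accounting: raw 112G line rate, one direction,
# no FEC/encoding overhead -- these assumptions are ours, not Credo's.
import math

TOTAL_BW_TBPS = 16 * 8        # 16 TB/s expressed in terabits per second
LANE_RATE_GBPS = 112          # one 112G VSR SerDes lane

# lanes needed to carry the aggregate bandwidth
lanes = math.ceil(TOTAL_BW_TBPS * 1000 / LANE_RATE_GBPS)
print(f"~{lanes} lanes of 112G VSR to carry 16 TB/s")
```

Under these simplified assumptions, 16TB/s works out to roughly 1,143 lanes of 112G VSR, which illustrates why Credo emphasizes the gearbox’s 10x I/O density claim.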

“Weaver is designed to deliver the flexibility and scalability required for future AI inference systems,” said Don Barnetson, Senior Vice President of Product at Credo. “This innovation empowers our partners to optimize memory provisioning, reduce costs, and accelerate deployment of advanced AI workloads.”

🌐 Analysis: Credo’s Weaver marks a significant expansion from its Ethernet and SerDes roots into the AI memory subsystem—one of the biggest bottlenecks in inference scaling. As hyperscalers push toward ever-larger models, optimizing DRAM-based architectures could offer a lower-cost and more power-efficient alternative to HBM. Credo’s move puts it in competition with emerging players like Eliyan and Astera Labs that are also tackling AI memory bandwidth challenges. If OmniConnect delivers as promised, Credo could become a central player in the next generation of AI system design.

🌐 We’re tracking the latest developments in networking silicon. Follow our ongoing coverage at: https://convergedigest.com/category/semiconductors/

Tags: Credo

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley


© 2025 Converge Digest - A private dossier for networking and telecoms.
