Converge Digest
Friday, April 10, 2026

Marvell Intros Co-Packaged Optics for Custom AI Accelerators

January 6, 2025
in Semiconductors

Marvell introduced a custom AI accelerator (XPU) architecture featuring integrated co-packaged optics (CPO) to enhance AI server performance. The new design supports higher bandwidth density and longer-reach connections for AI server scaling, increasing XPU density from tens per rack with copper interconnects to hundreds across multiple racks using CPO. The technology offers improved data transfer rates and reduced latency, supporting next-generation AI infrastructure for cloud hyperscalers.

The custom AI accelerator architecture integrates Marvell’s 3D SiPho Engines with XPU compute silicon, high-bandwidth memory (HBM), and other chiplets on a single substrate. It uses high-speed SerDes, die-to-die interfaces, and advanced packaging to eliminate the need for copper cabling. Marvell’s CPO technology supports connection reaches up to 100 times longer than traditional electrical cabling, with enhanced power efficiency and minimal latency. The 6.4 Tbps 3D SiPho Engine, which supports 200 Gbps electrical and optical interfaces across 32 channels, offers twice the bandwidth and density while reducing power consumption per bit by 30%.
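The headline figures above are internally consistent, which a quick arithmetic check makes explicit. The channel count, lane rate, and 30% figure come from the article; the baseline energy-per-bit value below is a hypothetical placeholder for illustration only:

```python
# Sanity-check the 3D SiPho Engine figures quoted in the article.
channels = 32
lane_rate_gbps = 200              # per-channel electrical/optical rate

aggregate_gbps = channels * lane_rate_gbps
print(aggregate_gbps / 1000, "Tbps")   # → 6.4 Tbps, matching the stated total

# A 30% reduction in power per bit means the new interface moves the
# same data at 70% of the baseline energy. The baseline is hypothetical.
baseline_pj_per_bit = 10.0
new_pj_per_bit = baseline_pj_per_bit * (1 - 0.30)
print(round(new_pj_per_bit, 2), "pJ/bit")
```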

Marvell notes that its silicon photonics technology, deployed in its COLORZ data center interconnect modules for over eight years, has recorded more than 10 billion field hours. The company continues to expand its portfolio, including SerDes and die-to-die IP for custom XPUs, PCIe retimers, and a range of optical DSPs for data center interconnect applications. Multiple customers are evaluating the CPO technology for next-generation AI systems.

• Custom XPU design with Co-Packaged Optics (CPO) for AI servers

• XPU density increases from tens per rack to hundreds across multiple racks

• Integrated 3D SiPho Engine supports 6.4 Tbps with 200 Gbps electrical and optical interfaces

• Reduces power consumption per bit by 30% compared to 100G interfaces

• Silicon photonics technology field-tested for over 10 billion hours
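The density claim in the bullets can be illustrated with rough numbers. The per-rack XPU count and rack counts below are assumptions chosen only to show how a ~100x reach increase turns a single-rack copper domain into a multi-rack optical one; they are not Marvell figures:

```python
# Illustrative scale-up arithmetic: copper reach confines a cluster to
# roughly one rack, while CPO's longer optical reach lets the same
# interconnect domain span several racks.
xpus_per_rack = 32       # hypothetical: "tens per rack" with copper
racks_with_copper = 1    # copper interconnects reach within one rack
racks_with_cpo = 10      # hypothetical multi-rack domain enabled by CPO

copper_cluster = xpus_per_rack * racks_with_copper  # tens of XPUs
cpo_cluster = xpus_per_rack * racks_with_cpo        # hundreds of XPUs
print(copper_cluster, cpo_cluster)  # → 32 320
```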

“AI scale-up servers require connectivity with higher signaling speeds and longer distances to support unprecedented XPU cluster sizes,” said Nick Kucharewski, senior vice president and general manager of the Network Switching Business Unit at Marvell. “Integrating co-packaged optics into custom XPUs is the logical next step to scale performance with higher interconnect bandwidths and longer reach.”

“Silicon photonics is vital for scaling accelerated infrastructure connectivity to address increasing bandwidth demands, interconnect distances, power consumption, and total cost of ownership,” said Radha Nagarajan, senior vice president and chief technology officer of Optical Platforms at Marvell. “Since 2017, Marvell has pioneered the delivery of high-volume silicon photonics devices to top hyperscalers and leveraged this expertise to create a cutting-edge CPO architecture for the killer CPO use case of custom XPU connectivity.”

Source: Marvell
Tags: CPO, Marvell

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley


© 2025 Converge Digest - A private dossier for networking and telecoms.