Converge Digest
d-Matrix Debuts Corsair AI Platform with 150 Tbps Bandwidth for AI Inference

November 19, 2024
in Semiconductors, Start-ups

d-Matrix, a start-up based in Santa Clara, California, has launched “Corsair”, a compute platform designed specifically for AI inference in modern data centers. Built on d-Matrix’s proprietary Digital In-Memory Compute (DIMC) architecture, Corsair integrates memory and compute for high-performance generative AI applications, delivering faster token generation, improved energy efficiency, and lower total cost of ownership than GPUs and other systems. The launch addresses growing enterprise demand for scalable, cost-effective AI infrastructure.

Corsair supports the increasing computational needs of advanced AI models, such as reasoning agents and interactive video generation.

The company says its DIMC architecture overcomes the memory bandwidth limitations of traditional inference systems by tightly coupling memory and compute within each chip. The platform scales using DMX Link for high-speed chiplet connectivity and DMX Bridge for inter-package communication. These capabilities, combined with native support for the Micro-scaling (MX) block floating point standard, enable Corsair to achieve ultra-fast processing speeds, making generative AI applications more practical for enterprise use.
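The shared-scale idea behind the Micro-scaling (MX) standard can be illustrated with a minimal NumPy sketch: each block of values shares one power-of-two scale, and the elements are stored as low-bit integers. The block size of 32 and 8-bit signed integers follow the OCP MX convention, but the rounding and scale selection here are simplified assumptions, not d-Matrix's implementation.

```python
import numpy as np

def mx_quantize(x, block=32, mant_bits=8):
    """Block floating point sketch: one power-of-two scale per block,
    elements stored as signed integers of `mant_bits` bits."""
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % block                 # pad so length divides evenly
    blocks = np.pad(x, (0, pad)).reshape(-1, block)
    # shared scale: smallest power of two covering the block's max magnitude
    max_abs = np.abs(blocks).max(axis=1, keepdims=True)
    exp = np.ceil(np.log2(np.maximum(max_abs, np.finfo(float).tiny)))
    qmax = 2 ** (mant_bits - 1) - 1         # e.g. 127 for 8-bit
    scale = 2.0 ** exp / qmax
    q = np.round(blocks / scale).astype(np.int32)
    return q, scale, pad

def mx_dequantize(q, scale, pad):
    x = (q * scale).reshape(-1)
    return x if pad == 0 else x[:-pad]
```

Storing one scale per block (instead of a full exponent per element, as in FP16/FP32) is what lets the arithmetic run at narrow integer width while still covering a wide dynamic range.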

Each Corsair PCIe Gen5 card provides 2400 TFLOPs of 8-bit compute, 2 GB of integrated performance memory, and up to 256 GB of off-chip capacity memory. The platform delivers an aggregate memory bandwidth of 150 Tbps, significantly outpacing HBM-based systems. d-Matrix claims up to 10x faster token generation and 3x better cost and energy efficiency. Sampling for early-access customers has begun, with general availability expected in Q2 2025.
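A back-of-the-envelope roofline estimate shows why that bandwidth figure matters: in bandwidth-bound decoding, each generated token streams the full set of model weights, so memory bandwidth divided by model size caps the single-stream token rate. The 150 Tbps figure is from d-Matrix; the 7-billion-parameter model at 8-bit weights is an illustrative assumption, not a vendor benchmark.

```python
def tokens_per_second(bandwidth_tbps: float, params_billion: float,
                      bytes_per_param: float = 1.0) -> float:
    """Upper bound on decode rate for one stream when weight streaming
    is the bottleneck: bandwidth / model size in bytes."""
    bytes_per_sec = bandwidth_tbps * 1e12 / 8      # Tbps -> bytes/s
    model_bytes = params_billion * 1e9 * bytes_per_param
    return bytes_per_sec / model_bytes

# Hypothetical 7B model, 8-bit weights, at the quoted 150 Tbps
rate = tokens_per_second(150, 7)
print(f"~{rate:,.0f} tokens/s upper bound for one stream")
```

Real systems land below this ceiling (KV-cache traffic, batching, and interconnect overheads all cost bandwidth), but the estimate makes clear that token rate scales with bandwidth, not raw TFLOPs, for this workload.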

Key Points

• Technology: DIMC architecture integrates compute and memory for ultra-high bandwidth and low latency.

• Performance: 150 Tbps memory bandwidth, 2400 TFLOPs compute per card, 10x faster token generation speeds.

• Scalability: DMX Link™ for chiplet interconnect; DMX Bridge™ for multi-card scaling.

• Efficiency: 3x better TCO and energy efficiency than GPUs.

• Form Factor: Standard PCIe Gen5 full-height full-length cards.

Sid Sheth, CEO of d-Matrix, stated: “Corsair redefines AI inference with blazing-fast token generation and unparalleled scalability, making generative AI viable for enterprises worldwide.”

  • Earlier this year, d-Matrix introduced Jayhawk II, a next-generation generative AI compute platform designed to tackle the cost, latency, and scalability issues of deploying large language models (LLMs) in data centers. The silicon features an enhanced Digital In-Memory Compute (DIMC) engine paired with chiplet-based interconnect technology, utilizing the Open Compute Project’s Bunch of Wires (BoW) PHY interconnect standard. Jayhawk II delivers a 40x improvement in memory bandwidth compared to high-end GPUs, significantly boosting throughput and reducing latency for applications such as ChatGPT, Meta’s Llama2, and Falcon. Optimized for LLMs ranging from 3 billion to 40 billion parameters, Jayhawk II supports floating point and block floating point numerics, compression, and sparsity, achieving 10–20x better total cost of ownership (TCO) and inference performance versus GPU-based solutions. This platform builds on the original Jayhawk release, scaling from 30 TOPs/W to 150 TOPs/W on a 6nm process while enabling prompt caching for efficient generative AI workflows.
  • d-Matrix, established in 2019 by CEO Sid Sheth and CTO Sudeep Bhoja, focuses on high-efficiency AI compute solutions for data centers. Both founders bring extensive experience from their previous roles at Inphi and Broadcom, where they developed power-efficient compute and interconnect solutions for data centers over the past two decades. In September 2023, d-Matrix secured $110 million in Series B funding led by Temasek, with participation from investors including M12, Microsoft’s venture fund, and Playground Global. This funding supports the commercialization of d-Matrix’s Digital In-Memory Compute (DIMC) technology, aiming to enhance AI inference performance and efficiency.
Tags: d-Matrix

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley

© 2025 Converge Digest - A private dossier for networking and telecoms.
