• Home
  • Events Calendar
  • Blueprint Guidelines
  • Privacy Policy
  • Subscribe to Daily Newsletter
  • NextGenInfra.io
Converge Digest
Friday, April 10, 2026


d-Matrix Raises $275 Million to Accelerate the Age of AI Inference

November 12, 2025

d-Matrix secured $275 million in Series C funding to advance its full-stack inference platform for hyperscale and enterprise data centers. The round, which values the company at $2 billion, was led by Bullhound Capital, Triatomic Capital, and Temasek, with participation from QIA, EDBI, and Microsoft’s M12 venture fund. The investment brings total funding to $450 million as d-Matrix scales global deployments of its Corsair inference accelerators, JetStream networking NICs, and Aviator software suite.

Founded in 2019, d-Matrix has focused exclusively on AI inference—the stage where trained models run continuously at scale. Its platform integrates compute and memory in a single architecture to deliver up to 10× the performance, 3× lower cost, and up to 5× better energy efficiency than GPU-based systems. On a Llama 70B model, the platform achieves 30,000 tokens per second at 2 ms per-token latency, allowing 100B-parameter models to run in a single rack. The company’s new SquadRack reference architecture—developed with Arista, Broadcom, and Supermicro—extends its open ecosystem approach.
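The quoted figures imply a particular mix of concurrency and per-stream speed. A minimal back-of-envelope sketch, assuming the 30,000 tokens/s is aggregate platform throughput and the 2 ms figure is the inter-token latency seen by each individual stream (the announcement does not spell out either assumption):

```python
# Hypothetical sanity check of the quoted Llama 70B figures.
# Assumptions (not stated in the article): 30,000 tokens/s is aggregate
# throughput; 2 ms is the per-token latency within a single stream.
aggregate_tps = 30_000        # tokens/s across the whole platform
per_token_latency_ms = 2      # ms between successive tokens in one stream

# One stream emitting a token every 2 ms produces 500 tokens/s.
per_stream_tps = 1000 / per_token_latency_ms

# Under these assumptions, the aggregate figure corresponds to
# roughly 60 concurrent streams served at full per-token speed.
implied_streams = aggregate_tps / per_stream_tps

print(per_stream_tps)   # 500.0
print(implied_streams)  # 60.0
```

The point of the arithmetic is only that the two headline numbers are consistent with a batched-serving scenario; the actual batch sizes and stream counts d-Matrix uses are not disclosed.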

Sid Sheth, CEO and co-founder of d-Matrix, said, “When we started d-Matrix six years ago, training was seen as AI’s biggest challenge, but we knew that a new set of challenges would be coming soon. We’ve spent the last six years building the solution: a fundamentally new architecture that enables AI to operate everywhere, all the time.”

• Series C: $275 million led by Bullhound Capital, Triatomic Capital, and Temasek

• Valuation: $2 billion | Total funding: $450 million

• HQ: Santa Clara, CA | Global offices in Toronto, Sydney, Bangalore, and Belgrade

• Core Products: Corsair inference accelerators, JetStream NICs, Aviator software stack

• Performance: 30,000 tokens/s at 2 ms per-token latency on Llama 70B; 100B parameters in one rack

d-Matrix Portfolio Highlights

• Corsair™ (Inference Accelerator) — Compute-in-memory architecture delivering up to 10× performance, 3× lower cost, and 3–5× better energy efficiency than GPUs. Enables ultra-dense inference workloads with low power per token. Applications: large-language-model (LLM) inference; low-latency generative AI serving; efficient on-prem and cloud deployment at scale.

• JetStream™ (Networking Accelerator / NIC) — High-speed interconnect providing ultra-low-latency data exchange between inference nodes. Designed for disaggregated compute clusters and rack-scale networking. Applications: AI fabrics; multi-rack inference clusters; hybrid cloud and sovereign AI connectivity.

• Aviator™ (Software Stack / Runtime) — Full-stack inference software for orchestration, workload scheduling, telemetry, and latency optimization. Integrates seamlessly with Corsair and JetStream hardware. Applications: inference orchestration for hyperscalers; real-time token streaming; workload management for sovereign AI clouds.

• SquadRack™ (Reference Architecture) — Open, standards-based rack-level blueprint developed with Arista, Broadcom, and Supermicro. Enables interoperable, vendor-neutral inference deployments. Applications: rack-scale integration; OEM/ODM systems; interoperable AI infrastructure for enterprise and hyperscale data centers.

🌐 Analysis:

The funding underscores how AI inference has become the next battleground in AI infrastructure as training hardware saturates hyperscaler budgets. d-Matrix’s compute-in-memory architecture tackles the latency and power limits that GPUs face in serving massive language models. The company’s ecosystem alliances with Arista and Broadcom link it to key networking and silicon supply chains, while backing from Microsoft’s M12 suggests future alignment with Azure AI deployments.

🌐 We’re tracking the latest developments in networking silicon. Follow our ongoing coverage at: https://convergedigest.com/category/semiconductors/

🌐 We’re launching the “Data Center Networking for AI” series on NextGenInfra.io and inviting companies building real solutions—silicon, optics, fabrics, switches, software, orchestration—to share their views on video and in our expert report. To get involved, send a note to jcarroll@convergedigest.com or info@nextgeninfra.io.

Tags: d-Matrix

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley


© 2025 Converge Digest - A private dossier for networking and telecoms.
