Converge Digest
Saturday, April 11, 2026

Cerebras Expands with Six New AI Data Centers

March 13, 2025
in Data Centers

Cerebras Systems has announced the launch of six new AI inference data centers across North America and Europe, significantly expanding its capacity for high-speed AI inference. Powered by Cerebras Wafer-Scale Engine (CS-3) systems, the facilities will deliver over 40 million tokens per second, which the company says makes it the largest dedicated AI inference cloud provider. The data centers in Oklahoma City and Montreal will be wholly owned and operated by Cerebras, while the remaining locations will be operated jointly with strategic partner G42. With 85% of the capacity located in the U.S., Cerebras is strengthening domestic AI infrastructure while serving enterprises, governments, and developers worldwide.

The expansion follows growing demand for Cerebras’ AI inference solutions, with customers such as Mistral AI, Perplexity, Hugging Face, and AlphaSense adopting its platform for ultra-fast model inference. The Oklahoma City Scale Datacenter, launching in June 2025, will house more than 300 CS-3 systems in a tornado- and seismic-shielded facility with custom water-cooling technology for large-scale AI workloads. The Enovum Montreal data center, set to go live in July 2025, will bring Cerebras-powered AI inference to the Canadian tech ecosystem for the first time, offering speeds the company claims are 10x faster than leading GPUs. By Q4 2025, additional facilities in the Midwest, Eastern U.S., and Europe will come online, reinforcing Cerebras’ position in the AI inference market.

Key Points:

• Six new Cerebras AI inference data centers launching across North America and Europe in 2025.

• 40 million+ tokens per second capacity, making it the largest dedicated AI inference cloud.

• Exclusive Cerebras-owned facilities in Oklahoma City and Montreal, with others co-operated by G42.

• Major AI customers, including Mistral AI, Perplexity, Hugging Face, and AlphaSense, leverage Cerebras for ultra-fast inference.

• Oklahoma City Scale Datacenter to house 300+ CS-3 systems in a Level 3+ secure, high-efficiency facility.

• Enovum Montreal facility will accelerate AI adoption in Canada, providing 10x faster inference than GPUs.
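To put the stated 40 million tokens-per-second aggregate in perspective, a quick back-of-envelope calculation shows the daily token volume that capacity implies at full utilization. The fleet-size figure below is a hypothetical assumption for illustration only; the article confirms only the aggregate rate and the 300+ systems at Oklahoma City.

```python
# Back-of-envelope math on Cerebras' stated inference capacity.
# Only TOTAL_TOKENS_PER_SEC comes from the article; the fleet
# size is an assumed, illustrative figure.

TOTAL_TOKENS_PER_SEC = 40_000_000   # stated aggregate capacity
ASSUMED_FLEET_SYSTEMS = 1_000       # hypothetical fleet-wide CS-3 count

# Implied per-system throughput under the assumed fleet size.
per_system = TOTAL_TOKENS_PER_SEC / ASSUMED_FLEET_SYSTEMS
print(f"~{per_system:,.0f} tokens/sec per CS-3 (illustrative)")

# Daily token volume at full, sustained utilization (86,400 s/day).
tokens_per_day = TOTAL_TOKENS_PER_SEC * 86_400
print(f"~{tokens_per_day / 1e12:.2f} trillion tokens/day fleet-wide")
```

At the stated rate, the fleet could process roughly 3.46 trillion tokens per day if fully utilized; the per-system number scales inversely with whatever the true fleet size turns out to be.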

“Cerebras is turbocharging the future of U.S. AI leadership with unmatched performance, scale, and efficiency. These new global datacenters will serve as the backbone for the next wave of AI innovation,” said Dhiraj Mallick, COO of Cerebras Systems.

Tags: Cerebras

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley


Converge Digest - A private dossier for networking and telecoms

© 2025 Converge Digest