• Home
  • Events Calendar
  • Blueprint Guidelines
  • Privacy Policy
  • Subscribe to Daily Newsletter
  • NextGenInfra.io
Converge Digest
Friday, April 10, 2026

NeoClouds: New Powerhouses of AI Infrastructure

October 6, 2025
in AI Infrastructure, Feature

Defining the NeoCloud Era

A new class of infrastructure provider—the NeoCloud—has emerged to power the exponential growth of artificial intelligence. These companies sit between hyperscalers and traditional colocation operators, purpose-built to deliver GPU-accelerated compute for AI training, inference, and high-performance computing (HPC).

Instead of broad, multi-service portfolios like AWS, Azure, or Google Cloud, NeoClouds specialize in GPU-as-a-Service (GPUaaS)—bare-metal or near-metal access to the latest NVIDIA and AMD accelerators with InfiniBand or NVLink interconnects and AI-optimized software stacks. Their business models hinge on rapid deployment, energy efficiency, and cost predictability—offering developers queue-free compute at two to seven times lower cost than general-purpose hyperscale cloud instances.
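The cost claim above can be made concrete with a back-of-envelope calculation. The hourly rates below are hypothetical placeholders chosen to fall inside the two-to-seven-times range cited, not published prices from any provider:

```python
# Illustrative cost comparison between a general-purpose hyperscaler GPU
# instance and a NeoCloud bare-metal offering. Both rates are hypothetical.
HYPERSCALER_RATE = 6.00  # $/GPU-hour (hypothetical)
NEOCLOUD_RATE = 2.00     # $/GPU-hour (hypothetical)

def training_run_cost(num_gpus: int, hours: float, rate: float) -> float:
    """Total cost of a training run at a flat per-GPU-hour rate."""
    return num_gpus * hours * rate

gpus, hours = 512, 72  # e.g. a three-day run on a 512-GPU cluster
hyper = training_run_cost(gpus, hours, HYPERSCALER_RATE)
neo = training_run_cost(gpus, hours, NEOCLOUD_RATE)
print(f"Hyperscaler: ${hyper:,.0f}  NeoCloud: ${neo:,.0f}  "
      f"Savings: {hyper / neo:.1f}x")
```

At these assumed rates, a single multi-day training run differs in cost by six figures, which is why queue-free, predictable GPU pricing is the core of the NeoCloud pitch.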

Tremendous Investment and Rapid Build-outs

The rise of NeoClouds mirrors the surge in global AI data-center investment. Dell’Oro Group reports that AI-optimized servers already account for roughly one-third of worldwide data-center spending and could surpass half of all infrastructure investment by 2029. IDC forecasts that cloud infrastructure spending will grow more than 30% year-over-year in 2025, largely driven by GPU-rich AI configurations.
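A quick sketch shows how fast a 30% annual rate compounds over the forecast window. The $100B starting figure is a hypothetical placeholder for illustration, not an IDC number:

```python
# Compound an assumed 30% annual growth rate forward from 2025.
# The $100B base is hypothetical; only the growth rate comes from the text.
spend = 100.0  # $B, hypothetical base
for year in range(2025, 2030):
    print(f"{year}: ~${spend:,.0f}B")
    spend *= 1.30
# Five years of 30% growth multiplies the base roughly 3.7x.
```

Sustained rates like this are why AI-optimized servers can plausibly go from one-third to more than half of total infrastructure spend within the decade.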

This explosive trajectory is fueled by tens of billions in new financing—both equity and debt—as NeoClouds rush to secure GPUs, power, and space. Many have transitioned from crypto-mining or rendering backgrounds to become full-scale AI infrastructure specialists. Others are building multi-gigawatt campuses tied to long-term renewable power contracts, betting that AI workloads will demand both density and sustainability.

A Fast-Evolving Market and Business Model

The NeoCloud business model evolves almost monthly. Some players operate as publicly traded data-center REITs, funding massive, energy-integrated campuses. Others concentrate on AI orchestration software, offering managed clusters or LLM-specific tooling atop their GPU infrastructure.

Despite varied strategies, three common pressures shape the field:

  • Hardware access – early allocation of scarce GPUs (Hopper, Blackwell, Rubin).
  • Networking and cooling – low-latency fabrics and liquid-cooled designs to sustain cluster performance.
  • Utilization discipline – balancing rapid capacity growth with stable demand and contract bookings.

A Big List of NeoClouds (October 2025)

The following table provides a ranked snapshot of the major NeoCloud providers by estimated GPU capacity.  It highlights their funding, infrastructure footprint, and geographic reach—illustrating how each contributes to the AI compute backbone.

| Company | Focus / Stack / Partners | Funding / Status | GPU Capacity & Major Locations* |
|---|---|---|---|
| CoreWeave | GPUaaS for AI training, inference, and VFX; bare-metal with InfiniBand/NVLink; partners: OpenAI, Microsoft, NVIDIA, IBM. | $12.7B total (equity + debt); NVIDIA strategic investor. | ≈250k H100 equiv (claimed); 32 sites across US, UK & EU. |
| Lambda Labs | GPU cloud + servers; bare-metal clusters to 1k GPUs; partners: NVIDIA, Meta, Hugging Face. | $500M+ funding; ≈$1.5B valuation. | ≈50k GPUs (est.); US (SF, TX). |
| IREN (Iris Energy) | Renewable-powered GPUaaS; liquid-cooled; partners: NVIDIA, AMD, Poolside AI. | Public (NASDAQ); $674M GPU capex 2025. | 23k GPUs (disclosed); US (TX), Canada (BC). |
| Crusoe | Low-carbon GPU DCs using stranded gas and renewables; partners: Oracle, NVIDIA, Microsoft. | $1.26B+ funding (incl. debt). | ≈40k GPUs (est.); US (ND, TX, WY), Iceland planned. |
| Nebius | EU-centric GPU platform with custom HW for sovereign AI; partners: Microsoft, NVIDIA, Accel. | $700M+ funding; private. | ≈30k GPUs (est.); Finland, France, US, Iceland. |
| Vultr | Full-stack cloud offering GPUaaS; developer-centric; partners: NVIDIA, StackPath. | $146M equity; $500M+ debt. | ≈20k GPUs (est.); 32 regions worldwide. |
| Fermi America | Energy-integrated AI campuses (SMR/wind/solar); partners: Texas Tech, NuScale, NextEra. | Public (IPO Oct 2025); pre-ops. | Pre-deployment; Amarillo TX + planned NM sites. |
| Nscale | Nordic hydro-powered AI clusters for sovereign compute; partners: OpenAI/Aker, NVIDIA. | $200M+ funding (2025). | ≈10k GPUs (est.); Norway & Sweden. |
| Paperspace (DigitalOcean) | MLOps + GPU instances within DigitalOcean; partners: NVIDIA, Hugging Face. | Acquired 2023 by DigitalOcean. | ≈15k GPUs (est.); US (NJ, CA), EU (AMS). |
| RunPod | On-demand pods with Secure/Community Clouds; partners: NVIDIA, Stability AI. | $100M+ funding (2024). | ≈8k GPUs (est.); Global (US/EU/Asia). |
| Voltage Park | Bare-metal deep-learning clusters; partners: NVIDIA, Equinix. | $150M Series B (2025). | ≈7k GPUs (est.); US (WA, TX, VA, UT). |
| Applied Digital (APLD) | US GPUaaS at high-density ND sites; partner: NVIDIA. | Public (NASDAQ); ≈$500M funding. | ≈5k GPUs (est.); North Dakota campuses. |
| Together AI | Open-source LLM platform + GPU cloud; partner: Meta. | $200M funding. | ≈4k GPUs (est.); Global (25+ cities). |

*GPU capacities are company-reported or analyst-estimated; independent verification varies.
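For readers who want to work with the snapshot programmatically, a minimal sketch of a few rows of the table as Python records, ranked by the estimated-capacity column (figures are the company-reported or analyst-estimated values above, in thousands of GPUs):

```python
# Represent a subset of the provider snapshot as records and rank by
# estimated GPU capacity. Figures mirror the table; all are estimates.
from dataclasses import dataclass

@dataclass
class NeoCloud:
    name: str
    gpus_thousands: float  # claimed or estimated GPU count, thousands
    locations: str

providers = [
    NeoCloud("CoreWeave", 250, "US, UK & EU"),
    NeoCloud("Lambda Labs", 50, "US (SF, TX)"),
    NeoCloud("Crusoe", 40, "US (ND, TX, WY), Iceland planned"),
    NeoCloud("Nebius", 30, "Finland, France, US, Iceland"),
    NeoCloud("IREN (Iris Energy)", 23, "US (TX), Canada (BC)"),
]

for p in sorted(providers, key=lambda p: p.gpus_thousands, reverse=True):
    print(f"{p.name:<20} ~{p.gpus_thousands:>4.0f}k GPUs  ({p.locations})")
```

Ranking by a single capacity column is a simplification; a fuller comparison would also weigh interconnect generation, power contracts, and contracted utilization.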

The Strategic Impact of NeoClouds

NeoClouds have fractured the old model of hyperscale dominance by proving that specialization can outperform scale—especially when the bottleneck is GPUs, not general compute. By focusing on purpose-built interconnects, efficient cooling, and rapid provisioning, they have given AI developers an alternative to long hyperscaler queues and unpredictable availability. In many ways, NeoClouds are serving as the shock absorbers of the AI boom, bridging global demand and constrained chip supply.

Their rise is also catalyzing change upstream and downstream. Upstream, GPU vendors now cultivate strategic relationships with these operators, effectively treating them as “Tier 1” customers alongside hyperscalers. Downstream, enterprises and startups gain access to on-demand, sovereign, or sustainable AI compute—often in regions where hyperscaler infrastructure remains limited. As Dell’Oro Group notes, this diffusion of capital and expertise is accelerating the overall AI build-out, reinforcing demand for high-performance servers, power, and networking fabrics.

Data Center Networking for AI Series
Join the Conversation:
Data Center Networking for AI
Converge Digest and NextGenInfra.io are bringing together the leaders shaping AI-driven data center networks—from optics and fabrics to silicon and orchestration. Explore how the industry is re-architecting the network for the AI era through exclusive video interviews, expert reports, and collaboration opportunities.
Learn More & Participate
Tags: Feature

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley

Converge Digest

A private dossier for networking and telecoms


© 2025 Converge Digest - A private dossier for networking and telecoms.
