Converge Digest
Thursday, April 16, 2026


Supermicro Expands NVIDIA HGX B200 Portfolio with Front I/O Liquid and Air Cooling

August 11, 2025
in Data Centers

Supermicro has expanded its NVIDIA Blackwell-based AI server portfolio with new front I/O configurations in both liquid- and air-cooled designs, aimed at improving efficiency, scalability, and serviceability in large-scale AI factories. The new 4U DLC-2 direct liquid-cooled system delivers up to 40% data center power savings and supports dual-socket Intel Xeon 6 6700 Series processors with eight NVIDIA HGX B200 GPUs connected via 5th Gen NVLink at 1.8TB/s. The system offers up to 8TB of DDR5 memory, 8 hot-swap E1.S NVMe bays, and front-accessible NICs, DPUs, storage, and management ports. Warm-water cooling at inlet temperatures up to 45°C reduces chiller requirements and cuts water consumption by up to 40%.

The new 8U front I/O air-cooled system mirrors the architecture of the DLC-2 model but is designed for facilities without liquid cooling infrastructure. It maintains full GPU tray height while using a reduced-height CPU tray for a more compact footprint, supporting the same CPU, GPU, and memory configurations. Both systems are optimized for large-scale AI training and inference workloads, leveraging NVIDIA’s Quantum-2 InfiniBand and Spectrum-X Ethernet for high-performance compute fabrics and featuring front-accessible 400G networking for streamlined cold-aisle cable management.

Supermicro now offers one of the broadest NVIDIA HGX B200 portfolios, with two front I/O and six rear I/O systems. The new designs aim to address operational challenges in AI data centers, from thermal management to deployment speed. “Supermicro’s DLC-2 enabled NVIDIA HGX B200 system leads our portfolio to achieve greater power savings and faster time to online for AI Factory deployments,” said Charles Liang, CEO and president of Supermicro. “Our Building Block architecture enables us to quickly deliver solutions exactly as our customers request. Supermicro’s extensive portfolio now can offer precisely optimized NVIDIA Blackwell solutions to a diverse range of AI infrastructure environments, whether deploying into an air- or liquid-cooled facility.”

• 4U DLC-2 liquid-cooled model offers up to 40% power savings and 40% water use reduction

• 8U air-cooled version provides cold aisle serviceability without liquid cooling

• Both systems support 8× NVIDIA HGX B200 GPUs with 1.4TB HBM3e total GPU memory

• Front I/O design improves cable management and speeds AI factory deployment

• NVIDIA Blackwell GPUs deliver up to 15× faster inference and 3× faster LLM training vs. Hopper
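As a quick sanity check, the per-node figures quoted above can be cross-checked with some back-of-envelope arithmetic. This sketch uses only the article's rounded numbers (1.4TB aggregate HBM3e, 1.8TB/s NVLink per GPU); the derived per-GPU memory is approximate and may differ slightly from the vendor's exact specification.

```python
# Back-of-envelope check on the 8-GPU HGX B200 node figures quoted above.
# All inputs are the article's rounded values, not vendor-verified specs.
GPUS_PER_NODE = 8
TOTAL_HBM3E_TB = 1.4    # aggregate GPU memory quoted for the 8x B200 node
NVLINK_BW_TBPS = 1.8    # 5th Gen NVLink bandwidth per GPU

# Implied per-GPU HBM3e capacity (approximate, from the rounded total)
per_gpu_hbm_gb = TOTAL_HBM3E_TB * 1000 / GPUS_PER_NODE
print(f"~{per_gpu_hbm_gb:.0f} GB HBM3e per GPU")

# Aggregate NVLink bandwidth across the node
aggregate_nvlink_tbps = NVLINK_BW_TBPS * GPUS_PER_NODE
print(f"{aggregate_nvlink_tbps:.1f} TB/s aggregate NVLink bandwidth")
```

The derived ~175GB per GPU is consistent with the 1.4TB node total the article cites, and the 14.4TB/s aggregate figure illustrates why the NVLink fabric, rather than PCIe, carries inter-GPU traffic in these systems.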

🌐 Why it Matters

High-density AI workloads are pushing the limits of traditional air cooling, driving the need for integrated direct liquid-cooling solutions. Supermicro’s DLC-2 systems combine thermal efficiency, cold-aisle serviceability, and scalability to address operational cost pressures in AI factories. By supporting NVIDIA’s latest GPU architecture, these systems position hyperscalers and enterprises to accelerate AI deployment while meeting environmental and efficiency goals.

Tags: Super Micro

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley


© 2025 Converge Digest - A private dossier for networking and telecoms.
