Supermicro has expanded its NVIDIA Blackwell-based AI server portfolio with new front I/O configurations in both liquid- and air-cooled designs, aimed at improving efficiency, scalability, and serviceability in large-scale AI factories. The new 4U DLC-2 direct liquid-cooled system delivers up to 40% data center power savings and pairs dual-socket Intel Xeon 6 6700 Series processors with the eight-GPU NVIDIA HGX B200 platform, its GPUs interconnected via 5th Gen NVLink at 1.8TB/s. The system offers up to 8TB of DDR5 memory, eight hot-swap E1.S NVMe bays, and front-accessible NICs, DPUs, storage, and management ports. Warm-water cooling at inlet temperatures up to 45°C reduces chiller requirements and cuts water consumption by up to 40%.
The new 8U front I/O air-cooled system mirrors the architecture of the DLC-2 model but is designed for facilities without liquid cooling infrastructure. It maintains full GPU tray height while using a reduced-height CPU tray for a more compact footprint, supporting the same CPU, GPU, and memory configurations. Both systems are optimized for large-scale AI training and inference workloads, leveraging NVIDIA’s Quantum-2 InfiniBand and Spectrum-X Ethernet for high-performance compute fabrics and featuring front-accessible 400G networking for streamlined cold-aisle cable management.
Supermicro now offers one of the broadest NVIDIA HGX B200 portfolios, with two front I/O and six rear I/O systems. The new designs aim to address operational challenges in AI data centers, from thermal management to deployment speed. “Supermicro’s DLC-2 enabled NVIDIA HGX B200 system leads our portfolio to achieve greater power savings and faster time to online for AI Factory deployments,” said Charles Liang, CEO and president of Supermicro. “Our Building Block architecture enables us to quickly deliver solutions exactly as our customers request. Supermicro’s extensive portfolio now can offer precisely optimized NVIDIA Blackwell solutions to a diverse range of AI infrastructure environments, whether deploying into an air- or liquid-cooled facility.”
• 4U DLC-2 liquid-cooled model offers up to 40% power savings and 40% water use reduction
• 8U air-cooled version provides cold aisle serviceability without liquid cooling
• Both systems support the 8-GPU NVIDIA HGX B200 configuration with 1.4TB of total HBM3e GPU memory
• Front I/O design improves cable management and speeds AI factory deployment
• NVIDIA Blackwell GPUs deliver up to 15× faster inference and 3× faster LLM training vs. Hopper
🌐 Why It Matters
High-density AI workloads are pushing the limits of traditional air cooling, driving the need for integrated direct liquid-cooling solutions. Supermicro’s DLC-2 systems combine thermal efficiency, cold-aisle serviceability, and scalability to address operational cost pressures in AI factories. By supporting NVIDIA’s latest GPU architecture, these systems position hyperscalers and enterprises to accelerate AI deployment while meeting environmental and efficiency goals.