Supermicro has begun shipping NVIDIA Blackwell Ultra systems and rack-scale solutions in high volume to customers worldwide. The company is delivering pre-validated NVIDIA HGX B300 systems and GB300 NVL72 racks that support plug-and-play deployment at the system, rack, and full data center scale. These configurations aim to accelerate the rollout of large AI factories capable of handling training, inference, and multimodal workloads.
The Blackwell Ultra platforms support up to 1,400W per GPU and deliver 50% greater FP4 inference performance and 50% more HBM3e memory than earlier Blackwell systems. At rack scale, Supermicro’s GB300 NVL72 achieves 1.1 exaFLOPS of FP4 compute, while its HGX B300 systems deliver up to 144 petaFLOPS of FP4 compute and 270 GB of HBM3e per GPU, a claimed 7.5x performance gain over Hopper-based systems. To maximize performance and efficiency, Supermicro integrates both advanced air cooling and direct liquid cooling (DLC) technologies.
Supermicro also provides reference architecture solutions for 800 Gb/s NVIDIA Quantum-X800 InfiniBand or Spectrum-X Ethernet fabrics, leveraging ConnectX-8 SuperNICs. Its Data Center Building Block Solutions (DCBBS) combine hardware with deployment services, including cabling, power, and thermal systems, as well as NVIDIA AI Enterprise, Blueprints, and NIM software integration. The company claims its DLC-2 stack enables up to 40% power savings, 60% smaller footprint, and 40% less water consumption, lowering total cost of ownership by about 20%.
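As a rough sanity check on the relationship between those claims (the split between power, cooling, and other costs is not stated by Supermicro, so this is purely illustrative):

```python
# Illustrative only: if power accounted for a fraction p of total cost of
# ownership (TCO), a 40% power saving alone would reduce TCO by 0.4 * p.
# Solving 0.4 * p = 0.2 shows power would need to be about half of TCO for
# the claimed 40% power saving to explain the full ~20% TCO reduction;
# in practice the footprint and water savings presumably contribute as well.
power_saving = 0.40          # claimed DLC-2 power saving
tco_reduction = 0.20         # claimed TCO reduction

implied_power_share = tco_reduction / power_saving
print(f"Implied power share of TCO: {implied_power_share:.0%}")  # prints "Implied power share of TCO: 50%"
```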
• Volume shipments of NVIDIA Blackwell Ultra HGX B300 systems and GB300 NVL72 racks
• Plug-and-play deployment at system, rack, and cluster scales
• GB300 NVL72 delivers 1.1 exaFLOPS dense FP4 compute performance
• HGX B300 offers 144 petaFLOPS FP4 compute, 270 GB HBM3e memory per GPU
• 800 Gb/s InfiniBand or Spectrum-X Ethernet fabrics supported
• Advanced air and direct liquid cooling reduce power, space, and water use
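The rack- and system-level figures above imply per-GPU throughput in the mid-teens of petaFLOPS. A quick back-of-the-envelope check, assuming the standard configurations of 72 GPUs per NVL72 rack and 8 GPUs per HGX system (counts not stated in the article):

```python
# Per-GPU FP4 throughput implied by the published rack/system totals.
# Assumptions: GB300 NVL72 = 72 GPUs, HGX B300 = 8 GPUs (NVIDIA's standard
# configurations; the article itself gives only the aggregate figures).
GB300_NVL72_EXAFLOPS = 1.1   # dense FP4, whole rack
HGX_B300_PETAFLOPS = 144     # FP4, whole 8-GPU system

rack_per_gpu = GB300_NVL72_EXAFLOPS * 1000 / 72   # petaFLOPS per GPU
system_per_gpu = HGX_B300_PETAFLOPS / 8           # petaFLOPS per GPU

print(f"GB300 NVL72: ~{rack_per_gpu:.1f} PFLOPS FP4 per GPU")
print(f"HGX B300:    ~{system_per_gpu:.1f} PFLOPS FP4 per GPU")
```

The small gap between the two per-GPU figures likely reflects the dense-versus-sparse FP4 ratings noted in the bullet list.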
“Supermicro has the best track record of fast and successful deployments of new NVIDIA technologies,” said Charles Liang, president and CEO of Supermicro. “Through Supermicro Data Center Building Block Solutions with our expertise in on-site deployment, we enable turn-key delivery of the highest-performance AI platform — critical for customers seeking to invest in cutting-edge technology.”
🌐 Analysis: Supermicro’s rapid ramp of NVIDIA Blackwell Ultra systems underscores its role as one of NVIDIA’s closest system integration partners. By focusing on turnkey, rack-scale deployments with both air- and liquid-cooled options, the company is positioning itself as a primary supplier to hyperscalers racing to build AI factories. The emphasis on efficiency and TCO reduction also responds to rising scrutiny over the energy and water demands of AI infrastructure. Competitors like Dell, HPE, and Inspur are likely to accelerate similar offerings, but Supermicro’s early volume availability could help cement its lead in Blackwell Ultra deployments.
🌐 We’re tracking the latest developments in AI infrastructure and data center systems. Follow our ongoing coverage at: https://convergedigest.com/category/ai-infrastructure/
