Lambda, which brands itself as the “Superintelligence Cloud,” has integrated Supermicro’s GPU-optimized servers to accelerate training and inference workloads for enterprise, hyperscaler, and research customers. The first of these large-scale “AI factories” went live at Cologix’s COL4 Scalelogix data center in Columbus, Ohio, earlier this summer.
The rollout includes a range of Supermicro systems — SYS-A21GE-NBRT with NVIDIA HGX B200, SYS-821GE with HGX H200, and SYS-221HE-TNR — all powered by Intel Xeon Scalable processors. Lambda also tapped Supermicro’s AI Supercluster architecture, featuring NVIDIA GB200 and GB300 NVL72 racks, designed to handle massive-scale model training. Liquid-cooled designs aim to reduce power and cooling costs while supporting denser AI clusters at scale.
Cologix is positioning its interconnected Columbus facilities as a key hub for AI-driven workloads in the Midwest, serving industries such as healthcare, finance, retail, logistics, and manufacturing. By combining Supermicro’s hardware portfolio, Lambda’s AI-focused customer base, and Cologix’s fiber-rich interconnection platform, the partners say they can deliver low-latency, production-ready AI compute with rapid deployment timelines.
• Lambda deployed Supermicro’s GPU-optimized servers with NVIDIA Blackwell GPUs for AI factory-scale training and inference.
• Systems include SYS-A21GE-NBRT (HGX B200), SYS-821GE (HGX H200), and SYS-221HE-TNR with Intel Xeon Scalable processors.
• Supermicro's AI Supercluster architecture, built on NVIDIA GB200 and GB300 NVL72 racks, was integrated for hyperscale workloads.
• Advanced liquid cooling supports power efficiency and dense AI cluster designs.
• Initial rollout launched at Cologix’s COL4 Scalelogix data center in Columbus, Ohio.
“Supermicro is excited to collaborate with Lambda on powerful technology to push the boundaries of AI infrastructure,” said Vik Malyala, SVP, Technology & AI at Supermicro.
🌐 Analysis: Lambda’s choice of Supermicro reflects a broader industry trend toward flexible, liquid-cooled GPU server architectures optimized for AI factories. As hyperscalers and AI cloud providers race to deploy next-gen Blackwell GPUs, system integrators like Supermicro are playing a critical role in balancing density, power, and thermal challenges. This collaboration also underscores Columbus’ emergence as a secondary AI infrastructure hub, competing with traditional strongholds such as Northern Virginia and Silicon Valley for large-scale AI deployments.