Hewlett Packard Enterprise (HPE) introduced its latest system for large-scale AI model training. The HPE ProLiant Compute XD685 leverages AMD’s 5th Gen EPYC processors and Instinct MI325X accelerators to support natural language processing, large language models, and multi-modal AI training.
The XD685 offers flexibility through its modular 5U chassis, which supports a wide range of GPUs, CPUs, software, and cooling solutions, including direct liquid cooling for improved energy efficiency. This design helps AI developers reduce time-to-market for their projects while optimizing the use of resources. The system’s architecture includes secure management through HPE’s Integrated Lights-Out (iLO) technology and HPE Performance Cluster Manager for easy operation of large AI clusters.
HPE’s announcement also highlighted the ProLiant XD685’s capability to deliver strong scaling and parallel computing for efficient AI training. It supports up to eight AMD Instinct MI325X accelerators, each with 6 TB/s of memory bandwidth, making it an attractive option for large-scale AI deployments. The server will be available for order starting in Q1 2025.
• Powered by AMD 5th Gen EPYC processors and Instinct MI325X accelerators
• Modular 5U chassis supports various components, including up to eight GPUs
• Direct liquid cooling option for energy efficiency and sustainability
• Integrated management through HPE iLO and HPE Performance Cluster Manager
• Available for order in Q1 2025
“Training large language models, and doing so efficiently, requires strong-scaling, massive parallel computing capabilities, and unique services that only HPE’s high-performance computing solutions deliver,” said Trish Damkroger, senior vice president and general manager, HPC & AI Infrastructure Solutions at HPE.