Infineon Technologies is teaming up with NVIDIA to develop a centralized 800V high-voltage direct current (HVDC) power architecture aimed at AI data centers. The joint initiative marks the industry’s first transition from decentralized to centralized power delivery for AI server racks. With this architecture, energy-efficient power conversion takes place directly at the GPU level on the server board, targeting future systems that demand 1 megawatt or more per rack.
The centralized HVDC approach is designed to reduce the number of power conversion stages, optimize space within server racks, and increase the reliability and scalability of AI infrastructure. Infineon will apply its expertise in power semiconductors—including silicon (Si), silicon carbide (SiC), and gallium nitride (GaN)—to support this evolution. The company also continues to back multiphase and intermediate architectures to meet the varying needs of hyperscalers and AI data center operators.
Current AI data center designs rely on distributed power supply units (PSUs) that introduce inefficiencies. By consolidating power conversion and using high-density, centralized distribution at 800V, Infineon and NVIDIA aim to define new standards for energy efficiency and scalability. These changes come as AI workloads intensify, with deployments already exceeding 100,000 GPUs.
- Infineon and NVIDIA co-develop 800V HVDC centralized power architecture
- Direct GPU-level power conversion within server boards
- Supports power demands of 1 MW+ per rack in future AI data centers
- Infineon leverages Si, SiC, and GaN technologies for power semiconductors
- Architecture reduces conversion stages and increases rack density efficiency
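A back-of-the-envelope calculation shows why raising the distribution voltage matters at these power levels. The sketch below uses illustrative figures that are not from the announcement: 54 V is assumed as a typical legacy rack busbar voltage, and the busbar resistance is held constant for comparison.

```python
# For a fixed power budget, bus current scales as I = P / V, and
# conduction loss in the distribution path scales as I^2 * R.
# Voltages other than 800 V are illustrative assumptions.

def rack_current_amps(power_w: float, bus_voltage_v: float) -> float:
    """Current a rack draws from the distribution bus at a given voltage."""
    return power_w / bus_voltage_v

POWER_W = 1_000_000  # 1 MW per rack, the target cited in the article

legacy = rack_current_amps(POWER_W, 54.0)   # assumed legacy 54 V busbar
hvdc = rack_current_amps(POWER_W, 800.0)    # proposed 800 V HVDC bus

# Conduction-loss ratio for the same busbar resistance: (I_legacy / I_hvdc)^2
loss_ratio = (legacy / hvdc) ** 2

print(f"54 V bus:  {legacy:,.0f} A")
print(f"800 V bus: {hvdc:,.0f} A")
print(f"Relative conduction loss at equal resistance: {loss_ratio:.0f}x")
```

At 800 V, a 1 MW rack draws 1,250 A instead of tens of kiloamps, which is what makes centralized distribution with fewer conversion stages practical.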
“Infineon is driving innovation in artificial intelligence,” said Adam White, Division President Power & Sensor Systems at Infineon. “The combination of Infineon’s application and system know-how in powering AI from grid to core, combined with NVIDIA’s world-leading expertise in accelerated computing, paves the way for a new standard for power architecture in AI data centers to enable faster, more efficient and scalable AI infrastructure.”