AMD and OpenAI have signed a multi-year, multi-generation agreement to deploy up to 6 gigawatts of AMD Instinct GPUs, beginning with the MI450 series in 2H 2026. The deal positions AMD as a primary compute supplier for OpenAI’s future AI factories, extending a collaboration that began with MI300X and MI350X accelerators.
The agreement gives OpenAI long-term access to AMD’s next-generation GPU roadmap and grants it a warrant for up to 160 million AMD shares, tied to deployment and share-price milestones. AMD expects “tens of billions” in revenue as installations scale toward 6 GW. The transaction follows OpenAI’s earlier equity-linked deal with NVIDIA, though this time it is the supplier, AMD, issuing equity to the customer.
Beyond the headline number, this partnership marks a major endorsement of AMD’s full-stack AI strategy, revealed earlier this year. That strategy spans compute, interconnect, and software layers: Instinct GPUs; ROCm and PyTorch support; Infinity Fabric for scaling; and UALink, the open interconnect standard AMD is co-developing with Broadcom, Cisco, HPE, Intel, and others. By tying OpenAI’s infrastructure roadmap to AMD’s multi-generation stack, this deal validates the company’s approach to open, multi-vendor AI architectures—a counterweight to NVIDIA’s tightly integrated CUDA ecosystem.
At the system level, AMD is emphasizing rack-scale and cluster-scale AI design, not just chips. Its work on UALink and Infinity Fabric aims to interconnect thousands of GPUs with sub-microsecond latency, paving the way for modular AI factories built on interoperable fabrics. If OpenAI adopts these technologies at scale, it could accelerate a shift toward vendor-neutral AI infrastructure, lowering barriers for hyperscalers and sovereign AI initiatives.
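For a rough sense of what a 6-gigawatt buildout means in hardware terms, the sketch below converts the power budget into an approximate accelerator count. The per-accelerator power draw and facility overhead factor are illustrative assumptions, not published AMD or OpenAI figures:

```python
# Back-of-envelope: how many accelerators might a 6 GW power budget imply?
# All per-unit figures are assumptions for illustration, not AMD specs.

def estimate_gpu_count(total_power_gw: float,
                       gpu_power_kw: float,
                       overhead_factor: float) -> int:
    """Estimate accelerator count for a given facility power budget.

    overhead_factor approximates cooling (PUE) plus host CPUs,
    networking, and storage sharing the same power envelope.
    """
    usable_kw = total_power_gw * 1e6 / overhead_factor  # GW -> kW, minus overhead
    return int(usable_kw / gpu_power_kw)

# Assumed: ~1.5 kW per accelerator (rack-level share) and 1.5x overhead.
count = estimate_gpu_count(6.0, 1.5, 1.5)
print(f"~{count:,} accelerators")
```

Under these assumed figures the deal would correspond to accelerators numbering in the millions, which gives some intuition for why the agreement spans multiple product generations and deployment phases.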
• 6-GW AMD Instinct GPU agreement spans multiple generations
• First 1-GW MI450 deployment begins in 2H 2026
• 160M-share AMD warrant issued to OpenAI, tied to performance milestones
• AMD projects “tens of billions” in revenue and long-term alignment
• Deal reinforces AMD’s full-stack AI strategy, from ROCm to UALink
“We are thrilled to partner with OpenAI to deliver AI compute at massive scale,” said Dr. Lisa Su, chair and CEO of AMD. “This partnership brings the best of AMD and OpenAI together to create a true win-win enabling the world’s most ambitious AI buildout and advancing the entire AI ecosystem.”

🌐 Analysis:
The equity-linked structure in this deal reverses the dynamic seen in recent supplier-customer relationships. Earlier this year, OpenAI’s pact with NVIDIA involved OpenAI offering stock to NVIDIA in exchange for long-term GPU supply, a way for a capital-constrained buyer to secure compute capacity. Here, by contrast, the supplier, AMD, is issuing stock to the customer, OpenAI. The warrant acts less as a discount and more as a performance-based incentive: OpenAI gains upside in AMD’s share price as deployments scale, while AMD locks in multi-gigawatt GPU orders across multiple product generations.
This inversion may raise questions in financial markets about valuation and accounting, particularly whether such warrants represent revenue discounts, rebates, or long-term investment alignment. The arrangement also highlights how strategic partnerships in AI infrastructure are evolving beyond conventional vendor-customer models into hybrid equity-and-technology alliances. Investors may find such deals difficult to benchmark against traditional hardware sales, but the deals underscore a new era of co-investment between AI model developers and silicon suppliers.