Anthropic announced plans to expand its use of Google Cloud technologies, including up to one million Tensor Processing Units (TPUs), in a deal valued in the tens of billions of dollars. The move marks one of the largest single cloud AI infrastructure expansions to date and is expected to bring more than one gigawatt of computing capacity online in 2026.
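A rough sanity check on those headline figures (a back-of-envelope sketch, not from the announcement): dividing the stated capacity by the stated chip count gives the implied all-in power budget per accelerator, which would include cooling, networking, and host overhead.

```python
# Back-of-envelope check on the announced figures (illustrative only;
# the actual per-chip power draw was not disclosed).
capacity_watts = 1e9   # "more than one gigawatt" of new capacity
tpu_count = 1e6        # "up to one million" TPUs

# All-in power budget per accelerator implied by the two figures.
watts_per_chip = capacity_watts / tpu_count
print(f"~{watts_per_chip:.0f} W per chip, including cooling/network/host overhead")
```

A roughly 1 kW all-in budget per accelerator is broadly consistent with modern datacenter-class AI hardware once facility overhead is counted, which suggests the two announced numbers describe the same deployment.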
Google Cloud CEO Thomas Kurian said Anthropic’s expansion underscores the performance and efficiency advantages of TPUs, noting that the company’s latest seventh-generation TPU, Ironwood, continues to drive cost and energy improvements. Anthropic, which now serves more than 300,000 business customers, has seen its large enterprise accounts (those generating over $100,000 in annual run-rate revenue) grow sevenfold in the past year.
The expanded TPU capacity will power Anthropic’s Claude models for large-scale enterprise deployments, enhanced model alignment, and testing. While Anthropic is deepening its Google partnership, the company emphasized its diversified compute strategy, leveraging Google TPUs, Amazon’s Trainium, and NVIDIA GPUs. Anthropic reaffirmed its commitment to Amazon as its primary training partner through Project Rainier—a U.S.-based supercluster hosting hundreds of thousands of AI chips.
“Our customers—from Fortune 500 companies to AI-native startups—depend on Claude for their most important work,” said Krishna Rao, CFO of Anthropic. “This expanded capacity ensures we can meet exponentially growing demand while keeping our models at the cutting edge of the industry.”

🌐 Analysis: The deal positions Google Cloud as a first-tier AI compute provider alongside NVIDIA and AWS. Google’s TPU line has evolved rapidly—from TPU v1 (2015), built for inference acceleration, to TPU v4 clusters powering Google’s internal AI workloads, to the current seventh-generation Ironwood, unveiled in 2025. Ironwood delivers up to 2.5× higher performance per watt than the earlier TPU v5p, and supports both transformer-based training and large-scale inference with integrated high-bandwidth interconnects.
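To make the efficiency claim concrete (an illustration using only the 2.5× figure quoted above, not a measured result): a fixed workload on hardware with 2.5× the performance per watt consumes 1/2.5 of the energy.

```python
# What a 2.5x performance-per-watt gain means for a fixed workload.
# The 2.5x figure is the article's claim, not an independent measurement.
perf_per_watt_gain = 2.5

energy_ratio = 1 / perf_per_watt_gain  # energy needed for the same work
savings = 1 - energy_ratio             # fraction of energy saved
print(f"Same workload uses {energy_ratio:.0%} of the energy ({savings:.0%} savings)")
```

At gigawatt scale, a 60% reduction in energy per unit of work is the difference between needing one new power plant’s worth of capacity and needing two and a half, which is why perf/watt, not raw FLOPS, dominates these procurement decisions.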
Looking further ahead, reports point toward TPU v8 “Sequoia”, expected in 2026, built on a 3 nm process node with optical I/O capabilities and tight integration with Google’s custom networking fabric. These next-generation clusters are designed to rival NVIDIA’s GB200 NVL72 systems and AWS’s Trainium2 superclusters in both scale and energy efficiency.
By deploying up to one million TPUs, Anthropic effectively becomes one of the largest external users of Google’s AI accelerator infrastructure, providing a high-profile validation of TPU scalability and cost competitiveness. The expansion also positions Google Cloud as a key training backbone for Claude’s next-generation multimodal and reasoning models, as AI infrastructure transitions toward exascale computing.
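One way to gauge what the shift toward exascale means here (a sketch built on an assumed, not disclosed, per-chip throughput): multiplying an assumed peak per-chip rate by the announced chip count gives a theoretical aggregate ceiling.

```python
# Aggregate-throughput sketch. PEAK_FLOPS_PER_CHIP is an ASSUMED
# placeholder for illustration; neither Google nor Anthropic has
# published fleet-wide throughput figures for this deployment.
PEAK_FLOPS_PER_CHIP = 4.6e15  # assumed ~4.6 PFLOPS (low precision) per chip
CHIP_COUNT = 1_000_000        # "up to one million" TPUs

aggregate_exaflops = PEAK_FLOPS_PER_CHIP * CHIP_COUNT / 1e18
print(f"Theoretical peak under these assumptions: ~{aggregate_exaflops:.0f} exaFLOPS")
```

Real utilization falls far below any such peak due to communication, memory bandwidth, and scheduling limits; the point of the arithmetic is only that a fleet of this size sits well past the exascale threshold even at modest efficiency.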
