AI research firm Anthropic has announced an expanded partnership with Google, securing access to one million Tensor Processing Units (TPUs) in a deal valued at tens of billions of dollars. The collaboration aims to significantly enhance Anthropic’s computing power as it advances its next-generation Claude AI models.
Under the new agreement, Anthropic will gain access to more than one gigawatt of computing capacity by 2026, hosted on Google Cloud’s infrastructure. This expansion builds upon the companies’ existing collaboration that began in early 2023, reinforcing Anthropic’s commitment to multi-platform training strategies and sustainable AI scaling.
Accelerating Claude’s Evolution
Anthropic praised Google’s TPUs for their “strong price-performance ratio and efficiency,” adding that the infrastructure upgrade will accelerate the training and deployment of future Claude models. The company currently offers multiple Claude variants, including Claude Sonnet 4.5, Claude Haiku 4.5, and Claude Opus 4.1, each optimized for distinct enterprise use cases such as coding assistance, analytics, and reasoning.
The expanded compute resources are expected to support Anthropic’s growing enterprise customer base, which has surged past 300,000 businesses worldwide. The number of large enterprise accounts—those generating more than $100,000 in annual revenue—has increased nearly sevenfold over the past year.
Google’s Expanding Role in AI Compute
For Google, the deal strengthens its position as a global AI infrastructure leader, showcasing the scalability of its TPU-based architecture as a competitive alternative to Nvidia’s GPUs. “Anthropic’s decision to expand its usage of TPUs reflects the strong performance and efficiency our clients have seen over the years,” said Thomas Kurian, CEO of Google Cloud.
Google’s TPUs—custom-built chips designed for AI workloads—are also used internally to train Gemini, the company’s flagship AI model. The tech giant recently unveiled its Ironwood (TPU v7) architecture, underscoring its continued investment in AI infrastructure innovation.
Multi-Cloud Strategy for AI Resilience
Anthropic said it will continue leveraging multiple compute providers, including Amazon’s Trainium and Nvidia’s GPU systems, to ensure operational flexibility and prevent dependency on a single vendor. This diversified compute strategy allows Anthropic to optimize costs, availability, and sustainability while scaling up for increasingly complex model architectures.
Industry analysts say the expanded partnership positions Anthropic to compete more aggressively with OpenAI and other leading AI labs in the race to build next-generation foundation models.
