Google Cloud has placed agentic AI at the core of its enterprise‑monetisation plan, positioning autonomous digital assistants as the primary engine for growth rather than a single‑point feature. At its annual cloud conference in Las Vegas, Alphabet CEO Sundar Pichai and Google Cloud chief Thomas Kurian framed the “experimental phase” of AI as largely over, arguing that the next challenge for enterprises is integrating AI deeply into core workflows and value chains.
Alphabet has signalled that it will spend $175 billion to $185 billion this year, with just over half of that investment in machine‑learning compute channelled into Google Cloud, which also powers Google DeepMind and other internal AI units. The cloud‑AI bundle is being positioned as the backbone for large‑scale enterprise AI, not just a sandbox for models.
Google Cloud Unifies AI Under Gemini Enterprise
Google has unified a suite of AI products under Gemini Enterprise, most notably rebranding and expanding Vertex AI into a platform for selecting, customising, and orchestrating AI models for business use. The company said the shift reflects a move away from “old‑style machine learning” towards a surge in customers building custom AI agents that orchestrate multi‑step workflows.
The Gemini Enterprise agent platform now includes capabilities such as Agent Studio, a low‑code interface for non‑technical users to design agents using natural language; Agent Identity, which assigns cryptographic IDs and authorization policies to each agent; and Agent Gateway plus Agent Anomaly Detection, tools designed to enforce security rules and flag suspicious behaviour.
New $750 Million Partner Fund for Agentic AI
Google Cloud has announced a $750 million fund to accelerate AI‑driven transformations through its 120,000‑member partner ecosystem. The fund is aimed at global consulting firms, systems integrators, software providers, and channel partners, and supports AI‑value identification, agentic AI prototyping, agent building and deployment, upskilling, and embedded Google Forward‑Deployed Engineering (FDE) teams.
Google Cloud’s partner network already includes more than 330,000 experts trained in implementing Google AI, and 95 per cent of the top 20 and over 80 per cent of the top 100 SaaS companies use Gemini models. The new financing is intended to deepen this capability, helping partners assess the scale of AI opportunities, prototype and prove value, and integrate AI agents into existing software and workflows for joint customers.
Tools, Forward‑Deployed Engineers, and Gemini Enterprise Practices
The fund will support new tools and resources, including AI‑value assessments, Gemini proofs‑of‑concept, Gemini Enterprise practice building, agentic AI prototyping and deployment, Wiz security assessments, and usage incentives to accelerate AI adoption.
Google will embed FDEs alongside major partners such as Accenture, Capgemini, Cognizant, Deloitte, HCLTech, PwC, and TCS to support complex customer deployments and technical problem‑solving.
Several AI‑native services firms, including Altimetrik, Artefact, Covasant, Deepsense, Distyl.ai, Northslope, Quantium, Tribe.ai, Tryolabs, and others, will launch dedicated Gemini Enterprise practices as part of Google’s new Gemini Enterprise transformation program, with sandbox‑development credits, technical‑upskilling support, and referral opportunities.
Early Access and Enterprise‑Ready Agents
Selected partners, including Accenture, BCG, Deloitte, and McKinsey, will receive early access to Gemini models, with their feedback used to refine the systems for real‑world enterprise use.
Under the expanded investment, Google Cloud will help partners surface enterprise‑ready agents in Gemini Enterprise, designed to be deployed within existing governance and security policies. Built on the Gemini Enterprise Agent Platform and discoverable inside the Gemini app, these agents come from companies such as Adobe, Atlassian, Deloitte, Lovable, Oracle, Palo Alto Networks, Replit, S&P Global, Salesforce, ServiceNow, Workday, and others.
Leaders from Google Cloud’s partner network said the funding underscores a shift from isolated AI experiments to enterprise‑wide AI transformation, enabled by Google’s models, chips, and partner‑delivery infrastructure.
New Chips: TPU 8t and TPU 8i
To underpin this agent‑centric push, Google has unveiled two new custom tensor processing units: the TPU 8t and TPU 8i. The TPU 8t is optimised for training large language models such as those behind Anthropic’s Claude and is designed for pods of 9,600 chips that can scale to 134,000 chips, with Google indicating it can now string together up to 1 million chips for large‑scale training.
The TPU 8i is tuned for fast inference needed by AI agents that must respond in real time. Google claims TPU 8i delivers about 80 per cent better performance than the prior‑generation Ironwood chips for inference‑heavy tasks.
Strategic Positioning in the Cloud‑AI Market
Google Cloud said it had grown its share of the global cloud market to about 14 per cent as of the end of 2025, driven by AI‑centric workloads, although it still trails Amazon Web Services and Microsoft Azure. By building a full stack—from models and chips to agent orchestration and partner ecosystems—Google aims to become the primary infrastructure layer for AI‑driven innovation rather than a collection of point products.