Check Point Software Technologies and Google Cloud have announced a cybersecurity partnership designed to help enterprises secure AI‑agent deployments at scale. Check Point will integrate its AI Defense Plane with Google Cloud’s Gemini Enterprise Agent Platform as a launch partner, combining Google’s agent‑control infrastructure with Check Point’s governance and behavioural‑protection layer. The collaboration is being positioned as a foundational security framework for organisations moving beyond chat‑assistant‑style AI into autonomous agents that invoke tools, query data, and execute workflows.
Both companies said the integration addresses a core problem: as enterprise AI evolves from simple assistants to agents that act on behalf of users, traditional access‑based security controls are no longer sufficient. The focus is shifting from “who has access” to “what AI is allowed to do,” and the Check Point–Google Cloud stack aims to provide visibility, policy enforcement, and runtime‑level protection across agent‑driven workflows.
Three‑Layer Architecture for Agentic Security
Check Point described the security model as a three‑layer architecture:
- A control plane for agent identity and connectivity, provided by Google Cloud’s Enterprise Agent Platform.
- A governance layer for policy enforcement, supplied by Check Point.
- A runtime intelligence layer for behavioural protection, also supplied by Check Point, which inspects agent actions in real time.
Under this framework, the AI Defense Plane gains full visibility into the agent estate, automatically inventorying all agents deployed across Google Cloud environments, including their components, tools, and connections via Model Context Protocol (MCP) servers on Google Cloud. Security teams can then define and enforce policies such as allow and deny lists for MCP servers, tools, and skills, plus agent‑posture rules that flag or block risky configurations.
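To make the governance layer concrete, the sketch below shows what allow/deny-list enforcement over an agent's MCP servers and tools could look like. This is purely illustrative: the class, field names, and evaluation logic are assumptions for explanation, not the actual Check Point AI Defense Plane API.

```python
# Illustrative sketch (hypothetical structure, NOT the Check Point API):
# enforcing allow/deny-list policy over an agent's MCP servers and tools,
# mirroring the governance controls described in the article.
from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    # Explicit allow list for MCP servers and deny list for tools.
    allowed_servers: set = field(default_factory=set)
    denied_tools: set = field(default_factory=set)

    def evaluate(self, server: str, tool: str) -> bool:
        """Return True if the agent may use this tool on this server."""
        # Deny wins over allow; servers not on the allow list are
        # rejected by default (default-deny posture).
        if tool in self.denied_tools:
            return False
        return server in self.allowed_servers


policy = AgentPolicy(
    allowed_servers={"internal-docs-mcp", "crm-mcp"},
    denied_tools={"shell_exec"},
)

print(policy.evaluate("crm-mcp", "lookup_customer"))     # allowed server, permitted tool
print(policy.evaluate("crm-mcp", "shell_exec"))          # explicitly denied tool
print(policy.evaluate("unknown-mcp", "lookup_customer"))  # server not on allow list
```

The default-deny choice reflects the posture described above: an agent connection is blocked unless a policy explicitly permits it.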
At runtime, the integration adds inline, context‑aware guardrails through the Agent Gateway. This includes detection and blocking of prompt injection attacks, prevention of sensitive‑data leakage through agent responses and tool actions, and screening of agent tool calls before execution. Check Point framed the goal as ensuring that every agent action is both technically allowed and organisationally acceptable, not just access‑enabled.
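The runtime screening step can be sketched as a gateway check that inspects a tool call's arguments before execution. Again, this is a minimal illustration under assumed names: `screen_tool_call` and the toy detection patterns are hypothetical stand-ins for the Agent Gateway's actual inline guardrails.

```python
# Illustrative sketch (hypothetical names, NOT the Agent Gateway API):
# an inline guardrail that screens an agent tool call before execution,
# blocking obvious sensitive-data leakage in the call arguments.
import re

# Toy patterns standing in for real DLP detectors.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),  # card-like number
    re.compile(r"(?i)\bpassword\s*[:=]"),                     # credential marker
]


def screen_tool_call(tool_name: str, arguments: dict) -> bool:
    """Return True if the call may proceed, False if it should be blocked."""
    payload = " ".join(str(v) for v in arguments.values())
    return not any(p.search(payload) for p in SENSITIVE_PATTERNS)


print(screen_tool_call("send_email", {"body": "Quarterly report attached"}))  # permitted
print(screen_tool_call("send_email", {"body": "password: hunter2"}))          # blocked
```

The key idea the article describes is that the check happens inline, between the agent's decision to call a tool and the tool actually running, so a policy-violating action never executes.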
Strategic Positioning for AI‑Guarded Workflows
David Haber, VP of AI security at Check Point, said the emerging architecture for agentic security requires this three‑tiered approach because, in AI‑agent systems, access alone does not guarantee the right outcome. Google Cloud’s Vineet Bhan, director of security and identity partnerships, added that Google Cloud is committed to an open ecosystem and that the Check Point partnership will help customers accelerate digital transformation while strengthening operational security.
The integration is expected to roll out broadly in mid‑2026, with early access options for enterprises already scaling Gemini‑based agents. By tying agent‑control, policy‑governance, and real‑time behavioural protection into a single framework, the partnership signals a new standard for AI trust and security in enterprise‑grade AI‑agent environments.
