The Zscaler ThreatLabz 2026 AI Security Report shows India emerging as a frontline adopter of enterprise AI/ML—and, at the same time, as a test case for how quickly AI‑driven cyber risk can escalate. Between June and December 2025, Indian enterprises generated 82.3 billion AI/ML transactions, ranking second globally after the US and accounting for 46.2 per cent of AI/ML activity among APAC countries. Over that period, India recorded a 309.9 per cent year‑over‑year increase in AI/ML use, underscoring how deeply AI has become embedded in daily business operations.
AI Adoption Outpaces Governance and Visibility
The report, based on analysis of nearly one trillion AI/ML transactions across the Zscaler Zero Trust Exchange in 2025, notes that AI usage now spans virtually every business function. In India, transaction volumes are led by Technology and Communication (31.3 billion transactions), followed by Manufacturing (15.7 billion), Services (12.6 billion) and Finance & Insurance (12.2 billion). This pattern reflects both the country’s role as a global technology and services hub and the speed at which AI is being built into core workflows, from software development to customer support and risk analytics.
However, the same data highlights a persistent governance gap. Many organisations still lack a basic inventory of where AI is running inside their environment—across standalone tools, embedded features in SaaS applications, and custom models—leaving them uncertain about which systems are processing sensitive data.
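Closing that gap typically starts with something unglamorous: mining egress or proxy logs for traffic to known AI services. The sketch below illustrates the idea in minimal form; the domain list, log format and service names are hypothetical assumptions for illustration, not Zscaler's method or a real catalogue.

```python
from collections import Counter

# Hypothetical mapping of destination hosts to AI services; a real
# inventory would be far larger and continuously updated.
AI_DOMAINS = {
    "api.openai.com": "ChatGPT",
    "codeium.com": "Codeium",
    "api.grammarly.com": "Grammarly",
}

def inventory_ai_usage(log_lines):
    """Count requests to known AI services from simple proxy-log lines.

    Each line is assumed to look like: "<user> <destination-host> <bytes>".
    """
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed lines
        _user, host, _size = parts
        service = AI_DOMAINS.get(host)
        if service:
            counts[service] += 1
    return counts

logs = [
    "alice api.openai.com 2048",
    "bob codeium.com 512",
    "alice api.openai.com 4096",
    "carol internal.example.com 128",
]
print(inventory_ai_usage(logs))
```

Even a crude tally like this surfaces which teams are sending traffic to which AI services—the first step towards knowing where sensitive data might be flowing.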
Zscaler’s experts note that AI has shifted from being a discrete “productivity layer” to a primary vector for autonomous, machine‑speed attacks, with crimeware and nation‑state actors using AI to accelerate reconnaissance, exploitation and lateral movement. In this context, the absence of clear AI usage maps and data‑flow visibility becomes a critical weakness rather than a minor oversight.
Machine-Speed Compromise: Median 16 Minutes to Critical Failure
Red‑team testing described in the report indicates that most enterprise AI systems are currently fragile when subjected to adversarial pressure. In controlled scans, critical vulnerabilities surfaced within minutes: the median time to first critical failure was just 16 minutes, 90 per cent of systems were compromised in under 90 minutes, and in the most extreme case, defences were bypassed in a single second.
As agentic AI—systems capable of semi‑autonomous decision-making—gains ground, ThreatLabz expects cyberattacks to become increasingly automated, with AI agents handling tasks such as credential harvesting, privilege escalation and data exfiltration without direct human control.
This dynamic is reinforced by the broader surge in AI/ML traffic. Across the ecosystem, AI/ML activity increased 91 per cent year‑over‑year, spanning more than 3,400 applications. Many enterprises do not yet have a consolidated view of the AI models and services interacting with their data or the third‑party supply chains behind them. Weaknesses in shared model files or common libraries can be leveraged by attackers as pivot points into high‑value systems, turning the AI supply chain into a primary target.
Embedded AI and Data Exposure at Scale
The report distinguishes between “standalone AI” platforms and “embedded AI” features built directly into everyday SaaS applications. Standalone tools such as ChatGPT and Codeium account for a massive volume of activity—115 billion and 42 billion transactions respectively in 2025—but embedded AI is described as one of the fastest-growing sources of unmanaged risk. Because many embedded features are enabled by default and may not be visible to legacy security controls, they create back channels through which sensitive corporate data can flow into third‑party AI models without explicit approval.
Atlassian is cited as a leading source of embedded AI activity, reflecting widespread uptake of AI‑powered capabilities in tools such as Jira and Confluence. More broadly, enterprise data transfers to AI/ML applications reached 18,033 terabytes in 2025, a 93 per cent year‑over‑year rise. Platforms such as Grammarly (3,615 TB) and ChatGPT (2,021 TB) have effectively become dense repositories of corporate knowledge, from communications and documents to code and customer data. Zscaler reports 410 million data loss prevention policy violations linked to ChatGPT alone, including attempts to share identifiers, source code and medical records, highlighting how quickly data governance issues have shifted from theoretical concern to operational reality.
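A DLP violation of the kind counted here is, at its core, a pattern match against outbound text before it reaches an AI service. The sketch below shows the shape of such a check; the rule names and regexes are simplified illustrations, not Zscaler's actual policy rules.

```python
import re

# Illustrative DLP rules; real engines use far richer detectors
# (validation checksums, ML classifiers, document fingerprints).
DLP_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(text):
    """Return the names of DLP rules the outbound text violates."""
    return sorted(name for name, pat in DLP_PATTERNS.items() if pat.search(text))

print(scan_prompt("Summarise this: contact jane@example.com, key sk-abcdef1234567890XY"))
```

In an enforcement path, a non-empty result would block or redact the prompt before it leaves the enterprise boundary—which is exactly the kind of check behind the 410 million violations the report counts.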
Zero Trust as the Baseline for AI-Native Security
Against this backdrop, Zscaler argues that legacy perimeter security—built around firewalls and VPNs—is ill‑suited to dynamic AI environments that span cloud services, embedded features and distributed workforces. Its recommended baseline centres on a Zero Trust architecture that continuously verifies identity and context, minimises exposed attack surface, and inspects all traffic (including encrypted flows) for threats and sensitive data movement.
Key elements of this approach include discovering and classifying where sensitive data resides and how it moves into AI tools; enforcing least‑privilege access to applications and models; using segmentation to neutralise lateral movement; and applying AI‑driven analytics to speed detection and response. For Indian enterprises that now sit near the top of global AI adoption charts, the report’s message is clear: AI‑driven transformation has reached a point where security must itself become AI‑native, with continuous visibility and control at the same machine speed at which threats evolve.
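The "continuously verify identity and context" principle reduces, per request, to a deny-by-default policy decision. The toy sketch below illustrates that shape only; the roles, data labels and policy table are invented for illustration and do not represent Zscaler's product logic.

```python
from dataclasses import dataclass

# Explicit grants: (role, data_label) pairs permitted to reach an AI tool.
# Anything not listed is denied by default—the core of least privilege.
ALLOWED = {
    ("analyst", "public"),
    ("analyst", "internal"),
    ("engineer", "public"),
}

@dataclass
class Request:
    user_role: str
    device_compliant: bool
    data_label: str  # e.g. "public", "internal", "restricted"

def evaluate(req: Request) -> str:
    """Deny by default; allow only compliant devices with an explicit grant."""
    if not req.device_compliant:
        return "deny"  # context check fails regardless of identity
    if (req.user_role, req.data_label) in ALLOWED:
        return "allow"
    return "deny"

print(evaluate(Request("analyst", True, "internal")))    # explicit grant
print(evaluate(Request("engineer", True, "restricted"))) # no grant
```

The design point is the default: the absence of a matching grant is a denial, so new AI tools or data classes are unreachable until someone consciously opens a path to them.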
