Are AI Agents 2026’s Biggest Productivity Bet or Insider Threat?

AI agents have quickly become the new obsession for global tech companies, pitched as the next interface for work and one of the biggest productivity unlocks of the decade. From Microsoft and Amazon to enterprise platforms like Salesforce, vendors now frame their roadmaps around “agentic” experiences that can take actions on behalf of users, not just answer questions.

From Salesforce to ‘Agentforce’?

No company has leaned into this shift more visibly than Salesforce. CEO Marc Benioff has repeatedly said he could imagine renaming the company “Agentforce” – the name already used for its AI agent platform – and has hinted that a full corporate rebrand “might” happen. In customer focus groups held before Dreamforce, Salesforce found that clients no longer wanted to talk about “the cloud” at all; they wanted to talk about AI agents and how those agents would sit between employees, data and applications.

Benioff has even stopped using the word “cloud” in major keynotes, describing the new centre of gravity as the “agentic interface” that customers log into, configure and trust to do work on their behalf.

Security Leaders See a New Insider

Cybersecurity leaders are far less enthusiastic about this shift. Wendi Whitmore, Chief Security Intelligence Officer at Palo Alto Networks, has warned that AI agents are emerging as the “new insider threat” for enterprises in 2026. In her view, security teams are under intense pressure to roll out new AI capabilities quickly, often faster than procurement, risk and security reviews can realistically keep up.

That combination – powerful agents with broad access, deployed at speed – creates an environment where the AI system itself can behave like a supercharged insider, able to reach data, systems and workflows at scale if something goes wrong or is abused.

The Anthropic–Claude Espionage Incident

Those concerns hardened after a high‑profile incident in late 2025, when Chinese state‑linked hackers used Anthropic’s Claude Code tool in an espionage campaign. According to Anthropic’s own disclosure, attackers tried to break into about 30 large organisations, ranging from major tech firms and financial institutions to chemical companies and government bodies. In a small number of cases, they succeeded.

Instead of writing every script or payload by hand, the intruders prompted Claude Code to help with reconnaissance, code development and automation, turning the AI into part of the intrusion toolkit. For many CISOs, it was the first clear demonstration of an AI assistant behaving as an operational asset in a live cyber operation rather than a theoretical risk.

Force Multiplier, Not Fully Autonomous

Whitmore does not expect AI agents to run fully autonomous end‑to‑end attacks this year. What she does foresee is small, well‑resourced teams using agents as a force multiplier, allowing a handful of operators to behave like a much larger, highly skilled group. The same characteristics that excite business leaders – agents that can explore systems, chain tasks, write and execute code, and adapt in real time – also mean that any compromise, misconfiguration or abuse of those agents could dramatically expand the blast radius of an attack.

For now, AI agents sit at a crossroads: on one side, the promise of embedded digital colleagues that automate routine work; on the other, a new category of insider risk that security teams will have to understand and contain.
