62% of Firms Hit by Deepfake Attacks in GenAI Era: Gartner

A new Gartner survey has revealed that 62% of organizations worldwide experienced a deepfake-related cyberattack in the past 12 months, with nearly one-third facing direct attacks on their AI infrastructure. As generative AI adoption accelerates, these threats are becoming both more common and more sophisticated.

The survey, conducted between March and May 2025, included 302 cybersecurity leaders across North America, EMEA, and Asia-Pacific. It found that 29% of enterprises had been targeted through their GenAI application infrastructure, while 32% faced prompt injection attacks—a tactic that manipulates AI chatbots into generating harmful or misleading content.

Deepfakes and Prompt Exploits Go Mainstream

Gartner’s findings suggest that deepfake technology is no longer just a fringe threat—it has entered the mainstream enterprise threat landscape. These attacks often involve impersonating executives or manipulating automated processes to extract sensitive information or commit fraud.

Meanwhile, prompt-based attacks on large language models (LLMs) and multimodal AI systems are gaining traction. Attackers use adversarial prompts to manipulate AI assistants into producing biased, malicious, or false outputs.

“Phishing, social engineering, and deepfakes have become standard GenAI threats,” said Prashast Gupta, Director Analyst at Gartner. “And now, attacks targeting the very architecture of AI systems—like model manipulation and infrastructure compromise—are on the rise.”

Rethinking Cybersecurity for the GenAI Era

The growing attack surface has prompted 67% of security leaders to acknowledge the need for new approaches. However, Gartner cautions against knee-jerk overhauls.

“Rather than make sweeping changes or isolated investments, organizations should strengthen core cybersecurity controls and apply targeted measures for GenAI-specific risks,” Gupta said.

These include:

  • Robust identity management and access controls

  • Prompt filtering and input validation

  • Monitoring for AI model drift or abnormal output patterns

  • Red teaming and adversarial testing for LLMs and chatbot systems
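Of the controls above, prompt filtering and input validation are the most straightforward to illustrate in code. The sketch below is purely illustrative and simplified: the pattern list, function names, and length cap are assumptions for demonstration, and a production defense would combine classifier models, allow-lists, and output monitoring rather than rely on regex screening alone.

```python
import re

# Hypothetical pattern list for demonstration -- a real deployment would
# use a maintained, much broader set of signals, not a handful of regexes.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (your|the) system prompt",
    r"you are now (in )?developer mode",
]

def looks_like_injection(user_prompt: str) -> bool:
    """Return True if the prompt matches a known injection phrasing."""
    text = user_prompt.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)

def sanitize_prompt(user_prompt: str, max_len: int = 2000) -> str:
    """Basic input validation: length cap plus injection screening."""
    if looks_like_injection(user_prompt):
        raise ValueError("prompt rejected by injection filter")
    return user_prompt[:max_len]
```

A filter like this would typically sit in front of the LLM call, rejecting or flagging suspect inputs before they reach the model, with rejected prompts logged for the abnormal-output monitoring mentioned above.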

Strategic Implications

Gartner’s research highlights the strategic vulnerability of AI-driven enterprises. While GenAI unlocks major productivity and innovation gains, it also introduces non-traditional attack vectors that legacy security tooling may fail to detect.

As more companies embed AI assistants, copilots, and autonomous agents into workflows, they will need to re-evaluate governance, risk management, and compliance frameworks to keep pace.

From training staff on prompt security to implementing technical guardrails within AI systems, the future of enterprise cybersecurity will increasingly revolve around how well organizations adapt to the unique threat landscape of generative AI.
