A landmark ISACA survey of over 3,400 digital trust professionals worldwide has laid bare a troubling reality in corporate AI deployment: 56% cannot specify how quickly their organisations could halt an AI system in the event of a security incident. Unveiled at RSA Conference 2026, the AI Pulse Poll—covering IT audit, governance, cybersecurity, privacy, and emerging technology roles—reveals enterprises racing to implement AI without matching governance frameworks, human oversight protocols, or clear accountability chains. Amid regulatory pressures like the EU AI Act and rising AI‑driven threats, the findings signal a “blind spot” that could expose firms to catastrophic disruptions.
Uncertainty Dominates AI Kill Switch Capabilities
When pressed on shutdown timelines, responses paint a fragmented picture: 32% believe they could intervene within 60 minutes, while 7% concede it would take longer than an hour. Confidence in incident investigation is little better: only 43% express high assurance in their ability to explain AI failures to leadership or regulators, and 27% admit low or no faith in their processes. Accountability remains equally murky: boards and executives top the list at 28%, followed by the CIO/CTO (18%) and CISO (13%), but 20% simply don't know who bears ultimate responsibility.
“AI adoption outpaces governance maturity,” said Jenai Marinkovic, vCISO at Tiro Security and ISACA Emerging Trends Working Group member. “Enterprises must embed guardrails—people, policies, processes—before crises hit.”
Human Oversight and Disclosure Lag Behind Deployment
The poll underscores lax supervision of AI actions: only 36% require human approval for most outputs before execution, 26% conduct selective post‑run reviews, 11% intervene solely on alerts, and 20% lack visibility into oversight practices altogether. Disclosure is weaker still: just 18% enforce notification when AI is used in work products, 20% mandate it but apply it inconsistently, and 32% have no requirements at all.
This comes as AI/ML (62%) and generative AI (59%) dominate 2026 priorities per ISACA's Tech Trends poll, yet half of leaders deem themselves only "somewhat prepared" for the associated risks, and 25% "not very prepared." AI-enabled social engineering emerges as the top cybersecurity threat.
Regulatory Storm Looms Over Unprepared Enterprises
As frameworks like the EU AI Act demand transparency and redress mechanisms, ISACA’s data spotlights urgency. Boards face liability for unchecked deployments; CISOs grapple with opaque systems lacking auditable trails. Evangelist Rob Clyde warns of “unprecedented change with few rules,” urging agility ahead of stricter mandates.
For digital trust leaders, the poll demands action: define kill switches, clarify C‑suite roles, enforce human‑in‑the‑loop review for high‑stakes decisions, and build disclosure into workflows. The full results will be released in May, but the message already rings clear: governance must catch up with deployment.
