Enterprises Accelerate GRC Investments Amid Widening AI Governance Risks: Optro

Rapid AI adoption across global enterprises has outpaced the development of robust governance frameworks, exposing organisations to heightened operational and compliance vulnerabilities. A comprehensive Optro study underscores this governance gap, identifying fragmented accountability and risky employee behaviours as the primary concerns, even as business leaders commit to substantial investments in governance, risk, and compliance (GRC) solutions. These dynamics signal a strategic imperative for enterprises to integrate continuous AI oversight into core operations.

Fragmented Oversight Amplifies Deployment Risks

In 85% of organisations, AI is reported as central to business strategy, embedded in core operations or spanning multiple functions, yet governance maturity lags significantly behind deployment speed. Oversight responsibility remains diffuse: IT is accountable for AI governance in only 25% of organisations, risk management in 18%, cross-functional committees in 17%, and dedicated AI teams in a mere 10%. This scattering extends to incident response, split among risk/compliance/audit teams (29%), executive leadership (27%), and IT/engineering (24%), often without a unified authority or an operational “kill switch” to halt problematic systems.
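As a rough illustration of what such a “kill switch” can look like in practice, the sketch below implements a central registry that a governance team can flip to halt a deployed AI system immediately, independently of the team that built it. All names here (KillSwitch, guard, the system identifier) are hypothetical, assumed for illustration rather than drawn from the study.

```python
class KillSwitchError(RuntimeError):
    """Raised when a halted AI system is invoked."""


class KillSwitch:
    """Central registry of AI systems halted by a governance authority."""

    def __init__(self) -> None:
        self._halted: set[str] = set()

    def halt(self, system_id: str) -> None:
        """Block all calls to the named system, effective immediately."""
        self._halted.add(system_id)

    def resume(self, system_id: str) -> None:
        """Lift the block once the incident is resolved."""
        self._halted.discard(system_id)

    def guard(self, system_id: str):
        """Decorator that checks the registry before every invocation."""
        def wrap(fn):
            def inner(*args, **kwargs):
                if system_id in self._halted:
                    raise KillSwitchError(
                        f"{system_id} halted by governance authority"
                    )
                return fn(*args, **kwargs)
            return inner
        return wrap


switch = KillSwitch()


@switch.guard("support-chatbot")
def answer(question: str) -> str:
    # Stand-in for a real model call behind the governance gate.
    return f"AI answer to: {question}"
```

The key design point is that the halt decision lives in one place, outside the serving code, so a single accountable team can stop a misbehaving system without coordinating across the fragmented ownership structures the study describes.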

Employee interactions with AI tools emerge as the foremost risk vector, cited by 34% of respondents, surpassing inadequate training (21%) and delivery pressures (21%). Over the past year, 40% of organisations encountered inaccurate AI outputs, 33% faced policy violations, and 28% dealt with customer complaints tied to AI systems. Such incidents highlight how fragmented structures undermine strategic control, potentially eroding trust in AI-driven decision-making and amplifying regulatory exposure in dynamic enterprise environments.

Investment Surge Targets Integrated AI Controls

In response to these gaps, nearly 75% of organisations plan to expand GRC budgets, prioritising AI governance solutions (43%), regulatory compliance tools (41%), and GRC platform enhancements (38%). This commitment reflects recognition that AI risks stem not from models alone but from unmanaged human-AI interactions and siloed oversight.

The shift demands evolving governance from reactive measures to a continuous, integrated discipline aligned with agentic AI workflows. Enterprises must consolidate authority, invest in training, and deploy automated controls to mitigate inaccuracies and violations, ensuring AI supports rather than jeopardises business resilience.
