A&M Report Flags Governance Gaps as AI Adoption Surges

Alvarez & Marsal (A&M) has released its latest AI Survey Report, offering a detailed view of how Indian enterprises are deploying, governing and securing AI systems. While AI adoption is accelerating across sectors — from BFSI and healthcare to retail and manufacturing — the study makes one thing clear: oversight and risk management are not keeping up.

Only 15 percent of organizations have implemented AI extensively across business units, even as nearly half rely on hybrid models that mix SaaS, OEM and custom-built tools. The report was unveiled at the Annual Information Security Summit (AISS) in New Delhi.

Governance Gaps Undermine Enterprise-Scale AI Deployment

Despite widespread momentum, governance maturity remains weak.

  • 60 percent have introduced basic policy frameworks.

  • Just 19 percent have completed structured risk assessments.

  • 81 percent still struggle with visibility into how AI models are monitored and controlled.

With several AI projects developed in silos, standards vary significantly across teams and functions. A&M stresses the need for integrated governance frameworks that assign clear accountability, enforce transparency, and define escalation paths for AI-related risks.
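The report does not prescribe tooling, but one concrete way such a framework surfaces in practice is a central model registry that records an accountable owner, risk tier and escalation contact for every deployed model. The sketch below is purely illustrative; the `ModelRecord` structure and its fields are assumptions, not drawn from the A&M report.

```python
from dataclasses import dataclass

# Illustrative only: structure and field names are assumptions,
# not taken from the A&M report.
@dataclass
class ModelRecord:
    """One registry entry per deployed AI model, making accountability explicit."""
    model_id: str
    owner: str                 # named individual accountable for the model
    business_unit: str
    risk_tier: str             # e.g. "low", "medium", "high"
    escalation_contact: str    # who is notified when the model misbehaves
    last_risk_assessment: str  # ISO date of the most recent structured review

registry = [
    ModelRecord("credit-scoring-v3", "a.sharma", "BFSI", "high",
                "risk-office@example.com", "2025-03-30"),
]

# A simple governance check: flag high-risk models with stale risk assessments.
for record in registry:
    if record.risk_tier == "high" and record.last_risk_assessment < "2025-06-01":
        print(f"Escalate {record.model_id} to {record.escalation_contact}")
```

A registry like this addresses the silo problem directly: every team registers models against the same schema, so risk tiers and escalation paths stay comparable across functions.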

“India’s AI opportunity is substantial, but its long-term gains depend on how effectively organizations govern and secure the systems they deploy,” said Dhruv Phophalia, MD & India Lead – Disputes & Investigations, Alvarez & Marsal.

Responsible AI Adoption Remains Limited

Although most enterprises acknowledge the importance of responsible AI, adoption is sparse:

  • Fewer than 20 percent have deployed bias detection, fairness, or explainability mechanisms.

  • 60 percent lack formal processes to validate model integrity.

  • Only 26 percent embed data masking or PII scanning in AI pipelines (a minimal sketch of such a step follows below).

These gaps expose models to risks like unfair decision-making, compromised datasets and inconsistent outcomes — especially as AI moves into mission-critical processes.
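The report does not describe how PII scanning is implemented in practice. As a rough illustration, one common pattern is a pipeline stage that masks detected identifiers before records reach a model; the regex detectors below are toy assumptions, and real pipelines would use vetted PII-detection libraries instead.

```python
import re

# Toy patterns for illustration; production pipelines rely on vetted
# PII detectors, not hand-rolled regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{10}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII spans with a type tag before training or inference."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(mask_pii("Contact rohan@example.com or 9876543210 about the loan file."))
# -> Contact [EMAIL] or [PHONE] about the loan file.
```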

AI Security Practices Are Not Keeping Pace

Security across the AI lifecycle is emerging as one of the weakest links.

  • Only 30 percent conduct red teaming or penetration testing on AI assets.

  • Just 19 percent have mechanisms to detect data poisoning during model training.

  • Over half rely on basic development environments with limited protection against adversarial threats.

A&M emphasizes the need for containerized training environments, dataset validation, and adversarial testing to build resilience as models grow more autonomous.
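The report names these controls without detailing their mechanics. One minimal form of dataset validation, sketched below under the assumption of a checksum manifest (the digest shown is a placeholder), is to pin a hash for each approved training file and halt the run if the data has changed since approval.

```python
import hashlib

# Assumed manifest of approved training files; the digest is a placeholder.
APPROVED_DIGESTS = {
    "train/transactions.csv":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_dataset(path: str) -> bool:
    """Recompute the file's SHA-256; a mismatch may indicate tampering or poisoning."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == APPROVED_DIGESTS.get(path)

# Gate the training run on the integrity check:
# if not verify_dataset("train/transactions.csv"):
#     raise RuntimeError("Dataset integrity check failed; halting training.")
```

A check like this catches silent modification between approval and training; it does not detect poisoning already present in the approved data, which is why the report also calls for adversarial testing.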

“As AI systems become more data-intensive and autonomous, gaps in lifecycle governance have far greater consequences,” said Chandra Prakash Suryawanshi, MD, Alvarez & Marsal.

Deployment Risks Grow as AI Goes Live

Even at deployment, safeguards remain slow to mature.

  • 56 percent conduct security reviews before launch.

  • Only 30 percent protect against prompt injection (a simple input guard is sketched below).

  • Just 19 percent have real-time hallucination monitoring.

Data protection controls are often traditional, lacking the automated privacy-preserving mechanisms needed for AI-scale operations.
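Prompt-injection defences vary widely, and the report does not specify an approach. As a simple sketch, a pre-processing gate can reject inputs containing known override phrases before they reach the model; the deny-list below is a toy assumption, and production systems layer classifiers, input isolation and output filtering on top.

```python
# Toy deny-list for illustration; real defences combine classifiers,
# input isolation and output filtering, not just string matching.
SUSPICIOUS_PHRASES = (
    "ignore previous instructions",
    "disregard the system prompt",
    "reveal your instructions",
)

def screen_prompt(user_input: str) -> str:
    """Block inputs that contain obvious instruction-override phrases."""
    lowered = user_input.lower()
    if any(phrase in lowered for phrase in SUSPICIOUS_PHRASES):
        raise ValueError("Potential prompt injection detected; request blocked.")
    return user_input
```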

Weak Monitoring and Compliance Create Blind Spots

Post-deployment oversight is a major concern:

  • 26 percent have no monitoring at all.

  • 45 percent rely on non-real-time checks.

Incident response is similarly underdeveloped: only 15 percent have AI-specific plans.

As AI systems evolve in production, this lack of continuous monitoring increases the risk of drift, failure, compliance breaches and inconsistent outcomes.
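Continuous monitoring need not be elaborate to be useful. As a minimal sketch, assuming a score baseline captured at training time and an arbitrarily chosen threshold, a recurring job can compare live output statistics against that baseline and raise an alert when they diverge.

```python
from statistics import mean

BASELINE_MEAN = 0.42    # assumed output-score mean captured at training time
DRIFT_THRESHOLD = 0.10  # arbitrary alerting threshold for this sketch

def check_drift(recent_scores: list[float]) -> bool:
    """Flag drift when live model scores shift away from the training baseline."""
    return abs(mean(recent_scores) - BASELINE_MEAN) > DRIFT_THRESHOLD

if check_drift([0.61, 0.58, 0.66, 0.63]):
    print("Model output drift detected; trigger review and incident response.")
```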

India Inc Must Shift From Experimentation to Structured Execution

The report concludes that while India’s enterprise AI adoption is accelerating, most organizations are not yet prepared for the operational, ethical and regulatory risks that come with scale. A&M argues that companies investing early in governance, lifecycle security and real-time oversight will be best positioned to unlock AI’s full economic value.
