Netskope: Manufacturers Embrace Approved AI, Cut Shadow AI Risks

A new report from Netskope Threat Labs shows that the manufacturing sector is rapidly improving its governance of artificial intelligence (AI), transitioning from unmonitored “shadow AI” use to formally approved and regulated platforms. The study highlights how companies are learning to balance innovation with data protection and compliance amid surging adoption of generative AI (GenAI) across global manufacturing operations.

According to the research, 94% of manufacturing organizations detected GenAI usage among their employees. However, what’s notable is the shift in usage patterns—unapproved GenAI tool usage has dropped from 83% in December 2024 to 51% in September 2025, while the use of organization-approved AI platforms has risen from 15% to 42%.

This trend underscores an important transition within the industry: manufacturers are increasingly integrating GenAI into daily workflows—but under structured governance frameworks that safeguard intellectual property, data integrity, and regulatory compliance.

Data Exposure and Shadow AI Still Pose Challenges

Despite the encouraging trend, data exposure risks remain a pressing concern. Netskope found that:

  • 41% of organizations still report violations involving regulated data,

  • 32% involve intellectual property leaks, and

  • 19% involve exposure of passwords or security keys.

More than one in four incidents (28%) involved source code exposure, often resulting from developers using unmonitored AI tools during coding activities. Such risks highlight how even well-intentioned AI adoption can lead to serious security gaps if not properly managed.
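Exposure of this kind is typically caught with pre-submission scanning. As a purely illustrative sketch (not described in the Netskope report), a guard that checks an outbound prompt for common credential patterns before it reaches an external GenAI tool might look like this; the patterns shown are a tiny, hypothetical subset of what a real DLP engine would use:

```python
import re

# Illustrative patterns only; production DLP rule sets are far broader.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)\b(?:password|passwd|api[_-]?key)\s*[:=]\s*\S+"),  # key=value secrets
]

def contains_secret(prompt: str) -> bool:
    """Return True if the prompt matches any known credential pattern."""
    return any(p.search(prompt) for p in SECRET_PATTERNS)

def guard_prompt(prompt: str) -> str:
    """Block a prompt containing a likely secret before it leaves the device."""
    if contains_secret(prompt):
        raise ValueError("Prompt blocked: possible credential detected")
    return prompt
```

A check like this is a last line of defense, not a substitute for governance: pattern matching misses secrets it has no rule for, which is why the report emphasizes approved platforms and visibility tooling alongside technical controls.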

Gianpietro Cutolo, Cloud Threat Researcher at Netskope Threat Labs, noted, “The strides made in improving AI governance are bridging the gap between managed and shadow AI. But as AI becomes deeply embedded in manufacturing, maintaining the balance between innovation and security will remain the industry’s greatest challenge.”

Rise of Private AI and Secure Cloud Platforms

The report also highlights a shift toward enterprise-grade private AI systems, with 29% of manufacturers now using secure AI platforms such as Microsoft’s Azure OpenAI, Amazon Bedrock, and Google Vertex AI. These platforms provide governance, encryption, and usage control features that consumer GenAI tools lack.

However, risks continue to arise from personal cloud app usage on work devices—such as Google Drive, LinkedIn, and OneDrive—where corporate data often mixes with personal accounts. Netskope reported that 18% of manufacturers detected malware downloads from Microsoft OneDrive, 14% from GitHub, and 11% from Google Drive each month, further complicating data security oversight.

Balancing Innovation and Oversight

As the manufacturing industry accelerates AI adoption, experts stress the importance of embedding AI security, data loss prevention (DLP), and user training into operational policies. The report recommends that companies establish clear usage boundaries, employee training programs, and visibility tools to manage both approved and unapproved AI interactions.
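One simple form of the "clear usage boundaries" and "visibility tools" the report recommends is an allowlist of organization-approved AI platforms checked against outbound request destinations. Below is a minimal sketch under assumed, hypothetical domain names (nothing here comes from the report itself):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of organization-approved AI platforms.
APPROVED_AI_DOMAINS = {
    "azure-openai.example.com",
    "bedrock.example.com",
    "vertex-ai.example.com",
}

def classify_request(url: str) -> str:
    """Label an outbound AI request 'approved' or 'shadow' by its host."""
    host = urlparse(url).hostname or ""
    return "approved" if host in APPROVED_AI_DOMAINS else "shadow"

def usage_report(urls: list[str]) -> dict[str, int]:
    """Count approved vs. shadow AI requests for a visibility report."""
    counts = {"approved": 0, "shadow": 0}
    for url in urls:
        counts[classify_request(url)] += 1
    return counts
```

Counting rather than outright blocking shadow traffic mirrors the transition the report describes: organizations first need visibility into unapproved usage before they can steer employees toward sanctioned platforms.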

Cutolo added, “As AI becomes integral to production, manufacturers must evolve from reactive security to proactive governance. With the right safeguards, innovation and security can advance hand in hand.”
