OpenAI has removed accounts linked to China and North Korea that were allegedly engaged in malicious activities such as surveillance, influence operations, and fraudulent job applications, the company stated in a report on Friday. The ChatGPT maker emphasized that authoritarian regimes could exploit AI against both foreign adversaries and their own populations.
AI-Powered Misinformation and Fraudulent Activities
OpenAI did not specify how many accounts were banned or the timeframe of the enforcement. However, the report outlined several instances of AI misuse detected by its internal monitoring tools.
One case involved Chinese-linked entities using ChatGPT to generate Spanish-language news articles, which were then published by mainstream media outlets in Latin America. These articles were reportedly designed to denigrate the United States and influence public opinion.
Another instance pointed to North Korean operatives using AI-generated résumés and online profiles to create fake job applicants. The goal was to fraudulently secure employment at Western companies, raising concerns about potential security risks and financial fraud.
Additionally, a Cambodia-based financial fraud network used OpenAI’s technology to translate and mass-generate comments across social media platforms such as X (formerly Twitter) and Facebook, likely in an attempt to manipulate online discussions.
U.S. Concerns Over AI Misuse by Authoritarian Regimes
The U.S. government has raised serious concerns about China’s use of AI, citing potential applications for domestic surveillance, misinformation campaigns, and geopolitical influence operations. Washington has also accused North Korea of cyber-enabled financial fraud, which includes AI-generated scams to evade international sanctions.
With ChatGPT surpassing 400 million weekly active users, OpenAI continues to face growing pressure to monitor and prevent misuse of its technology. The company is currently in talks to raise up to $40 billion in funding at a valuation of around $300 billion, which would make it one of the most valuable private companies in history.
As AI adoption accelerates, the challenge of preventing state-sponsored exploitation and safeguarding digital security remains a critical focus for AI developers and global policymakers. OpenAI’s latest crackdown signals a more aggressive stance on AI governance, reinforcing the need for robust safeguards against the misuse of generative AI technologies.