OpenAI has removed accounts from China and North Korea that were allegedly using its technology for malicious activities such as surveillance and opinion-influence operations, the company announced on Friday. According to OpenAI, these activities highlight how authoritarian regimes could exploit AI against the U.S. and against their own populations. The company used AI tools to detect and disrupt the operations but did not specify the number of accounts banned or the timeframe of the action.
In one case, users employed ChatGPT to generate Spanish-language news articles that portrayed the United States negatively. These articles were later published by mainstream Latin American news outlets under the byline of a Chinese company. In another instance, actors with potential links to North Korea used AI to create fraudulent resumes and online profiles for fictitious job applicants, aiming to secure employment at Western companies. Additionally, a separate network of accounts, likely connected to a financial fraud scheme based in Cambodia, used OpenAI's technology to translate and generate comments on social media and communication platforms such as X and Facebook.
The U.S. government has raised concerns about China’s alleged use of AI to suppress dissent, spread misinformation, and threaten the security of the U.S. and its allies. OpenAI’s ChatGPT remains the most widely used AI chatbot, with over 400 million weekly active users. The company is currently in talks to raise up to $40 billion at a $300 billion valuation, which could set a record for a single funding round by a private company.