OpenAI has announced that it has banned multiple accounts that were abusing its ChatGPT tool for malicious activities, including the development of an AI-powered surveillance tool suspected to originate from China. The tool, reportedly powered by Meta's Llama AI model, was designed to monitor anti-China protests in the West and relay real-time reports to Chinese authorities. The actors behind it used ChatGPT to generate detailed descriptions, analyze documents, and process images, some of which included Uyghur rights protest announcements. It remains unclear whether these images were authentic.
In addition to disrupting this surveillance operation, OpenAI disabled several other clusters of accounts engaged in various cyber threats and disinformation campaigns:
- Deceptive Employment Scheme: North Korean-linked accounts used ChatGPT to fabricate job applications, create personal documentation, and craft convincing responses to evade detection, targeting platforms like LinkedIn.
- Sponsored Discontent: A network believed to be of Chinese origin produced English-language social media content and Spanish-language articles critical of the U.S., later published on Latin American news websites. Some of this activity overlaps with Spamouflage, a known Chinese influence operation.
- Romance-Baiting Scam: A Cambodian-origin network used AI to translate and generate social media comments for fraud schemes involving romance and investment scams, mainly on Facebook, X, and Instagram.
- Iranian Influence Nexus: Accounts generated pro-Palestinian, pro-Hamas, and anti-Israel content, which was distributed via Iranian influence networks like the International Union of Virtual Media (IUVM) and Storm-2035. Some accounts were linked to both operations, revealing an unreported connection between them.
- Kimsuky and BlueNoroff: North Korean threat actors used AI to research cyber intrusion tools and cryptocurrency topics, as well as to debug code for Remote Desktop Protocol (RDP) brute-force attacks.
- Youth Initiative Covert Influence Operation: This network produced English-language articles and social media comments related to Ghana's presidential election, aiming to influence public perception.
- Task Scam: Cambodian-linked actors used ChatGPT to translate between Urdu and English in support of a scam that tricked users into performing fake online tasks, such as liking videos, in exchange for commissions that never materialized.
The crackdown highlights the growing misuse of AI tools in cybercrime, disinformation, and surveillance. Google’s Threat Intelligence Group (GTIG) recently reported that at least 57 threat actors tied to China, Iran, North Korea, and Russia have abused AI models to enhance their attack strategies. These include using AI to generate phishing content, improve malware, and translate disinformation materials for global audiences.
OpenAI emphasized that collaboration between AI companies, social media platforms, cybersecurity firms, and researchers is critical in countering such threats. The company also codenamed this campaign "Peer Review" due to its focus on promoting and refining surveillance tools.
Among the key findings, OpenAI flagged a case where ChatGPT was used to modify and debug source code suspected of powering the "Qianyue Overseas Public Opinion AI Assistant", a tool for monitoring social media posts across X, Facebook, YouTube, Instagram, Telegram, and Reddit. The same cluster also used AI to analyze documents related to U.S. think tanks and to political figures in Australia, Cambodia, and the U.S.
The incident underscores the escalating use of AI in cyber threats and foreign influence operations, reinforcing the need for robust AI security measures and responsible development practices.