OpenAI Shuts Down ChatGPT Accounts Linked to Cyber Threats
OpenAI has disclosed that it terminated several ChatGPT accounts connected to Russian-speaking cybercriminals and two Chinese state-affiliated hacking groups. These accounts were reportedly used to assist in creating malware, automating social media content, and researching U.S. satellite communication systems.
According to OpenAI's threat intelligence report, the Russian-speaking group used ChatGPT to help design and refine Windows malware, debug code in different programming languages, and establish command-and-control infrastructure. The group showed familiarity with Windows internals and took steps to maintain operational security.
OpenAI named the malware campaign "ScopeCreep" and stated that the activity was not widespread. The threat actor created a series of temporary email accounts, using each one for a single conversation that incrementally advanced the malware's development before discarding it and opening a new one. This approach reflected a strong focus on avoiding detection.
The attackers shared their AI-assisted malware through a public code repository. It disguised itself as a legitimate video game tool called Crosshair X. When users downloaded the tampered software, it installed a malware loader that fetched additional payloads from an external server.
The malware was designed to follow a multi-step process: escalate to administrative privileges, remain hidden, alert the attacker, and steal sensitive data. It attempted to evade detection by adding itself to Windows Defender's exclusion list via PowerShell, suppressing pop-up windows, and inserting time delays. Other techniques included Base64 encoding to obfuscate strings, DLL side-loading, and SOCKS5 proxies to hide the attacker's IP address.
The malware’s primary goal was to steal credentials, cookies, and tokens from web browsers, and send the data to a Telegram channel controlled by the attackers. OpenAI reported that its models were used to debug Go code, integrate Telegram APIs, and modify Windows Defender settings through PowerShell.
In addition to this group, OpenAI also deactivated accounts linked to Chinese hacking groups APT5 and APT15. These groups used ChatGPT for open-source research, technical troubleshooting, and support activities like software development and Linux system administration. They used the tool to build offline software packages, configure firewalls, set up name servers, and manage Android and web applications.
Some of the more concerning behaviors included developing a brute-force script for breaking into FTP servers, researching ways to automate penetration testing, and managing fleets of Android devices that posted or liked content across major platforms such as Facebook, Instagram, TikTok, and X.
Other examples of malicious use included:
- A North Korea-linked network that used ChatGPT to generate job application materials for remote IT and software roles, likely as part of fraudulent employment campaigns.
- A China-linked operation known as Sneer Review, which created social media posts in English, Chinese, and Urdu about geopolitics for Facebook, Reddit, TikTok, and X.
- A campaign from the Philippines named Operation High Five that generated short political comments in English and Taglish for Facebook and TikTok.
- Operation VAGue Focus, a China-origin effort that generated social media content posing as journalists and analysts. It also translated messages from Chinese to English, likely for use in phishing or social engineering.
- Operation Helgoland Bite, likely from Russia, created Russian-language content criticizing the U.S. and NATO, and discussing the 2025 German election on Telegram and X.
- Operation Uncle Spam, a Chinese campaign that produced divisive U.S. political content for social platforms like Bluesky and X.
- Storm-2035, an Iranian influence operation, used ChatGPT to support causes such as Latino rights, Scottish independence, and Palestinian rights, and praised Iran’s military. These messages were shared through fake accounts on X.
- Operation Wrong Number, likely from Cambodia and associated with China-based task scam groups, used ChatGPT to create multilingual job scam messages. These messages offered high pay for simple online tasks like liking posts.
OpenAI researchers Ben Nimmo, Albert Zhang, Sophia Farquhar, Max Murphy, and Kimo Bumanglag explained that these scams often charged new recruits large fees and used those payments to pay earlier victims, creating a pyramid-like structure.