Cato Networks Uncovers New AI Jailbreak Technique Enabling Malware Creation
Cybersecurity firm Cato Networks has identified a novel LLM jailbreak technique that manipulates AI models into bypassing their restrictions through immersive narrative engineering. Dubbed Immersive World, the method constructs a detailed virtual setting in which hacking is normalized, allowing the AI to be coaxed into assisting with the generation of malicious software.
The technique successfully bypassed safeguards in DeepSeek, Microsoft Copilot, and OpenAI’s ChatGPT, resulting in a functional infostealer capable of extracting saved passwords from Google Chrome version 133.
In a controlled test, Cato built a fictional environment called Velora in which malware development was framed as standard practice. Within this world, three key roles were established: a system administrator acting as the adversary, an AI-powered malware developer, and a security researcher providing technical guidance. By maintaining character consistency and steering the AI through narrative-driven challenges, a researcher with no prior malware-coding experience was able to get the models to produce a fully functional infostealer.
Cato emphasized that at no point was the AI explicitly given instructions on how to decrypt or extract passwords. Instead, it was nudged toward the objective through continuous feedback and strategic prompting. The experiment highlights how AI can enable even unskilled individuals to craft sophisticated cyber threats.
Following the discovery, Cato contacted DeepSeek, Microsoft, OpenAI, and Google to report its findings. DeepSeek did not respond, while the other companies acknowledged receipt of the report; Google, however, declined to review the generated malware code.
Cato warns that cybercrime is no longer limited to advanced threat actors. The accessibility of AI-driven tools significantly lowers the barrier to entry for cybercriminals, increasing risks for organizations. The firm urges CIOs, CISOs, and IT leaders to adopt stronger AI security measures to mitigate emerging threats.