A newly discovered vulnerability called 'EchoLeak' is the first zero-click AI flaw known to
allow attackers to extract sensitive information from Microsoft 365 Copilot without requiring any user interaction.
The vulnerability was uncovered by researchers at Aim Labs in January 2025. They promptly reported it to Microsoft, which assigned it the identifier CVE-2025-32711, labeled it a critical information disclosure issue, and implemented a server-side fix in May. Users do not need to take any action.
Microsoft also confirmed that there is no evidence of this flaw being used in real-world attacks, meaning no customers were affected.
Microsoft 365 Copilot is an AI-powered assistant integrated into Office applications such as Word, Excel, Outlook, and Teams. It relies on OpenAI's GPT models and Microsoft Graph to help users generate content, analyze data, and answer questions using their organization's internal documents, emails, and chats.
Although EchoLeak has been resolved and was never exploited maliciously, it highlights a new type of vulnerability known as 'LLM Scope Violation.' This occurs when a large language model unintentionally reveals internal data without user direction or awareness.
Because the attack does not require user interaction, it can be automated for silent data extraction in enterprise settings. This demonstrates the serious risks associated with vulnerabilities in AI-enhanced systems.
How EchoLeak Works
The process begins when an attacker sends a malicious email to the target. The message appears to be a typical business communication but contains a hidden prompt injection that directs the language model to gather and leak sensitive internal data.
The deceptive prompt is carefully crafted to resemble a normal human message, which allows it to bypass Microsoft's protection mechanisms such as the cross-prompt injection attack (XPIA) classifier.
When the user later asks Copilot a related business question, the system's Retrieval-Augmented Generation (RAG) engine includes the email in the AI's prompt context because of its formatting and apparent relevance.
At this point, the injected prompt reaches the language model and causes it to extract sensitive data, embedding it into a link or image.
Aim Labs discovered that certain markdown image formats trigger the browser to request the image, which in turn sends the embedded data to the attacker's server via the image URL. While Microsoft’s Content Security Policy blocks most external domains, trusted platforms like Microsoft Teams and SharePoint can be exploited to bypass this restriction and exfiltrate the data.
Although EchoLeak has been patched, the growing complexity and deeper integration of AI tools in business processes are straining existing security measures.
This evolution is likely to produce more exploitable flaws that adversaries can use for impactful attacks. To mitigate these risks, organizations should enhance their prompt injection filters, define strict input boundaries, and apply filters to the model's output to prevent responses containing external links or structured data.
Additionally, RAG engines can be configured to ignore external sources, which helps prevent malicious prompts from being retrieved.
Found this article interesting? Follow us on X(Twitter) ,Threads and FaceBook to read more exclusive content we post.