
EchoLeak
M365 Copilot Vulnerability
The first weaponizable zero-click attack chain on an AI agent, resulting in the complete compromise of Copilot data integrity
See how one email makes M365 Copilot leak sensitive information:
Popular Questions
Have I been affected by this vulnerability?
Microsoft has confirmed that no customers were affected. Because of the default configuration of Microsoft Copilot, it is very likely that your organization was at risk due to this EchoLeak until recently. In addition, Office allows organizations to use DLP tags and/or configure Copilot to not attend to external emails.
What could have been leaked?
This chain could leak any data in the M365 Copilot LLM’s context. This includes the entire chat history, resources fetched by M365 Copilot from the Microsoft Graph, or any data preloaded into the conversation’s context, such as user and organization names.
What makes this chain unique?
This chain is the first zero-click found in a widely used generative AI product that relies at its core on an AI vulnerability, and does not rely on specific user behavior or restrict the exfiltrated data. While previous attacks demonstrated the ability to exfiltrate data when the user explicitly refers a chatbot to a malicious resource, or puts very strong restrictions on the exfiltrated data, this attack chain requires none of these assumptions.
In addition, the attack chain bypasses several state-of-the-art guardrails, thus exemplifying that protecting AI apps while keeping them functional requires new types of protections.
In addition, the attack chain bypasses several state-of-the-art guardrails, thus exemplifying that protecting AI apps while keeping them functional requires new types of protections.
Could other AI agents or RAG applications I use or build also be vulnerable?
Yes. LLM scope violations are a new threat that is unique to AI applications and is not mitigated by existing public AI guardrails. So long as your application relies at its core on an LLM and accepts untrusted inputs, you might be vulnerable to similar attacks.Feel free to reach out to labs@aim.security for more information.
How can I protect myself from this type of vulnerability?
Aim Labs has developed real-time guardrails that protect against LLM scope violation vulnerabilities based on these findings. This real-time guardrail could be used to protect all AI agents and RAG applications, and not just M365 Copilot. Feel free to reach out to labs@aim.security for more information.