Researchers Detail 'Reprompt' Exploit in Microsoft Copilot, Patched in January Update
Security firm Varonis has disclosed a technique dubbed 'Reprompt' that showed how a single click on a specially crafted Microsoft Copilot link could let attackers hijack a user’s active Copilot session and quietly exfiltrate data tied to their Microsoft account. The attack, now patched in Microsoft’s January 2026 Patch Tuesday release, hid instructions in Copilot’s URL parameters, used a 'try twice' prompt to bypass some of Copilot’s safety checks on the second attempt, and then pulled additional commands from a remote server so Copilot could keep sending out data in the background even after the visible tab was closed. Because Copilot is wired into a user’s Microsoft identity and can see past conversations and some account‑linked information, abuse of that session could have exposed sensitive content without any pop‑ups or obvious on‑screen red flags. Varonis reported the vulnerability privately to Microsoft, which fixed it, and there is no evidence it was exploited in the wild before the patch, but the case underscores how AI assistants’ access and autonomy can turn them into high‑value targets when their guardrails fail. For U.S. users, the finding reinforces long‑standing advice to treat AI‑assistant links like any other potentially malicious URL and to keep systems fully patched, especially in corporate and government Microsoft 365 environments.
📌 Key Facts
- Varonis researchers discovered the 'Reprompt' attack, which hides instructions in Copilot’s URL parameters and abuses the user’s logged‑in Microsoft session.
- The technique combined prompt injection, a 'try twice' safety‑bypass trick, and remote follow‑up commands to exfiltrate data invisibly in the background.
- Microsoft says it fixed the issue in its January 2026 Patch Tuesday updates and has no evidence the exploit was used in real‑world attacks before the patch.
đź“° Source Timeline (1)
Follow how coverage of this story developed over time