Companies are using ‘Summarize with AI’ to manipulate enterprise chatbots
Summary
Companies are using hidden instructions embedded in 'Summarize with AI' buttons to manipulate enterprise chatbots through a technique called AI recommendation poisoning (tricking an AI by hiding instructions in its input that make it remember false preferences). Microsoft research found 50 examples of this technique deployed by 31 companies, where users unknowingly click a summarize button that secretly tells their AI to favor that company's products in future responses. This is particularly dangerous because the AI cannot distinguish genuine user preferences from injected ones, potentially leading to biased recommendations on critical topics like health, finance, and security.
Solution / Mitigation
Microsoft states that 'the technique is relatively easy to spot and block.' For individual users, this involves studying the saved information a chatbot has accumulated (though the source notes that how this is accessed varies by AI). For enterprise admins, the source text is incomplete but indicates there are admin-level protections available. Microsoft also notes that its Microsoft 365 Copilot and Azure AI services contain integrated protections against this technique.
Classification
Affected Vendors
Related Issues
Original source: https://www.csoonline.com/article/4131078/companies-are-using-summarize-with-ai-to-manipulate-enterprise-chatbots-3.html
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 85%