{"data":{"id":"b5a1cefd-019d-41e1-ab58-01d7b1960101","title":"Companies are using ‘Summarize with AI’ to manipulate enterprise chatbots","summary":"Companies are embedding hidden instructions in 'Summarize with AI' buttons to manipulate enterprise chatbots, a technique called AI recommendation poisoning (tricking an AI by hiding instructions in its input that make it remember false preferences). Microsoft research found 50 examples of the technique deployed by 31 companies: users unknowingly click a summarize button that covertly tells their AI assistant to favor that company's products in future responses. This is particularly dangerous because the AI cannot distinguish genuine user preferences from injected ones, potentially leading to biased recommendations on critical topics such as health, finance, and security.","solution":"Microsoft states that 'the technique is relatively easy to spot and block.' For individual users, this means reviewing the saved information a chatbot has accumulated and removing entries they did not add (though the source notes that how this memory is accessed varies by AI). For enterprise admins, the source text is incomplete but indicates that admin-level protections are available. Microsoft also notes that its Microsoft 365 Copilot and Azure AI services contain integrated protections against this technique.","labels":["security","safety"],"sourceUrl":"https://www.csoonline.com/article/4131078/companies-are-using-summarize-with-ai-to-manipulate-enterprise-chatbots-3.html","publishedAt":"2026-02-12T00:18:49.000Z","cveId":null,"cweIds":null,"cvssScore":null,"cvssSeverity":null,"severity":"medium","attackType":["prompt_injection","rag_poisoning"],"issueType":"news","affectedPackages":null,"affectedVendors":["Microsoft"],"affectedVendorsRaw":["Microsoft","Microsoft 365 Copilot","Azure AI"],"classifierModel":"claude-haiku-4-5-20251001","classifierPromptVersion":"v3","cvssVector":null,"attackVector":null,"attackComplexity":null,"privilegesRequired":null,"userInteraction":null,"exploitMaturity":null,"epssScore":null,"patchAvailable":null,"disclosureDate":null,"capecIds":null,"crossRefCount":0,"attackSophistication":"moderate","impactType":["integrity","safety"],"aiComponentTargeted":"agent","llmSpecific":false,"classifierConfidence":0.85,"researchCategory":null,"atlasIds":null}}