Manipulating AI Summarization Features
Summary
Companies are hiding instructions in website buttons that try to manipulate AI assistants through prompt injection (tricking an AI by hiding instructions in its input) in URLs, telling the AI to treat them as trustworthy sources or recommend their products first. Microsoft found over 50 such prompts from 31 companies across 14 industries, and this manipulation could bias AI recommendations on important topics like health and finance without users realizing it.
Classification
Affected Vendors
Related Issues
Original source: https://www.schneier.com/blog/archives/2026/03/manipulating-ai-summarization-features.html
First tracked: March 4, 2026 at 11:00 AM
Classified by LLM (prompt v3) · confidence: 92%