Poisoning AI Training Data
Summary
A researcher demonstrated how easily AI systems can be manipulated: after he published false information on a personal website, major chatbots including Google's Gemini and ChatGPT repeated it as fact within 24 hours. The experiment shows that AI training data poisoning (deliberately seeding fake information into the data used to train AI models) is a serious problem precisely because it is so simple to execute.
Classification
Affected Vendors
Related Issues
Original source: https://www.schneier.com/blog/archives/2026/02/poisoning-ai-training-data.html
First tracked: February 25, 2026 at 11:00 AM
Classified by LLM (prompt v3) · confidence: 92%