Human Trust of AI Agents
Tags: info · news · LLM-Specific · research · safety
Source: Schneier on Security · April 16, 2026
Summary
Researchers studied how humans behave in strategic games, such as a contest in which each player tries to guess 2/3 of the average of all guesses, when playing against AI language models (LLMs) versus other humans. People choose markedly lower numbers when playing against LLMs, and skilled strategic thinkers shift the most, because they expect LLMs to reason carefully and cooperate fairly rather than simply try to win.
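The game described above is the classic "p-beauty contest": the winner is whoever guesses closest to 2/3 of the group average. Choosing a lower number corresponds to assuming your opponents reason more deeply. The study itself is not excerpted here, but a minimal sketch of the standard level-k model (an assumption, not the researchers' code) shows why deeper reasoning pushes guesses toward zero:

```python
def level_k_guess(k: int, anchor: float = 50.0, p: float = 2 / 3) -> float:
    """Guess of a level-k reasoner in the p-beauty contest.

    A level-0 player guesses the anchor (midpoint of the 0-100 range);
    each higher level best-responds to the level below by multiplying
    the expected average by p.
    """
    return anchor * p ** k

# Iterated reasoning drives guesses toward the Nash equilibrium of 0.
for k in range(5):
    print(f"level {k}: {level_k_guess(k):.1f}")
```

A player who expects an LLM opponent to reason many levels deep should therefore submit a much lower number than one who expects a typical human opponent, which is consistent with the behavior the researchers report.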
Classification
Attack Sophistication: Moderate
Impact (CIA+S): Safety
AI Component Targeted: Model
Affected Vendors
Monthly digest — independent AI security research
Original source: https://www.schneier.com/blog/archives/2026/04/human-trust-of-ai-agents.html
First tracked: April 16, 2026 at 08:00 AM
Classified by LLM (prompt v3) · confidence: 85%