Large Language Models in Human Subject Research, and the Presence of Idiosyncratic Human Behaviors
Tags: info, research, Peer-Reviewed, LLM-Specific
Tags: research, safety
Source: IEEE Xplore (Security & AI Journals) · December 22, 2025
Summary
Large language models (LLMs, AI systems trained on huge amounts of text to generate human-like responses) can now mimic not only general human language but also unusual, individual-specific human behaviors. This ability could lead to LLMs being used more widely in research studies, potentially reducing the role of actual human subjects. That prospect raises concerns about AI alignment (ensuring AI systems behave in ways humans intend and approve of) and about the technology's broader effects on society.
Classification
Attack Sophistication: Moderate
Impact (CIA+S): safety
AI Component Targeted: Model
Original source: http://ieeexplore.ieee.org/document/11311370
First tracked: February 12, 2026 at 02:22 PM
Classified by LLM (prompt v3) · confidence: 85%