LLM-generated passwords are indefensible. Your codebase may already prove it
Summary
Research from Irregular and Kaspersky shows that all frontier LLMs (large language models, AI systems trained on massive amounts of text) generate passwords that are structurally predictable and far weaker than they appear. When Claude Opus 4.6 was prompted for a password 50 times, it produced only 30 distinct passwords, and one of them recurred 36% of the time, evidence that the model retrieves patterns from its training data rather than sampling at random. The weakness is architectural: an LLM assigns high probability to the most plausible next character based on learned patterns (an uppercase letter at the start, for example), whereas a cryptographic generator (a secure random number generator) must give every character equal probability. Attackers who understand those learned patterns can exploit LLM-generated passwords accordingly.
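The property the research says LLMs lack can be sketched in a few lines of Python (not from the article): each character is drawn uniformly via a CSPRNG, here the standard library's `secrets` module, so no position is more predictable than any other. The alphabet and length below are illustrative assumptions.

```python
import secrets
import string

# Illustrative alphabet: letters, digits, and punctuation (~94 symbols).
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Return a password whose characters are sampled uniformly by a CSPRNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Mirroring the article's experiment with a real CSPRNG: 50 draws of a
# 16-character password from a ~94-symbol alphabet (94**16 possibilities)
# should, with overwhelming probability, all be distinct -- unlike the
# 30-distinct-out-of-50 result reported for the LLM.
samples = [generate_password() for _ in range(50)]
print(len(set(samples)))
```

Running this prints the number of distinct passwords among the 50 draws, which is virtually always 50, since collision probability at this keyspace size is negligible.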
Classification
Affected Vendors
Related Issues
Original source: https://www.csoonline.com/article/4155166/llm-generated-passwords-are-indefensible-your-codebase-may-already-prove-it.html
First tracked: April 8, 2026 at 08:01 AM
Classified by LLM (prompt v3) · confidence: 92%