Pen tests show AI security flaws far more severe than legacy software bugs
Summary
Penetration tests (security checks where experts try to break into systems) show that AI and large language model (LLM, advanced AI systems trained on huge amounts of text) systems have significantly more high-risk security flaws than traditional software, with 32% of AI findings rated high-risk compared to 13% for legacy systems. LLM vulnerabilities are also fixed less often, with only 38% of high-risk issues resolved, and experts attribute this to AI systems being deployed quickly without mature security controls, newer attack surfaces like prompt injection (tricking an AI by hiding instructions in its input), and unclear responsibility for fixing problems across teams.
Classification
Affected Vendors
Related Issues
CVE-2026-30308: In its design for automatic terminal command execution, HAI Build Code Generator offers two options: Execute safe comman
CVE-2026-40087: LangChain is a framework for building agents and LLM-powered applications. Prior to 0.3.84 and 1.2.28, LangChain's f-str
Original source: https://www.csoonline.com/article/4166185/pen-tests-show-ai-security-flaws-far-more-severe-than-legacy-software-bugs.html
First tracked: May 8, 2026 at 08:00 AM
Classified by LLM (prompt v3) · confidence: 85%