CVE-2023-29374: In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via
Summary
CVE-2023-29374 is a vulnerability in LangChain versions up to 0.0.131 where the LLMMathChain component is vulnerable to prompt injection attacks (tricking an AI by hiding instructions in its input), allowing attackers to execute arbitrary code through Python's exec method. This is a code execution vulnerability that could allow an attacker to run malicious commands on a system running the affected software.
Solution / Mitigation
A patch is available at https://github.com/hwchase17/langchain/pull/1119
Vulnerability Details
9.8(critical)
EPSS: 4.5%
Classification
Affected Vendors
Related Issues
CVE-2024-27444: langchain_experimental (aka LangChain Experimental) in LangChain before 0.1.8 allows an attacker to bypass the CVE-2023-
CVE-2025-45150: Insecure permissions in LangChain-ChatGLM-Webui commit ef829 allows attackers to arbitrarily view and download sensitive
Original source: https://nvd.nist.gov/vuln/detail/CVE-2023-29374
First tracked: February 15, 2026 at 08:34 PM
Classified by LLM (prompt v3) · confidence: 95%