CVE-2023-32786: In Langchain through 0.0.155, prompt injection allows an attacker to force the service to retrieve data from an arbitrary URL.
Summary
CVE-2023-32786 is a prompt injection vulnerability (tricking an AI by hiding instructions in its input) in Langchain version 0.0.155 and earlier that allows attackers to force the service to retrieve data from any URL they choose. This could lead to SSRF (server-side request forgery, where an attacker makes a server request data from unintended locations) and potentially inject harmful content into tasks that use the retrieved data.
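The class of bug described above can be reduced to a minimal sketch: a chain passes a model-chosen URL straight to an HTTP client, so instructions hidden in retrieved content can steer the fetch toward internal services. The example below is illustrative only, not LangChain's actual API; `fetch_url_for_chain`, `is_allowed`, and `ALLOWED_HOSTS` are hypothetical names, and the allowlist gate is one commonly suggested mitigation, assumed here rather than taken from the advisory.

```python
from urllib.parse import urlparse

# Assumed mitigation: an explicit allowlist of hosts the chain may fetch.
ALLOWED_HOSTS = {"docs.example.com"}

def is_allowed(url: str) -> bool:
    """Reject any URL whose scheme or host is outside the allowlist."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

def fetch_url_for_chain(url: str) -> str:
    """Stand-in for the tool a chain would call to retrieve a page.

    The vulnerable pattern hands a model-chosen URL directly to an HTTP
    client; here the URL is gated first (and nothing is actually fetched).
    """
    if not is_allowed(url):
        raise ValueError(f"blocked URL: {url}")
    return f"(would fetch {url})"

# A prompt-injected document can steer the model toward an internal
# address, e.g. a cloud metadata endpoint (a classic SSRF target):
injected_url = "http://169.254.169.254/latest/meta-data/"
safe_url = "https://docs.example.com/page"

print(fetch_url_for_chain(safe_url))
try:
    fetch_url_for_chain(injected_url)
except ValueError as err:
    print(err)
```

With the gate in place, the injected metadata URL raises instead of being fetched, while the allowlisted documentation URL passes through.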
Vulnerability Details
CVSS: 7.5 (High)
EPSS: 0.1%
Related Issues
CVE-2024-27444: langchain_experimental (aka LangChain Experimental) in LangChain before 0.1.8 allows an attacker to bypass the CVE-2023-
CVE-2025-45150: Insecure permissions in LangChain-ChatGLM-Webui commit ef829 allows attackers to arbitrarily view and download sensitive
Original source: https://nvd.nist.gov/vuln/detail/CVE-2023-32786
First tracked: February 15, 2026 at 08:35 PM