CVE-2025-46059: langchain-ai v0.3.51 was discovered to contain an indirect prompt injection vulnerability in the GmailToolkit component.
Summary
LangChain AI version 0.3.51 contains an indirect prompt injection vulnerability (a technique where attackers hide malicious instructions in data like emails to trick AI systems) in its GmailToolkit component that could allow attackers to run arbitrary code through crafted emails. However, the supplier disputes this, arguing the actual vulnerability comes from user code that doesn't follow LangChain's security guidelines rather than from LangChain itself.
Vulnerability Details
9.8(critical)
EPSS: 0.2%
Classification
Affected Vendors
Related Issues
CVE-2024-27444: langchain_experimental (aka LangChain Experimental) in LangChain before 0.1.8 allows an attacker to bypass the CVE-2023-
CVE-2025-45150: Insecure permissions in LangChain-ChatGLM-Webui commit ef829 allows attackers to arbitrarily view and download sensitive
Original source: https://nvd.nist.gov/vuln/detail/CVE-2025-46059
First tracked: February 15, 2026 at 08:35 PM
Classified by LLM (prompt v3) · confidence: 85%