GHSA-qh6h-p6c9-ff54: LangChain Core has Path Traversal vulnerabilities in legacy `load_prompt` functions
Summary
LangChain Core has a path traversal vulnerability (a security flaw where attackers can access files outside intended directories using '../' sequences or absolute paths) in legacy functions that load prompt configurations from files. When an application accepts user-influenced prompt configs and passes them to `load_prompt()` or `load_prompt_from_config()`, attackers can read arbitrary files, such as credentials or configuration files, though reads are limited to specific file types (.txt, .json, .yaml).
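To illustrate the underlying flaw (a minimal standalone sketch, not LangChain's actual code), naively joining a base directory with an attacker-supplied path lets both '../' sequences and absolute paths resolve outside the intended directory:

```python
from pathlib import Path

def naive_resolve(base_dir: str, user_path: str) -> Path:
    # Naively join the user-supplied path onto the base directory --
    # the pattern that enables traversal when user_path is untrusted.
    return (Path(base_dir) / user_path).resolve()

# A '../' sequence escapes the intended prompts directory entirely.
print(naive_resolve("/app/prompts", "../../etc/passwd"))  # → /etc/passwd

# An absolute path replaces the base directory outright when joined.
print(naive_resolve("/app/prompts", "/etc/passwd"))       # → /etc/passwd
```

Both calls resolve to a file well outside `/app/prompts`, which is exactly the behavior the advisory describes for user-influenced prompt configs.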
Solution / Mitigation
Update `langchain-core` to version 1.2.22 or later. The fix adds path validation that rejects absolute paths and '..' traversal sequences by default. Users can pass `allow_dangerous_paths=True` to `load_prompt()` and `load_prompt_from_config()` if they need to load from trusted inputs. Additionally, migrate away from these deprecated legacy functions to the newer `dumpd`/`dumps`/`load`/`loads` serialization APIs from `langchain_core.load`, which don't read from the filesystem and use an allowlist-based security model instead.
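The validation the fix describes can be approximated in application code as a defense-in-depth check (a hypothetical `safe_prompt_path` helper sketched here, not the actual langchain-core implementation):

```python
from pathlib import Path

def safe_prompt_path(path: str, allow_dangerous_paths: bool = False) -> Path:
    """Reject absolute paths and '..' traversal unless explicitly allowed,
    mirroring the default-deny behavior described in the advisory."""
    p = Path(path)
    if not allow_dangerous_paths:
        if p.is_absolute() or ".." in p.parts:
            raise ValueError(
                f"Path {path!r} is absolute or contains '..'; "
                "pass allow_dangerous_paths=True only for trusted input"
            )
    return p

safe_prompt_path("prompts/hello.yaml")          # accepted: relative, no '..'
# safe_prompt_path("../secrets.txt")            # raises ValueError
# safe_prompt_path("/etc/passwd")               # raises ValueError
```

The explicit opt-in flag keeps the safe behavior as the default, so trusted-path use cases must state their intent at the call site.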
Vulnerability Details
EPSS: 0.0%
March 27, 2026
Related Issues
CVE-2024-27444: langchain_experimental (aka LangChain Experimental) in LangChain before 0.1.8 allows an attacker to bypass the CVE-2023-
CVE-2025-45150: Insecure permissions in LangChain-ChatGLM-Webui commit ef829 allows attackers to arbitrarily view and download sensitive
Original source: https://github.com/advisories/GHSA-qh6h-p6c9-ff54
First tracked: March 28, 2026 at 02:00 AM
Classified by LLM (prompt v3) · confidence: 95%