Security vulnerabilities, privacy incidents, safety concerns, and policy updates affecting LLMs and AI agents.
CVE-2025-6855 is a critical vulnerability in Langchain-Chatchat (a tool built on LLMs) up to version 0.3.1 that allows path traversal (accessing files outside the intended directory) through manipulation of a parameter called 'flag' in the /v1/file endpoint. The vulnerability has been publicly disclosed and could potentially be exploited.
CVE-2025-6854 is a path traversal vulnerability (a flaw that lets attackers access files outside intended directories) in Langchain-Chatchat software versions up to 0.3.1, specifically in a file upload endpoint. The vulnerability can be exploited remotely by attackers with login credentials and has already been publicly disclosed.
CVE-2025-6853 is a critical vulnerability in Langchain-Chatchat version 0.3.1 and earlier that allows attackers to exploit a path traversal (a type of attack where an attacker manipulates file paths to access files outside their intended directory) flaw in the upload_temp_docs backend function by manipulating the flag argument. The vulnerability can be exploited remotely by users with basic access permissions, and the exploit details have been publicly disclosed.
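The flaw pattern behind all three Langchain-Chatchat CVEs is the same: a user-controlled value (the 'flag' parameter) is joined onto a base directory without checking where the result lands. A minimal Python sketch of the containment check that prevents it, assuming a hypothetical `UPLOAD_ROOT` storage directory (this illustrates the bug class, not the project's actual patch):

```python
import os

UPLOAD_ROOT = "/srv/chatchat/uploads"  # hypothetical storage directory

def resolve_upload(flag: str) -> str:
    """Join a user-supplied value onto the base directory, rejecting
    anything that escapes it after normalization."""
    candidate = os.path.normpath(os.path.join(UPLOAD_ROOT, flag))
    # After normalization, the path must still live under UPLOAD_ROOT.
    if os.path.commonpath([UPLOAD_ROOT, candidate]) != UPLOAD_ROOT:
        raise ValueError(f"path traversal attempt: {flag!r}")
    return candidate

print(resolve_upload("report.txt"))        # stays inside UPLOAD_ROOT
try:
    resolve_upload("../../etc/passwd")     # normalizes to /srv/etc/passwd
except ValueError as exc:
    print(exc)
```

Normalizing first matters: a naive `startswith` check on the un-normalized join would pass `../../etc/passwd` straight through.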
Roo Code is an AI tool that can automatically write code, and it stores settings in a `.roo/mcp.json` file that can define commands to execute. Before version 3.20.3, an attacker who could trick the AI (through prompt injection, a technique where hidden instructions are embedded in user input) into writing malicious commands to this file could run arbitrary code if the user had enabled automatic approval of file changes. Exploitation required three conditions at once: the attacker could submit prompts to the agent, the MCP (Model Context Protocol, a system for connecting AI agents to external tools) feature was enabled, and auto-approval of writes was turned on.
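The fix that landed in 3.20.3 amounts to a second, separate approval gate for the agent's own configuration files. A hypothetical sketch of that policy (flag and function names are illustrative, not Roo Code's actual API):

```python
from pathlib import PurePosixPath

AUTO_APPROVE_WRITES = True          # user opted in to auto-approved edits
AUTO_APPROVE_CONFIG_WRITES = False  # separate opt-in for config files

def write_is_auto_approved(relative_path: str) -> bool:
    """Writes under .roo/ need their own opt-in, because files like
    .roo/mcp.json can define commands the agent will later execute."""
    if ".roo" in PurePosixPath(relative_path).parts:
        return AUTO_APPROVE_CONFIG_WRITES
    return AUTO_APPROVE_WRITES

assert write_is_auto_approved("src/app.py") is True
assert write_is_auto_approved(".roo/mcp.json") is False
```

The design point: an ordinary source edit and an edit that changes what the agent is allowed to run are different trust decisions, so they get different defaults.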
Roo Code, an AI agent that writes code automatically, had a vulnerability (CVE-2025-53097) in versions before 3.20.3 where its file search tool ignored settings that should have blocked it from reading files outside the VS Code workspace (the folder a user is working in). An attacker could use prompt injection (tricking the AI by hiding instructions in its input) to make the agent read sensitive files and send that information over the network without user permission, though this attack required the attacker to already control what prompts the agent receives.
LLaMA-Factory, a library for training large language models, has a remote code execution vulnerability (RCE, where attackers can run malicious code on a victim's computer) in versions up to 0.9.3. Attackers can exploit this by uploading a malicious checkpoint file through the web interface, and the victim won't know they've been compromised because the vulnerable code loads files without proper safety checks.
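Model checkpoints are frequently plain pickle files, and unpickling executes attacker-chosen callables. The sketch below shows the mechanism with a harmless payload (`str.upper` stands in for `os.system`); it illustrates unsafe deserialization in general, not LLaMA-Factory's exact code path:

```python
import pickle

class MaliciousCheckpoint:
    # __reduce__ tells pickle which callable to invoke at load time;
    # a real exploit would name os.system or similar here.
    def __reduce__(self):
        return (str.upper, ("arbitrary call during load",))

blob = pickle.dumps(MaliciousCheckpoint())  # the "checkpoint" an attacker uploads
result = pickle.loads(blob)                 # merely loading runs the callable
print(result)  # ARBITRARY CALL DURING LOAD
```

Mitigations in this space include loading weights with `torch.load(..., weights_only=True)` or preferring formats such as safetensors that cannot encode code.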
iOS Simulator MCP Server (ios-simulator-mcp) versions before 1.3.3 have a command injection vulnerability (a security flaw where attackers insert shell commands into input that gets executed). The vulnerability exists because the `ui_tap` tool uses Node.js's `exec` function unsafely, allowing an attacker to trick an LLM through prompt injection (feeding hidden instructions to an AI to make it behave differently) to pass shell metacharacters like `;` or `&&` in parameters, which can execute unintended commands on the server's computer.
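The same bug class is easy to reproduce in any language. In this Python sketch (an analogue of the unsafe `exec` call, not the server's actual code), interpolating untrusted input into a shell string lets `;` start a second command, while passing arguments as a list keeps them literal:

```python
import subprocess

x, y = "10", "20; echo INJECTED"  # y carries a shell metacharacter

# Vulnerable: the string is handed to a shell, which parses the ';'
unsafe = subprocess.run(f"echo tap {x} {y}", shell=True,
                        capture_output=True, text=True).stdout
print(unsafe)   # a second line, INJECTED: a second command ran

# Safe: arguments go directly to the program; metacharacters stay literal
safe = subprocess.run(["echo", "tap", x, y],
                      capture_output=True, text=True).stdout
print(safe)     # one line: tap 10 20; echo INJECTED
```

In Node.js the equivalent fix is moving from `exec` (spawns a shell) to `execFile`/`spawn` with an argument array.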
Claude Code is an AI-powered coding assistant available as extensions for popular IDEs (integrated development environments, the software tools developers use to write code). Versions before 1.0.24 for VSCode and before 0.1.9 for JetBrains IDEs have a security flaw that lets attackers connect to the tool without permission when the user visits a malicious website, potentially allowing them to read files, see what code the user is working on, or, in certain situations, even execute code.
The Aiomatic WordPress plugin (versions up to 2.5.0) has a security flaw where it doesn't properly check what type of files users are uploading, allowing authenticated attackers with basic user access to upload harmful files to the server. This could potentially lead to RCE (remote code execution, where an attacker can run commands on a system they don't own), though an attacker needs to provide a Stability.AI API key value to exploit it.
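A generic mitigation for this bug class (not the plugin's actual patch) is to validate uploads against a server-side extension allowlist instead of trusting the client-declared type. A minimal sketch:

```python
import os

ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".txt"}  # example policy

def upload_is_allowed(filename: str) -> bool:
    base = os.path.basename(filename)          # drop any path component
    ext = os.path.splitext(base)[1].lower()    # compare case-insensitively
    return ext in ALLOWED_EXTENSIONS

assert upload_is_allowed("avatar.png") is True
assert upload_is_allowed("backdoor.php") is False
assert upload_is_allowed("../shell.PHP") is False
```

Hardened deployments additionally verify file magic bytes and store uploads in a directory the web server will not execute.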
A Server-Side Request Forgery (SSRF, a vulnerability where an AI system makes unwanted requests to internal or local servers on behalf of an attacker) vulnerability exists in the RequestsToolkit component of the langchain-community package version 0.0.27. The flaw allows attackers to scan ports, access local services, steal cloud credentials, and interact with local network servers because the toolkit doesn't block requests to internal addresses.
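The missing control is a check that a request's resolved target is not an internal address. A sketch of such a guard, offered as an illustration of the mitigation rather than langchain-community's actual fix:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def url_targets_internal_host(url: str) -> bool:
    """Resolve the URL's host and flag loopback, private, link-local
    (e.g. the 169.254.169.254 cloud metadata service) or reserved targets."""
    host = urlparse(url).hostname
    if host is None:
        return True  # unparseable -> refuse
    for info in socket.getaddrinfo(host, None):
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return True
    return False

print(url_targets_internal_host("http://127.0.0.1:8000/admin"))  # True
```

A one-time check is still subject to DNS rebinding; robust implementations pin the vetted IP for the actual connection.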
MLflow versions before 3.1.0 have a vulnerability in the gateway_proxy_handler component where it fails to properly validate the gateway_path parameter, potentially allowing SSRF (server-side request forgery, where an attacker tricks the server into making unwanted requests to internal systems). This validation gap could be exploited to access resources the attacker shouldn't be able to reach.
FastGPT, an AI Agent building platform, has a vulnerability in versions before 4.9.12 where the LastRoute parameter on the login page is not properly validated or cleaned of malicious code. This allows attackers to perform open redirect (sending users to attacker-controlled websites) or DOM-based XSS (injecting malicious JavaScript that runs in the user's browser).
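The usual fix for the open-redirect half is to accept only same-site relative paths as the post-login destination. A sketch (the parameter name mirrors the report; the logic is a generic pattern, not FastGPT's patch):

```python
from urllib.parse import urlparse

def safe_redirect_target(last_route: str, fallback: str = "/") -> str:
    parsed = urlparse(last_route)
    # Reject absolute URLs (scheme or host set), scheme-relative
    # "//evil.example" forms, and anything not starting with "/".
    if parsed.scheme or parsed.netloc or not last_route.startswith("/"):
        return fallback
    return last_route

assert safe_redirect_target("/app/home") == "/app/home"
assert safe_redirect_target("https://evil.example/") == "/"
assert safe_redirect_target("//evil.example/") == "/"
assert safe_redirect_target("javascript:alert(1)") == "/"
```

Note that `urlparse` treats `//evil.example` as a host, not a path, which is exactly why the `netloc` check is needed alongside the leading-slash test.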
Cursor, a code editor designed for AI-assisted programming, had a security flaw in versions before 0.51.0 where JSON files could automatically trigger web requests without user approval. An attacker could exploit this, especially after a prompt injection attack (tricking the AI with hidden instructions in its input), to make the AI agent send data to a malicious website.
CVE-2025-32711 is a command injection vulnerability (a weakness where an attacker tricks a program into running unintended commands) in Microsoft 365 Copilot that allows an unauthorized attacker to disclose information over a network. The vulnerability carries a critical CVSS severity score of 9.3 (on a 0-10 scale where 10 is most severe). Microsoft has published information about it, but the provided source does not contain specific technical details about the attack or its impact.
FastGPT is an open-source platform for building AI workflows and chatbots that uses a sandbox (an isolated container designed to safely run untrusted code). Versions before 4.9.11 had weak isolation that allowed attackers to escape the sandbox by using overly permissive syscalls (system calls, which are requests programs make to the operating system), letting them read files, modify files, and bypass security restrictions. The vulnerability is fixed in version 4.9.11 by limiting which system calls are allowed to a safer set.
The Hive Support plugin for WordPress has a security flaw in versions up to 1.2.4 where two functions lack capability checks (security checks that verify user permissions). This allows attackers with basic Subscriber-level accounts to read and change the site's OpenAI API key, inspect data, and modify how the AI chatbot behaves.
AstrBot, a chatbot and development framework powered by large language models (LLMs, AI systems trained on large amounts of text data), has a path traversal vulnerability (a flaw that lets attackers access files they shouldn't be able to reach) in versions 3.4.4 through 3.5.12 that could expose sensitive information like API keys (credentials used to access external services) and passwords. The vulnerability was fixed in version 3.5.13.
vLLM (a system for running and serving large language models) versions 0.8.0 up to (but not including) 0.9.0 have a vulnerability where the /v1/chat/completions API endpoint doesn't properly validate user input in the 'pattern' and 'type' fields when the tools feature is used, allowing a single malformed request to crash the inference worker (the part that actually runs the model) until someone restarts it.
CVE-2025-48943 is a denial-of-service vulnerability (a type of attack that crashes a system) in vLLM's 0.8 release series (versions 0.8.0 through 0.8.x) that causes the server to crash when given an invalid regex (a pattern used to match text). This happens specifically when using the structured output feature, which lets the AI format responses in a specific way.
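Both vLLM issues reduce to trusting client-supplied fields that can make a library call throw. The defensive pattern is to validate at the API boundary and turn the failure into a 4xx response instead of an uncaught exception; a minimal Python sketch (an illustration of the pattern, not vLLM's actual patch):

```python
import re

def compile_client_pattern(pattern: str):
    """Return a compiled regex, or None if the client sent garbage,
    so one bad request cannot take down the inference worker."""
    try:
        return re.compile(pattern)
    except re.error:
        return None  # caller maps this to an HTTP 400 response

assert compile_client_pattern(r"\d{4}-\d{2}") is not None
assert compile_client_pattern("[unterminated") is None
```

The same guard applies to any schema field (like 'type') whose value is fed into a parser that raises on bad input.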
Fix: Version 3.20.3 fixes the issue by adding an additional layer of opt-in configuration for auto-approving writes to Roo's configuration files, including all files within the `.roo/` folder.
Fix: Upgrade to version 3.20.3 or later. According to the source, "Version 3.20.3 fixed the issue where `search_files` did not respect the setting to limit it to the workspace."
Fix: Update to version 0.9.4, which contains a fix for the issue.
Fix: Update to version 1.3.3, which contains a patch for the issue.
Fix: Anthropic released a patch on June 13, 2025. For VSCode and similar editors, open Extensions (View -> Extensions), find Claude Code for VSCode, update or uninstall any version prior to 1.0.24, then restart the editor. For JetBrains IDEs (IntelliJ, PyCharm, Android Studio), open the Plugins list, find Claude Code [Beta], update or uninstall any version prior to 0.1.9, and restart the IDE. The extension auto-updates when launched, but users should manually verify they have the patched version.
Fix: This issue has been fixed in version 0.0.28. Users should upgrade langchain-ai/langchain to version 0.0.28 or later.
Fix: Upgrade MLflow to version 3.1.0 or later. The fix is available in the official release at https://github.com/mlflow/mlflow/releases/tag/v3.1.0.
Fix: Update FastGPT to version 4.9.12 or later, where this issue has been patched.
Fix: The vulnerability is fixed in version 0.51.0. Users should update to this version or later.
Fix: Update to version 4.9.11 or later. According to the source, this version patches the vulnerability by restricting the allowed system calls to a safer subset and adding additional descriptive error messaging.
Skyvern through version 0.1.85 has a vulnerability where attackers can inject malicious code into the Prompt field of workflow blocks through SSTI (server-side template injection, where untrusted input is processed as code by the server's template engine). Authenticated users can craft special expressions in Jinja2 templates (a template system that evaluates code on the server) that aren't properly sanitized, allowing them to execute commands on the server without direct feedback, a capability known as blind RCE (remote code execution).
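SSTI arises whenever untrusted text is evaluated as a template expression rather than substituted as data. The sketch below uses Python's `str.format` as a minimal stand-in for Jinja2 expression evaluation (an analogue of the bug class, not Skyvern's code); `ServerConfig` and its key are hypothetical:

```python
import string

class ServerConfig:
    API_KEY = "sk-example-secret"   # hypothetical server-side secret

cfg = ServerConfig()
payload = "{0.__class__.API_KEY}"   # attacker-supplied "prompt" text

# Vulnerable: evaluating the payload as a format expression walks object
# attributes and leaks server state (Jinja2 payloads go further, to RCE).
leaked = payload.format(cfg)
print(leaked)   # sk-example-secret

# Safer: pure substitution treats the payload as inert data.
tmpl = string.Template("prompt: $user_text")
print(tmpl.safe_substitute(user_text=payload))
```

For real Jinja2 rendering, the standard mitigation is `jinja2.sandbox.SandboxedEnvironment`, which blocks access to unsafe attributes.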
Fix: A fix is referenced in the GitHub commit db856cd8433a204c8b45979c70a4da1e119d949d in the Skyvern repository, but the source text does not explicitly describe what the fix does or provide a specific patched version number to upgrade to.
Fix: Upgrade to version 3.5.13 or later. As a temporary workaround, users can edit the `cmd_config.json` file to disable the dashboard feature.
Fix: Update to version 0.9.0 or later, which fixes the issue.
Fix: Upgrade to version 0.9.0, which fixes the issue. A patch is available at https://github.com/vllm-project/vllm/commit/08bf7840780980c7568c573c70a6a8db94fd45ff.
Source: NVD/CVE Database