Security vulnerabilities, privacy incidents, safety concerns, and policy updates affecting LLMs and AI agents.
A vulnerability (CVE-2026-5002) was discovered in PromtEngineer localGPT that allows injection attacks (inserting malicious code into input) through the LLM Prompt Handler component in the backend/server.py file. An attacker can exploit this vulnerability remotely, and the exploit code has been publicly released. The vendor has not responded to disclosure attempts, and because the product uses rolling releases (continuous updates without traditional version numbers), specific patch information is unavailable.
A vulnerability (CVE-2026-4993) was found in wandb OpenUI up to version 1.0 where manipulating the LITELLM_MASTER_KEY argument in the backend/openui/config.py file can expose hard-coded credentials (passwords stored directly in the code). This vulnerability requires local access to exploit and has already been publicly disclosed, though the vendor did not respond to early notification.
Giskard Agents contain a server-side template injection vulnerability in the `ChatWorkflow.chat()` method, which treats user input as Jinja2 template code (a templating language that processes special syntax) instead of plain text. If a developer passes user-provided data directly to this method, an attacker can execute arbitrary code on the server by embedding malicious Jinja2 syntax in their input.
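The difference between "template source" and "template data" can be sketched with a minimal Jinja2 snippet. This is an illustration of the vulnerability class, not Giskard's actual code, and it assumes the `jinja2` package is installed:

```python
from jinja2 import Environment
from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment

# Vulnerable pattern: user input is compiled as template SOURCE, so any
# Jinja2 syntax inside it executes with full attribute access.
payload = "{{ ''.__class__.__mro__ }}"
leaked = Environment().from_string(payload).render()  # class traversal works

# Mitigation 1: the sandbox rejects underscore-prefixed attribute access.
blocked = False
try:
    SandboxedEnvironment().from_string(payload).render()
except SecurityError:
    blocked = True

# Mitigation 2: treat user input as DATA passed to a fixed template;
# braces inside a variable's value are never interpreted as syntax.
safe = Environment().from_string("Echo: {{ msg }}").render(msg="{{ 7 * 7 }}")
```

Here `leaked` contains the string class's method-resolution order (the usual first step toward remote code execution), `blocked` is True under the sandbox, and `safe` echoes the braces literally instead of evaluating them.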
Langflow, a tool for building AI-powered agents and workflows, had a vulnerability in versions before 1.9.0 where the Agentic Assistant feature would execute Python code generated by an LLM (large language model) on the server. An attacker who could access this feature and control what the model outputs could run arbitrary code (malicious commands) on the server itself.
Nanobot, a personal AI assistant, had a vulnerability in its email module that allowed attackers to send malicious prompts via email, which the bot would automatically process as trusted commands without the owner's knowledge. This is a type of indirect prompt injection (tricking an AI by hiding instructions in its input) that could let attackers run arbitrary system tools through the bot. Version 0.1.6 fixes this flaw.
LibreChat versions 0.8.2-rc1 through 0.8.3-rc1 have a vulnerability where user-created MCP (Model Context Protocol, a system for connecting AI models to external tools) servers can steal OAuth tokens (security credentials used for authentication). An attacker can create a malicious MCP server with special headers that trick LibreChat into substituting sensitive tokens, which are then leaked when victims use tools on that server.
LibreChat (a ChatGPT alternative with extra features) versions 0.8.2-rc2 through 0.8.2-rc3 have a security flaw in the SSE streaming endpoint (a real-time data connection) at `/api/agents/chat/stream/:streamId` that fails to check if a user actually owns a chat stream. This means any logged-in user can guess or obtain another user's stream ID and read their live conversations, including messages and AI responses, without permission.
LibreChat (a ChatGPT alternative with extra features) versions 0.8.2-rc2 through 0.8.2 have a vulnerability that allows attackers to access internal systems through SSRF (server-side request forgery, where an attacker tricks a server into making requests to resources it shouldn't access). Even though a previous SSRF fix was applied, it only checked domain names and didn't verify whether those names actually point to private IP addresses (internal network addresses), leaving the system exposed.
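The gap described above — validating the hostname string without checking what it resolves to — can be illustrated with a stdlib-only sketch. This is a simplified illustration, not LibreChat's actual code:

```python
import ipaddress
import socket

def resolves_to_private(hostname: str) -> bool:
    """Resolve a hostname and flag it if ANY resulting address is internal.

    A hostname allowlist alone is not enough: an attacker-controlled DNS
    name can point at 127.0.0.1 or a cloud metadata address.
    """
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return True  # fail closed on names that don't resolve
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return True
    return False
```

Note that a check-then-fetch sequence is still exposed to DNS rebinding; robust fixes pin the resolved IP and use it for the actual outbound request.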
LibreChat, a ChatGPT alternative with extra features, has a security flaw in versions before 0.8.3 where a function called `isPrivateIP()` fails to recognize IPv4-mapped IPv6 addresses (IPv6 addresses that contain IPv4 address information) in a certain format, allowing logged-in users to bypass SSRF protection (SSRF is server-side request forgery, where an attacker tricks a server into making requests to internal networks it shouldn't access). This could let attackers access sensitive internal resources like cloud metadata services and private networks.
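Python's stdlib `ipaddress` module shows this class of check done correctly. The helper name mirrors the flawed one for readability, but this is an illustrative sketch, not LibreChat's code:

```python
import ipaddress

def is_private_ip(value: str) -> bool:
    """Classify an IP string, unwrapping IPv4-mapped IPv6 first.

    A naive check that treats '::ffff:169.254.169.254' purely as an IPv6
    address can miss that it encodes a link-local IPv4 address, which is
    exactly the bypass described in the advisory.
    """
    addr = ipaddress.ip_address(value)
    mapped = getattr(addr, "ipv4_mapped", None)  # set on IPv6Address only
    if mapped is not None:
        addr = mapped
    return addr.is_private or addr.is_loopback or addr.is_link_local
```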
LangChain Core has a path traversal vulnerability (a security flaw where attackers can access files outside intended directories using '../' sequences or absolute paths) in legacy functions that load prompt configurations from files. When an application accepts user-influenced prompt configs and passes them to `load_prompt()` or `load_prompt_from_config()`, attackers can read arbitrary files like secret credentials or configuration files, though they're limited to specific file types (.txt, .json, .yaml).
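A stdlib sketch of the kind of validation such loaders need (illustrative only; the function and parameter names are hypothetical, and this is not `langchain-core`'s actual patch):

```python
from pathlib import Path

ALLOWED_SUFFIXES = {".txt", ".json", ".yaml"}

def resolve_safe_path(base_dir: str, user_path: str) -> Path:
    """Resolve user_path under base_dir, rejecting escapes and odd types."""
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    # An absolute user_path replaces base entirely, and '..' segments can
    # climb out of it; resolving and re-checking containment catches both.
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes {base_dir}: {user_path!r}")
    if candidate.suffix not in ALLOWED_SUFFIXES:
        raise ValueError(f"disallowed file type: {candidate.suffix!r}")
    return candidate
```

Note that `Path.is_relative_to` requires Python 3.9 or later.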
Langflow had a vulnerability where the code checking if a user owned a flow was missing when authentication was enabled, allowing any authenticated user to read, modify, or delete flows belonging to other users, including stealing embedded API keys. The fix removes the conditional logic and always checks that the requesting user owns the flow before allowing any operation.
The @mobilenext/mobile-mcp package has a path traversal vulnerability (a security flaw where an attacker can write files outside the intended directory by using special path characters like `../`) in its `mobile_save_screenshot` and `mobile_start_screen_recording` tools. The `saveTo` and `output` parameters are passed directly to file-writing functions without checking if the paths are valid, allowing an attacker to write files anywhere on the system.
The Azure Data Explorer MCP Server (adx-mcp-server) has KQL injection vulnerabilities (a type of code injection where untrusted input is inserted into database queries) in three tools that inspect database tables. Because the `table_name` parameter is directly inserted into Kusto queries (Azure's query language) using f-strings without checking or cleaning the input, an attacker or a prompt-injected AI agent can execute arbitrary database commands, including reading sensitive data or deleting tables.
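One standard remedy is to validate the identifier against a strict allowlist pattern before it ever reaches the query string. A minimal sketch (the helper is hypothetical, not the adx-mcp-server API):

```python
import re

# Permit only plain identifier-style table names.
_SAFE_IDENT = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def sample_table_query(table_name: str, rows: int = 10) -> str:
    """Build a Kusto query only for table names matching a safe pattern.

    Interpolating raw input lets payloads like 'Events | take 1; ...'
    smuggle extra operators or statements into the query.
    """
    if not _SAFE_IDENT.match(table_name):
        raise ValueError(f"invalid table name: {table_name!r}")
    return f"{table_name} | take {int(rows)}"
```

Where the client library exposes them, Kusto query parameters are preferable to string construction entirely.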
n8n, a workflow automation tool, has an XSS vulnerability (cross-site scripting, where malicious code runs in a user's browser) in its credential management system. An authenticated user could hide JavaScript in an OAuth2 credential's Authorization URL field, and if another user clicks the OAuth authorization button, that malicious script executes in their browser session.
n8n versions before 1.123.27, 2.13.3, and 2.14.1 have a stored XSS (cross-site scripting, where attackers inject malicious code that runs when others visit a page) vulnerability in the Chat Trigger node's Custom CSS field. An authenticated user could bypass the sanitize-html library (a tool meant to remove dangerous code) and inject malicious JavaScript that would affect anyone visiting the public chat page.
n8n (a workflow automation tool) has a security flaw where authenticated users can inject malicious code or redirect users through unsanitized form fields, potentially enabling phishing attacks. The vulnerability affects the Form Node feature and requires authentication to exploit.
n8n, a workflow automation platform, has a stored XSS vulnerability (cross-site scripting, where malicious code is saved and runs when users visit a page) in its Form Trigger node that allows authenticated users to inject harmful scripts into forms. These scripts execute every time someone visits the published form, potentially hijacking form submissions or conducting phishing attacks, though the platform's Content Security Policy (a browser security feature that restricts what scripts can do) prevents direct theft of session cookies.
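n8n's fix lives in its own codebase, but the underlying principle — encode user-supplied text before embedding it in HTML — can be shown with Python's stdlib (illustrative only, not n8n's implementation):

```python
import html

user_value = '<img src=x onerror=alert(document.cookie)>'

# Rendered raw, the payload becomes live markup in every visitor's browser.
raw_fragment = f"<p>{user_value}</p>"

# Escaped, the same input renders as inert text instead of executing.
safe_fragment = f"<p>{html.escape(user_value)}</p>"
```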
A code injection vulnerability (CVE-2026-4963) was found in huggingface smolagents version 1.25.0.dev0, specifically in functions within the local_python_executor.py file that were supposed to fix a previous vulnerability. An attacker can exploit this flaw remotely by injecting malicious code, and the exploit is publicly available, though the vendor has not responded to disclosure attempts.
In MLflow (a machine learning tool for managing experiments), when basic authentication is enabled, certain endpoints that show trace information (a record of how the AI made decisions) and allow users to assess traces are not properly checking user permissions. This means any logged-in user can view traces and create assessments even if they shouldn't have access to them, risking exposure of sensitive information and unauthorized changes.
Open WebUI has an insecure direct object reference (IDOR, a flaw where an app doesn't properly check if a user should access specific data) in its retrieval API that lets any authenticated user read other users' private memories and uploaded files by guessing collection names like 'user-memory-{USER_UUID}' or 'file-{FILE_UUID}'. The vulnerability exists because the API checks that a user is logged in, but doesn't verify they own the data they're requesting.
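The missing authorization step amounts to comparing the requested resource's owner against the requester. A minimal sketch — the function name and the owner-lookup callback are hypothetical, not Open WebUI's API:

```python
def authorize_collection(collection_name: str, user_id: str, file_owner) -> bool:
    """Allow access only when the authenticated user owns the collection.

    Checking 'is logged in' alone lets any user fetch predictable names
    like 'user-memory-<someone else's uuid>'; ownership must be verified
    for every request, not just authentication.
    """
    if collection_name.startswith("user-memory-"):
        return collection_name == f"user-memory-{user_id}"
    if collection_name.startswith("file-"):
        file_id = collection_name[len("file-"):]
        return file_owner(file_id) == user_id
    return False
```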
Fix: Update to giskard-agents version 0.3.4 (stable branch) or 1.0.2b1 (pre-release branch). The fix replaces the unsandboxed Jinja2 Environment with SandboxedEnvironment, which blocks access to attributes starting with underscores and prevents the class traversal attacks that enable remote code execution.
Source: GitHub Advisory Database
Fix: Update Langflow to version 1.9.0, which fixes the issue.
Source: NVD/CVE Database
Fix: Update nanobot to version 0.1.6 or later, which patches the vulnerability in the email channel processing module.
Source: NVD/CVE Database
Fix: Update LibreChat to version 0.8.3-rc2, which fixes the issue.
Source: NVD/CVE Database
Fix: LibreChat version 0.8.2 patches the issue.
Source: NVD/CVE Database
Fix: Update LibreChat to version 0.8.3-rc1, which contains a patch for this vulnerability.
Source: NVD/CVE Database
Fix: Update LibreChat to version 0.8.3, which fixes the issue.
Source: NVD/CVE Database
Fix: Update `langchain-core` to version 1.2.22 or later. The fix adds path validation that rejects absolute paths and '..' traversal sequences by default. Users can pass `allow_dangerous_paths=True` to `load_prompt()` and `load_prompt_from_config()` if they need to load from trusted inputs. Additionally, migrate away from these deprecated legacy functions to the newer `dumpd`/`dumps`/`load`/`loads` serialization APIs from `langchain_core.load`, which don't read from the filesystem and use an allowlist-based security model instead.
Source: GitHub Advisory Database
Fix: The fix (PR #8956) removes the AUTO_LOGIN conditional and unconditionally scopes all flow queries to the requesting user by adding `.where(Flow.user_id == user_id)` to the database query. This single change covers all three vulnerable operations (read, update, delete) since they all route through the same `_read_flow` helper. A regression test called `test_read_flows_user_isolation` was added.
Source: GitHub Advisory Database
Fix: The issue has been fixed in n8n versions 2.8.0 and 2.6.4. Users should upgrade to one of these versions or later to remediate the vulnerability. If upgrading is not immediately possible, administrators should limit credential creation and sharing permissions to fully trusted users only, or restrict access to the n8n instance to trusted users only. Note: these workarounds do not fully remediate the risk and should only be used as short-term mitigation measures.
Source: GitHub Advisory Database
Fix: Upgrade to n8n version 1.123.27, 2.13.3, 2.14.1, or later. If upgrading is not immediately possible, temporarily: (1) restrict workflow creation and editing permissions to trusted users only, or (2) disable the Chat Trigger node by adding `@n8n/n8n-nodes-langchain.chatTrigger` to the `NODES_EXCLUDE` environment variable. These workarounds do not fully fix the risk and should only be used as short-term measures.
Source: GitHub Advisory Database
Fix: Upgrade to n8n version 1.123.24, 2.10.4, or 2.12.0, or later. If immediate upgrade is not possible, temporary workarounds include: (1) restrict workflow creation and editing permissions to trusted users only, (2) disable the Form node by adding `n8n-nodes-base.form` to the `NODES_EXCLUDE` environment variable, or (3) disable the Form Trigger node by adding `n8n-nodes-base.formTrigger` to the `NODES_EXCLUDE` environment variable. Note that workarounds do not fully eliminate the risk and are only short-term measures.
Source: GitHub Advisory Database
Fix: The issue has been fixed in n8n versions 2.12.0, 2.11.2, and 1.123.25. Users should upgrade to one of these versions or later. If upgrading is not immediately possible, administrators can temporarily: (1) limit workflow creation and editing permissions to fully trusted users only, or (2) disable the Form Trigger node by adding `n8n-nodes-base.formTrigger` to the `NODES_EXCLUDE` environment variable. The source notes these workarounds do not fully remediate the risk and should only be short-term measures.
Source: GitHub Advisory Database