aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,649 · Last 24 hours: 5 · Last 7 days: 161
Daily Briefing: Saturday, March 28, 2026

OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership because the service consumed too many computational resources without generating enough revenue to justify costs as the company prioritizes profitability.

Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in PromtEngineer localGPT's backend. The exploit code is publicly available, and the vendor has not responded to disclosure attempts.

Latest Intel

01

CVE-2026-31950: LibreChat is a ChatGPT clone with additional features. In versions 0.8.2-rc2 through 0.8.2-rc3, the SSE streaming endpoint lacks an ownership check.

security
Mar 27, 2026

LibreChat (a ChatGPT alternative with extra features) versions 0.8.2-rc2 through 0.8.2-rc3 have a security flaw in the SSE streaming endpoint (a real-time data connection) at `/api/agents/chat/stream/:streamId` that fails to check whether a user actually owns a chat stream. This means any logged-in user can guess or obtain another user's stream ID and read their live conversations, including messages and AI responses, without permission.

Fix: Version 0.8.2 patches the issue.

NVD/CVE Database

Critical This Week: 5 issues

critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis…
CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026

Daily Briefing (continued)

Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit, and that these deepfakes shape public perception even when viewers know the content is fake.

TikTok's AI Ad Labels Failing in Practice: Major companies such as Samsung are posting AI-generated ads on TikTok without the required disclosure labels, so users cannot tell whether an advertisement was created by AI or by humans, despite platform policies requiring transparency.
02

CVE-2026-31945: LibreChat is a ChatGPT clone with additional features. Versions 0.8.2-rc2 through 0.8.2 are vulnerable to a server-side request forgery (SSRF).

security
Mar 27, 2026

LibreChat (a ChatGPT alternative with extra features) versions 0.8.2-rc2 through 0.8.2 have a vulnerability that allows attackers to access internal systems through SSRF (server-side request forgery, where an attacker tricks a server into making requests to resources it shouldn't access). Even though a previous SSRF fix was applied, it only checked domain names and didn't verify whether those names actually point to private IP addresses (internal network addresses), leaving the system exposed.

Fix: Update to version 0.8.3-rc1, which contains a patch for this vulnerability.
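Checking only the hostname string leaves exactly this gap: an attacker-controlled domain can resolve to an internal address. A minimal Python sketch of the resolve-then-classify idea (an illustration only; LibreChat itself is JavaScript and its actual patch may differ):

```python
import ipaddress
import socket

def url_host_is_safe(hostname: str) -> bool:
    """Resolve the hostname and reject it if ANY resolved address is
    private, loopback, or link-local. A hostname-only allowlist misses
    names that deliberately resolve to internal addresses."""
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return False  # unresolvable: refuse rather than guess
    for _, _, _, _, sockaddr in infos:
        # Strip a possible IPv6 zone suffix (e.g. "fe80::1%eth0").
        addr = ipaddress.ip_address(sockaddr[0].split("%")[0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True
```

Even a resolve-time check can be raced via DNS rebinding, so robust fixes validate the resolved address and then connect to that same address instead of re-resolving the hostname for the actual request.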

NVD/CVE Database
03

CVE-2026-31943: LibreChat is a ChatGPT clone with additional features. Prior to version 0.8.3, `isPrivateIP()` in `packages/api/src/auth` fails to recognize IPv4-mapped IPv6 addresses.

security
Mar 27, 2026

LibreChat, a ChatGPT alternative with extra features, has a security flaw in versions before 0.8.3 where a function called `isPrivateIP()` fails to recognize IPv4-mapped IPv6 addresses (IPv6 addresses that contain IPv4 address information) in a certain format, allowing logged-in users to bypass SSRF protection (SSRF is server-side request forgery, where an attacker tricks a server into making requests to internal networks it shouldn't access). This could let attackers access sensitive internal resources like cloud metadata services and private networks.

Fix: Update LibreChat to version 0.8.3, which fixes the issue.
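The IPv4-mapped pitfall generalizes beyond LibreChat. Here is a Python illustration using the standard `ipaddress` module (an analogy; the patched project is JavaScript):

```python
import ipaddress

def is_private_ip(value: str) -> bool:
    """Return True for private, loopback, or link-local addresses,
    including IPv4-mapped IPv6 addresses wrapping such an address."""
    addr = ipaddress.ip_address(value)
    # Unwrap IPv4-mapped IPv6 (e.g. ::ffff:10.0.0.1) before classifying;
    # otherwise the inner IPv4 address can slip past a naive check.
    if isinstance(addr, ipaddress.IPv6Address) and addr.ipv4_mapped:
        addr = addr.ipv4_mapped
    return addr.is_private or addr.is_loopback or addr.is_link_local

print(is_private_ip("10.0.0.1"))         # True
print(is_private_ip("::ffff:10.0.0.1"))  # True (the naive check misses this)
print(is_private_ip("93.184.216.34"))    # False
```

The key move is normalizing the address representation before classification; any filter that inspects only the textual form of an IPv6 address will miss the IPv4 payload inside it.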

NVD/CVE Database
04

GHSA-qh6h-p6c9-ff54: LangChain Core has Path Traversal vulnerabilities in legacy `load_prompt` functions

security
Mar 27, 2026

LangChain Core has a path traversal vulnerability (a security flaw where attackers can access files outside intended directories using '../' sequences or absolute paths) in legacy functions that load prompt configurations from files. When an application accepts user-influenced prompt configs and passes them to `load_prompt()` or `load_prompt_from_config()`, attackers can read arbitrary files like secret credentials or configuration files, though they're limited to specific file types (.txt, .json, .yaml).

Fix: Update `langchain-core` to version 1.2.22 or later. The fix adds path validation that rejects absolute paths and '..' traversal sequences by default. Users can pass `allow_dangerous_paths=True` to `load_prompt()` and `load_prompt_from_config()` if they need to load from trusted inputs. Additionally, migrate away from these deprecated legacy functions to the newer `dumpd`/`dumps`/`load`/`loads` serialization APIs from `langchain_core.load`, which don't read from the filesystem and use an allowlist-based security model instead.
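The validation the fix describes can be sketched as below; `validate_prompt_path` is a hypothetical helper for illustration, not LangChain's internal API:

```python
from pathlib import PurePosixPath

def validate_prompt_path(path: str, allow_dangerous_paths: bool = False) -> str:
    """Reject absolute paths and '..' traversal segments unless the
    caller explicitly opts in, mirroring the behavior described in
    the advisory (a sketch, not langchain-core's actual code)."""
    p = PurePosixPath(path)
    if not allow_dangerous_paths:
        if p.is_absolute() or ".." in p.parts:
            raise ValueError(f"Refusing unsafe prompt path: {path!r}")
    # Only the advisory's listed file types are considered loadable.
    if p.suffix not in {".txt", ".json", ".yaml"}:
        raise ValueError(f"Unsupported prompt file type: {p.suffix!r}")
    return path
```

Note that the opt-in flag keeps the safe behavior as the default, which is why the advisory still recommends migrating to the allowlist-based `langchain_core.load` APIs rather than passing `allow_dangerous_paths=True`.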

GitHub Advisory Database
05

GHSA-8c4j-f57c-35cf: Langflow: Authenticated Users Can Read, Modify, and Delete Any Flow via Missing Ownership Check

security
Mar 27, 2026

Langflow had a vulnerability where the code checking if a user owned a flow was missing when authentication was enabled, allowing any authenticated user to read, modify, or delete flows belonging to other users, including stealing embedded API keys. The fix removes the conditional logic and always checks that the requesting user owns the flow before allowing any operation.

Fix: The fix (PR #8956) removes the AUTO_LOGIN conditional and unconditionally scopes all flow queries to the requesting user by adding `.where(Flow.user_id == user_id)` to the database query. This single change covers all three vulnerable operations (read, update, delete) since they all route through the same `_read_flow` helper. A regression test called `test_read_flows_user_isolation` was added.
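The pattern is worth internalizing: scope every query to the requesting user, with no auth-mode special cases. A self-contained, in-memory Python sketch of that ownership check (the real fix is the SQL `.where(Flow.user_id == user_id)` clause on the database query; this only illustrates the logic):

```python
# Hypothetical in-memory store standing in for the flows table.
flows = {
    "flow-1": {"user_id": "alice", "data": "..."},
    "flow-2": {"user_id": "bob", "data": "..."},
}

def read_flow(flow_id: str, user_id: str) -> dict:
    """Look up a flow scoped to the requesting user. Ownership is
    checked unconditionally, regardless of authentication mode,
    which is what the Langflow patch enforces at the query level."""
    flow = flows.get(flow_id)
    if flow is None or flow["user_id"] != user_id:
        # Same error for "missing" and "not yours" avoids leaking
        # whether another user's flow ID exists.
        raise PermissionError(f"Flow {flow_id!r} not found for this user")
    return flow
```

Returning the same error for nonexistent and foreign flows also prevents an attacker from enumerating valid flow IDs.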

GitHub Advisory Database
06

GHSA-3p2m-h2v6-g9mx: @mobilenext/mobile-mcp allows arbitrary file write via Path Traversal in mobile screen capture tools

security
Mar 27, 2026

The @mobilenext/mobile-mcp package has a path traversal vulnerability (a security flaw where an attacker can write files outside the intended directory by using special path characters like `../`) in its `mobile_save_screenshot` and `mobile_start_screen_recording` tools. The `saveTo` and `output` parameters are passed directly to file-writing functions without checking if the paths are valid, allowing an attacker to write files anywhere on the system.
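A common guard for write-path parameters like `saveTo` and `output` is to resolve the requested path under a fixed base directory and refuse anything that escapes it. A hedged Python sketch of that guard (the affected package is TypeScript; the function name here is illustrative):

```python
from pathlib import Path

def safe_output_path(base_dir: str, user_path: str) -> Path:
    """Confine user_path inside base_dir, rejecting escapes via
    '../' segments or absolute paths (not mobile-mcp's actual code)."""
    base = Path(base_dir).resolve()
    # Joining an absolute user_path replaces base entirely, so the
    # relative-to check below catches that case as well.
    target = (base / user_path).resolve()
    if not target.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"Path escapes {base_dir!r}: {user_path!r}")
    return target
```

Resolving before comparing matters: checking the raw string for `..` misses tricks like symlinks or redundant separators, while comparing fully resolved paths does not.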

GitHub Advisory Database
07

GHSA-vphc-468g-8rfp: Azure Data Explorer MCP Server: KQL Injection in multiple tools allows MCP client to execute arbitrary Kusto queries

security
Mar 27, 2026

The Azure Data Explorer MCP Server (adx-mcp-server) has KQL injection vulnerabilities (a type of code injection where untrusted input is inserted into database queries) in three tools that inspect database tables. Because the `table_name` parameter is directly inserted into Kusto queries (Azure's query language) using f-strings without checking or cleaning the input, an attacker or a prompt-injected AI agent can execute arbitrary database commands, including reading sensitive data or deleting tables.
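When a query language offers no parameter binding for identifiers, the usual remedy is a strict allowlist on the identifier before interpolation. A Python sketch (the `| getschema` query shape is illustrative; adx-mcp-server's actual fix may differ):

```python
import re

# Deliberately strict: plain Kusto entity names only. Quoted or
# escaped identifier forms are rejected rather than parsed.
_IDENT = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def schema_query(table_name: str) -> str:
    """Build a table-inspection query only after validating the
    identifier, instead of interpolating raw input into an f-string."""
    if not _IDENT.match(table_name):
        raise ValueError(f"Invalid table name: {table_name!r}")
    return f"{table_name} | getschema"
```

The same discipline applies to prompt-injected AI agents: the MCP tool boundary, not the model, is where untrusted identifiers must be rejected.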

GitHub Advisory Database
08

The latest in data centers, AI, and energy 

policyindustry
Mar 27, 2026

Large data centers that power AI systems require massive amounts of electricity and resources, creating conflicts with communities, power grids, and the environment worldwide. Tech companies are expanding these facilities rapidly, leading to legal battles, environmental concerns, and pushback from local communities over issues like electricity costs, water usage, and pollution.

The Verge (AI)
09

GHSA-364x-8g5j-x2pr: n8n has XSS in its Credential Management Flow

security
Mar 27, 2026

n8n, a workflow automation tool, has an XSS vulnerability (cross-site scripting, where malicious code runs in a user's browser) in its credential management system. An authenticated user could hide JavaScript in an OAuth2 credential's Authorization URL field, and if another user clicks the OAuth authorization button, that malicious script executes in their browser session.

Fix: The issue has been fixed in n8n versions 2.8.0 and 2.6.4. Users should upgrade to one of these versions or later to remediate the vulnerability. If upgrading is not immediately possible, administrators should limit credential creation and sharing permissions to fully trusted users only, or restrict access to the n8n instance to trusted users only. Note: these workarounds do not fully remediate the risk and should only be used as short-term mitigation measures.
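Generically, any user-supplied field that ends up in a link or redirect should be restricted to absolute http(s) URLs so `javascript:` and similar script-bearing schemes can never reach the browser. A Python sketch of that allowlist (n8n is TypeScript; this is an analogy, not its patch):

```python
from urllib.parse import urlparse

def validate_auth_url(url: str) -> str:
    """Accept only absolute http(s) URLs for an OAuth2 authorization
    endpoint; anything else (javascript:, data:, scheme-relative)
    is rejected before it can be rendered as a clickable link."""
    parsed = urlparse(url.strip())
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"Unsafe authorization URL: {url!r}")
    return url
```

Allowlisting schemes is more robust than blocklisting `javascript:`, since browsers accept many obfuscated spellings of dangerous schemes.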

GitHub Advisory Database
10

GHSA-3c7f-5hgj-h279: n8n has XSS in Chat Trigger Node through Custom CSS

security
Mar 27, 2026

n8n versions before 1.123.27, 2.13.3, and 2.14.1 have a stored XSS (cross-site scripting, where attackers inject malicious code that runs when others visit a page) vulnerability in the Chat Trigger node's Custom CSS field. An authenticated user could bypass the sanitize-html library (a tool meant to remove dangerous code) and inject malicious JavaScript that would affect anyone visiting the public chat page.

Fix: Upgrade to n8n version 1.123.27, 2.13.3, 2.14.1, or later. If upgrading is not immediately possible, temporarily: (1) restrict workflow creation and editing permissions to trusted users only, or (2) disable the Chat Trigger node by adding `@n8n/n8n-nodes-langchain.chatTrigger` to the `NODES_EXCLUDE` environment variable. These workarounds do not fully fix the risk and should only be used as short-term measures.

GitHub Advisory Database
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm
CSO Online · Mar 27, 2026

critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability
CISA Known Exploited Vulnerabilities · Mar 26, 2026

critical

CISA: New Langflow flaw actively exploited to hijack AI workflows
BleepingComputer · Mar 26, 2026

critical

GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE
CVE-2026-33696 · GitHub Advisory Database · Mar 26, 2026