aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,649 · Last 24h: 0 · Last 7d: 157
Daily Briefing: Saturday, March 28, 2026

OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership because the service consumed too many computational resources without generating enough revenue to justify its costs as the company prioritizes profitability.

Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in PromtEngineer localGPT's backend. The exploit code is publicly available, and the vendor has not responded to disclosure attempts.

Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit. These deepfakes shape public perception even when viewers know the content is fake.

TikTok's AI Ad Labels Failing in Practice: Major companies like Samsung are posting AI-generated ads on TikTok without the required disclosure labels, despite platform policies requiring transparency, leaving users unable to tell whether an advertisement was created by AI or by humans.

Critical This Week: 5 issues

critical · CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis… · NVD/CVE Database · Mar 27, 2026

Latest Intel

01

Navigating Security Tradeoffs of AI Agents

security · safety
Mar 18, 2026

AI agents, like the open-source Clawdbot, are becoming more powerful and autonomous, but they introduce serious security risks because attackers can compromise them through the open-source supply chain. Two major attack types threaten AI systems: model file attacks (where malicious code is hidden in AI model files uploaded to trusted repositories) and rug pull attacks (where attackers compromise MCP servers, the tools that give AI agents capabilities, to perform malicious actions). The article notes that traditional security methods do not yet exist for protecting AI agents, and a single corrupted component can spread threats across many teams.

Fix: The source explicitly recommends: 'Teams must scan model files with tools that can parse machine learning formats, and load models in isolated containers, virtual machines or browser sandboxes.' For rug pull attacks specifically, the article states that 'the alternative is to use remote MCP servers whose code is maintained by trusted organizations' like GitHub, which 'reduces the risk of an MCP rug pull attack' (though it does not prevent malicious actions from the tools themselves).

Palo Alto Unit 42
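The 'scan model files with tools that can parse machine learning formats' advice can be illustrated for pickle-based checkpoints (a common legacy ML serialization format) using the standard library's pickletools. This is a minimal sketch under stated assumptions, not one of the scanners the article has in mind; the set of flagged opcodes is an assumption:

```python
import pickle
import pickletools

# Opcodes that can invoke arbitrary callables when the pickle is loaded
# (assumed deny-list; a real scanner would be more thorough).
DANGEROUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan_pickle(data: bytes) -> list[str]:
    """Return descriptions of execution-capable opcodes found in a pickle stream."""
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in DANGEROUS_OPS:
            findings.append(f"{opcode.name} at byte {pos} (arg={arg!r})")
    return findings

# A benign pickle of plain data triggers no findings...
assert scan_pickle(pickle.dumps([1, 2, 3])) == []

# ...while an object that serializes via __reduce__ (the classic payload
# vector in malicious model files) is flagged.
class Payload:
    def __reduce__(self):
        return (print, ("pwned",))

assert any("REDUCE" in f for f in scan_pickle(pickle.dumps(Payload())))
```

Static opcode scanning catches the common case; the article's second recommendation (loading in an isolated container or sandbox) remains necessary because scanners cannot prove a file safe.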
02

GHSA-gjgx-rvqr-6w6v: Mesop Affected by Unauthenticated Remote Code Execution via Test Suite Route /exec-py

security
Mar 18, 2026

Mesop contains a critical vulnerability in its testing module where a `/exec-py` route accepts Python code without any authentication checks and executes it directly on the server. This allows anyone who can send an HTTP request to the endpoint to run arbitrary commands on the machine hosting the application, a flaw known as unauthenticated remote code execution (RCE, where an attacker runs commands on a system they don't own).

GitHub Advisory Database
03

GHSA-8qvf-mr4w-9x2c: Mesop has a Path Traversal via `FileStateSessionBackend` that leads to Application Denial of Service and File Write/Deletion

security
Mar 18, 2026

Mesop has a path traversal vulnerability (a technique where an attacker uses sequences like `../` to escape intended directory boundaries) in its file-based session backend that allows attackers to read, write, or delete arbitrary files on the server by crafting malicious `state_token` values in messages sent to the `/ui` endpoint. This can crash the application or give attackers unauthorized access to system files.

GitHub Advisory Database
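This is not Mesop's actual patch, but a generic sketch of the two defenses that stop this class of bug: an allowlist on the client-supplied token, and a containment check on the resolved path. The directory name and token format are assumptions, and `is_relative_to` requires Python 3.9+:

```python
import re
from pathlib import Path

SESSION_DIR = Path("/var/lib/app/sessions")       # hypothetical storage root
TOKEN_RE = re.compile(r"^[A-Za-z0-9_-]{16,64}$")  # allowlisted token alphabet

def session_path(state_token: str) -> Path:
    """Map a client-supplied session token to a file, rejecting traversal.

    Two independent defenses: an allowlist regex on the token itself
    (which rejects `/` and `..` outright), and a containment check after
    resolving the joined path.
    """
    if not TOKEN_RE.fullmatch(state_token):
        raise ValueError("malformed state_token")
    candidate = (SESSION_DIR / state_token).resolve()
    if not candidate.is_relative_to(SESSION_DIR.resolve()):
        raise ValueError("path escapes session directory")
    return candidate

# A traversal payload like "../../etc/passwd" fails the allowlist
# before any filesystem access happens.
```

Either check alone would block the attack described in the advisory; using both means a mistake in one does not immediately become exploitable.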
04

ChatGPT did not cure a dog’s cancer

safety
Mar 18, 2026

A story claimed that ChatGPT helped cure an Australian entrepreneur's dog of cancer, generating widespread attention as evidence that AI could revolutionize medicine. However, the article argues that the reality behind the claim differs from the narrative that was publicly promoted.

The Verge (AI)
05

GHSA-22cc-p3c6-wpvm: h3 has a Server-Sent Events Injection via Unsanitized Newlines in Event Stream Fields

security
Mar 18, 2026

The h3 library has a vulnerability in its Server-Sent Events (SSE, a protocol for pushing real-time messages from a server to connected clients) implementation where newline characters in message fields are not removed before being sent. An attacker who controls any message field (id, event, data, or comment) can inject newline characters to break the SSE format and trick clients into receiving fake events, potentially forcing aggressive reconnections or manipulating which past events are replayed.

GitHub Advisory Database
06

'Claudy Day' Trio of Flaws Exposes Claude Users to Data Theft

security
Mar 18, 2026

Researchers discovered three connected flaws in Claude (an AI assistant) that can work together to steal user data, starting with a prompt injection attack (tricking the AI by hiding malicious instructions in its input) combined with a Google search vulnerability. This attack chain could potentially compromise enterprise networks that rely on Claude.

Dark Reading
07

Shadow AI Risk: How SaaS Apps Are Quietly Enabling Massive Breaches

security · safety
Mar 18, 2026

Shadow AI refers to AI systems hidden within SaaS applications (software services accessed online) that operate without proper oversight, creating security risks that can lead to major data breaches. The article emphasizes that organizations lack visibility into these autonomous AI systems and calls for better monitoring and control mechanisms to manage agentic AI (AI that can independently take actions to achieve goals).

SecurityWeek
08

GHSA-3xm7-qw7j-qc8v: SSRF in @aborruso/ckan-mcp-server via base_url allows access to internal networks

security
Mar 18, 2026

The @aborruso/ckan-mcp-server tool allows attackers to make HTTP requests to any address by controlling the `base_url` parameter, which has no validation or filtering. An attacker can use prompt injection (tricking the AI by hiding instructions in its input) to make the tool scan internal networks or steal cloud credentials, but exploitation requires the victim's AI assistant to have this server connected.

Fix: The source explicitly recommends: (1) Validate `base_url` against a configurable allowlist of permitted CKAN portals, (2) Block private IP ranges (RFC 1918, link-local addresses like 169.254.x.x), (3) Block cloud metadata endpoints (169.254.169.254), (4) Sanitize SQL input for datastore queries, and (5) Implement a SPARQL endpoint allowlist.

GitHub Advisory Database
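The allowlist and private-range checks recommended above can be sketched as a pre-request validator. The portal allowlist entries are placeholders, and note that a DNS-rebinding-resistant version would also have to pin the IP the hostname resolves to:

```python
import ipaddress
from urllib.parse import urlparse

# Hypothetical configurable allowlist of permitted CKAN portals.
ALLOWED_PORTALS = {"catalog.data.gov", "demo.ckan.org"}

def validate_base_url(base_url: str) -> str:
    """Reject SSRF-prone base_url values before any HTTP request is made."""
    parsed = urlparse(base_url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        raise ValueError("unsupported URL")
    host = parsed.hostname
    # Block literal IPs in private (RFC 1918), loopback, and link-local
    # ranges; link-local covers cloud metadata at 169.254.169.254.
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        ip = None  # not an IP literal; fall through to hostname allowlist
    if ip is not None and (ip.is_private or ip.is_loopback or ip.is_link_local):
        raise ValueError("internal address blocked")
    if host not in ALLOWED_PORTALS:
        raise ValueError("portal not on allowlist")
    return base_url
```

The allowlist is the primary control here; the IP-range check is defense in depth for the case where the allowlist is misconfigured or disabled.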
09

GHSA-rf6x-r45m-xv3w: Langflow is Missing Ownership Verification in API Key Deletion (IDOR)

security
Mar 18, 2026

Langflow has a security flaw called IDOR (insecure direct object reference, where an attacker can access or modify resources belonging to other users) in its API key deletion feature. An authenticated attacker can delete other users' API keys by guessing their IDs, because the deletion endpoint doesn't verify that the API key belongs to the person making the request. This could allow attackers to disable other users' integrations or take over their accounts.

Fix: Modify the delete_api_key endpoint and function by: (1) passing current_user to the delete function; (2) adding a verification check in delete_api_key() that confirms api_key.user_id == current_user.id before deletion; (3) returning a 403 Forbidden error if the user doesn't own the key. Example code provided: 'if api_key.user_id != user_id: raise HTTPException(status_code=403, detail="Unauthorized")'

GitHub Advisory Database
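The quoted ownership check can be shown end-to-end in a framework-free sketch; the HTTPException class and in-memory key table are stand-ins for Langflow's actual FastAPI handler and database:

```python
class HTTPException(Exception):
    """Stand-in for fastapi.HTTPException."""
    def __init__(self, status_code: int, detail: str):
        super().__init__(detail)
        self.status_code, self.detail = status_code, detail

# api_key_id -> owning user_id (stand-in for the database table)
API_KEYS = {
    "key-1": "alice",
    "key-2": "bob",
}

def delete_api_key(api_key_id: str, current_user_id: str) -> None:
    """Delete an API key only if the caller owns it (the IDOR fix)."""
    owner = API_KEYS.get(api_key_id)
    if owner is None:
        raise HTTPException(status_code=404, detail="API key not found")
    if owner != current_user_id:  # the missing ownership verification
        raise HTTPException(status_code=403, detail="Unauthorized")
    del API_KEYS[api_key_id]

# bob cannot delete alice's key by guessing its ID...
try:
    delete_api_key("key-1", "bob")
except HTTPException as e:
    assert e.status_code == 403
# ...but alice can delete her own.
delete_api_key("key-1", "alice")
assert "key-1" not in API_KEYS
```

Returning 404 for unknown keys and 403 for keys owned by someone else matches the advisory's suggested fix; some APIs deliberately return 404 for both to avoid confirming that a guessed ID exists.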
10

The Download: The Pentagon’s new AI plans, and next-gen nuclear reactors

security · policy
Mar 18, 2026

The Pentagon is planning to create secure environments where AI companies can train their models on classified military data, which would embed sensitive intelligence like surveillance reports into the AI systems themselves and bring these companies closer to classified information than before. This represents a major shift from current use of AI models like Claude in classified settings, but introduces unique security risks by allowing models to learn from rather than just access classified data.

MIT Technology Review
critical · Attackers exploit critical Langflow RCE within hours as CISA sounds alarm · CSO Online · Mar 27, 2026

critical · CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability · CISA Known Exploited Vulnerabilities · Mar 26, 2026

critical · CISA: New Langflow flaw actively exploited to hijack AI workflows · BleepingComputer · Mar 26, 2026

critical · GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE · CVE-2026-33696 · GitHub Advisory Database · Mar 26, 2026