aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,757
[LAST_24H]
23
[LAST_7D]
176
Daily BriefingThursday, April 2, 2026
>

Model Context Protocol Security Gaps Highlighted: MCP (a system that connects AI agents to data sources) has gained business adoption but faces serious risks including prompt injection (tricking an AI by hiding instructions in its input), token theft, and data leaks. Despite recent improvements like OAuth support and an official registry, organizations still lack adequate tools for access controls, authorization checks, and detailed logging to protect sensitive data.

Latest Intel

page 131/276
VIEW ALL
01

CVE-2026-22813: OpenCode is an open source AI coding agent. The markdown renderer used for LLM responses will insert arbitrary HTML into

security
Jan 12, 2026

OpenCode, an open source AI coding agent, has a vulnerability in its markdown renderer that allows arbitrary HTML to be inserted into the web interface without proper sanitization (blocking of malicious code). Because there is no protection like DOMPurify (a tool that removes dangerous HTML) or CSP (content security policy, rules that restrict what code can run), an attacker who controls what the AI outputs could execute JavaScript (code that runs in the browser) on the local web interface.

Critical This Week5 issues
critical

GHSA-6vh2-h83c-9294: PraisonAI: Python Sandbox Escape via str Subclass startswith() Override in execute_code

CVE-2026-34938GitHub Advisory DatabaseApr 1, 2026
Apr 1, 2026

Fix: This vulnerability is fixed in version 1.1.10.

NVD/CVE Database
02

CVE-2026-22812: OpenCode is an open source AI coding agent. Prior to 1.0.216, OpenCode automatically starts an unauthenticated HTTP serv

security
Jan 12, 2026

OpenCode is an open source AI coding agent that, before version 1.0.216, automatically started an unauthenticated HTTP server (a service that accepts web requests without requiring a password or login). This allowed any local process or website with permissive CORS (a web setting that controls which websites can access a server) to execute arbitrary shell commands with the user's privileges, meaning someone could run malicious commands on the affected computer.

Fix: Update to version 1.0.216 or later. The vulnerability is fixed in 1.0.216.

NVD/CVE Database
03

CVE-2025-14279: MLFlow versions up to and including 3.4.0 are vulnerable to DNS rebinding attacks due to a lack of Origin header validat

security
Jan 12, 2026

MLFlow versions up to 3.4.0 have a vulnerability where the REST server (the interface that external programs use to communicate with MLFlow) doesn't properly validate Origin headers, which are security checks that prevent unauthorized websites from making requests. This allows attackers to use DNS rebinding attacks (tricks where malicious websites disguise their identity to bypass security protections) to query, modify, or delete experiments, potentially stealing or destroying data.

Fix: The issue is resolved in version 3.5.0.

NVD/CVE Database
04

Armor: Shielding Unlearnable Examples Against Data Augmentation

securityprivacy
Jan 12, 2026

Unlearnable examples are protective noises added to private data to prevent AI models from learning useful information from them, but this paper shows that data augmentation (a common technique that creates variations of training data to improve model performance) can undo this protection and restore learnability from 21.3% to 66.1% accuracy. The researchers propose Armor, a defense framework that adds protective noise while accounting for data augmentation effects, using a surrogate model (a practice model used to simulate the real training process) and smart augmentation selection to keep private data unlearnable even after augmentation is applied.

Fix: The paper proposes Armor, a defense framework that works by: (1) designing a non-local module-assisted surrogate model to better capture the effect of data augmentation, (2) using a surrogate augmentation selection strategy that maximizes distribution alignment between augmented and non-augmented samples to choose the optimal augmentation strategy for each class, and (3) using a dynamic step size adjustment algorithm to enhance the defensive noise generation process. The authors state that 'Armor can preserve the unlearnability of protected private data under data augmentation' and plan to open-source the code upon publication.

IEEE Xplore (Security & AI Journals)
05

CVE-2026-22773: vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.6.4 to before 0.12.0, users

security
Jan 10, 2026

vLLM is a serving engine for running large language models, and versions 0.6.4 through 0.11.x have a vulnerability where attackers can crash the server by sending a tiny 1x1 pixel image to models using the Idefics3 vision component, causing a dimension mismatch (a size incompatibility between data structures) that terminates the entire service.

Fix: This issue has been patched in version 0.12.0. Users should upgrade to vLLM version 0.12.0 or later.

NVD/CVE Database
06

CVE-2025-14980: The BetterDocs plugin for WordPress is vulnerable to Sensitive Information Exposure in all versions up to, and including

security
Jan 9, 2026

The BetterDocs plugin for WordPress (all versions up to 4.3.3) has a vulnerability that exposes sensitive information, allowing authenticated attackers with contributor-level access or higher to extract data including OpenAI API keys stored in the plugin settings through the scripts() function. This affects any WordPress site using the plugin where users have contributor-level permissions or above.

Fix: Update to version 4.3.4 or later, as indicated by the WordPress plugin repository changeset reference showing the fix was applied in that version.

NVD/CVE Database
07

CVE-2025-69222: LibreChat is a ChatGPT clone with additional features. Version 0.8.1-rc2 is prone to a server-side request forgery (SSRF

security
Jan 7, 2026

LibreChat version 0.8.1-rc2 has a server-side request forgery vulnerability (SSRF, where an attacker tricks a server into making requests to unintended targets) because the Actions feature allows agents to access any remote service without restrictions, including internal components like the RAG API (retrieval-augmented generation system that pulls in external documents). This means attackers could potentially use LibreChat to access internal systems they shouldn't reach.

NVD/CVE Database
08

CVE-2025-69221: LibreChat is a ChatGPT clone with additional features. Version 0.8.1-rc2 does not enforce proper access control when que

security
Jan 7, 2026

LibreChat version 0.8.1-rc2 has an access control vulnerability where authenticated attackers (users who have logged in) can read permissions of any agent (a predefined AI assistant with specific instructions) without proper authorization, even if they shouldn't have access to that agent. If an attacker knows an agent's ID number, they can view permissions that other users have been granted for that agent.

Fix: This issue is fixed in version 0.8.2-rc2.

NVD/CVE Database
09

CVE-2025-69220: LibreChat is a ChatGPT clone with additional features. Version 0.8.1-rc2 does not enforce proper access control for file

security
Jan 7, 2026

LibreChat version 0.8.1-rc2 has a missing authorization (a failure to check if a user has permission to do something) vulnerability that allows an authenticated attacker to upload files to any agent's file storage if they know the agent's ID, even without proper permissions. This could let attackers change how agents behave by adding malicious files.

Fix: This issue is fixed in version 0.8.2-rc2. Users should update to this version or later.

NVD/CVE Database
10

CVE-2025-14371: The Tag, Category, and Taxonomy Manager – AI Autotagger with OpenAI plugin for WordPress is vulnerable to unauthorized m

security
Jan 6, 2026

A WordPress plugin called 'Tag, Category, and Taxonomy Manager – AI Autotagger with OpenAI' has a security flaw (CWE-862, missing authorization) in versions up to 3.41.0 that allows contributors and higher-level users to add or remove taxonomy terms (tags and categories) on any post, even ones they don't own, due to missing permission checks. This vulnerability affects authenticated users who have contributor-level access or above.

NVD/CVE Database
Prev1...129130131132133...276Next
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162NVD/CVE DatabaseMar 31, 2026
Mar 31, 2026
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026