aisecwatch.com
Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing · Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

page 225/371
01

CVE-2024-58339: LlamaIndex (run-llama/llama_index) versions up to and including 0.12.2 contain an uncontrolled resource consumption vulnerability in the VannaPack VannaQueryEngine.

security
Jan 12, 2026

LlamaIndex versions up to and including 0.12.2 have a vulnerability where the VannaPack VannaQueryEngine takes user prompts, converts them to SQL statements, and runs them with no limit on the computing resources they consume. An attacker can exploit this by submitting prompts that trigger expensive SQL operations, exhausting the system's CPU or memory (a denial-of-service attack, where the service becomes unavailable).
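The core problem is SQL execution with no resource ceiling. A minimal mitigation sketch (illustrative helper names, not the LlamaIndex API) using SQLite's progress handler to abort any statement that exceeds a work budget:

```python
import sqlite3

# Cap how much work a single model-generated SQL statement may do. SQLite
# invokes the progress handler every `n` virtual-machine ops; returning a
# truthy value aborts the statement with OperationalError, so a runaway
# query fails fast instead of exhausting the CPU.
def run_with_budget(conn: sqlite3.Connection, sql: str, max_ticks: int = 100):
    ticks = {"n": 0}

    def watchdog() -> bool:
        ticks["n"] += 1
        return ticks["n"] > max_ticks  # truthy aborts the running statement

    conn.set_progress_handler(watchdog, 1000)  # check every 1000 VM ops
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.set_progress_handler(None, 0)  # remove the handler
```

Production databases offer equivalent knobs (e.g. statement timeouts); the point is that any model-generated SQL needs some such ceiling.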

NVD/CVE Database
02

CVE-2024-14021: LlamaIndex (run-llama/llama_index) versions up to and including 0.11.6 contain an unsafe deserialization vulnerability in BGEM3Index.load_from_disk().

security
Jan 12, 2026

LlamaIndex versions up to and including 0.11.6 contain a vulnerability where the BGEM3Index.load_from_disk() function uses pickle.load() (a Python method that converts stored data back into objects) to read files from a user-provided directory without checking if they're safe. An attacker could provide a malicious pickle file that executes arbitrary code (runs any commands they want) when a victim loads the index from disk.
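To illustrate the risk: a crafted pickle can name any importable callable and invoke it at load time. A restricted unpickler that allowlists globals is one partial mitigation, sketched below with illustrative names (not the LlamaIndex API); the robust fix is a non-executable format like JSON.

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    # Only permit a small set of plain container types; anything else in a
    # GLOBAL/STACK_GLOBAL opcode (e.g. os.system) is refused before it runs.
    ALLOWED = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}

    def find_class(self, module, name):
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(f"blocked global: {module}.{name}")
        return super().find_class(module, name)

def safe_load(data: bytes):
    """Deserialize untrusted bytes with the restricted unpickler."""
    return SafeUnpickler(io.BytesIO(data)).load()
```

Note that allowlisting still trusts the permitted classes' behavior; it narrows the attack surface rather than eliminating it.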

NVD/CVE Database
03

CVE-2026-22252: LibreChat is a ChatGPT clone with additional features. Prior to v0.8.2-rc2, LibreChat's MCP stdio transport accepts arbitrary commands without validation.

security
Jan 12, 2026

LibreChat, a ChatGPT clone with extra features, has a vulnerability in versions before v0.8.2-rc2 where its MCP stdio transport (a communication method for connecting components) accepts commands without checking if they're safe, letting any logged-in user run shell commands as root inside a container with just one API request. This is a serious authorization flaw because it bypasses permission checks.

Fix: Update to v0.8.2-rc2 or later. According to the source, 'This vulnerability is fixed in v0.8.2-rc2.'
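One common hardening pattern for this class of bug, sketched here with assumed names (not LibreChat's actual patch), is to allowlist the launchers the stdio transport may spawn and to build argv without ever handing the string to a shell:

```python
import shlex

# Only spawn MCP server processes from a fixed set of launchers, and split
# the command with shlex so metacharacters like `|` or `;` stay literal
# arguments instead of being interpreted by a shell.
ALLOWED_LAUNCHERS = {"npx", "uvx", "node"}  # illustrative set

def build_mcp_argv(command_line: str) -> list:
    argv = shlex.split(command_line)
    if not argv or argv[0] not in ALLOWED_LAUNCHERS:
        raise PermissionError(f"launcher not allowed: {argv[:1]}")
    # Hand argv to subprocess.Popen(argv) with shell=False (the default).
    return argv
```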

NVD/CVE Database
04

CVE-2026-22813: OpenCode is an open source AI coding agent. The markdown renderer used for LLM responses will insert arbitrary HTML into the web UI without sanitization.

security
Jan 12, 2026

OpenCode, an open source AI coding agent, has a vulnerability in its markdown renderer that allows arbitrary HTML to be inserted into the web interface without proper sanitization (blocking of malicious code). Because there is no protection like DOMPurify (a tool that removes dangerous HTML) or CSP (content security policy, rules that restrict what code can run), an attacker who controls what the AI outputs could execute JavaScript (code that runs in the browser) on the local web interface.

Fix: This vulnerability is fixed in version 1.1.10.
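The missing defense is output encoding: untrusted model text must be escaped, or passed through a sanitizer such as DOMPurify, before insertion into the page. A minimal escaping sketch using Python's standard library, with an assumed helper name (OpenCode's actual renderer is JavaScript):

```python
import html

# Treat LLM output as untrusted text: escape it so embedded tags render as
# literal characters instead of executing as markup or script.
def render_model_output(text: str) -> str:
    return "<pre>" + html.escape(text) + "</pre>"
```

Escaping everything sacrifices rich formatting; renderers that must keep some markup use an allowlist-based sanitizer plus a content security policy instead.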

NVD/CVE Database
05

CVE-2026-22812: OpenCode is an open source AI coding agent. Prior to 1.0.216, OpenCode automatically starts an unauthenticated HTTP server.

security
Jan 12, 2026

OpenCode is an open source AI coding agent that, before version 1.0.216, automatically started an unauthenticated HTTP server (a service that accepts web requests without requiring a password or login). This allowed any local process or website with permissive CORS (a web setting that controls which websites can access a server) to execute arbitrary shell commands with the user's privileges, meaning someone could run malicious commands on the affected computer.

Fix: Update to version 1.0.216 or later. The vulnerability is fixed in 1.0.216.
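A typical fix for an unauthenticated local server, sketched here with assumed names rather than OpenCode's actual patch, is a per-session bearer token: a web page can still fire cross-origin requests, but it cannot read or guess the token, so permissive CORS alone no longer grants command execution.

```python
import hmac
import secrets
from typing import Optional

# Generated once at server start; only handed to the legitimate local client.
SESSION_TOKEN = secrets.token_urlsafe(32)

def is_authorized(auth_header: Optional[str]) -> bool:
    """Check the Authorization header with a constant-time comparison."""
    expected = f"Bearer {SESSION_TOKEN}"
    return auth_header is not None and hmac.compare_digest(auth_header, expected)
```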

NVD/CVE Database
06

CVE-2025-14279: MLflow versions up to and including 3.4.0 are vulnerable to DNS rebinding attacks due to a lack of Origin header validation.

security
Jan 12, 2026

MLflow versions up to and including 3.4.0 have a vulnerability where the REST server (the interface that external programs use to communicate with MLflow) doesn't validate the Origin header, which identifies the website that initiated a browser request. This allows DNS rebinding attacks (where an attacker's domain name is silently re-pointed at the victim's local network, so the victim's browser unknowingly sends the attacker's requests to the local MLflow server) to query, modify, or delete experiments, potentially stealing or destroying data.

Fix: The issue is resolved in version 3.5.0.
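A hedged sketch of the missing check (assumed names and ports, not MLflow's code): after rebinding, the browser still sends the attacker's hostname in the Host and Origin headers even though the IP now resolves locally, so validating those headers against an allowlist defeats the attack.

```python
from typing import Optional
from urllib.parse import urlparse

ALLOWED_HOSTS = {"localhost:5000", "127.0.0.1:5000"}  # illustrative allowlist

def is_trusted_request(host: Optional[str], origin: Optional[str]) -> bool:
    # Host check defeats DNS rebinding: the rebound page still carries the
    # attacker's hostname even though the IP resolves to this machine.
    if host not in ALLOWED_HOSTS:
        return False
    # Origin check defeats cross-site browser requests; plain API clients
    # (curl, SDKs) send no Origin header and are allowed through.
    if origin is None:
        return True
    return urlparse(origin).netloc in ALLOWED_HOSTS
```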

NVD/CVE Database
07

Armor: Shielding Unlearnable Examples Against Data Augmentation

securityprivacy
Jan 12, 2026

Unlearnable examples are protective noises added to private data to prevent AI models from learning useful information from them, but this paper shows that data augmentation (a common technique that creates variations of training data to improve model performance) can undo this protection and restore learnability from 21.3% to 66.1% accuracy. The researchers propose Armor, a defense framework that adds protective noise while accounting for data augmentation effects, using a surrogate model (a practice model used to simulate the real training process) and smart augmentation selection to keep private data unlearnable even after augmentation is applied.

Fix: The paper proposes Armor, a defense framework that works by: (1) designing a non-local module-assisted surrogate model to better capture the effect of data augmentation, (2) using a surrogate augmentation selection strategy that maximizes distribution alignment between augmented and non-augmented samples to choose the optimal augmentation strategy for each class, and (3) using a dynamic step size adjustment algorithm to enhance the defensive noise generation process. The authors state that 'Armor can preserve the unlearnability of protected private data under data augmentation' and plan to open-source the code upon publication.

IEEE Xplore (Security & AI Journals)
08

Model Lineage Analysis: Determination and Closeness Measurement

research
Jan 12, 2026

This research addresses how to identify whether one machine learning model is derived from another model through modification techniques (adjusting or fine-tuning an existing model rather than training from scratch), and how to measure how much two models differ from each other. The authors propose a method that determines lineage (derivative relationships) by checking if two models' parameters exist in the same local optimum of the loss landscape (the mathematical space of possible model configurations), and measure closeness by analyzing how their decision boundaries (the lines or surfaces that separate different predictions) differ from each other.
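As a toy illustration of the decision-boundary idea only (not the paper's actual method), one crude closeness signal is the fraction of probe inputs on which two classifiers disagree:

```python
# Toy closeness signal: models derived from one another tend to keep similar
# decision boundaries, so they disagree on few probe inputs; independently
# trained models disagree on many more.
def disagreement_rate(model_a, model_b, probes) -> float:
    """Fraction of probe inputs where the two models' predictions differ."""
    diffs = sum(1 for x in probes if model_a(x) != model_b(x))
    return diffs / len(probes)
```

The paper's actual measures operate on the loss landscape and on richer boundary geometry; this sketch only conveys the intuition.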

IEEE Xplore (Security & AI Journals)
09

CVE-2026-22773: vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.6.4 to before 0.12.0, users can crash the server by sending a 1x1 pixel image to models using the Idefics3 vision component.

security
Jan 10, 2026

vLLM is a serving engine for running large language models, and versions 0.6.4 through 0.11.x have a vulnerability where attackers can crash the server by sending a tiny 1x1 pixel image to models using the Idefics3 vision component, causing a dimension mismatch (a size incompatibility between data structures) that terminates the entire service.

Fix: This issue has been patched in version 0.12.0. Users should upgrade to vLLM version 0.12.0 or later.
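The general lesson is to validate degenerate inputs before they reach the model. A sketch of such a pre-check (the minimum dimension below is an assumed illustration, not vLLM's actual constraint):

```python
MIN_DIM = 28  # assumed minimum tile size for illustration only

def validate_image_dims(width: int, height: int) -> None:
    """Reject images too small for the vision encoder before inference,
    so a degenerate upload fails with a client error instead of crashing
    the shared serving process."""
    if width < MIN_DIM or height < MIN_DIM:
        raise ValueError(
            f"image {width}x{height} is below the {MIN_DIM}x{MIN_DIM} minimum"
        )
```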

NVD/CVE Database
10

CVE-2025-14980: The BetterDocs plugin for WordPress is vulnerable to Sensitive Information Exposure in all versions up to, and including, 4.3.3.

security
Jan 9, 2026

The BetterDocs plugin for WordPress (all versions up to and including 4.3.3) has a vulnerability that exposes sensitive information, allowing authenticated attackers with contributor-level access or higher to extract data including OpenAI API keys stored in the plugin settings through the scripts() function. This affects any WordPress site using the plugin where users have contributor-level permissions or above.

Fix: Update to version 4.3.4 or later, as indicated by the WordPress plugin repository changeset reference showing the fix was applied in that version.
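The plugin itself is PHP, but the defensive pattern is language-agnostic: never send stored secrets to lower-privileged users, and mask them when settings must be displayed. A hypothetical helper, shown in Python for illustration:

```python
# Mask all but the last few characters of a stored secret before it is
# included in any settings payload visible to non-admin users.
def mask_secret(value: str, keep: int = 4) -> str:
    if len(value) <= keep:
        return "*" * len(value)
    return "*" * (len(value) - keep) + value[-keep:]
```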

NVD/CVE Database