aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 | Last 24 hours: 1 | Last 7 days: 1
Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

Page 226 of 371
01

CVE-2025-69222: LibreChat is a ChatGPT clone with additional features. Version 0.8.1-rc2 is prone to a server-side request forgery (SSRF) vulnerability in the Actions feature.

security
Jan 7, 2026

LibreChat version 0.8.1-rc2 has a server-side request forgery vulnerability (SSRF, where an attacker tricks a server into making requests to unintended targets) because the Actions feature allows agents to access any remote service without restrictions, including internal components like the RAG API (retrieval-augmented generation system that pulls in external documents). This means attackers could potentially use LibreChat to access internal systems they shouldn't reach.
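The standard mitigation for this class of SSRF is to validate outbound request targets before the agent fetches them. A minimal sketch in Python (LibreChat itself is Node.js; the function and cap names here are hypothetical, not LibreChat's actual fix):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url, allowed_hosts=None):
    """Basic SSRF guard: reject non-HTTP schemes, hosts outside an
    optional allowlist, and hosts resolving to internal address ranges.

    Note: a production guard must also pin the resolved IP for the
    actual request, or DNS rebinding can bypass the check-then-use gap.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    if allowed_hosts is not None and parsed.hostname not in allowed_hosts:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # unresolvable hosts are rejected, not retried
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

With such a gate, an agent asked to call an internal RAG API endpoint (e.g. `http://127.0.0.1/rag`) is refused before any request is made.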

NVD/CVE Database
02

CVE-2025-69221: LibreChat is a ChatGPT clone with additional features. Version 0.8.1-rc2 does not enforce proper access control when querying agent permissions.

security
Jan 7, 2026

LibreChat version 0.8.1-rc2 has an access control vulnerability where authenticated attackers (users who have logged in) can read permissions of any agent (a predefined AI assistant with specific instructions) without proper authorization, even if they shouldn't have access to that agent. If an attacker knows an agent's ID number, they can view permissions that other users have been granted for that agent.

Fix: This issue is fixed in version 0.8.2-rc2.

NVD/CVE Database
03

CVE-2025-69220: LibreChat is a ChatGPT clone with additional features. Version 0.8.1-rc2 does not enforce proper access control for file uploads to agent storage.

security
Jan 7, 2026

LibreChat version 0.8.1-rc2 has a missing authorization (a failure to check if a user has permission to do something) vulnerability that allows an authenticated attacker to upload files to any agent's file storage if they know the agent's ID, even without proper permissions. This could let attackers change how agents behave by adding malicious files.

Fix: This issue is fixed in version 0.8.2-rc2. Users should update to this version or later.
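The underlying bug is a missing ownership check before a write: knowing an agent's ID should not be sufficient to modify it. A minimal sketch of the kind of check that closes this gap (the data model and names here are hypothetical, not LibreChat's real permission system):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    agent_id: str
    owner: str
    editors: set = field(default_factory=set)  # users explicitly granted write access

class NotAuthorized(Exception):
    pass

def upload_file_to_agent(agent, user, filename, store):
    """Only the owner or an explicitly granted editor may add files.

    The vulnerable pattern skips this check and writes for any
    authenticated user who supplies a valid agent ID.
    """
    if user != agent.owner and user not in agent.editors:
        raise NotAuthorized(f"{user} may not modify agent {agent.agent_id}")
    store.setdefault(agent.agent_id, []).append(filename)
```

The same "authorize the object, not just the session" rule applies to the permission-read flaw in CVE-2025-69221 above.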

NVD/CVE Database
04

CVE-2025-14371: The Tag, Category, and Taxonomy Manager – AI Autotagger with OpenAI plugin for WordPress is vulnerable to unauthorized modification of taxonomy terms due to missing permission checks.

security
Jan 6, 2026

A WordPress plugin called 'Tag, Category, and Taxonomy Manager – AI Autotagger with OpenAI' has a security flaw (CWE-862, missing authorization) in versions up to 3.41.0 that allows contributors and higher-level users to add or remove taxonomy terms (tags and categories) on any post, even ones they don't own, due to missing permission checks. This vulnerability affects authenticated users who have contributor-level access or above.

NVD/CVE Database
05

CVE-2026-0621: Anthropic's MCP TypeScript SDK versions up to and including 1.25.1 contain a regular expression denial of service (ReDoS) vulnerability in the UriTemplate class.

security
Jan 5, 2026

Anthropic's MCP TypeScript SDK (a toolkit for building AI applications) versions up to 1.25.1 has a ReDoS vulnerability (regular expression denial of service, where a maliciously designed input causes the regex parser to work extremely hard and freeze the system) in its UriTemplate class. An attacker can send a specially crafted URI (web address) that makes the Node.js process (the JavaScript runtime environment) consume excessive CPU and stop responding, causing the application to crash or become unavailable.
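The advisory does not publish the vulnerable pattern, but the general ReDoS shape is a regex with nested quantifiers that backtracks exponentially on near-miss input. A small illustration of the class and its usual fix, rewriting the pattern so the same language is matched in linear time (Python shown for illustration; the SDK is TypeScript):

```python
import re

# Classic ReDoS shape: the nested quantifier in ^(a+)+$ forces
# exponential backtracking on near-miss input like "a" * 40 + "b".
VULNERABLE_PATTERN = r"^(a+)+$"  # do not run this on untrusted input

# Same language, no nesting: matches (or fails) in linear time.
SAFE = re.compile(r"a+")

def accepts(s):
    """Match against the linear-time rewrite of the vulnerable pattern."""
    return SAFE.fullmatch(s) is not None
```

Where the pattern cannot be rewritten, common mitigations are capping input length before matching or switching to a linear-time engine such as RE2.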

NVD/CVE Database
06

Revisiting Out-of-Distribution Detection in Real-Time Object Detection: From Benchmark Pitfalls to a New Mitigation Paradigm

researchsafety
Jan 5, 2026

Out-of-distribution (OoD) inputs, objects that don't match what an AI was trained on, cause object detection models to make overconfident wrong predictions on things they shouldn't recognize. This paper reveals that popular benchmark datasets used to test OoD detection have quality problems, with up to 13% of test objects mislabeled, making current methods appear better than they really are. The authors propose a new training-time approach in which object detectors are fine-tuned on carefully synthesized OoD training data that resembles normal objects, reducing false detections by 91% in YOLO models.

Fix: The paper introduces a training-time mitigation paradigm where 'we fine-tune the detector using a carefully synthesized OoD dataset that semantically resembles in-distribution objects.' This approach 'shapes a defensive decision boundary by suppressing objectness on OoD objects' and achieves 'a 91% reduction in hallucination error of a YOLO model on BDD-100K.' The methodology is shown to work across multiple detection architectures including YOLO, Faster R-CNN, and RT-DETR.
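At inference time, "suppressing objectness on OoD objects" amounts to OoD boxes scoring low and falling below the detector's confidence cut. A toy sketch of that final filtering step (the detection format and threshold are hypothetical, not the paper's implementation):

```python
def filter_detections(detections, objectness_threshold=0.5):
    """Drop detections whose objectness falls below a threshold.

    After the paper's fine-tuning, OoD objects should receive low
    objectness and be suppressed here instead of surfacing as
    confident false detections.
    """
    return [d for d in detections if d["objectness"] >= objectness_threshold]
```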

IEEE Xplore (Security & AI Journals)
07

CVE-2025-15453: A security vulnerability has been detected in Milvus up to 2.6.7. This vulnerability affects the function expr.Exec.

security
Jan 5, 2026

A security vulnerability (CVE-2025-15453) exists in Milvus versions up to 2.6.7 in the expr.Exec function, where an attacker can manipulate the code argument to trigger deserialization (converting untrusted data back into executable code), allowing remote exploitation with user credentials. The vulnerability has been publicly disclosed and is rated as medium severity (CVSS 5.3).

Fix: A fix is planned for the next release 2.6.8.

NVD/CVE Database
08

CVE-2026-21445: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.7.0.dev45, multiple critical API endpoints lack authentication controls.

security
Jan 2, 2026

Langflow, a tool for building AI-powered agents and workflows, has a security flaw in versions before 1.7.0.dev45 where some API endpoints (the interfaces that software uses to communicate and request data) are missing authentication controls (checks to verify who is using them). This allows anyone without a login to read private user conversations and transaction histories, and to delete messages. The vulnerability affects endpoints that handle sensitive personal data and system operations.

Fix: Update to version 1.7.0.dev45 or later, which contains a patch for this vulnerability.
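The missing control here is authentication at the endpoint boundary: verify the session before touching per-user data, and scope queries to the authenticated caller rather than a client-supplied ID. A framework-agnostic sketch (names and session store are hypothetical, not Langflow's code):

```python
class AuthError(Exception):
    pass

SESSIONS = {"secret-token": "alice"}  # stand-in for a real session store

def require_user(headers):
    """Reject the request before any data access if no valid session exists."""
    token = headers.get("Authorization", "").removeprefix("Bearer ").strip()
    user = SESSIONS.get(token)
    if user is None:
        raise AuthError("authentication required")
    return user

def list_messages(headers, db):
    user = require_user(headers)  # the check the vulnerable endpoints skipped
    return db.get(user, [])       # scope results to the caller, not a raw client id
```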

NVD/CVE Database
09

CVE-2026-21452: MessagePack for Java is a serializer implementation for Java. A denial-of-service vulnerability exists in versions prior to 0.9.11.

security
Jan 2, 2026

MessagePack for Java has a denial-of-service vulnerability in versions before 0.9.11 where specially crafted .msgpack files can trick the library into allocating massive amounts of memory. When the library deserializes (reads and converts) these files, it blindly trusts the size information in EXT32 objects (an extension data type) and tries to allocate a byte array matching that size, which can be impossibly large, causing the Java program to run out of memory and crash.

Fix: Update to version 0.9.11 or later, which fixes the vulnerability.
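The fix pattern for this class of bug is to sanity-check a declared length against both a hard cap and the bytes actually present before allocating. A sketch of parsing a msgpack EXT32 object defensively (Python for illustration; the EXT32 wire layout, marker 0xC9 + big-endian uint32 length + int8 type + payload, follows the msgpack spec, while the cap value is a hypothetical policy choice):

```python
import struct

MAX_EXT_BYTES = 16 * 1024 * 1024  # hypothetical per-object cap

class TruncatedData(ValueError):
    pass

def read_ext32(buf, offset):
    """Parse a msgpack EXT32 object without trusting its declared length.

    Returns (ext_type, payload). The vulnerable pattern allocates a
    buffer of the declared size before checking it is plausible.
    """
    if buf[offset] != 0xC9:
        raise ValueError("not an EXT32 object")
    (length,) = struct.unpack_from(">I", buf, offset + 1)
    if length > MAX_EXT_BYTES:
        raise TruncatedData(f"declared ext length {length} exceeds cap")
    start = offset + 6  # 1 marker byte + 4 length bytes + 1 type byte
    if start + length > len(buf):
        raise TruncatedData("declared length exceeds remaining bytes")
    ext_type = buf[offset + 5]
    return ext_type, buf[start:start + length]
```

Checking the declared size against `len(buf)` means a 4 GB claim in a 20-byte file fails fast instead of triggering a 4 GB allocation.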

NVD/CVE Database
10

UQLM: A Python Package for Uncertainty Quantification in Large Language Models

researchsafety
Dec 31, 2025

Hallucinations (instances where Large Language Models generate false or misleading content) are a safety problem for AI applications. The paper introduces UQLM, a Python package that uses uncertainty quantification (UQ, a statistical technique for measuring how confident a model is in its answer) to detect when an LLM is likely hallucinating by assigning confidence scores between 0 and 1 to responses.

Fix: The source describes UQLM as 'an off-the-shelf solution for UQ-based hallucination detection that can be easily integrated to enhance the reliability of LLM outputs.' No specific implementation steps, code examples, or version details are provided in the source text.
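One widely used UQ signal of the kind UQLM packages is self-consistency: sample the LLM several times at nonzero temperature and treat disagreement as a hallucination flag. A minimal sketch of that scoring idea (this is not UQLM's actual API, which the source does not detail):

```python
from collections import Counter

def consistency_score(answers):
    """Confidence in [0, 1]: fraction of sampled answers agreeing with the mode.

    Low scores suggest the model is guessing; a caller might route such
    responses to a human or refuse to answer.
    """
    if not answers:
        raise ValueError("need at least one sampled answer")
    normalized = [a.strip().lower() for a in answers]
    most_common_count = Counter(normalized).most_common(1)[0][1]
    return most_common_count / len(normalized)
```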

JMLR (Journal of Machine Learning Research)