aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

Page 235 of 371
01

LibPass: An Entropy-Guided Black-Box Adversarial Attack Against Third-Party Library Detection Tools in the Wild

securityresearch
Dec 3, 2025

Researchers discovered a serious weakness in tools designed to detect third-party libraries (external code that apps use) in Android applications. They created LibPass, an attack method that generates adversarially modified versions of apps that fool these detection tools into missing dangerous or non-compliant libraries, with success rates as high as 99%. The study shows that current detection tools are not robust against deliberate evasion, which puts users at risk because unsafe libraries can hide inside apps.

IEEE Xplore (Security & AI Journals)
02

LlamaIndex v0.14.9

industry
Dec 2, 2025

LlamaIndex released version 0.14.9 with updates across multiple components, including bug fixes for vector stores (systems that store data as numerical embeddings that AI models can search), support for new AI models such as Claude Opus 4.5 and GPT-5.1, and improvements to integrations with services like Azure, Bedrock, and Qdrant. The release also addresses issues with memory management, async operations (non-blocking code that runs concurrently), and various database connectors.
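
Not part of the release notes, but as a minimal sketch of how a project might confirm it has picked up this release (assuming the package is installed under the distribution name "llama-index" and that the third-party packaging library is available):

    # Minimal sketch: check whether the installed LlamaIndex is at least 0.14.9.
    from importlib.metadata import version
    from packaging.version import Version  # third-party helper for version comparison

    installed = Version(version("llama-index"))
    if installed < Version("0.14.9"):
        print(f"llama-index {installed} predates 0.14.9; consider upgrading.")
    else:
        print(f"llama-index {installed} includes the 0.14.9 fixes.")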

LlamaIndex Security Releases
03

CVE-2025-66448: vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vllm has a critical remote co

security
Dec 1, 2025

vLLM (a tool for running large language models) versions before 0.11.1 have a critical security flaw where loading a model configuration can execute malicious code from the internet without the user's permission. An attacker can create a fake model that appears safe but secretly downloads and runs harmful code from another location, even when users try to block remote code by setting trust_remote_code=False (a security setting meant to prevent exactly this).

Fix: This vulnerability is fixed in vLLM version 0.11.1. Users should update to this version or later.
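
As a defensive sketch (assuming the standard vllm Python API and the packaging library; this is not code from the advisory), a loader can refuse to touch untrusted models on pre-0.11.1 builds and keep remote code explicitly disabled:

    # Sketch only: gate untrusted model loading on the patched vLLM version.
    import vllm
    from vllm import LLM
    from packaging.version import Version

    PATCHED = Version("0.11.1")

    def load_untrusted_model(model_id: str) -> LLM:
        if Version(vllm.__version__) < PATCHED:
            raise RuntimeError(
                f"vLLM {vllm.__version__} predates the CVE-2025-66448 fix; upgrade to {PATCHED} or later"
            )
        # trust_remote_code=False is already the default; on vulnerable versions
        # this flag alone did not stop a malicious model config from running code.
        return LLM(model=model_id, trust_remote_code=False)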

NVD/CVE Database
04

AI Safety Newsletter #66: Evaluating Frontier Models, New Gemini and Claude, Preemption is Back

safetyresearch
Dec 1, 2025

The Center for AI Safety launched an AI Dashboard that evaluates frontier AI models (the most advanced AI systems currently available) on capability and safety benchmarks, ranking them across text, vision, and risk categories. The Risk Index specifically measures how likely models are to exhibit dangerous behaviors like dual-use biology assistance (helping with harmful biological research), jailbreaking vulnerability (susceptibility to tricks that bypass safety features), overconfidence, deception, and harmful actions, with Claude Opus 4.5 currently scoring safest at 33.6 on a 0-100 scale (lower is safer). The dashboard also tracks industry progress toward broader automation milestones like AGI (artificial general intelligence, systems that can perform any intellectual task) and self-driving vehicles.

CAIS AI Safety Newsletter
05

Frequency Bias Matters: Diving Into Robust and Generalized Deep Image Forgery Detection

securityresearch
Dec 1, 2025

AI-generated image forgeries created by tools like GANs (generative adversarial networks, AI models that create fake images) are hard to detect reliably, especially when detectors face new types of fakes or noisy images. The researchers trace these failures to frequency bias (a tendency to focus on certain frequency patterns in image data while ignoring others), and they developed a frequency alignment method that can either attack these detectors or strengthen them by removing the frequency-level differences between real and fake images.

Fix: The source proposes a two-step frequency alignment method to remove the frequency discrepancy between real and fake images. According to the text, this method 'can serve as a strong black-box attack against forgery detectors in the anti-forensic context or, conversely, as a universal defense to improve detector reliability in the forensic context.' The authors developed corresponding attack and defense implementations and demonstrated their effectiveness across twelve detectors, eight forgery models, and five evaluation metrics.
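
The paper's alignment method is not reproduced here, but as a rough illustration of what analyzing images "at the frequency level" involves (assuming NumPy and same-sized grayscale arrays), one can compare the average FFT magnitude spectra of real and generated image sets; spectral gaps like this are the kind of cue a frequency-biased detector over-relies on:

    # Illustration only, not the paper's method: average log-magnitude spectra
    # of two image sets and measure how far apart they are in frequency space.
    import numpy as np

    def mean_log_spectrum(images):
        """images: iterable of 2D grayscale arrays with identical shapes."""
        spectra = [
            np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))))
            for img in images
        ]
        return np.mean(spectra, axis=0)

    def spectral_gap(real_images, fake_images):
        """Mean absolute difference between the two average spectra."""
        return float(np.abs(mean_log_spectrum(real_images) - mean_log_spectrum(fake_images)).mean())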

IEEE Xplore (Security & AI Journals)
06

CVE-2025-66201: LibreChat is a ChatGPT clone with additional features. Prior to version 0.8.1-rc2, LibreChat is vulnerable to Server-sid

security
Nov 29, 2025

LibreChat, a ChatGPT alternative with extra features, had a vulnerability in versions before 0.8.1-rc2 where an authenticated user could exploit the "Actions" feature by uploading malicious OpenAPI specs (interface documents that describe how to connect to external services) to perform SSRF (server-side request forgery, where the server itself is tricked into accessing restricted URLs on the attacker's behalf). This could allow attackers to reach sensitive services like cloud metadata endpoints that are normally hidden from regular users.

Fix: Update LibreChat to version 0.8.1-rc2 or later, where this issue has been patched.
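
LibreChat itself is JavaScript and its actual patch is not shown here; as a language-agnostic illustration of the usual SSRF guard (sketched in Python), a server can resolve a user-supplied URL and refuse to fetch anything that lands on an internal address, including the cloud metadata range:

    # Generic SSRF guard sketch (illustrative only, not LibreChat's implementation):
    # resolve the target host and reject internal address ranges before fetching.
    import ipaddress
    import socket
    from urllib.parse import urlparse

    def is_safe_url(url: str) -> bool:
        parsed = urlparse(url)
        if parsed.scheme not in ("http", "https") or not parsed.hostname:
            return False
        try:
            infos = socket.getaddrinfo(parsed.hostname, None)
        except socket.gaierror:
            return False
        for info in infos:
            ip = ipaddress.ip_address(info[4][0])
            # Blocks loopback, RFC1918, and link-local ranges (incl. 169.254.169.254 metadata).
            if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
                return False
        return True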

NVD/CVE Database
07

CVE-2025-12638: Keras version 3.11.3 is affected by a path traversal vulnerability in the keras.utils.get_file() function when extractin

security
Nov 28, 2025

Keras version 3.11.3 has a path traversal vulnerability (a security flaw where attackers can write files outside the intended directory) in the keras.utils.get_file() function when extracting tar archives (compressed file formats). The function fails to properly validate file paths during extraction, allowing an attacker to write files anywhere on the system, potentially compromising it or executing malicious code.
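
This is the classic unsafe-tar-extraction pattern. Not the Keras patch itself, but a generic sketch of the defense: resolve each member's destination and refuse anything that escapes the target directory (recent Python versions can do a similar check with tarfile's filter="data").

    # Generic safe extraction sketch (not the Keras fix): block members whose
    # resolved path would land outside the destination directory.
    import os
    import tarfile

    def safe_extract(archive_path: str, dest_dir: str) -> None:
        dest_dir = os.path.realpath(dest_dir)
        with tarfile.open(archive_path) as tar:
            for member in tar.getmembers():
                target = os.path.realpath(os.path.join(dest_dir, member.name))
                if target != dest_dir and not target.startswith(dest_dir + os.sep):
                    raise ValueError(f"Blocked path traversal attempt: {member.name}")
            tar.extractall(dest_dir)  # on Python 3.12+, extractall(dest_dir, filter="data") adds a built-in check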

NVD/CVE Database
08

CVE-2025-13381: The AI ChatBot with ChatGPT and Content Generator by AYS plugin for WordPress is vulnerable to unauthorized access due t

security
Nov 27, 2025

The AI ChatBot with ChatGPT and Content Generator plugin for WordPress has a missing authorization check (a security control that verifies a user has permission to perform an action) in its 'ays_chatgpt_save_wp_media' function, allowing unauthenticated attackers to upload media files without logging in. The flaw affects all versions up to and including 2.7.0.

Fix: Update to version 2.7.1 or later, which includes a fix for the missing authorization check as shown in the changeset referenced in the vulnerability report.
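
The plugin is PHP, so the snippet below is only a hypothetical Python (Flask) illustration of the underlying pattern: the vulnerable behavior omits the permission check before saving an upload, while the fixed behavior verifies the caller's capability first.

    # Hypothetical illustration of a missing-authorization bug; not the plugin's PHP code.
    import os
    from flask import Flask, request, abort
    from werkzeug.utils import secure_filename

    app = Flask(__name__)
    UPLOAD_DIR = "/tmp/uploads"  # illustrative destination only

    def current_user_can(capability: str) -> bool:
        """Placeholder; a real application would consult its session/auth layer."""
        return False

    @app.route("/save-media", methods=["POST"])
    def save_media():
        # The vulnerable pattern omits this check, so unauthenticated visitors can upload.
        if not current_user_can("upload_files"):
            abort(403)
        uploaded = request.files.get("file")
        if uploaded is None or not uploaded.filename:
            abort(400)
        os.makedirs(UPLOAD_DIR, exist_ok=True)
        uploaded.save(os.path.join(UPLOAD_DIR, secure_filename(uploaded.filename)))
        return "saved"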

NVD/CVE Database
09

CVE-2025-13378: The AI ChatBot with ChatGPT and Content Generator by AYS plugin for WordPress is vulnerable to Server-Side Request Forge

security
Nov 27, 2025

CVE-2025-13378 is a vulnerability in the AI ChatBot with ChatGPT and Content Generator plugin for WordPress that allows SSRF (server-side request forgery, where an attacker tricks a server into making unwanted network requests on their behalf). The flaw affects versions 2.6.9 and earlier.

Fix: The vulnerability was fixed in version 2.7.1, as shown by the changeset comparison between version 2.6.9 and version 2.7.1 of the admin file in the WordPress plugin repository.

NVD/CVE Database
10

CVE-2025-62593: Ray is an AI compute engine. Prior to version 2.52.0, developers working with Ray as a development tool can be exploited

security
Nov 26, 2025

Ray, an AI compute engine, had a critical vulnerability before version 2.52.0 that allowed attackers to run code on a developer's computer (RCE, or remote code execution) through Firefox and Safari browsers. The vulnerability exploited a weak security check that only looked at the User-Agent header (a piece of information browsers send to websites) combined with DNS rebinding attacks (tricks that redirect browser requests to unexpected servers), allowing attackers to compromise developers who visited malicious websites or ads.

Fix: Update to Ray version 2.52.0 or later, as this issue has been patched in that version.
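
The patch itself isn't shown here, but the textbook mitigation for DNS rebinding against local tools is to validate the Host header (which a rebound request still carries with the attacker's domain) rather than the easily spoofed User-Agent. A generic Python sketch of that check:

    # Generic DNS-rebinding defense sketch (not Ray's fix): reject requests whose
    # Host header is not an expected local name, regardless of User-Agent.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ALLOWED_HOSTS = {"localhost", "127.0.0.1"}

    class HostCheckedHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            hostname = (self.headers.get("Host") or "").split(":")[0]
            if hostname not in ALLOWED_HOSTS:
                self.send_error(403, "Unexpected Host header")
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), HostCheckedHandler).serve_forever()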

NVD/CVE Database