aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

Optimal Online Control Strategy for Differentially Private Federated Learning

privacy, research
Dec 12, 2025

This research paper addresses a problem in differentially private federated learning (DP-FL, a technique that trains AI models across multiple devices while adding mathematical noise to protect privacy). The paper proposes a control framework that dynamically adjusts both the amount of noise added and the number of communication rounds during training, rather than using fixed or randomly adjusted noise levels. Experiments show this approach converges faster (reaches a good solution more quickly) and achieves better accuracy while maintaining the same privacy guarantees.

IEEE Xplore (Security & AI Journals)
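The paper's online controller itself isn't reproduced here, but the mechanism it tunes can be sketched: in DP-FL, each client's update is clipped to bound sensitivity, the server averages the clipped updates, and Gaussian noise scaled by a per-round noise multiplier is added. A minimal Python sketch, where the function names, fixed decay schedule, and numbers are all illustrative assumptions rather than the paper's method:

```python
import math
import random

def clip(update, max_norm):
    """Clip a client's model update to bound its L2 norm (its sensitivity)."""
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, max_norm / (norm + 1e-12))
    return [u * scale for u in update]

def dp_aggregate(client_updates, max_norm, noise_multiplier, rng):
    """Average clipped updates, then add Gaussian noise calibrated to the clip norm."""
    n = len(client_updates)
    clipped = [clip(u, max_norm) for u in client_updates]
    avg = [sum(vals) / n for vals in zip(*clipped)]
    sigma = noise_multiplier * max_norm / n
    return [a + rng.gauss(0.0, sigma) for a in avg]

# A fixed decaying noise schedule, shown for illustration only -- the paper's
# contribution is choosing the multiplier and round count with an online controller.
rng = random.Random(0)
updates = [[0.5, -0.2], [0.3, 0.1], [0.4, 0.0]]
for round_idx in range(3):
    noise_mult = 1.0 * (0.9 ** round_idx)
    noisy_avg = dp_aggregate(updates, max_norm=1.0, noise_multiplier=noise_mult, rng=rng)
```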
02

CVE-2025-66452: LibreChat is a ChatGPT clone with additional features. In versions 0.8.0 and below, there is no handler for JSON parsing …

security
Dec 11, 2025

LibreChat (a ChatGPT alternative with extra features) versions 0.8.0 and below have a security flaw where JSON parsing errors aren't properly handled, causing user input to appear in error messages. This can expose HTML or JavaScript code in responses, creating an XSS risk (cross-site scripting, where attackers inject malicious code that runs in users' browsers).

NVD/CVE Database
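The general defense for this bug class is to never reflect the raw request body into an error response, and to HTML-escape anything user-controlled that must be displayed. A minimal Python sketch (LibreChat itself is JavaScript; the handler below is illustrative, not its actual code):

```python
import html
import json

def parse_body_safely(raw_body):
    """Parse a JSON request body without reflecting the raw input on failure.

    Echoing the malformed body back lets '<script>' payloads reach the
    response; instead, expose only the parser's position information.
    Returns (parsed, None) on success or (None, error_message) on failure.
    """
    try:
        return json.loads(raw_body), None
    except json.JSONDecodeError as exc:
        # exc.lineno / exc.colno are safe to show; the raw body is not.
        return None, f"Invalid JSON at line {exc.lineno}, column {exc.colno}"

def render_error(user_supplied):
    """If user input must appear in an HTML error page, escape it first."""
    return f"<p>Bad value: {html.escape(user_supplied)}</p>"
```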
03

CVE-2025-66451: LibreChat is a ChatGPT clone with additional features. In versions 0.8.0 and below, when creating prompts, JSON requests …

security
Dec 11, 2025

LibreChat versions 0.8.0 and below have a vulnerability where JSON requests sent to modify prompts aren't properly checked for valid input, allowing users to change prompts in unintended ways through a PATCH endpoint (a request type that modifies existing data). The vulnerability occurs because the patchPromptGroup function passes user input directly without filtering out sensitive fields that shouldn't be modifiable.

Fix: Update to version 0.8.1, where this issue is fixed.

NVD/CVE Database
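The underlying bug class is mass assignment: forwarding the entire request body into an update. The standard fix is an explicit allow-list of patchable fields. A minimal Python sketch (the field names are hypothetical, and LibreChat's actual patchPromptGroup is JavaScript):

```python
# Hypothetical allow-list -- only fields a client may legitimately PATCH.
ALLOWED_PATCH_FIELDS = {"name", "prompt", "category"}

def sanitize_patch(payload):
    """Keep only explicitly allow-listed fields from a PATCH body.

    Passing the raw body straight into the update is a mass-assignment bug:
    clients can set server-controlled fields (ownership, permissions, IDs)
    that were never meant to be modifiable through the endpoint.
    """
    return {k: v for k, v in payload.items() if k in ALLOWED_PATCH_FIELDS}
```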
04

CVE-2025-66450: LibreChat is a ChatGPT clone with additional features. In versions 0.8.0 and below, when a user posts a question, the ic…

security
Dec 11, 2025

LibreChat, a ChatGPT clone with extra features, has a stored XSS vulnerability (cross-site scripting, where attackers inject malicious code into web pages) in versions 0.8.0 and below: an attacker can place a malicious payload in the iconURL parameter (a web address for an icon image) of a chat post. The payload is saved and delivered to other users who view the shared chat link, potentially exposing their private information through malicious trackers. The root cause is improper handling of HTML content.

Fix: Upgrade to LibreChat version 0.8.1 or later, where this issue is fixed.

NVD/CVE Database
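A common mitigation for this class of stored XSS is to validate the URL's scheme on input and escape it on output. A minimal Python sketch of the input-side check (illustrative only; LibreChat's actual fix lives in its JavaScript codebase):

```python
from urllib.parse import urlparse

def is_safe_icon_url(url):
    """Accept only absolute http(s) URLs for user-supplied icons.

    Stored-XSS payloads often ride in via 'javascript:' or 'data:' schemes;
    restricting the scheme (and escaping the value wherever it is rendered)
    blocks that class of payload.
    """
    try:
        parsed = urlparse(url)
    except ValueError:
        return False
    return parsed.scheme in {"http", "https"} and bool(parsed.netloc)
```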
05

M&M: Secure Two-Party Machine Learning Through Modulus Conversion and Mixed-Mode Protocols

research
Dec 11, 2025

M&M is a framework that improves secure two-party machine learning (where two parties compute on data without revealing it to each other) by using an efficient modulus conversion protocol (a technique that converts numbers between different mathematical domains used by different encryption methods). The framework integrates various cryptographic tools more efficiently, achieving 6–100 times faster approximated truncations (rounding operations) and 4–5 times faster communication and runtime for machine learning tasks.

IEEE Xplore (Security & AI Journals)
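The M&M protocols themselves aren't reproduced here, but the additive-secret-sharing substrate that two-party frameworks like this build on is easy to sketch: each value is split into two uniformly random shares, and linear operations can be done locally on shares. (The modulus and values below are illustrative; the paper's actual contribution, converting shares between the moduli used by different cryptographic back ends, is not shown.)

```python
import random

P = 2**61 - 1  # an illustrative prime modulus for arithmetic shares

def share(secret, rng):
    """Split a secret into two additive shares mod P; either share alone is uniform."""
    s0 = rng.randrange(P)
    return s0, (secret - s0) % P

def reconstruct(s0, s1):
    """Recombine the two shares to recover the secret."""
    return (s0 + s1) % P

# Each party holds one share of each input; adding shares locally yields
# shares of the sum, so the sum is computed without revealing either input.
rng = random.Random(7)
a0, a1 = share(12, rng)
b0, b1 = share(30, rng)
c0, c1 = (a0 + b0) % P, (a1 + b1) % P
```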
06

Learning Generalizable Representations for Deepfake Detection With Realistic Sample Generation and Dual Augmentation

research
Dec 11, 2025

This research addresses the problem that deepfake detection systems (AI trained to identify manipulated images created by generative models like GANs and diffusion models) often fail when encountering new or unfamiliar types of forgeries. The authors propose RSG-DA, a framework that improves detection by generating diverse fake samples and using a dual augmentation strategy (data transformation techniques applied in two different ways) to help the AI learn to recognize a wider range of forgery patterns, along with a lightweight module to make these learned patterns work better across different datasets.

IEEE Xplore (Security & AI Journals)
07

Why Not Diversify Triggers? APK-Specific Backdoor Attack Against Android Malware Detection

security, research
Dec 11, 2025

Researchers demonstrated a new attack method called ASBA (APK-Specific Backdoor Attack) that can compromise Android malware detection systems by injecting poisoned training data. Unlike previous attacks that use the same trigger across many malware samples, ASBA uses a generative adversarial network (GAN, an AI technique that learns to create realistic fake data) to generate unique triggers for each malware sample, making it harder for security tools to detect and block multiple instances of malware at once.

IEEE Xplore (Security & AI Journals)
08

Introducing mrva, a terminal-first approach to CodeQL multi-repo variant analysis

securityresearch
Dec 11, 2025

GitHub's CodeQL multi-repository variant analysis (MRVA) lets you run security bug-finding queries across thousands of projects quickly, but it's built mainly for VS Code. A developer created mrva, a terminal-based alternative that runs on your machine and works with command-line tools, letting you download pre-built CodeQL databases (collections of code information), analyze them with queries, and display results in the terminal.

Trail of Bits Blog
09

CVE-2025-67511: Cybersecurity AI (CAI) is an open-source framework for building and deploying AI-powered offensive and defensive automat…

security
Dec 10, 2025

CVE-2025-67511 is a command injection vulnerability (a flaw where attackers can insert malicious commands into input) in Cybersecurity AI (CAI), an open-source framework for building AI agents that handle security tasks. Versions 0.5.9 and earlier are vulnerable because the run_ssh_command_with_credentials() function only escapes (protects) the password and command inputs, leaving the username, host, and port values open to attack.

NVD/CVE Database
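The fix pattern for this bug class is to quote every user-controlled field that reaches a shell, not just some of them. A minimal Python sketch (the function name and argument layout are illustrative, not CAI's actual API):

```python
import shlex

def build_ssh_command(username, host, port, command):
    """Build an ssh invocation with every user-controlled field neutralised.

    CVE-2025-67511 reportedly escaped only the password and command, so an
    unquoted username or host like 'user; rm -rf /' was interpreted by the
    shell. shlex.quote neutralises shell metacharacters in each field, and
    the port is coerced to an integer so non-numeric input is rejected early.
    """
    port = int(port)  # raises ValueError for anything non-numeric
    return (
        f"ssh -p {port} "
        f"{shlex.quote(username)}@{shlex.quote(host)} "
        f"{shlex.quote(command)}"
    )
```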
10

CVE-2025-67510: Neuron is a PHP framework for creating and orchestrating AI Agents. In versions 2.8.11 and below, the MySQLWriteTool exe…

security
Dec 10, 2025

Neuron is a PHP framework for creating and orchestrating AI agents. Versions 2.8.11 and earlier have a vulnerability in the MySQLWriteTool component: the tool runs database commands without checking whether they're safe, allowing attackers to use prompt injection (tricking the AI by hiding instructions in its input) to execute harmful SQL commands, such as deleting entire tables or changing permissions, if the database user has broad access rights.

Fix: Update to version 2.8.12, which fixes this issue.

NVD/CVE Database
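A generic mitigation pattern for LLM tools like this is to gate model-generated SQL before it reaches the driver, and to run the tool with least-privilege credentials. A minimal Python sketch (Neuron itself is PHP; the deny-list and function name are illustrative, and a regex check is a coarse stand-in for real SQL parsing):

```python
import re

# Destructive statement verbs a write tool should never execute for an LLM.
FORBIDDEN = re.compile(r"\b(DROP|TRUNCATE|GRANT|REVOKE|ALTER)\b", re.IGNORECASE)

def is_statement_allowed(sql):
    """Reject destructive SQL before execution.

    The vulnerable MySQLWriteTool reportedly ran model-generated statements
    unchecked, so a prompt-injected 'DROP TABLE users' executed with the DB
    user's full rights. Pairing a check like this with a least-privilege
    database account narrows the blast radius.
    """
    return FORBIDDEN.search(sql) is None
```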