aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,757
[LAST_24H]
23
[LAST_7D]
176
Daily BriefingThursday, April 2, 2026
>

Model Context Protocol Security Gaps Highlighted: MCP (a system that connects AI agents to data sources) has gained business adoption but faces serious risks including prompt injection (tricking an AI by hiding instructions in its input), token theft, and data leaks. Despite recent improvements like OAuth support and an official registry, organizations still lack adequate tools for access controls, authorization checks, and detailed logging to protect sensitive data.

Latest Intel

page 137/276
VIEW ALL
01

Learning Generalizable Representations for Deepfake Detection With Realistic Sample Generation and Dual Augmentation

research
Dec 11, 2025

This research addresses the problem that deepfake detection systems (AI trained to identify manipulated images created by generative models like GANs and diffusion models) often fail when encountering new or unfamiliar types of forgeries. The authors propose RSG-DA, a framework that improves detection by generating diverse fake samples and using a dual augmentation strategy (data transformation techniques applied in two different ways) to help the AI learn to recognize a wider range of forgery patterns, along with a lightweight module to make these learned patterns work better across different datasets.

Critical This Week5 issues
critical

GHSA-6vh2-h83c-9294: PraisonAI: Python Sandbox Escape via str Subclass startswith() Override in execute_code

CVE-2026-34938GitHub Advisory DatabaseApr 1, 2026
Apr 1, 2026
IEEE Xplore (Security & AI Journals)
02

Why Not Diversify Triggers? APK-Specific Backdoor Attack Against Android Malware Detection

securityresearch
Dec 11, 2025

Researchers demonstrated a new attack method called ASBA (APK-Specific Backdoor Attack) that can compromise Android malware detection systems by injecting poisoned training data. Unlike previous attacks that use the same trigger across many malware samples, ASBA uses a generative adversarial network (GAN, an AI technique that learns to create realistic fake data) to generate unique triggers for each malware sample, making it harder for security tools to detect and block multiple instances of malware at once.

IEEE Xplore (Security & AI Journals)
03

Introducing mrva, a terminal-first approach to CodeQL multi-repo variant analysis

securityresearch
Dec 11, 2025

GitHub's CodeQL multi-repository variant analysis (MRVA) lets you run security bug-finding queries across thousands of projects quickly, but it's built mainly for VS Code. A developer created mrva, a terminal-based alternative that runs on your machine and works with command-line tools, letting you download pre-built CodeQL databases (collections of code information), analyze them with queries, and display results in the terminal.

Trail of Bits Blog
04

CVE-2025-67511: Cybersecurity AI (CAI) is an open-source framework for building and deploying AI-powered offensive and defensive automat

security
Dec 10, 2025

CVE-2025-67511 is a command injection vulnerability (a flaw where attackers can insert malicious commands into input) in Cybersecurity AI (CAI), an open-source framework for building AI agents that handle security tasks. Versions 0.5.9 and earlier are vulnerable because the run_ssh_command_with_credentials() function only escapes (protects) the password and command inputs, leaving the username, host, and port values open to attack.

NVD/CVE Database
05

CVE-2025-67510: Neuron is a PHP framework for creating and orchestrating AI Agents. In versions 2.8.11 and below, the MySQLWriteTool exe

security
Dec 10, 2025

Neuron is a PHP framework for creating AI agents that can perform tasks, and versions 2.8.11 and earlier have a vulnerability in the MySQLWriteTool component. The tool runs database commands without checking if they're safe, allowing attackers to use prompt injection (tricking the AI by hiding instructions in its input) to execute harmful SQL commands like deleting entire tables or changing permissions if the database user has broad access rights.

Fix: Update to version 2.8.12, which fixes this issue.

NVD/CVE Database
06

CVE-2025-67509: Neuron is a PHP framework for creating and orchestrating AI Agents. Versions 2.8.11 and below use MySQLSelectTool, which

security
Dec 10, 2025

Neuron is a PHP framework for building AI agents that can query databases. Versions 2.8.11 and below have a flaw in MySQLSelectTool, a component meant to safely let AI agents read from databases. The tool only checks if a command starts with SELECT and blocks certain words, but misses SQL commands like INTO OUTFILE that write files to disk. An attacker could use prompt injection (tricking an AI by hiding instructions in its input) through a public agent endpoint to write files to the database server if it has the right permissions.

Fix: Fixed in version 2.8.12.

NVD/CVE Database
07

An XSS Attack Detection Model Based on Two-Stage AST Analysis

researchsecurity
Dec 10, 2025

XSS attacks (malicious code injected into websites to steal user data) are hard to detect because attackers can create adversarial samples that trick detection models into missing threats. This paper proposes a new detection model using two-stage AST (abstract syntax tree, a structural representation of code) analysis combined with LSTM (long short-term memory, a type of neural network good at processing sequences) to better identify malicious code while resisting adversarial tricks, achieving over 98.2% detection accuracy even against adversarial attacks.

IEEE Xplore (Security & AI Journals)
08

Fairness-Aware Differential Privacy: A Fairly Proportional Noise Mechanism

researchprivacy
Dec 10, 2025

This research proposes a Fairly Proportional Noise Mechanism (FPNM) to address a problem in differential privacy (DP, a technique that adds random noise to data to protect individual privacy while allowing statistical analysis). Traditional DP methods add noise uniformly without considering fairness, which can unfairly affect different groups of people differently, especially in decision-making and learning tasks. The new FPNM approach adjusts noise based on both its direction and size relative to the actual data values, reducing unfairness by about 17-19% in experiments while maintaining privacy protections.

IEEE Xplore (Security & AI Journals)
09

Security Analysis of WiFi-Based Sensing Systems: Threats From Perturbation Attacks

securityresearch
Dec 10, 2025

WiFi-based sensing systems that use deep learning (AI models trained on large amounts of data) are vulnerable to adversarial perturbation attacks, where attackers subtly manipulate wireless signals to fool the system into making wrong predictions. Researchers developed WiIntruder, a new attack method that can work across different applications and evade detection, reducing the accuracy of WiFi sensing services by an average of 72.9%, highlighting a significant security gap in these systems.

IEEE Xplore (Security & AI Journals)
10

Toward Understanding the Tradeoff Between Privacy Preservation and Byzantine-Robustness in Decentralized Learning

securityresearch
Dec 10, 2025

This research paper studies the challenge of balancing two competing goals in decentralized learning (where multiple computers train an AI model together without a central server): keeping each computer's data private while protecting against Byzantine attacks (when some computers deliberately send false information to sabotage the learning process). The authors found that using Gaussian noise (random mathematical noise added to messages) to protect privacy actually makes it harder to defend against Byzantine attacks, creating a fundamental tradeoff between these two security goals.

IEEE Xplore (Security & AI Journals)
Prev1...135136137138139...276Next
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162NVD/CVE DatabaseMar 31, 2026
Mar 31, 2026
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026