aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24h: 1
Last 7d: 1
Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

CVE-2025-67509: Neuron is a PHP framework for creating and orchestrating AI Agents. Versions 2.8.11 and below use MySQLSelectTool, which validates queries only with a SELECT prefix check and a keyword blocklist, letting file-writing clauses such as INTO OUTFILE slip through.

security
Dec 10, 2025

Neuron is a PHP framework for building AI agents that can query databases. Versions 2.8.11 and below have a flaw in MySQLSelectTool, a component meant to safely let AI agents read from databases. The tool only checks if a command starts with SELECT and blocks certain words, but misses SQL commands like INTO OUTFILE that write files to disk. An attacker could use prompt injection (tricking an AI by hiding instructions in its input) through a public agent endpoint to write files to the database server's disk, provided the database account has the necessary file-write privileges.
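As a rough sketch of this flaw class (the function and blocklist below are hypothetical illustrations, not Neuron's actual code), a prefix-plus-blocklist guard happily passes a file-writing query:

```python
# Hypothetical sketch of the flawed validation pattern described above;
# names and blocklist contents are illustrative, not Neuron's actual code.
BLOCKLIST = {"insert", "update", "delete", "drop", "alter", "create"}

def naive_is_safe(query: str) -> bool:
    q = query.strip().lower()
    if not q.startswith("select"):      # "read-only" prefix check
        return False
    # blocklist check on individual words
    return not any(word in q.split() for word in BLOCKLIST)

# A legitimate read passes, as intended:
assert naive_is_safe("SELECT name FROM users")
# But a file-writing query also passes, because INTO OUTFILE is not
# blocklisted -- an injected prompt can make the agent write server files:
assert naive_is_safe("SELECT 'payload' INTO OUTFILE '/var/www/shell.php'")
```

A robust guard would parse the statement rather than pattern-match it; blocklists routinely miss constructs like INTO OUTFILE or INTO DUMPFILE.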

Fix: upgrade to version 2.8.12.

NVD/CVE Database
02

An XSS Attack Detection Model Based on Two-Stage AST Analysis

research, security
Dec 10, 2025

XSS attacks (malicious code injected into websites to steal user data) are hard to detect because attackers can create adversarial samples that trick detection models into missing threats. This paper proposes a new detection model using two-stage AST (abstract syntax tree, a structural representation of code) analysis combined with LSTM (long short-term memory, a type of neural network good at processing sequences) to better identify malicious code while resisting adversarial tricks, achieving over 98.2% detection accuracy even against adversarial attacks.

IEEE Xplore (Security & AI Journals)
03

Fairness-Aware Differential Privacy: A Fairly Proportional Noise Mechanism

research, privacy
Dec 10, 2025

This research proposes a Fairly Proportional Noise Mechanism (FPNM) to address a problem in differential privacy (DP, a technique that adds random noise to data to protect individual privacy while allowing statistical analysis). Traditional DP methods add noise uniformly without considering fairness, which can affect different groups of people disproportionately, especially in decision-making and learning tasks. The new FPNM approach adjusts noise based on both its direction and size relative to the actual data values, reducing unfairness by about 17-19% in experiments while maintaining privacy protections.
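The paper's exact FPNM formulation is in the article itself; as a hypothetical numeric sketch of the underlying fairness problem, the same absolute noise distorts a small group's statistic far more, in relative terms, than a large group's:

```python
# Hypothetical numeric sketch (not the paper's mechanism): one noise scale
# for everyone vs. noise scaled to each value's magnitude.
values = [5.0, 500.0]          # e.g. a count for a small vs. a large group
uniform_noise = 10.0           # identical absolute noise for both

uniform_rel = [uniform_noise / v for v in values]
# relative distortion: 200% for the small group, only 2% for the large one

# Scaling the noise to each value equalizes the relative distortion:
proportional_noise = [0.05 * abs(v) for v in values]
prop_rel = [n / v for n, v in zip(proportional_noise, values)]
# relative distortion: 5% for both groups
```

This only illustrates why uniform noise is "unfair"; the actual FPNM also accounts for the noise's direction and must still satisfy formal DP guarantees.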

IEEE Xplore (Security & AI Journals)
04

Security Analysis of WiFi-Based Sensing Systems: Threats From Perturbation Attacks

security, research
Dec 10, 2025

WiFi-based sensing systems that use deep learning (AI models trained on large amounts of data) are vulnerable to adversarial perturbation attacks, where attackers subtly manipulate wireless signals to fool the system into making wrong predictions. Researchers developed WiIntruder, a new attack method that can work across different applications and evade detection, reducing the accuracy of WiFi sensing services by an average of 72.9%, highlighting a significant security gap in these systems.

IEEE Xplore (Security & AI Journals)
05

Toward Understanding the Tradeoff Between Privacy Preservation and Byzantine-Robustness in Decentralized Learning

security, research
Dec 10, 2025

This research paper studies the challenge of balancing two competing goals in decentralized learning (where multiple computers train an AI model together without a central server): keeping each computer's data private while protecting against Byzantine attacks (when some computers deliberately send false information to sabotage the learning process). The authors found that using Gaussian noise (random mathematical noise added to messages) to protect privacy actually makes it harder to defend against Byzantine attacks, creating a fundamental tradeoff between these two security goals.
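A hypothetical numeric sketch (not the paper's formal analysis) of why these goals conflict: Gaussian privacy noise widens the spread of honest updates, eroding the margin that a distance-based Byzantine filter relies on.

```python
import random

# Honest nodes send similar (scalar) model updates; one node is poisoned.
honest = [1.0] * 9
byzantine = [10.0]
updates = honest + byzantine

def flagged(vals, threshold=3.0):
    """Toy distance-based filter: flag updates far from the mean."""
    mean = sum(vals) / len(vals)
    return [v for v in vals if abs(v - mean) > threshold]

# Without privacy noise, the poisoned update is the only outlier:
assert flagged(updates) == [10.0]

# Adding Gaussian noise (large sigma for strong privacy) blurs the picture:
# honest updates can now exceed the threshold, and the poisoned one can
# hide inside the noisy spread.
random.seed(0)
noisy = [v + random.gauss(0, 4.0) for v in updates]
print(flagged(noisy))
```

The threshold, sigma, and scalar updates here are illustrative choices; the paper's contribution is characterizing this tradeoff formally.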

IEEE Xplore (Security & AI Journals)
06

Blockchain-Enhanced Verifiable Secure Inference for Regulatable Privacy-Preserving Transactions

security, research
Dec 10, 2025

This research proposes a new system that combines blockchain (a decentralized ledger that records transactions) with zero-knowledge proofs (cryptographic methods that prove something is true without revealing the underlying data) to make AI model inference more trustworthy and private. The system verifies both where the input data comes from and where the AI model weights (the learned parameters that control how an AI makes decisions) come from, while keeping user information confidential. The authors demonstrate their approach with a privacy-preserving transaction system that can detect suspicious activity without exposing private data.

IEEE Xplore (Security & AI Journals)
07

OWASP Top 10 for Agentic Applications – The Benchmark for Agentic Security in the Age of Autonomous AI

security, policy
Dec 10, 2025

OWASP has released a Top 10 list of security risks specifically for agentic AI applications, which are autonomous AI systems that can use tools and take actions on their own. This framework was built from real incidents and industry experience to help organizations secure these advanced AI systems as they become more common.

OWASP GenAI Security
08

OWASP GenAI Security Project Releases Top 10 Risks and Mitigations for Agentic AI Security

safety, policy
Dec 10, 2025

The OWASP GenAI Security Project (an open-source community focused on AI safety) has released a list of the top 10 security risks for agentic AI (AI systems that can take actions independently). This guidance was created with input from over 100 industry experts and is meant to help organizations understand and address threats to AI systems.

OWASP GenAI Security
09

CVE-2025-33213: NVIDIA Merlin Transformers4Rec for Linux contains a vulnerability in the Trainer component, where a user could cause a denial of service, code execution, information disclosure, or data tampering through deserialization of untrusted data.

security
Dec 9, 2025

NVIDIA Merlin Transformers4Rec for Linux has a vulnerability in its Trainer component involving deserialization of untrusted data (treating unverified data as legitimate code or objects). A user exploiting this flaw could potentially run arbitrary code, crash the system (denial of service), steal information, or modify data.
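To see why this vulnerability class is so severe, here is a hypothetical illustration using Python's pickle (not the actual Transformers4Rec code path): a deserializer that reconstructs objects by invoking whatever the payload tells it to will execute an attacker-chosen function.

```python
import pickle

# Illustrative only -- NOT the Transformers4Rec code path. pickle rebuilds
# objects by calling whatever __reduce__ returns, so a crafted payload makes
# the *loader* execute an attacker-chosen callable.
class Malicious:
    def __reduce__(self):
        # Any callable works here; os.system would run a shell command.
        # len() is used to keep the demo harmless.
        return (len, ("attacker-controlled",))

payload = pickle.dumps(Malicious())
# The victim only calls pickle.loads, yet the attacker's call runs:
result = pickle.loads(payload)
assert result == len("attacker-controlled")
```

This is why untrusted data should never be fed to a general-purpose deserializer; schema-bound formats (JSON with validation, for example) avoid the arbitrary-callable problem.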

NVD/CVE Database
10

CVE-2025-64671: Improper neutralization of special elements used in a command ('command injection') in Copilot allows an unauthorized attacker to execute code locally.

security
Dec 9, 2025

CVE-2025-64671 is a command injection vulnerability (a flaw where an attacker can inject malicious commands into input that gets executed) in Copilot that allows an unauthorized attacker to execute code locally on a system. The vulnerability stems from improper handling of special characters in commands, and Microsoft has documented it as a known issue.
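As a hypothetical illustration of this vulnerability class (not Copilot's actual code path), attacker-controlled input spliced into a shell command string smuggles in a second command, while proper quoting neutralizes the special characters:

```python
import shlex

# Illustrative only -- not Copilot's code. The ';' in the input lets a
# shell run a second, attacker-chosen command.
user_input = "notes.txt; echo INJECTED"

unsafe_cmd = f"cat {user_input}"             # shell would also run the echo
safe_cmd = f"cat {shlex.quote(user_input)}"  # ';' is now inert data

assert unsafe_cmd == "cat notes.txt; echo INJECTED"
assert safe_cmd == "cat 'notes.txt; echo INJECTED'"
```

Passing an argument vector (e.g. `subprocess.run(["cat", user_input])`) avoids the shell entirely and is generally preferable to quoting.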

NVD/CVE Database