aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3224 items

Special Issue Editorial: Brave New Work and the Future of Computing Professionals (Part 1)

info · research · Peer-Reviewed
research
Dec 12, 2025

This editorial introduces a special issue examining how evolving information technology and society will shape the future of work, jobs, and professional roles. It calls for research that projects multiple possible futures, evaluates which outcomes are most valuable, and identifies steps organizations can take now to work toward their preferred future states.

AIS eLibrary (Journal of AIS, CAIS, etc.)

Optimal Online Control Strategy for Differentially Private Federated Learning

info · research · Peer-Reviewed
privacy

A look at an Android ITW DNG exploit

info · news
security
Dec 12, 2025

Between July 2024 and February 2025, malicious DNG files (a raw image format) were discovered that exploited a Samsung vulnerability through the Quram image parsing library. These files were sent via WhatsApp and triggered a spyware infection when users clicked to download the images, which then allowed the malware to run within Samsung's com.samsung.ipservice process, a system service that automatically scans images for AI-powered features.

CVE-2025-66452: LibreChat is a ChatGPT clone with additional features. In versions 0.8.0 and below, there is no handler for JSON parsing…

medium · vulnerability
security
Dec 11, 2025
CVE-2025-66452

LibreChat (a ChatGPT alternative with extra features) versions 0.8.0 and below have a security flaw where JSON parsing errors aren't properly handled, causing user input to appear in error messages. This can expose HTML or JavaScript code in responses, creating an XSS risk (cross-site scripting, where attackers inject malicious code that runs in users' browsers).
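
The root cause here is reflecting raw request text into markup. LibreChat itself is a Node.js application, but the bug class is language-agnostic; the following is a minimal Python sketch of the defensive pattern (hypothetical function and message format, not LibreChat's actual code):

```python
import html
import json

def render_parse_error(raw_body: str) -> str:
    """Build an HTML error page for a possibly malformed JSON request body.

    The vulnerable pattern interpolates raw_body directly, letting a
    '<script>' payload from the request reach the browser unescaped.
    Escaping the user-controlled text first neutralizes the XSS.
    """
    try:
        json.loads(raw_body)
        return "<p>OK</p>"
    except json.JSONDecodeError as exc:
        # Escape attacker-controlled text before embedding it in markup.
        safe = html.escape(raw_body)
        return f"<p>Invalid JSON near: {safe} ({exc.msg})</p>"
```

The fix is not to stop echoing input (that is useful for debugging) but to encode it for the context it is rendered in.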

CVE-2025-66451: LibreChat is a ChatGPT clone with additional features. In versions 0.8.0 and below, when creating prompts, JSON requests…

medium · vulnerability
security
Dec 11, 2025
CVE-2025-66451

LibreChat versions 0.8.0 and below have a vulnerability where JSON requests sent to modify prompts aren't properly checked for valid input, allowing users to change prompts in unintended ways through a PATCH endpoint (a request type that modifies existing data). The vulnerability occurs because the patchPromptGroup function passes user input directly without filtering out sensitive fields that shouldn't be modifiable.
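
The underlying bug class is mass assignment: forwarding a client's JSON body straight into an update operation. A hedged sketch of the standard allow-list defense (the field names are hypothetical, not LibreChat's schema):

```python
# Allow-list the fields a PATCH request may modify; drop everything else.
ALLOWED_PATCH_FIELDS = {"name", "prompt", "category"}  # hypothetical field names

def filter_patch(payload: dict) -> dict:
    """Return only the keys a client is allowed to update.

    The vulnerable pattern passes `payload` straight to the database
    update, letting callers overwrite fields that should be read-only
    (ownership, internal IDs, and so on).
    """
    return {k: v for k, v in payload.items() if k in ALLOWED_PATCH_FIELDS}
```

Allow-listing is preferred over deny-listing sensitive fields, because new sensitive fields added later are protected by default.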

CVE-2025-66450: LibreChat is a ChatGPT clone with additional features. In versions 0.8.0 and below, when a user posts a question…

medium · vulnerability
security
Dec 11, 2025
CVE-2025-66450

LibreChat, a ChatGPT clone with extra features, has a vulnerability in versions 0.8.0 and below where an attacker can set the iconURL parameter (a web address for an icon image) in chat posts to a malicious value. That value is stored and replayed when the chat is shared, so other users who open the shared link can have their private information exposed through malicious trackers. The root cause is improper handling of HTML content (XSS, or cross-site scripting, where attackers inject malicious code into web pages).
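
A common first line of defense for a stored URL parameter like this is scheme validation before persisting it. A minimal sketch of that idea (an assumption about the defense, not LibreChat's actual patch):

```python
from urllib.parse import urlparse

def is_safe_icon_url(url: str) -> bool:
    """Accept only http(s) URLs with a host, so payloads such as
    'javascript:alert(1)' or data: URIs are never stored and later
    replayed into other users' browsers."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```

Scheme validation complements, but does not replace, output encoding wherever the URL is rendered.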

Learning Generalizable Representations for Deepfake Detection With Realistic Sample Generation and Dual Augmentation

info · research · Peer-Reviewed
research

Why Not Diversify Triggers? APK-Specific Backdoor Attack Against Android Malware Detection

info · research · Peer-Reviewed
security

Introducing mrva, a terminal-first approach to CodeQL multi-repo variant analysis

info · news
security · research

CVE-2025-67511: Cybersecurity AI (CAI) is an open-source framework for building and deploying AI-powered offensive and defensive…

critical · vulnerability
security
Dec 10, 2025
CVE-2025-67511

CVE-2025-67511 is a command injection vulnerability (a flaw where attackers can insert malicious commands into input) in Cybersecurity AI (CAI), an open-source framework for building AI agents that handle security tasks. Versions 0.5.9 and earlier are vulnerable because the run_ssh_command_with_credentials() function only escapes (protects) the password and command inputs, leaving the username, host, and port values open to attack.
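
CAI is a Python project, and the described flaw amounts to quoting only some of the fields that reach the shell. A sketch of the fully quoted pattern (illustrative only, not CAI's actual run_ssh_command_with_credentials() implementation):

```python
import shlex

def build_ssh_command(username: str, host: str, port: str, command: str) -> str:
    """Quote *every* attacker-influenced field, not just the command.

    The CVE arose because only the password and command were escaped,
    so a username such as 'user; rm -rf /' could still reach the shell
    as live syntax. shlex.quote makes each field a single shell word.
    """
    return (
        f"ssh -p {shlex.quote(port)} "
        f"{shlex.quote(username)}@{shlex.quote(host)} "
        f"{shlex.quote(command)}"
    )
```

An even safer design passes an argument list to the process API directly (no shell at all), which sidesteps quoting entirely.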

CVE-2025-67510: Neuron is a PHP framework for creating and orchestrating AI Agents. In versions 2.8.11 and below, the MySQLWriteTool…

critical · vulnerability
security
Dec 10, 2025
CVE-2025-67510

Neuron is a PHP framework for creating AI agents that can perform tasks, and versions 2.8.11 and earlier have a vulnerability in the MySQLWriteTool component. The tool runs database commands without checking if they're safe, allowing attackers to use prompt injection (tricking the AI by hiding instructions in its input) to execute harmful SQL commands like deleting entire tables or changing permissions if the database user has broad access rights.
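
Because the tool's input can be steered by prompt injection, the SQL string itself has to be treated as untrusted. Neuron is a PHP framework, but the guard pattern translates directly; a blunt allow-list sketch in Python (a hypothetical policy, not Neuron's actual fix, and no substitute for a least-privilege database account):

```python
import re

# Statements a write tool might legitimately need; everything else is refused.
ALLOWED_STATEMENTS = {"INSERT", "UPDATE"}  # hypothetical policy

def check_write_query(sql: str) -> bool:
    """Reject multi-statement input and anything outside the allow-list.

    Deliberately blunt: a semicolon inside a string literal is also
    rejected. Real deployments should pair this with a database user
    that lacks DROP/GRANT rights, so a bypassed check still fails.
    """
    if ";" in sql.rstrip().rstrip(";"):   # embedded statement separator
        return False
    first = re.split(r"\s+", sql.strip(), maxsplit=1)[0].upper()
    return first in ALLOWED_STATEMENTS
```

The comment in the docstring is the key point: application-layer checks on LLM-generated SQL are best-effort, while database permissions are enforceable.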

CVE-2025-67509: Neuron is a PHP framework for creating and orchestrating AI Agents. Versions 2.8.11 and below use MySQLSelectTool…

high · vulnerability
security
Dec 10, 2025
CVE-2025-67509

Neuron is a PHP framework for building AI agents that can query databases. Versions 2.8.11 and below have a flaw in MySQLSelectTool, a component meant to safely let AI agents read from databases. The tool only checks if a command starts with SELECT and blocks certain words, but misses SQL commands like INTO OUTFILE that write files to disk. An attacker could use prompt injection (tricking an AI by hiding instructions in its input) through a public agent endpoint to write files to the database server if it has the right permissions.
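
The bypass is easy to reproduce: a prefix check accepts any query that begins with SELECT, including one that writes to disk. A sketch contrasting the two checks (illustrative only; a robust fix needs real SQL parsing plus a database user without the FILE privilege):

```python
def naive_is_readonly(sql: str) -> bool:
    # The bypassed check: only inspects the statement prefix.
    return sql.lstrip().upper().startswith("SELECT")

def stricter_is_readonly(sql: str) -> bool:
    """Also refuse file-writing clauses that are valid inside a SELECT."""
    upper = " ".join(sql.upper().split())  # normalize whitespace
    banned = ("INTO OUTFILE", "INTO DUMPFILE")
    return naive_is_readonly(sql) and not any(b in upper for b in banned)

# A SELECT that passes the naive check yet writes a file to the server.
payload = "SELECT 'x' INTO OUTFILE '/tmp/shell.php'"
```

Keyword matching narrows the gap but is still not a parser; the durable control is removing the FILE privilege from the agent's database account.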

An XSS Attack Detection Model Based on Two-Stage AST Analysis

info · research · Peer-Reviewed
research

Blockchain-Enhanced Verifiable Secure Inference for Regulatable Privacy-Preserving Transactions

info · research · Peer-Reviewed
security

Security Analysis of WiFi-Based Sensing Systems: Threats From Perturbation Attacks

info · research · Peer-Reviewed
security

Toward Understanding the Tradeoff Between Privacy Preservation and Byzantine-Robustness in Decentralized Learning

info · research · Peer-Reviewed
security

Fairness-Aware Differential Privacy: A Fairly Proportional Noise Mechanism

info · research · Peer-Reviewed
research

OWASP Top 10 for Agentic Applications – The Benchmark for Agentic Security in the Age of Autonomous AI

info · research · Industry
security

OWASP GenAI Security Project Releases Top 10 Risks and Mitigations for Agentic AI Security

info · research · Industry
safety

CVE-2025-33213: NVIDIA Merlin Transformers4Rec for Linux contains a vulnerability in the Trainer component, where a user could…

high · vulnerability
security
Dec 9, 2025
CVE-2025-33213

NVIDIA Merlin Transformers4Rec for Linux has a vulnerability in its Trainer component involving deserialization of untrusted data (treating unverified data as legitimate code or objects). A user exploiting this flaw could potentially run arbitrary code, crash the system (denial of service), steal information, or modify data.
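
The advisory does not name the exact deserializer, but Python ML stacks commonly reach pickle somewhere, and pickle will resolve arbitrary globals during load. A generic hardening sketch (an allow-list unpickler, not NVIDIA's fix):

```python
import io
import pickle

# Illustrative allow-list; real code would list the model's known types.
SAFE_GLOBALS = {("builtins", "dict"), ("builtins", "list")}

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global not explicitly allow-listed,
    blocking the classic os.system-in-__reduce__ payload."""
    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked: {module}.{name}")

def restricted_loads(data: bytes):
    """Deserialize untrusted bytes under the restricted policy."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Even with this, the safest posture is to treat model artifacts from untrusted sources as executable code and verify their provenance.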

Optimal Online Control Strategy for Differentially Private Federated Learning (research, Dec 12, 2025)

This research paper addresses a problem in differentially private federated learning (DP-FL, a technique that trains AI models across multiple devices while adding mathematical noise to protect privacy). The paper proposes a new control framework that dynamically adjusts both the amount of noise added and how many communication rounds occur during training, rather than using fixed or randomly adjusted noise levels. Experiments show this approach achieves faster convergence (reaching a good solution quicker) and better accuracy while maintaining the same privacy guarantees.

IEEE Xplore (Security & AI Journals)
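
The paper's contribution is the online controller, but the knob it turns is the standard Gaussian mechanism, whose noise scale is set by the query sensitivity and the (ε, δ) privacy budget. A textbook sketch of that mechanism (the standard formula, not the paper's controller):

```python
import math
import random

def noise_scale(sensitivity: float, epsilon: float, delta: float) -> float:
    """Standard deviation for the (epsilon, delta)-DP Gaussian mechanism."""
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

def privatize(value: float, sensitivity: float, epsilon: float,
              delta: float, rng: random.Random) -> float:
    """Release a value with calibrated Gaussian noise added."""
    return value + rng.gauss(0.0, noise_scale(sensitivity, epsilon, delta))
```

A larger ε (weaker privacy) shrinks the noise and speeds convergence, which is exactly the tradeoff the paper's controller adjusts round by round.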

Fix (Android DNG exploit): The exploited Samsung vulnerability was fixed in April 2025.

Google Project Zero
NVD/CVE Database (CVE-2025-66452)

Fix (LibreChat): Update to version 0.8.1, where this issue is fixed.

NVD/CVE Database

Fix (LibreChat): This issue is fixed in version 0.8.1. Users should upgrade to LibreChat version 0.8.1 or later.

NVD/CVE Database
Learning Generalizable Representations for Deepfake Detection With Realistic Sample Generation and Dual Augmentation (Dec 11, 2025)

This research addresses the problem that deepfake detection systems (AI trained to identify manipulated images created by generative models like GANs and diffusion models) often fail when encountering new or unfamiliar types of forgeries. The authors propose RSG-DA, a framework that improves detection by generating diverse fake samples and using a dual augmentation strategy (data transformation techniques applied in two different ways) to help the AI learn to recognize a wider range of forgery patterns, along with a lightweight module to make these learned patterns work better across different datasets.

IEEE Xplore (Security & AI Journals)
Why Not Diversify Triggers? APK-Specific Backdoor Attack Against Android Malware Detection (research, Dec 11, 2025)

Researchers demonstrated a new attack method called ASBA (APK-Specific Backdoor Attack) that can compromise Android malware detection systems by injecting poisoned training data. Unlike previous attacks that use the same trigger across many malware samples, ASBA uses a generative adversarial network (GAN, an AI technique that learns to create realistic fake data) to generate unique triggers for each malware sample, making it harder for security tools to detect and block multiple instances of malware at once.

IEEE Xplore (Security & AI Journals)
Introducing mrva, a terminal-first approach to CodeQL multi-repo variant analysis (Dec 11, 2025)

GitHub's CodeQL multi-repository variant analysis (MRVA) lets you run security bug-finding queries across thousands of projects quickly, but it's built mainly for VS Code. A developer created mrva, a terminal-based alternative that runs on your machine and works with command-line tools, letting you download pre-built CodeQL databases (collections of code information), analyze them with queries, and display results in the terminal.

Trail of Bits Blog
NVD/CVE Database (CVE-2025-67511)

Fix (Neuron): Update to version 2.8.12, which fixes this issue.

NVD/CVE Database

Fix (Neuron): Fixed in version 2.8.12.

NVD/CVE Database
An XSS Attack Detection Model Based on Two-Stage AST Analysis (security, Dec 10, 2025)

XSS attacks (malicious code injected into websites to steal user data) are hard to detect because attackers can create adversarial samples that trick detection models into missing threats. This paper proposes a new detection model using two-stage AST (abstract syntax tree, a structural representation of code) analysis combined with LSTM (long short-term memory, a type of neural network good at processing sequences) to better identify malicious code while resisting adversarial tricks, achieving over 98.2% detection accuracy even against adversarial attacks.

IEEE Xplore (Security & AI Journals)
Blockchain-Enhanced Verifiable Secure Inference for Regulatable Privacy-Preserving Transactions (research, Dec 10, 2025)

This research proposes a new system that combines blockchain (a decentralized ledger that records transactions) with zero-knowledge proofs (cryptographic methods that prove something is true without revealing the underlying data) to make AI model inference more trustworthy and private. The system verifies both where the input data comes from and where the AI model weights (the learned parameters that control how an AI makes decisions) come from, while keeping user information confidential. The authors demonstrate their approach with a privacy-preserving transaction system that can detect suspicious activity without exposing private data.

IEEE Xplore (Security & AI Journals)
Security Analysis of WiFi-Based Sensing Systems: Threats From Perturbation Attacks (research, Dec 10, 2025)

WiFi-based sensing systems that use deep learning (AI models trained on large amounts of data) are vulnerable to adversarial perturbation attacks, where attackers subtly manipulate wireless signals to fool the system into making wrong predictions. Researchers developed WiIntruder, a new attack method that can work across different applications and evade detection, reducing the accuracy of WiFi sensing services by an average of 72.9%, highlighting a significant security gap in these systems.

IEEE Xplore (Security & AI Journals)
Toward Understanding the Tradeoff Between Privacy Preservation and Byzantine-Robustness in Decentralized Learning (research, Dec 10, 2025)

This research paper studies the challenge of balancing two competing goals in decentralized learning (where multiple computers train an AI model together without a central server): keeping each computer's data private while protecting against Byzantine attacks (when some computers deliberately send false information to sabotage the learning process). The authors found that using Gaussian noise (random mathematical noise added to messages) to protect privacy actually makes it harder to defend against Byzantine attacks, creating a fundamental tradeoff between these two security goals.

IEEE Xplore (Security & AI Journals)
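
A toy illustration of that tension (my own sketch, not the paper's analysis): a median aggregator shrugs off a single Byzantine client, but Gaussian privacy noise on the honest updates moves the aggregate anyway.

```python
import random
import statistics

def aggregate(updates):
    """Median aggregation, a common Byzantine-robust rule."""
    return statistics.median(updates)

rng = random.Random(42)
honest = [1.0] * 9      # nine honest clients, true update ~1.0
byzantine = [50.0]      # one attacker sends an extreme update

# Without DP noise, the median ignores the outlier completely.
clean = aggregate(honest + byzantine)

# With Gaussian privacy noise on honest updates, the robust aggregate
# drifts: the noise that hides individuals also blurs the defense.
noisy = aggregate([u + rng.gauss(0.0, 5.0) for u in honest] + byzantine)
```

The larger the noise standard deviation (stronger privacy), the wider the drift, which is the qualitative shape of the tradeoff the paper formalizes.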
Fairness-Aware Differential Privacy: A Fairly Proportional Noise Mechanism (privacy, Dec 10, 2025)

This research proposes a Fairly Proportional Noise Mechanism (FPNM) to address a problem in differential privacy (DP, a technique that adds random noise to data to protect individual privacy while allowing statistical analysis). Traditional DP methods add noise uniformly without considering fairness, which can unfairly affect different groups of people differently, especially in decision-making and learning tasks. The new FPNM approach adjusts noise based on both its direction and size relative to the actual data values, reducing unfairness by about 17-19% in experiments while maintaining privacy protections.

IEEE Xplore (Security & AI Journals)
OWASP Top 10 for Agentic Applications – The Benchmark for Agentic Security in the Age of Autonomous AI (policy, Dec 10, 2025)

OWASP has released a Top 10 list of security risks specifically for agentic AI applications, which are autonomous AI systems that can use tools and take actions on their own. This framework was built from real incidents and industry experience to help organizations secure these advanced AI systems as they become more common.

OWASP GenAI Security
OWASP GenAI Security Project Releases Top 10 Risks and Mitigations for Agentic AI Security (policy, Dec 10, 2025)

The OWASP GenAI Security Project (an open-source community focused on AI safety) has released a list of the top 10 security risks for agentic AI (AI systems that can take actions independently). This guidance was created with input from over 100 industry experts and is meant to help organizations understand and address threats to AI systems.

OWASP GenAI Security
NVD/CVE Database (CVE-2025-33213)