aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Maintained by Truong (Jack) Luu, Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3117 items

Responsible AI Question Bank for Risk Assessment

info · research · Peer-Reviewed
safety · research
Mar 16, 2026

This academic survey article, published in ACM Computing Surveys, presents a question bank designed to help assess risks in AI systems responsibly. It is a comprehensive review of how organizations can evaluate potential harms and safety concerns when developing or deploying AI, rather than a description of a specific vulnerability or problem.

ACM Digital Library (TOPS, DTRAP, CSUR)

Building Trust in Artificial Intelligence: A Systematic Review through the Lens of Trust Theory

info · research · Peer-Reviewed
research · safety
Mar 16, 2026

This academic paper is a systematic review published in ACM Computing Surveys that examines how trust works in artificial intelligence systems using established trust theory frameworks. The article analyzes trust in AI through theoretical lenses rather than addressing a specific technical vulnerability or problem.

ACM Digital Library (TOPS, DTRAP, CSUR)

Detecting Training Data For Large Language Models: A Survey

info · research · Peer-Reviewed
security · research
Mar 16, 2026

This survey article reviews methods for detecting training data used to build large language models (LLMs, AI systems trained on massive amounts of text to generate human-like responses). The paper examines various techniques that researchers have developed to identify and extract information about what data was used to train these models, which is important for understanding model behavior and potential privacy concerns.

ACM Digital Library (TOPS, DTRAP, CSUR)

Bias-Free? An Empirical Study on Ethnicity, Gender, and Age Fairness in Deepfake Detection

info · research · Peer-Reviewed
research · safety
Mar 16, 2026

This research paper studies whether deepfake detection systems (AI tools that identify fake videos made to look real) are fair across different groups of people based on ethnicity, gender, and age. The study found that these detection systems often perform differently depending on the person's background, meaning they work better for some groups than others. The paper highlights that bias in deepfake detection is an important fairness problem that needs attention.

ACM Digital Library (TOPS, DTRAP, CSUR)

Adaptive Real-Time Financial Fraud Detection with Explainable AI Tools

info · research · Peer-Reviewed
research · security
Mar 16, 2026

This academic paper discusses using explainable AI (AI systems that can show their reasoning for decisions) to detect financial fraud as it happens in real time. The research focuses on making fraud detection systems that adapt to new fraud patterns while also being transparent about why they flag transactions as suspicious.

ACM Digital Library (TOPS, DTRAP, CSUR)

Enhancing Digital Security: A Novel Dual-Paradigm Approach for Robust Deepfake Detection Using Pre and Post Quantum-Trained Neural Networks

info · research · Peer-Reviewed
research · security
Mar 16, 2026

This research paper proposes a new method for detecting deepfakes (AI-generated fake videos or images) by using neural networks (computer systems loosely modeled on how brains learn) trained with both current and quantum computing approaches. The dual approach aims to make deepfake detection more reliable and harder for attackers to bypass.

ACM Digital Library (TOPS, DTRAP, CSUR)

Hybrid Machine Learning–Based Trust Management Approach to Secure the Mobile Crowdsourcing

info · research · Peer-Reviewed
security · research
Mar 16, 2026

This research article proposes a hybrid machine learning approach to improve trust management and security in mobile crowdsourcing (a system where mobile users contribute data or complete tasks for a distributed project). The study combines multiple machine learning techniques to identify trustworthy participants and protect against malicious actors in crowdsourcing environments.

ACM Digital Library (TOPS, DTRAP, CSUR)

Teens sue Musk's xAI over Grok's pornographic images of them

info · news
safety · policy
Mar 16, 2026

Teenagers are suing xAI (Elon Musk's artificial intelligence company) because Grok, its chatbot, allowed users to create sexually explicit images of the teens without their permission. The lawsuit focuses on a feature called 'spicy mode', released last year, which could generate fake nude or sexual images of real people, including minors, that were then shared on platforms like Discord and Telegram.

Fix: By mid-January, X said it would implement 'technological measures' to stop Grok's ability to undress people in photos. Regulatory investigations into the feature's ability to create sexualized images of real people, particularly children, have also been launched by UK watchdog Ofcom, the European Commission, and California.

BBC Technology

GHSA-ffx7-75gc-jg7c: File Browser TUS Negative Upload-Length Fires Post-Upload Hooks Prematurely

medium · vulnerability
security
Mar 16, 2026
CVE-2026-32759

File Browser's TUS resumable upload handler fails to validate that the Upload-Length header is non-negative. When an attacker supplies a negative value such as -1, the first PATCH request immediately satisfies the completion condition (0 >= -1 is true), causing after_upload hooks (automated scripts that run after file uploads) to fire with empty or partial files. An authenticated user with upload permission can trigger these hooks repeatedly with any filename, even without actually uploading data.

GitHub Advisory Database
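The advisory's arithmetic is easy to reproduce. Below is a minimal Go sketch of the flawed completion check, using hypothetical type and field names rather than File Browser's actual code; the fix amounts to a bounds check at upload creation, before any completion logic can run.

```go
package main

import (
	"errors"
	"fmt"
)

// upload is a hypothetical stand-in for a TUS upload record.
type upload struct {
	length int64 // declared Upload-Length header value
	offset int64 // bytes received so far
}

// isComplete mirrors the vulnerable check: with length = -1,
// offset (0) >= length (-1) is true before any data arrives,
// so after_upload hooks fire on an empty file.
func (u *upload) isComplete() bool {
	return u.offset >= u.length
}

// newUpload shows the hardening: reject negative declared lengths
// when the upload is created.
func newUpload(declaredLength int64) (*upload, error) {
	if declaredLength < 0 {
		return nil, errors.New("Upload-Length must be non-negative")
	}
	return &upload{length: declaredLength}, nil
}

func main() {
	u := &upload{length: -1}    // attacker sends Upload-Length: -1
	fmt.Println(u.isComplete()) // true: hooks would fire with no data

	if _, err := newUpload(-1); err != nil {
		fmt.Println("rejected:", err) // validated path refuses the request
	}
}
```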

Benjamin Netanyahu is struggling to prove he’s not an AI clone

info · news
safety · security
Mar 16, 2026

Social media is spreading conspiracy theories that Israeli Prime Minister Benjamin Netanyahu has been replaced by deepfakes (AI-generated fake videos or images that look real), pointing to supposed errors like extra fingers in videos as evidence. While there is little credible evidence Netanyahu is actually dead or injured, the ability of AI to convincingly create fake images, videos, and audio of real people makes it harder to definitively prove these rumors false.

The Verge (AI)

AGentVLM: Access control policy generation and verification framework with language models

info · research · Peer-Reviewed
research
Mar 16, 2026

AGentVLM is a framework that uses small language models (AI systems trained on text) to automatically convert written organizational rules into access control policies (rules defining who can access what resources). The system avoids using large third-party AI services, keeping data private, and can handle complex requirements like purposes and conditions while verifying that generated policies are accurate before they're put into use.

Elsevier Security Journals
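To make "policies with purposes and conditions" concrete, here is a hypothetical Go sketch of what a generated policy record and access check could look like; the structure and field names are illustrative assumptions, not AGentVLM's actual output format.

```go
package main

import "fmt"

// policy is a hypothetical access control rule of the kind a framework
// like AGentVLM might generate from written organizational rules.
type policy struct {
	subject   string // who is requesting access
	resource  string // what is being accessed
	action    string // e.g. "read", "write"
	purpose   string // why access is requested
	condition func(ctx map[string]string) bool // extra contextual constraint
}

// request models an access request with contextual attributes.
type request struct {
	subject, resource, action, purpose string
	ctx                                map[string]string
}

// allows checks a request against one policy, including its condition.
func (p policy) allows(r request) bool {
	return p.subject == r.subject &&
		p.resource == r.resource &&
		p.action == r.action &&
		p.purpose == r.purpose &&
		(p.condition == nil || p.condition(r.ctx))
}

func main() {
	// "Nurses may read patient records for treatment, on-shift only."
	p := policy{
		subject:  "nurse",
		resource: "patient_record",
		action:   "read",
		purpose:  "treatment",
		condition: func(ctx map[string]string) bool {
			return ctx["on_shift"] == "true"
		},
	}
	r := request{"nurse", "patient_record", "read", "treatment",
		map[string]string{"on_shift": "true"}}
	fmt.Println(p.allows(r)) // true
}
```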

AMF-CFL: Anomaly model filtering based on clustering in federated learning

info · research · Peer-Reviewed
security · research
Mar 16, 2026

Federated learning (a system where multiple participants train a shared AI model without sharing their raw data) is vulnerable to attacks from malicious clients who send harmful model updates. This paper proposes AMF-CFL, a defense method that uses multi-k-means clustering (a technique for grouping similar data points) and z-score statistical analysis (a way to identify unusual values) to filter out malicious updates and protect the global model, even when clients have non-i.i.d. data distributions (when each participant's data differs significantly in type and quantity).

Fix: AMF-CFL reduces the influence of malicious updates through a two-step filtering strategy: it first applies multi-k-means clustering to identify anomalous update patterns, followed by z-score-based statistical analysis to refine the selection of benign updates.

Elsevier Security Journals
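As an illustration of the z-score stage, the sketch below filters client updates by the z-scores of their L2 norms. The norms-based statistic and the threshold are assumptions made for the example, and the paper's multi-k-means pre-clustering stage is omitted.

```go
package main

import (
	"fmt"
	"math"
)

// zScoreFilter keeps the indices of client updates whose L2 norms fall
// within `threshold` standard deviations of the mean norm.
func zScoreFilter(norms []float64, threshold float64) []int {
	var mean float64
	for _, n := range norms {
		mean += n
	}
	mean /= float64(len(norms))

	var variance float64
	for _, n := range norms {
		variance += (n - mean) * (n - mean)
	}
	std := math.Sqrt(variance / float64(len(norms)))

	var kept []int
	for i, n := range norms {
		if std == 0 || math.Abs(n-mean)/std <= threshold {
			kept = append(kept, i)
		}
	}
	return kept
}

func main() {
	// Norms of per-client model updates; client 3 is an outlier (poisoned).
	norms := []float64{1.02, 0.97, 1.05, 9.8, 1.01, 0.99}
	fmt.Println(zScoreFilter(norms, 2.0)) // [0 1 2 4 5]
}
```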

Explainable android malware detection and malicious code localization using graph attention

info · research · Peer-Reviewed
research · security
Mar 16, 2026

This research paper presents XAIDroid, a framework that uses graph neural networks (GNNs, machine learning models that analyze relationships between connected pieces of data) and graph attention mechanisms to automatically identify and locate malicious code within Android apps. The system represents app code as API call graphs (maps of how different functions communicate) and assigns importance scores to pinpoint which specific code sections are malicious, achieving 97.27% recall at the class level.

Elsevier Security Journals

Fed-Adapt: A Federated Learning Framework for Adaptive Topology Reconfiguration Against Multi-Rate DDoS and Database Flooding Attacks

info · research · Peer-Reviewed
research · security
Mar 16, 2026

Fed-Adapt is a federated learning framework (a system where multiple computers learn together while keeping their data private) designed to defend networks against DDoS attacks (floods of traffic meant to overwhelm servers) and database flooding attacks (requests that exhaust database resources). The framework addresses the challenge of detecting and responding to these sophisticated attacks in real-time while protecting data privacy across distributed networks, which existing federated learning approaches struggle to do effectively.

Elsevier Security Journals

Large language model (LLM) for software security: Code analysis, malware analysis, reverse engineering

info · research · Peer-Reviewed
research · security
Mar 16, 2026

This is a review article examining how Large Language Models (LLMs, AI systems trained on vast amounts of text to understand and generate language) are being used in cybersecurity to analyze malware (harmful software designed to damage systems). The article surveys recent research on using LLMs for malware detection, understanding malicious code structure, reverse engineering (the process of analyzing compiled software to understand how it works), and identifying patterns of malicious behavior.

Elsevier Security Journals

VFEFL: Privacy-preserving federated learning against malicious clients via verifiable functional encryption

info · research · Peer-Reviewed
security · research
Mar 16, 2026

Federated learning (a system where multiple computers train AI models together without sharing their raw data) faces two major security problems: attackers can steal information from the local models that clients upload, and malicious clients can sabotage the training by sending bad models. This paper proposes VFEFL, a new federated learning scheme that uses verifiable functional encryption (a type of encryption that lets you check if calculations on encrypted data are correct without decrypting it) to protect client data privacy while detecting and defending against attacks from dishonest participants.

Fix: The paper proposes VFEFL (a privacy-preserving federated learning scheme based on verifiable functional encryption) as the solution. According to the source, VFEFL 'employ[s] a verifiable functional encryption scheme to encrypt local models in the federated learning, ensuring data privacy and correctness during encryption and decryption' and 'enables verifiable client-side aggregated weights and can be integrated into standard federated learning architectures to enhance trust.' The source states that 'experimental results demonstrate that VFEFL effectively defends against such attacks while preserving model privacy' under both targeted and untargeted poisoning attacks.

Elsevier Security Journals

Towards few-shot malware classification with fine-grained and pattern-aware multi-prototype network

info · research · Peer-Reviewed
research
Mar 16, 2026

This research paper proposes FIPAPNet, a machine learning system designed to classify malware when only a few samples are available, which is important because new malware variants often appear with limited examples. The system uses few-shot learning (a technique where AI learns from minimal training data) combined with dynamic features like system call sequences to achieve 93% accuracy in early-stage malware detection. This approach helps security defenders respond quickly to zero-day attacks (new, previously unknown malware) without needing hundreds of samples to retrain their detection models.

Elsevier Security Journals
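For background, few-shot classifiers in this family typically build a prototype per class by averaging the embeddings of the few labeled samples, then assign a new sample to the nearest prototype. A minimal Go sketch of that baseline idea follows; FIPAPNet's multi-prototype, pattern-aware design is more elaborate than this.

```go
package main

import (
	"fmt"
	"math"
)

// prototype averages the embedding vectors of a class's few samples.
func prototype(samples [][]float64) []float64 {
	dim := len(samples[0])
	p := make([]float64, dim)
	for _, s := range samples {
		for i, v := range s {
			p[i] += v
		}
	}
	for i := range p {
		p[i] /= float64(len(samples))
	}
	return p
}

// dist is squared Euclidean distance between two embeddings.
func dist(a, b []float64) float64 {
	var d float64
	for i := range a {
		d += (a[i] - b[i]) * (a[i] - b[i])
	}
	return d
}

func main() {
	// Toy 2-D embeddings: two malware families, three samples each.
	familyA := [][]float64{{0.9, 0.1}, {1.1, 0.0}, {1.0, 0.2}}
	familyB := [][]float64{{0.0, 1.0}, {0.1, 0.9}, {-0.1, 1.1}}
	protos := map[string][]float64{
		"familyA": prototype(familyA),
		"familyB": prototype(familyB),
	}

	query := []float64{0.2, 0.8} // embedding of an unseen sample
	best, bestD := "", math.Inf(1)
	for name, p := range protos {
		if d := dist(query, p); d < bestD {
			best, bestD = name, d
		}
	}
	fmt.Println("classified as:", best) // familyB
}
```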

Vuln2Action: An LLM-based framework for generating vulnerability reproduction steps and mapping exploits

info · research · Peer-Reviewed
research · security
Mar 16, 2026

Vuln2Action is an LLM-based framework designed to help security testers reproduce vulnerabilities and map exploits more systematically. The paper addresses a key challenge in penetration testing (controlled simulations of cyberattacks to find security weaknesses): vulnerability reproduction is time-consuming and relies heavily on manual expertise, yet publicly available exploits exist for less than 1% of known vulnerabilities. While LLMs show promise for analyzing large amounts of threat data, the authors found that current models often refuse to provide exploit-related guidance due to built-in safety restrictions.

Elsevier Security Journals

Multi-modal malware classification with hierarchical consistency and saliency-constrained adversarial training

info · research · Peer-Reviewed
research · security
Mar 16, 2026

This paper discusses the growing challenge of malware (malicious software designed to exploit computer system vulnerabilities) detection, noting that over 450,000 new malware samples are detected daily as of 2024. Traditional detection methods like signature-based detection (matching known byte patterns against a database) and behavior-based detection (running malware in isolated test environments to observe its actions) have limitations: signature-based methods fail against new or disguised malware, while behavior-based methods are computationally expensive and can be evaded by malware that detects virtual environments. The paper proposes using machine learning and deep learning approaches trained on features from both static and dynamic analysis to better classify files as malicious or benign.

Elsevier Security Journals

Personalized differential privacy for high-dimensional data: A random sampling and pruning privacy tree approach

info · research · Peer-Reviewed
security · privacy
Mar 16, 2026

This paper discusses differential privacy (DP, a mathematical method that adds noise to data to protect individual privacy while keeping data useful), which is stronger than traditional anonymization techniques like generalization and suppression. The authors address a key challenge: existing DP methods struggle with high-dimensional data (datasets with many features) and treat all data features equally, even though real-world data has varying privacy needs, such as medical records where disease diagnoses need more protection than age.

Elsevier Security Journals
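For background on the noise-addition idea, the sketch below implements the standard Laplace mechanism for a counting query in Go; it illustrates the basic DP building block, not the paper's personalized privacy-tree approach.

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// laplaceNoise draws from Laplace(0, sensitivity/epsilon), the standard
// mechanism for epsilon-differential privacy on a numeric query.
func laplaceNoise(sensitivity, epsilon float64, rng *rand.Rand) float64 {
	scale := sensitivity / epsilon
	u := rng.Float64() - 0.5 // uniform on (-0.5, 0.5)
	// Inverse-CDF sampling of the Laplace distribution.
	return -scale * sign(u) * math.Log(1-2*math.Abs(u))
}

func sign(x float64) float64 {
	if x < 0 {
		return -1
	}
	return 1
}

func main() {
	rng := rand.New(rand.NewSource(42))
	trueCount := 128.0 // e.g., number of patients with a given diagnosis
	// Counting queries have sensitivity 1; smaller epsilon = stronger privacy.
	private := trueCount + laplaceNoise(1.0, 0.5, rng)
	fmt.Printf("released count: %.2f\n", private)
}
```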

Page 8 of 156