aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,650 · Last 24 hours: 1 · Last 7 days: 156
Daily Briefing · Sunday, March 29, 2026

Bluesky Launches AI-Powered Feed Customization Tool: Bluesky released Attie, an AI assistant that lets users create custom content feeds by describing what they want in plain language rather than adjusting technical settings. The tool runs on Claude (Anthropic's language model) and will integrate into apps built on Bluesky's AT Protocol.

Latest Intel

Page 29 of 265
01

Machine Learning for Cybersecurity: A Comprehensive Literature Review

research
Mar 16, 2026

This is a literature review article published in an academic journal that surveys how machine learning (algorithms that learn patterns from data to make predictions) is being applied to cybersecurity problems. The article covers research across the field but does not describe a specific security vulnerability or incident requiring a fix.

ACM Digital Library (TOPS, DTRAP, CSUR)

Critical This Week (5 issues)

critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis…

CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026
02

Selective Forgetting in Machine Learning and Beyond: A Survey

research · safety
Mar 16, 2026

This is a survey article that reviews research on selective forgetting in machine learning, which is the ability to remove or reduce specific information from a trained AI model without completely retraining it from scratch. The article covers methods and applications of this technique across various AI systems and domains. The survey appears to be an academic overview of current knowledge in this area rather than describing a specific problem or vulnerability.

ACM Digital Library (TOPS, DTRAP, CSUR)
03

A Systematic Review on Human Roles, Solutions, and Methodological Approaches to Address Bias in AI

research · safety
Mar 16, 2026

This academic review examines how bias (systematic unfairness in AI decision-making) occurs in AI systems and explores the human roles, solutions, and research methods used to identify and reduce it. The paper surveys existing approaches to addressing bias rather than proposing a single new solution.

ACM Digital Library (TOPS, DTRAP, CSUR)
04

Responsible AI Question Bank for Risk Assessment

safety · research
Mar 16, 2026

This is an academic survey article published in ACM Computing Surveys that discusses a question bank designed to help assess risks in AI systems responsibly. The article appears to be a comprehensive review of how organizations can evaluate potential harms and safety concerns when developing or deploying AI, rather than describing a specific vulnerability or problem.

ACM Digital Library (TOPS, DTRAP, CSUR)
05

Building Trust in Artificial Intelligence: A Systematic Review through the Lens of Trust Theory

research · safety
Mar 16, 2026

This academic paper is a systematic review published in ACM Computing Surveys that examines how trust works in artificial intelligence systems using established trust theory frameworks. The article analyzes trust in AI through theoretical lenses rather than addressing a specific technical vulnerability or problem.

ACM Digital Library (TOPS, DTRAP, CSUR)
06

Detecting Training Data For Large Language Models: A Survey

security · research
Mar 16, 2026

This survey article reviews methods for detecting training data used to build large language models (LLMs, which are AI systems trained on massive amounts of text to generate human-like responses). The paper examines various techniques that researchers have developed to identify and extract information about what data was used to train these models, which is important for understanding model behavior and potential privacy concerns.

ACM Digital Library (TOPS, DTRAP, CSUR)
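Loss-based membership inference is one of the simpler families of techniques in this area: a model tends to assign unusually low loss to text it was trained on. A minimal illustrative sketch, with hypothetical per-token probabilities and an uncalibrated threshold (not taken from the survey):

```python
import math

def nll(prob: float) -> float:
    """Negative log-likelihood the model assigns to the true next token."""
    return -math.log(prob)

def likely_member(token_probs: list[float], threshold: float = 0.5) -> bool:
    """Flag a text as likely training data if the model's average
    per-token loss on it falls below a calibrated threshold:
    memorized (training) text tends to receive unusually low loss."""
    avg_loss = sum(nll(p) for p in token_probs) / len(token_probs)
    return avg_loss < threshold

# Hypothetical per-token probabilities from some model:
seen   = [0.9, 0.8, 0.95, 0.85]   # text the model seems to "know"
unseen = [0.2, 0.4, 0.1, 0.3]     # text the model finds surprising

print(likely_member(seen))    # True:  low loss, likely in training data
print(likely_member(unseen))  # False: high loss, likely not
```

Real methods calibrate the threshold against reference texts or a shadow model; this sketch only shows the core signal the survey's techniques build on.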
07

Bias-Free? An Empirical Study on Ethnicity, Gender, and Age Fairness in Deepfake Detection

research · safety
Mar 16, 2026

This research paper studies whether deepfake detection systems (AI tools that identify fake videos made to look real) are fair across different groups of people based on ethnicity, gender, and age. The study found that these detection systems often perform differently depending on the person's background, meaning they work better for some groups than others. The paper highlights that bias in deepfake detection is an important fairness problem that needs attention.

ACM Digital Library (TOPS, DTRAP, CSUR)
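The disparity the study describes can be quantified by computing a detector's true-positive rate separately for each demographic group and comparing them. A minimal sketch with made-up detector outputs (not the paper's data or method):

```python
def per_group_tpr(preds, labels, groups):
    """True-positive rate of a detector, split by demographic group.
    preds/labels: 1 = flagged/actual deepfake; groups: group id per sample."""
    stats = {}
    for p, y, g in zip(preds, labels, groups):
        if y == 1:  # only actual deepfakes count toward TPR
            hit, total = stats.get(g, (0, 0))
            stats[g] = (hit + p, total + 1)
    return {g: hit / total for g, (hit, total) in stats.items()}

# Hypothetical detector outputs on deepfakes from two groups:
preds  = [1, 1, 1, 0, 1, 0, 0, 1]
labels = [1, 1, 1, 1, 1, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = per_group_tpr(preds, labels, groups)
print(rates)                                      # {'A': 0.75, 'B': 0.5}
print(max(rates.values()) - min(rates.values()))  # 0.25 fairness gap
```

A gap like the 0.25 above means the detector misses twice as many deepfakes of one group as the other, which is the kind of uneven performance the study reports.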
08

Adaptive Real-Time Financial Fraud Detection with Explainable AI Tools

research · security
Mar 16, 2026

This academic paper discusses using explainable AI (AI systems that can show their reasoning for decisions) to detect financial fraud as it happens in real time. The research focuses on making fraud detection systems that adapt to new fraud patterns while also being transparent about why they flag transactions as suspicious.

ACM Digital Library (TOPS, DTRAP, CSUR)
09

Enhancing Digital Security: A Novel Dual-Paradigm Approach for Robust Deepfake Detection Using Pre and Post Quantum-Trained Neural Networks

research · security
Mar 16, 2026

This research paper proposes a new method for detecting deepfakes (AI-generated fake videos or images) by using neural networks (computer systems loosely modeled on how brains learn) trained with both current and quantum computing approaches. The dual approach aims to make deepfake detection more reliable and harder for attackers to bypass.

ACM Digital Library (TOPS, DTRAP, CSUR)
10

Hybrid Machine Learning–Based Trust Management Approach to Secure the Mobile Crowdsourcing

security · research
Mar 16, 2026

This research article proposes a hybrid machine learning approach to improve trust management and security in mobile crowdsourcing (a system where mobile users contribute data or complete tasks for a distributed project). The study combines multiple machine learning techniques to identify trustworthy participants and protect against malicious actors in crowdsourcing environments.

ACM Digital Library (TOPS, DTRAP, CSUR)
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputer · Mar 26, 2026
critical

GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters leads to RCE

CVE-2026-33696 · GitHub Advisory Database · Mar 26, 2026
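Prototype pollution in JavaScript arises when a deep merge of attacker-controlled JSON follows special keys like `__proto__` onto `Object.prototype`, state shared by every object. The underlying bug class, an unsafe recursive merge of untrusted input into shared state, can be sketched in Python (an illustrative analogue only, not the n8n code):

```python
# Shared default config -- analogous to the shared prototype in JS.
DEFAULTS = {"exec": {"allow_shell": False}}

def deep_merge(target: dict, src: dict) -> dict:
    """Naive recursive merge: mutates `target` in place."""
    for k, v in src.items():
        if isinstance(v, dict) and isinstance(target.get(k), dict):
            deep_merge(target[k], v)
        else:
            target[k] = v
    return target

def handle_request(params: dict) -> dict:
    # BUG: merges into the shared dict instead of a per-request copy,
    # so attacker-supplied keys persist across requests -- the same
    # shared-state pollution that __proto__ abuse causes in JS.
    return deep_merge(DEFAULTS, params)

handle_request({"exec": {"allow_shell": True}})  # malicious request
cfg = handle_request({})                         # later, innocent request
print(cfg["exec"]["allow_shell"])                # True: config was polluted
```

The fix is the same in both ecosystems: merge into a fresh copy and reject or skip dangerous keys (`__proto__`, `constructor`, `prototype` in JS) before they reach shared objects.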