aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,757 · Last 24 hours: 22 · Last 7 days: 174
Daily Briefing: Thursday, April 2, 2026

Model Context Protocol Security Gaps Highlighted: MCP (a system that connects AI agents to data sources) has gained business adoption but faces serious risks including prompt injection (tricking an AI by hiding instructions in its input), token theft, and data leaks. Despite recent improvements like OAuth support and an official registry, organizations still lack adequate tools for access controls, authorization checks, and detailed logging to protect sensitive data.
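The controls the briefing says are missing (access controls, authorization checks, detailed logging) are generic rather than MCP-specific. Below is a minimal sketch of a deny-by-default policy and audit layer in front of agent tool calls; every name in it (ToolCall, POLICY, authorize_and_log) is hypothetical and not part of the MCP specification.

```python
# Hypothetical sketch of a policy-and-audit layer in front of agent tool
# calls. Nothing here is part of the MCP specification; names are
# illustrative only.
import json
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("mcp.audit")

# Allowlist: which agent identities may invoke which tools.
POLICY = {
    "support-agent": {"search_docs", "read_ticket"},
    "billing-agent": {"read_invoice"},
}

@dataclass
class ToolCall:
    agent_id: str
    tool: str
    arguments: dict

def authorize_and_log(call: ToolCall) -> bool:
    """Deny by default, and write a structured audit record either way."""
    allowed = call.tool in POLICY.get(call.agent_id, set())
    audit.info(json.dumps({
        "agent": call.agent_id,
        "tool": call.tool,
        "args": call.arguments,
        "decision": "allow" if allowed else "deny",
    }))
    return allowed

# A cross-tenant call is denied and still leaves an audit trail.
print(authorize_and_log(ToolCall("support-agent", "read_invoice", {"id": 42})))  # False
```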

Latest Intel

01

CVE-2025-14924: Hugging Face Transformers megatron_gpt2 Deserialization of Untrusted Data Remote Code Execution Vulnerability

security
Dec 23, 2025

A vulnerability in Hugging Face Transformers (a popular library for working with AI language models) allows attackers to run arbitrary code on a computer by tricking users into opening malicious files or visiting malicious websites. The flaw occurs because the software doesn't properly check data when loading saved model checkpoints (files that store a model's learned parameters), which lets attackers execute code by sending untrusted data through deserialization (the process of converting stored data back into usable objects).
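The same flaw class recurs in entries 02 through 04 below: pickle-based checkpoint loading executes whatever code the file embeds. A sketch of the unsafe pattern and two common mitigations, assuming a current PyTorch and the safetensors package; the paths and function names are illustrative, and whether a given Transformers code path accepts these options depends on the library version.

```python
# The unsafe pattern behind this CVE family, and two common mitigations.
# Paths and helper names are illustrative.
import torch
from safetensors.torch import load_file

def load_untrusted_checkpoint(path: str):
    # DANGEROUS: plain pickle deserialization can execute attacker code.
    # return torch.load(path)

    # Safer: restrict unpickling to tensors and primitive containers
    # (supported in modern PyTorch, and the default in recent releases).
    return torch.load(path, weights_only=True)

def load_shared_weights(path: str):
    # Safest for distributing weights: the safetensors format stores raw
    # tensors and performs no code execution on load.
    return load_file(path)
```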

NVD/CVE Database

Critical This Week: 5 issues

critical

GHSA-6vh2-h83c-9294: PraisonAI: Python Sandbox Escape via str Subclass startswith() Override in execute_code

CVE-2026-34938 · GitHub Advisory Database · Apr 1, 2026
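The advisory title describes a recurring Python pitfall: a sandbox that validates untrusted input by calling methods on the attacker-supplied object itself, which a subclass can override. A toy illustration of that general bypass class, not PraisonAI's actual code:

```python
# Toy illustration of the bypass class named in the advisory title: a
# "sandbox" that trusts the untrusted object's own methods can be lied to
# by a subclass. This is not PraisonAI's actual code.

class LyingStr(str):
    def startswith(self, *args, **kwargs):
        return False  # claim we never start with anything forbidden

def naive_guard(code) -> bool:
    # Broken: calls the method on the attacker-controlled object.
    return not code.startswith("import os")

payload = LyingStr("import os; os.system('id')")
print(naive_guard(payload))                   # True: the guard is fooled
print(str.startswith(payload, "import os"))   # True: the real method sees it

# Mitigation sketch: normalize to a plain str (or call str.startswith
# directly) before validating, and prefer AST-level checks over prefixes.
```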
02

CVE-2025-14921: Hugging Face Transformers Transformer-XL Model Deserialization of Untrusted Data Remote Code Execution Vulnerability

security
Dec 23, 2025

A vulnerability in Hugging Face Transformers' Transformer-XL model allows attackers to run arbitrary code (remote code execution) on a victim's computer by tricking them into opening a malicious file or visiting a malicious webpage. The flaw occurs because the software doesn't properly validate data when reading model files, allowing attackers to exploit the deserialization process (converting saved data back into objects that the program can use) to inject and execute malicious code.

NVD/CVE Database
03

CVE-2025-14920: Hugging Face Transformers Perceiver Model Deserialization of Untrusted Data Remote Code Execution Vulnerability

security
Dec 23, 2025

A vulnerability in Hugging Face Transformers' Perceiver model allows attackers to run malicious code on a user's computer by tricking them into opening a malicious file or visiting a harmful webpage. The flaw happens because the software doesn't properly check data when loading model files, allowing untrusted code to be executed (deserialization of untrusted data, where a program reconstructs objects from stored data without verifying they're safe).

NVD/CVE Database
04

CVE-2025-13707: Tencent HunyuanDiT model_resume Deserialization of Untrusted Data Remote Code Execution Vulnerability

security
Dec 23, 2025

Tencent HunyuanDiT (an AI image generation model) has a remote code execution vulnerability in its model_resume function that allows attackers to run arbitrary code if a user opens a malicious file or visits a malicious page. The flaw stems from improper validation of user input during deserialization (converting data from storage format back into usable objects), allowing attackers to execute code with root-level privileges.

NVD/CVE Database
05

CVE-2025-63664: Incorrect access control in the /api/v1/conversations/*/messages API of GT Edge AI Platform before v2.0.10-dev allows unauthorized users to view other users' message histories

security
Dec 22, 2025

CVE-2025-63664 is a flaw in the GT Edge AI Platform (before version 2.0.10-dev) where incorrect access control in the /api/v1/conversations/*/messages API allows attackers without permission to view other users' message histories with AI agents. This is classified as improper access control (CWE-284, a category of security flaws where systems fail to properly restrict what users can access).

Fix: Update GT Edge AI Platform to version 2.0.10-dev or later.
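CWE-284 findings like this typically reduce to a missing ownership check between the authenticated caller and the requested resource. A minimal sketch of the absent control; the data model and names are hypothetical, not the GT Edge AI codebase:

```python
# Minimal sketch of the missing control behind CWE-284 findings like this:
# verify the authenticated user owns the conversation before returning its
# messages. Names and storage are hypothetical, not GT Edge AI's code.

CONVERSATIONS = {
    "conv-1": {"owner": "alice", "messages": ["hi agent", "hello alice"]},
}

class Forbidden(Exception):
    pass

def get_messages(conversation_id: str, authenticated_user: str) -> list[str]:
    conv = CONVERSATIONS[conversation_id]
    # The authorization check: the resource owner must match the caller.
    if conv["owner"] != authenticated_user:
        raise Forbidden("not your conversation")
    return conv["messages"]

print(get_messages("conv-1", "alice"))   # OK: owner reads own history
try:
    get_messages("conv-1", "mallory")    # blocked: cross-user access
except Forbidden as e:
    print("denied:", e)
```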

NVD/CVE Database
06

The Impact of Artificial Intelligence in Protecting the Online Social Community From Cyberbullying

research · safety
Dec 22, 2025

Cyberbullying on social media is a growing problem that harms people's mental health, and traditional methods to stop it are no longer effective. This study examines how artificial intelligence can help protect online communities from cyberbullying by exploring different AI technologies, their uses, and the challenges involved. The goal is to understand how AI might create safer online environments.

IEEE Xplore (Security & AI Journals)
07

Generative Artificial Intelligence: Ethical Challenges and Trust Mechanisms

research · safety
Dec 22, 2025

Generative AI (systems that create new text, images, or other content) is transforming many industries but raises ethical concerns like data privacy (protecting personal information), bias (unfair treatment of certain groups), transparency (being open about how the AI works), and accountability (responsibility for the AI's actions). Researchers propose a trust framework based on transparency, fairness, accountability, and privacy to help ensure generative AI is developed and used responsibly.

IEEE Xplore (Security & AI Journals)
08

Large Language Models in Human Subject Research, and the Presence of Idiosyncratic Human Behaviors

research · safety
Dec 22, 2025

Large language models (LLMs, AI systems trained on huge amounts of text to generate human-like responses) can now mimic not just general human language but also unusual, individual-specific human behaviors. This ability could lead to LLMs being used more widely in research studies and potentially reduce the role of actual humans, which raises concerns about AI alignment (ensuring AI systems behave in ways humans intend and approve of) and how this technology affects society.

IEEE Xplore (Security & AI Journals)
09

Slack Federated Adversarial Training

research · security
Dec 22, 2025

This research addresses a problem in federated learning (a method where multiple computers train an AI model together without sharing raw data) combined with adversarial training (a technique that makes AI models resistant to intentionally tricky inputs). The authors found that simply combining these two approaches causes the model's accuracy to drop because adversarial training increases differences in the data across different computers, making the federated learning less effective. They propose SFAT (Slack Federated Adversarial Training), which uses a relaxation mechanism to adjust how the computers combine their learning results, reducing the harmful effects of data differences and improving overall performance.
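The abstract doesn't spell out SFAT's relaxation rule, so the sketch below is only an illustrative stand-in: it contrasts plain federated averaging with a loss-aware softening of client weights, where the slack temperature alpha is a made-up knob, not the paper's actual mechanism.

```python
# Toy contrast between plain federated averaging and a "slack"-style
# reweighting of client updates. The real SFAT rule is defined in the
# paper; the alpha-based weighting here is only illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 5, 10

# Simulated client model updates and their local adversarial losses.
updates = rng.normal(size=(n_clients, dim))
adv_losses = rng.uniform(0.5, 2.0, size=n_clients)

# Plain FedAvg: every client weighted equally.
fedavg = updates.mean(axis=0)

# Relaxed aggregation: soften the influence of clients whose adversarial
# training drifted furthest (highest loss) instead of hard equal weights.
alpha = 1.0  # slack temperature; alpha -> 0 recovers plain FedAvg
weights = np.exp(-alpha * adv_losses)
weights /= weights.sum()
relaxed = weights @ updates

print("fedavg :", np.round(fedavg, 3))
print("relaxed:", np.round(relaxed, 3))
```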

IEEE Xplore (Security & AI Journals)
10

Proactive Bot Detection Based on Structural Information Principles

research · security
Dec 22, 2025

This research proposes SIAMD, a framework for detecting social media bots (automated accounts that spread misinformation) before they cause harm. The system analyzes patterns in how user accounts interact with messages, uses structural entropy (a measure of uncertainty in data patterns) to identify bot-like behavior, and generates synthetic bot messages with large language models (AI systems trained on text data) to test and improve detection systems.
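Structural entropy in the paper is a graph-theoretic measure; as a simplified stand-in, the sketch below scores accounts by Shannon entropy over their interaction types, capturing only the intuition that bots behave more repetitively (lower entropy) than humans.

```python
# Simplified stand-in for the paper's structural-entropy signal: Shannon
# entropy over an account's interaction-type distribution. Real structural
# entropy is graph-theoretic; this only illustrates the repetitiveness idea.
import math
from collections import Counter

def interaction_entropy(events: list[str]) -> float:
    counts = Counter(events)
    total = len(events)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

human = ["post", "reply", "like", "share", "reply", "post", "like"]
bot = ["share", "share", "share", "share", "share", "share", "reply"]

print(f"human entropy: {interaction_entropy(human):.2f} bits")  # higher
print(f"bot entropy:   {interaction_entropy(bot):.2f} bits")    # lower
```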

IEEE Xplore (Security & AI Journals)
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162 · NVD/CVE Database · Mar 31, 2026
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026