aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3286 items

CVE-2025-32444: vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs.

critical · vulnerability
security
Apr 30, 2025
CVE-2025-32444

vLLM (a system for running AI models efficiently) versions 0.6.5 through 0.8.4 have a critical vulnerability when using Mooncake integration. Attackers can execute arbitrary code remotely because the system uses pickle (an unsafe method for converting data into a format that can be transmitted) over unencrypted ZeroMQ sockets (communication channels) that listen on all network interfaces, making them easily reachable over the network or the internet.

Fix: Update to vLLM version 0.8.5 or later, which has patched this vulnerability.

NVD/CVE Database
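The underlying pickle risk is easy to demonstrate in isolation. Below is a minimal, self-contained sketch of the principle only, not vLLM's or Mooncake's actual code:

```python
import os
import pickle

# A class can define __reduce__ to tell pickle how to "rebuild" it.
# An attacker abuses this so that rebuilding means running a command.
class Malicious:
    def __reduce__(self):
        # unpickling will call os.system("id") to "reconstruct" the object
        return (os.system, ("id",))

payload = pickle.dumps(Malicious())

# Any service that calls pickle.loads() on bytes read from an exposed,
# unauthenticated socket hands code execution to whoever can connect:
# pickle.loads(payload)   # <- would run `id` on the server
```

This is why serializers that cannot encode arbitrary callables (JSON, msgpack) are preferred for data arriving over untrusted channels.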

CVE-2025-30202: vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs.

high · vulnerability
security
Apr 30, 2025
CVE-2025-30202

vLLM versions 0.5.2 through 0.8.4 have a security vulnerability in multi-node deployments where a ZeroMQ socket (a tool for sending messages between different computers) is left open to all network interfaces. An attacker with network access can connect to this socket to see internal vLLM data or deliberately slow down the system by connecting repeatedly without reading the data, causing a denial of service (making the system unavailable or very slow).

CVE-2025-1194: A Regular Expression Denial of Service (ReDoS) vulnerability was identified in the huggingface/transformers library.

medium · vulnerability
security
Apr 29, 2025
CVE-2025-1194

A ReDoS vulnerability (regular expression denial of service, where specially crafted text causes a regex to consume excessive CPU by repeatedly backtracking) was found in the huggingface/transformers library version 4.48.1, specifically in the GPT-NeoX-Japanese model's tokenizer. An attacker could exploit this by sending malicious input that causes the application to hang or crash due to high CPU usage.
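The backtracking behavior is easy to reproduce with a toy pattern. The regex below is an illustrative nested-quantifier shape, not the actual expression from the GPT-NeoX-Japanese tokenizer:

```python
import re
import time

# Classic catastrophic-backtracking shape: nested quantifiers over the
# same characters. (Illustrative only -- not the transformers regex.)
EVIL = re.compile(r"(a+)+$")

def match_time(n: int) -> float:
    """Time the regex against an input that almost matches."""
    s = "a" * n + "b"   # trailing 'b' forces the engine to backtrack
    start = time.perf_counter()
    EVIL.match(s)       # returns None, but only after ~2**n attempts
    return time.perf_counter() - start

# Each additional 'a' roughly doubles the work, so even modest
# inputs pin a CPU core:
fast = match_time(14)
slow = match_time(21)
```

Typical mitigations are rewriting the pattern without nested quantifiers or capping input length before it reaches the regex.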

AI Safety Newsletter #53: An Open Letter Attempts to Block OpenAI Restructuring

info · regulatory
policy
Apr 29, 2025

Former OpenAI employees and experts published an open letter asking California and Delaware officials to block OpenAI's restructuring from a nonprofit organization into a for-profit company (a Public Benefit Corporation, which balances profit with public benefit). The letter argues that the restructuring would eliminate governance safeguards designed to prevent profit motives from influencing decisions about AGI (artificial general intelligence, highly autonomous systems that outperform humans at most economically valuable work), and would shift control away from a nonprofit board accountable to the public toward a board partly accountable to shareholders.

Recap from the OWASP Gen AI Security Project’s NYC Insecure Agents Hackathon

info · research · industry
security

Providers of General-Purpose AI Models — What We Know About Who Will Qualify

info · regulatory
policy
Apr 25, 2025

On April 22, 2025, the European AI Office published preliminary guidelines explaining which companies count as providers of GPAI models (general-purpose AI models, which are AI systems capable of performing many different tasks across various applications). The guidelines cover seven key topics, including defining what a GPAI model is, identifying who qualifies as a provider, handling open-source exemptions, and compliance requirements such as documentation, copyright policies, and security protections for higher-risk models.

CVE-2025-43858: YoutubeDLSharp is a wrapper for the command-line video downloaders youtube-dl and yt-dlp.

critical · vulnerability
security
Apr 24, 2025
CVE-2025-43858

YoutubeDLSharp (a tool that wraps command-line video downloaders) has a vulnerability in versions 1.0.0-beta4 through 1.1.1 on Windows, where attackers can inject malicious commands by exploiting unsafe conversion of arguments passed to the underlying downloader, made worse by a Windows encoding workaround that is enabled by default. Users cannot disable this workaround through built-in methods, so all applications using these versions are potentially vulnerable.
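This bug class is language-agnostic. Here is a hedged Python illustration (the affected project itself is C#) of why attacker-influenced values must be passed as discrete arguments rather than through a shell string:

```python
import subprocess
import sys

# Safe pattern: argv-list form. No shell parses the string, so
# metacharacters like ';' or '&&' in the value stay literal data.
def echo_safely(value: str) -> str:
    proc = subprocess.run(
        [sys.executable, "-c", "import sys; print(sys.argv[1])", value],
        capture_output=True, text=True, check=True,
    )
    return proc.stdout.strip()

# Dangerous pattern this bug class comes from (do not do this):
#   subprocess.run(f"some-tool {value}", shell=True)

# Even hostile-looking input comes back unmodified and unexecuted:
# echo_safely("video.mp4; rm -rf ~")  ->  "video.mp4; rm -rf ~"
```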

Securing AI’s New Frontier: The Power of Open Collaboration on MCP Security

info · research · industry
security

MITRE ATLAS v4.9.0

info · research · industry
security

AI Safety Newsletter #52: An Expert Virology Benchmark

info · research · industry
safety

CVE-2025-32434: PyTorch is a Python package that provides tensor computation with strong GPU acceleration and deep neural networks.

critical · vulnerability
security
Apr 18, 2025
CVE-2025-32434

PyTorch (a Python package for machine learning computations) versions 2.5.1 and earlier contain a remote code execution (RCE, where an attacker can run commands on a system they don't own) vulnerability that is exploitable even when models are loaded with torch.load set to weights_only=True, an option widely assumed to block code execution. The vulnerability stems from insecure deserialization (converting data back into executable code without checking whether it is safe), which allows attackers to execute arbitrary commands remotely.
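Not PyTorch's actual patch, but the standard-library pickle documentation's recommended hardening for this class of bug is a restricted unpickler that refuses to resolve any global outside an allowlist:

```python
import io
import pickle

# Allowlist of the only globals a serialized blob may reference.
# (Illustrative set -- a real loader would list the types it expects.)
ALLOWED = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        # A payload smuggling os.system, eval, etc. dies here instead
        # of executing during deserialization.
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Loading plain containers still works, while a payload that references a dangerous callable raises instead of executing.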

CVE-2025-32377: Rasa Pro is a framework for building scalable, dynamic conversational AI assistants that integrate large language models.

medium · vulnerability
security
Apr 18, 2025
CVE-2025-32377

Rasa Pro is a framework for building conversational AI assistants that use large language models. A vulnerability was found where voice connectors (tools that receive audio input) did not properly check user authentication even when security tokens were configured, allowing attackers to send voice data to the system without permission.

OWASP Gen AI Security Project Announces Nine New Sponsors and Major RSA Conference Presence to Advance Generative AI Security

info · research · industry
policy

CVE-2025-3730: A vulnerability classified as problematic was found in PyTorch 2.6.0.

low · vulnerability
security
Apr 16, 2025
CVE-2025-3730

PyTorch 2.6.0 contains a vulnerability in the torch.nn.functional.ctc_loss function (a component used for speech recognition tasks) that can cause denial of service (making the system unavailable). The vulnerability requires local access to exploit and has been publicly disclosed, though its actual existence is still uncertain.

CVE-2025-3677: A vulnerability classified as critical was found in lm-sys FastChat up to 0.2.36.

medium · vulnerability
security
Apr 16, 2025
CVE-2025-3677

A critical vulnerability (CVE-2025-3677) was found in lm-sys FastChat version 0.2.36 and earlier in the file apply_delta.py. The flaw involves deserialization (converting data back into code or objects, which can be dangerous if the data comes from an untrusted source) and can only be exploited by someone with local access to the affected system.

CVE-2025-31363: Mattermost versions 10.4.x <= 10.4.2, 10.5.x <= 10.5.0, 9.11.x <= 9.11.9 fail to restrict the domains the LLM can request.

low · vulnerability
security
Apr 16, 2025
CVE-2025-31363

Mattermost (a team communication platform) versions 10.4.2 and earlier, 10.5.0 and earlier, and 9.11.9 and earlier don't properly block which websites their built-in AI tool can contact. This allows logged-in users to use prompt injection (tricking the AI by hiding instructions in their input) to steal data from servers that the Mattermost system can access.
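A hedged sketch of the mitigation class (hypothetical names and hosts, not Mattermost's actual patch): validate the parsed hostname of every URL the AI tooling wants to fetch against an explicit allowlist, rather than substring-matching the raw URL.

```python
from urllib.parse import urlsplit

# Hosts the AI tool may contact -- assumed example configuration.
ALLOWED_HOSTS = {"api.example.com", "docs.example.com"}

def is_allowed(url: str) -> bool:
    parts = urlsplit(url)
    # Check the parsed hostname exactly; substring checks are bypassable
    # with tricks like https://api.example.com.evil.test/
    return parts.scheme in ("http", "https") and parts.hostname in ALLOWED_HOSTS

# is_allowed("https://api.example.com/v1")        -> True
# is_allowed("https://api.example.com.evil.test") -> False
# is_allowed("file:///etc/passwd")                -> False
```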

AI Safety Newsletter #51: AI Frontiers

info · news
policy · safety

CVE-2025-3579: In versions prior to Aidex 1.7, an authenticated malicious user, taking advantage of an open registry, could execute una

high · vulnerability
security
Apr 15, 2025
CVE-2025-3579

In Aidex versions before 1.7, a logged-in attacker could exploit an open registry to run unauthorized commands on the system through prompt injection attacks (tricking the AI by hiding malicious instructions in user input) via the chat message endpoint. This allowed them to execute operating system commands, access databases, and invoke framework functions.
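One standard defense against this bug class (a hypothetical sketch, not Aidex's code) is to never route model output to a shell or interpreter, and instead dispatch only over an explicit allowlist of callable tools:

```python
from datetime import datetime, timezone

def get_time() -> str:
    """A harmless example tool the assistant is allowed to call."""
    return datetime.now(timezone.utc).isoformat()

# The ONLY actions the model may trigger, keyed by name.
ALLOWED_TOOLS = {"get_time": get_time}

def dispatch(tool_name: str) -> str:
    fn = ALLOWED_TOOLS.get(tool_name)
    if fn is None:
        # Injected instructions like "run `rm -rf /`" land here.
        raise ValueError(f"tool not allowed: {tool_name!r}")
    return fn()
```

Anything the model requests that is not a known tool name is rejected before it can touch the operating system, database, or framework internals.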

CVE-2025-32383: MaxKB (Max Knowledge Base) is an open source knowledge base question-answering system based on a large language model.

medium · vulnerability
security
Apr 10, 2025
CVE-2025-32383

MaxKB (Max Knowledge Base) is an open source system that answers questions using a large language model and RAG (retrieval-augmented generation, where an AI pulls in external documents to answer questions). A reverse shell vulnerability (a security flaw that lets attackers gain control of a system remotely) exists in its function library module and can be exploited by privileged users to create unauthorized access.

CVE-2025-32375: BentoML is a Python library for building online serving systems optimized for AI apps and model inference.

critical · vulnerability
security
Apr 9, 2025
CVE-2025-32375 · EPSS: 67.3%

BentoML is a Python library for building AI model serving systems. Versions before 1.4.8 had a vulnerability in its runner server that allowed attackers to execute arbitrary code (unauthorized commands) by sending specially crafted requests with specific headers and parameters, potentially giving them full access to the server and its data.


CVE-2025-30202 — Fix: This issue has been patched in version 0.8.5. Update vLLM to version 0.8.5 or later. (Source: NVD/CVE Database)

CVE-2025-1194 — Source: NVD/CVE Database

AI Safety Newsletter #53 — Source: CAIS AI Safety Newsletter

Recap from the OWASP Gen AI Security Project’s NYC Insecure Agents Hackathon — research · Apr 25, 2025

AI agents (automated systems that can take actions based on AI decisions) are easy to build with modern tools, but they face several security threats. The OWASP Gen AI Security Project held a hackathon in New York where participants intentionally created insecure agents to identify common security problems. (Source: OWASP GenAI Security)

Providers of General-Purpose AI Models — Source: EU AI Act Updates

CVE-2025-43858 — Fix: Update to version 1.1.2, which contains the patch for this vulnerability. (Source: NVD/CVE Database)

Securing AI’s New Frontier: The Power of Open Collaboration on MCP Security — safety · Apr 22, 2025

As AI systems start connecting to real tools and databases through the Model Context Protocol (MCP, a system that lets AI models interact with external applications and data), new security risks appear that older security methods cannot fully handle. The OWASP GenAI Security Project has released research on how to secure MCP, offering defense-in-depth strategies (a layered security approach using multiple protective measures) to help developers build safer AI applications that can act independently in real time. (Source: OWASP GenAI Security)

v4.9.0 — research · Apr 22, 2025

Version 4.9.0 is a release of the MITRE ATLAS framework, which documents attack techniques and defenses specific to AI systems. The update adds new attack methods such as reverse shells (unauthorized remote access to a system), model corruption, and supply chain attacks targeting AI tools, while also updating existing security techniques and adding real-world case studies of AI-related security breaches. (Source: MITRE ATLAS Releases)

AI Safety Newsletter #52: An Expert Virology Benchmark — research · Apr 22, 2025

Researchers created the Virology Capabilities Test (VCT), a benchmark measuring how well AI systems can solve complex virology lab problems, and found that leading AI models like OpenAI's o3 now outperform human experts in specialized virology knowledge. This is concerning because virology knowledge has dual-use potential: the same capabilities that could help prevent disease could also be misused by bad actors to develop dangerous pathogens.

Fix: The authors recommend that highly dual-use virology capabilities be excluded from publicly available AI systems, and that know-your-customer mechanisms (verification processes to confirm who customers are and what they will use the technology for) could keep these capabilities accessible only to researchers at institutions with appropriate safety protocols. As a result of the paper, xAI has added new safeguards to its systems. (Source: CAIS AI Safety Newsletter)

CVE-2025-32434 — Fix: This issue has been patched in version 2.6.0. Users should upgrade PyTorch to version 2.6.0 or later. (Source: NVD/CVE Database)

CVE-2025-32377 — Fix: This issue has been patched in versions 3.9.20, 3.10.19, 3.11.7, and 3.12.6 for the audiocodes, audiocodes_stream, and genesys connectors. Update Rasa Pro to one of these versions or later. (Source: NVD/CVE Database)

OWASP Gen AI Security Project Announces Nine New Sponsors — industry · Apr 17, 2025

The OWASP Generative AI Security Project, an organization focused on application security, announced nine new corporate sponsors to support efforts to improve security for generative AI technologies. The sponsors, including companies like ByteDance and Trend Micro, represent increased investment and momentum in making AI systems more secure. (Source: OWASP GenAI Security)

CVE-2025-3730 — Fix: Apply patch 46fc5d8e360127361211cb237d5f9eef0223e567. The project's security policy also recommends avoiding unknown models, which could have malicious effects. (Source: NVD/CVE Database)

CVE-2025-3677 — Source: NVD/CVE Database

CVE-2025-31363 — Source: NVD/CVE Database

AI Safety Newsletter #51: AI Frontiers — Apr 15, 2025

The AI Safety Newsletter highlights the launch of AI Frontiers, a new publication featuring expert commentary on critical AI challenges, including national security risks, resource access inequality, risk management approaches, and governance of autonomous systems (AI agents that can make decisions without human input). The newsletter presents diverse viewpoints on how society should navigate AI's wide-ranging impacts on jobs, health, and security. (Source: CAIS AI Safety Newsletter)

CVE-2025-3579 — Fix: Update to Aidex version 1.7 or later. (Source: NVD/CVE Database)

CVE-2025-32383 — Fix: This vulnerability is fixed in v1.10.4-lts. Users should update to this version or later. (Source: NVD/CVE Database)

CVE-2025-32375 — Fix: Update BentoML to version 1.4.8 or later, where this vulnerability is fixed. (Source: NVD/CVE Database)