aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

Page 241 of 371
01

CVE-2025-11200: MLflow Weak Password Requirements Authentication Bypass Vulnerability. This vulnerability allows remote attackers to bypass authentication.

security
Oct 29, 2025

CVE-2025-11200 is a vulnerability in MLflow that allows remote attackers to bypass authentication (gain access without logging in) because the system has weak password requirements (passwords that are too easy to guess or crack). Attackers can exploit this flaw to access MLflow installations without needing valid credentials.

Fix: A patch is available at the following GitHub commit: https://github.com/mlflow/mlflow/commit/1f74f3f24d8273927b8db392c23e108576936c54

NVD/CVE Database
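
The class of fix here is a server-side password policy. As a minimal sketch of that idea (illustrative only; the project's actual change is in the MLflow commit linked above):

```python
import re

def validate_password(password: str, min_length: int = 12) -> None:
    """Reject weak passwords at account creation or password change.

    Illustrative policy check, not MLflow's actual patch.
    """
    if len(password) < min_length:
        raise ValueError(f"password must be at least {min_length} characters")
    for pattern, message in [
        (r"[A-Z]", "an uppercase letter"),
        (r"[a-z]", "a lowercase letter"),
        (r"\d", "a digit"),
    ]:
        if not re.search(pattern, password):
            raise ValueError(f"password must contain {message}")
```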
02

AI Safety Newsletter #65: Measuring Automation and Superintelligence Moratorium Letter

policy, research
Oct 29, 2025

A new benchmark called the Remote Labor Index (RLI) measures whether AI systems can automate real computer work tasks across different professions, showing that current AI agents can only fully automate 2.5% of projects despite improving over time. Additionally, over 50,000 people, including top scientists and Nobel laureates, signed an open letter calling for a moratorium (temporary ban) on developing superintelligence (a hypothetical AI system far more capable than humans) until it can be proven safe and controllable.

CAIS AI Safety Newsletter
03

CVE-2025-12058: The Keras.Model.load_model method, including when executed with the intended security mitigation safe_mode=True, is vulnerable to arbitrary local file read and server-side request forgery.

security
Oct 29, 2025

CVE-2025-12058 is a vulnerability in Keras (a machine learning library) where the load_model method can be tricked into reading files from a computer's local storage or making network requests to external servers, even when the safe_mode=True security flag is enabled. The problem occurs because the StringLookup layer (a component that converts text into numbers) accepts file paths during model loading, and an attacker can craft a malicious .keras file (a model storage format) to exploit this weakness.

NVD/CVE Database
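
Pending an upgrade, one defensive option is to inspect untrusted .keras archives before loading them. The sketch below assumes only that a .keras file is a zip archive whose config.json describes the layer graph; the field it flags (a string vocabulary on a StringLookup layer, which Keras treats as a file path) follows from the description above:

```python
import json
import zipfile

def audit_keras_archive(path: str) -> list[str]:
    """Flag StringLookup layers whose vocabulary is a path/URL string.

    Heuristic pre-load check; upgrading Keras is the real fix.
    """
    with zipfile.ZipFile(path) as zf:
        config = json.loads(zf.read("config.json"))

    findings: list[str] = []

    def walk(node) -> None:
        if isinstance(node, dict):
            if node.get("class_name") == "StringLookup":
                vocab = node.get("config", {}).get("vocabulary")
                if isinstance(vocab, str):  # a string here means an external file
                    findings.append(f"StringLookup vocabulary={vocab!r}")
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(config)
    return findings
```

Any finding means the file should not be passed to load_model, even with safe_mode=True.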
04

Claude Pirate: Abusing Anthropic's File API For Data Exfiltration

security
Oct 28, 2025

Anthropic added network request capabilities to Claude's Code Interpreter, which creates a security risk for data exfiltration (unauthorized stealing of sensitive information). An attacker, either controlling the AI model or using indirect prompt injection (hidden malicious instructions in a document the AI processes), could abuse Anthropic's own APIs to steal data that a user has access to, rather than using typical methods like hidden links.

Embrace The Red
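
The corresponding mitigation class is egress control: only let sandboxed tool code reach hosts you have explicitly approved. The write-up's key observation is that Anthropic's own API endpoints are reachable by default, so an effective allowlist must be narrower than "trusted vendor domains." A minimal sketch with placeholder hosts:

```python
from urllib.parse import urlparse

# Placeholder policy: package indexes only, nothing else.
ALLOWED_HOSTS = {"pypi.org", "files.pythonhosted.org"}

def egress_allowed(url: str) -> bool:
    """Allow a request only if it targets an approved host."""
    host = (urlparse(url).hostname or "").lower()
    return host in ALLOWED_HOSTS or any(
        host.endswith("." + allowed) for allowed in ALLOWED_HOSTS
    )

assert egress_allowed("https://pypi.org/simple/")
assert not egress_allowed("https://api.anthropic.com/v1/files")  # exfil path
```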
05

A Systematic Literature Review on SWOT Analysis of Prompt Engineering Techniques

research
Oct 28, 2025

This article reviews prompt engineering (the practice of designing inputs like questions or instructions to guide AI systems toward better responses) and analyzes its strengths, weaknesses, opportunities, and threats using a SWOT framework. The review covers how prompt engineering can improve interactions with large language models (advanced AI systems trained on vast amounts of text) across industries like healthcare and education, while also identifying challenges around maintaining accuracy and efficiency.

IEEE Xplore (Security & AI Journals)
06

Lightweight Reparameterizable Integral Neural Networks for Mobile Applications

research
Oct 27, 2025

This paper presents RINNs (reparameterizable integral neural networks), a new type of AI model designed to run efficiently on mobile devices with limited computing power. The key innovation is a reparameterization strategy that converts the complex mathematical structure used during training into a simpler feed-forward structure (a straightforward sequence of processing steps) at inference time, allowing these models to achieve high accuracy (79.1%) while running very fast (0.87 milliseconds) on mobile hardware.

IEEE Xplore (Security & AI Journals)
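
The paper's exact construction isn't reproduced here, but the general trick (collapsing parallel training-time branches into one inference-time operator) can be illustrated with a RepVGG-style merge of a 3x3 convolution and an identity shortcut into a single kernel. A numpy sketch under that assumption:

```python
import numpy as np

def merge_conv_and_identity(w: np.ndarray, b: np.ndarray):
    """Fold an identity shortcut into a 3x3 conv (stride 1, padding 1).

    w: weights of shape (C_out, C_in, 3, 3) with C_out == C_in.
    The merged conv computes conv(x) + x exactly, so inference needs
    only one branch. Illustrative of reparameterization in general,
    not the RINN paper's specific method.
    """
    c_out, c_in = w.shape[:2]
    assert c_out == c_in, "identity shortcut needs matching channels"
    identity = np.zeros_like(w)
    for c in range(c_out):
        identity[c, c, 1, 1] = 1.0  # Dirac delta at the kernel center
    return w + identity, b  # the identity branch contributes no bias
```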
07

CVE-2025-8709: A SQL injection vulnerability exists in the langchain-ai/langchain repository, specifically in LangGraph's SQLite storage system.

security
Oct 26, 2025

A SQL injection vulnerability (a type of attack where an attacker inserts malicious SQL code into an application) exists in LangGraph's SQLite storage system, specifically in version 2.0.10 of langgraph-checkpoint-sqlite. The vulnerability happens because the code directly combines user input with SQL commands instead of safely separating them, allowing attackers to steal sensitive data like passwords and API keys, and bypass security protections.

NVD/CVE Database
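
The fix pattern for this bug class is the same everywhere: pass user input as bound parameters rather than interpolating it into the SQL string. A generic sqlite3 illustration (not the LangGraph code itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (thread_id TEXT, payload TEXT)")

thread_id = "abc' OR '1'='1"  # hostile input

# Vulnerable pattern: the input becomes part of the SQL text.
#   conn.execute(f"SELECT payload FROM checkpoints WHERE thread_id = '{thread_id}'")

# Safe pattern: the driver binds the value out-of-band.
rows = conn.execute(
    "SELECT payload FROM checkpoints WHERE thread_id = ?",
    (thread_id,),
).fetchall()
```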
08

LlamaIndex v0.14.6

security
Oct 25, 2025

LlamaIndex v0.14.6 is a software update released on October 26, 2025, that fixes various bugs across multiple components including support for parallel tool calls, metadata handling, embedding format compatibility, and SQL injection vulnerabilities (using parameterized queries instead of raw SQL string concatenation). The release also adds new features like async support for retrievers and integrations with new services like Helicone.

Fix: The source explicitly mentions one security fix: 'Replace raw SQL string interpolation with proper SQLAlchemy parameterized APIs in PostgresKVStore' (llama-index-storage-kvstore-postgres #20104). Users should update to v0.14.6 to receive this and other bug fixes. No other specific mitigation steps are described in the release notes.

LlamaIndex Security Releases
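
For readers applying the same change in their own code, the SQLAlchemy form binds values as named parameters via text(). The table and column names below are illustrative, not PostgresKVStore's actual schema:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")  # stand-in for Postgres here

with engine.begin() as conn:
    conn.execute(text("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)"))
    # :key and :value are bound parameters, never interpolated into SQL.
    conn.execute(
        text("INSERT INTO kv (key, value) VALUES (:key, :value)"),
        {"key": "user'; DROP TABLE kv; --", "value": "stored safely"},
    )
```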
09

CVE-2025-62612: FastGPT is an AI Agent building platform. Prior to version 4.11.1, in the workflow file reading node, the network link is not verified, creating a risk of SSRF attacks.

security
Oct 22, 2025

FastGPT, an AI Agent building platform, had a vulnerability in its workflow file reading node where network links were not properly verified, creating a risk of SSRF attacks (server-side request forgery, where an attacker tricks the server into making unwanted requests to other systems). The vulnerability affected versions before 4.11.1.

Fix: Update FastGPT to version 4.11.1 or later, as this issue has been patched in that version.

NVD/CVE Database
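
A standard SSRF defense for any "fetch this URL" feature is to resolve the host and refuse private, loopback, and link-local addresses before making the request. A minimal sketch (a production version would also pin the resolved IP to defeat DNS rebinding):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def assert_public_url(url: str) -> None:
    """Raise ValueError unless the URL's host resolves only to public addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"unsupported scheme: {parsed.scheme!r}")
    if not parsed.hostname:
        raise ValueError("URL has no host")
    for info in socket.getaddrinfo(parsed.hostname, None):
        addr = ipaddress.ip_address(info[4][0])
        if not addr.is_global:  # rejects private, loopback, link-local, etc.
            raise ValueError(f"{parsed.hostname} resolves to non-public {addr}")
```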
10

CVE-2025-11844: Hugging Face Smolagents version 1.20.0 contains an XPath injection vulnerability in the search_item_ctrl_f function located in its web browser functionality.

security
Oct 22, 2025

Hugging Face Smolagents version 1.20.0 has an XPath injection vulnerability (a security flaw where attackers can inject malicious code into XPath queries, which are used to search and navigate document structures) in its web browser function. The vulnerability exists because user input is directly inserted into XPath queries without being cleaned, allowing attackers to bypass search filters, access unintended data, and disrupt automated web tasks.

Fix: The issue is fixed in version 1.22.0. Users should upgrade Hugging Face Smolagents to version 1.22.0 or later.

NVD/CVE Database
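
XPath 1.0 has no escape sequence for quotes inside string literals, so the usual mitigation is to build the literal with concat() when the input mixes both quote characters, instead of interpolating raw text into the query. A hedged sketch (not Smolagents' actual patch):

```python
def xpath_string_literal(value: str) -> str:
    """Embed an arbitrary string as a safe XPath 1.0 string literal."""
    if '"' not in value:
        return f'"{value}"'
    if "'" not in value:
        return f"'{value}'"
    # Input contains both quote types: split on double quotes and rejoin.
    pieces = []
    for i, part in enumerate(value.split('"')):
        if i > 0:
            pieces.append("'\"'")  # one double quote as its own literal
        if part:
            pieces.append(f'"{part}"')
    return "concat(" + ", ".join(pieces) + ")"

# Usage: embed untrusted text instead of interpolating it directly.
needle = 'O\'Reilly "quoted"'
query = "//*[contains(text(), %s)]" % xpath_string_literal(needle)
# -> //*[contains(text(), concat("O'Reilly ", '"', "quoted", '"'))]
```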