aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

Page 105 of 371
01

CVE-2026-4963: A weakness has been identified in huggingface smolagents 1.25.0.dev0. This affects the function evaluate_augassign/evalu…

security
Mar 27, 2026

A code injection vulnerability (CVE-2026-4963) was found in huggingface smolagents version 1.25.0.dev0, specifically in functions within the local_python_executor.py file that were supposed to fix a previous vulnerability. An attacker can exploit this flaw remotely by injecting malicious code, and the exploit is publicly available, though the vendor has not responded to disclosure attempts.

NVD/CVE Database
02

CVE-2025-15381: In the latest version of mlflow/mlflow, when the `basic-auth` app is enabled, tracing and assessment endpoints are not p…

security
Mar 27, 2026

In MLflow (a machine learning tool for managing experiments), when basic authentication is enabled, certain endpoints that show trace information (a record of how the AI made decisions) and allow users to assess traces are not properly checking user permissions. This means any logged-in user can view traces and create assessments even if they shouldn't have access to them, risking exposure of sensitive information and unauthorized changes.
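The broken pattern here generalizes: authentication ("who are you?") was checked, but authorization ("may you see this experiment's traces?") was not. A minimal sketch of the missing per-experiment check, using only illustrative names (this is not MLflow's actual code):

```python
# Sketch of per-experiment authorization; all names are illustrative,
# not MLflow internals. The vulnerable pattern checks only "is this
# user logged in?"; the fix also checks the user's grant on the
# experiment that owns the trace before serving it.

PERMISSIONS = {
    ("alice", "exp-1"): "READ",
    ("bob", "exp-1"): "MANAGE",
}

READ_CAPABLE = {"READ", "EDIT", "MANAGE"}

def get_trace(user: str, experiment_id: str, trace_id: str) -> str:
    grant = PERMISSIONS.get((user, experiment_id))
    if grant not in READ_CAPABLE:
        # A merely-authenticated user without a grant is rejected here.
        raise PermissionError(f"{user} has no read grant on {experiment_id}")
    return f"trace {trace_id} from {experiment_id}"
```

In a real deployment the grant lookup would query whatever store the basic-auth app maintains; the point is that the lookup keys on (user, experiment), not merely on a valid session.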

NVD/CVE Database
03

GHSA-w9f8-gxf9-rhvw: Open WebUI's Insecure Direct Object Reference (IDOR) allows access to other users' memories

security
Mar 27, 2026

Open WebUI has an insecure direct object reference (IDOR, a flaw where an app doesn't properly check if a user should access specific data) in its retrieval API that lets any authenticated user read other users' private memories and uploaded files by guessing collection names like 'user-memory-{USER_UUID}' or 'file-{FILE_UUID}'. The vulnerability exists because the API checks that a user is logged in, but doesn't verify they own the data they're requesting.
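The same class of bug can be sketched in a few lines. Everything below is hypothetical (none of these names come from Open WebUI); it only illustrates resolving a collection to its owner before serving it, instead of serving any name an authenticated user can guess:

```python
# Hypothetical IDOR fix sketch: map each collection to its owner and
# compare against the requester before returning data.

COLLECTION_OWNERS = {
    "user-memory-1111": "alice",  # collection id -> owning user id
    "file-2222": "bob",
}

def fetch_collection(collection_id: str, requesting_user: str) -> str:
    """Return collection data only if the requester owns it."""
    owner = COLLECTION_OWNERS.get(collection_id)
    if owner is None:
        raise KeyError("collection not found")
    if owner != requesting_user:
        # The vulnerable code skipped this comparison: any authenticated
        # user could read any collection whose name they could guess.
        raise PermissionError("requester does not own this collection")
    return f"data for {collection_id}"
```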

GitHub Advisory Database
04

GHSA-jjp7-g2jw-wh3j: Open WebUI's process_files_batch() endpoint missing ownership check, allows unauthorized file overwrite

security
Mar 27, 2026

Open WebUI's file batch processing endpoint lacks an ownership check, allowing any authenticated user to overwrite files in shared knowledge bases by knowing their IDs. An attacker can then poison the RAG (retrieval-augmented generation, where an AI pulls in external documents to answer questions) system, causing the LLM to serve the attacker's malicious content to other users.

Fix: Add an ownership verification check before writing files. The source suggests this code:

    for file in form_data.files:
        db_file = Files.get_file_by_id(file.id)
        if not db_file or (db_file.user_id != user.id and user.role != "admin"):
            file_errors.append(BatchProcessFilesResult(
                file_id=file.id,
                status="failed",
                error="Permission denied: not file owner",
            ))
            continue

This verifies that only the file's owner or an admin can modify it before the write operation proceeds.

GitHub Advisory Database
05

Cybersecurity stocks fall on report Anthropic is testing a powerful new model

industry
Mar 27, 2026

Anthropic is testing a new AI model called Mythos that has advanced cybersecurity capabilities but also poses security risks, causing the company to plan a slow rollout. The announcement led to significant stock price drops for major cybersecurity companies, as investors worry that powerful AI tools could make hacking easier and disrupt the cybersecurity industry.

CNBC Technology
06

GHSA-vvxm-vxmr-624h: Open WebUI vulnerable to Path Traversal in `POST /api/v1/audio/transcriptions`

security
Mar 27, 2026

Open WebUI's speech-to-text endpoint has a path traversal vulnerability where an authenticated user can craft a malicious filename to trigger an error that leaks the server's absolute file path. The vulnerability exists because the code doesn't sanitize the filename before using it in a file operation, unlike similar upload handlers elsewhere in the codebase.

Fix: The source recommends two fixes: (1) sanitize the file extension using `Path(file.filename).name` and `Path(safe_name).suffix.lstrip(".")` instead of the current `split(".")[-1]` approach, and (2) suppress the internal path from error responses by catching exceptions and returning a generic error message ("Transcription failed") instead of returning the full exception details.
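A sketch of the first suggested fix (the wrapper name `safe_extension` is mine, not Open WebUI's): pathlib strips any directory components before the extension is taken, while the naive `split(".")[-1]` can hand back attacker-controlled path fragments:

```python
from pathlib import Path

def safe_extension(filename: str) -> str:
    """Extract a file extension without trusting path components.

    Path(filename).name drops any directory part smuggled into the
    upload filename; .suffix.lstrip(".") then yields the bare extension.
    """
    safe_name = Path(filename).name
    return Path(safe_name).suffix.lstrip(".")

# Contrast with the vulnerable approach: split(".")[-1] on a traversal
# payload like "../../etc/shadow" returns "/etc/shadow" verbatim.
```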

GitHub Advisory Database
07

CVE-2026-30304: In its design for automatic terminal command execution, AI Code offers two options: Execute safe commands and execute al…

security
Mar 27, 2026

AI Code has a feature that automatically runs terminal commands (direct instructions to a computer's operating system) if it thinks they're safe, but an attacker can use prompt injection (tricking an AI by hiding instructions in its input) to disguise malicious commands as safe ones, causing them to execute without user approval.
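One way to defuse this pattern (a defensive sketch, not AI Code's actual design) is to never auto-execute based on the model's own "safe" label, which prompt injection can steer, and instead match the parsed command against a fixed allowlist, refusing anything chained or unparseable:

```python
import shlex

# Hypothetical auto-execution gate: only exact allowlisted programs run
# without user approval; everything else requires explicit confirmation.
AUTO_APPROVED = {"ls", "pwd", "git"}

def needs_user_approval(command: str) -> bool:
    try:
        argv = shlex.split(command)
    except ValueError:
        return True  # unparseable input: never auto-run
    if not argv:
        return True
    if any(tok in {";", "&&", "||", "|"} for tok in argv):
        return True  # chained commands could smuggle in a second program
    return argv[0] not in AUTO_APPROVED
```

The design choice worth noting: the gate is a deterministic check on the command string, so a prompt-injected "this command is safe" claim from the model has no path to influence it.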

NVD/CVE Database
08

CVE-2026-29871: A path traversal vulnerability exists in the awesome-llm-apps project in commit e46690f99c3f08be80a9877fab52acacf7ab8251

security
Mar 27, 2026

A path traversal vulnerability (a security flaw where attackers manipulate file paths to access files they shouldn't) exists in the awesome-llm-apps project's Beifong AI News and Podcast Agent backend. An unauthenticated attacker can exploit this weakness in the stream-audio endpoint to read arbitrary files from the server, potentially exposing sensitive data like configuration files and credentials.
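A common guard against this class of flaw (illustrative only; the endpoint's real code and storage root are not shown in the source) is to resolve the requested path and verify it stays under the intended directory:

```python
from pathlib import Path

AUDIO_ROOT = Path("/srv/app/audio")  # hypothetical storage root

def resolve_audio_path(requested: str) -> Path:
    """Resolve a user-supplied name and refuse anything outside AUDIO_ROOT.

    Path.resolve() collapses '..' segments, so a traversal payload such
    as '../../etc/passwd' resolves to a path outside the root and fails
    the containment check below.
    """
    candidate = (AUDIO_ROOT / requested).resolve()
    if not candidate.is_relative_to(AUDIO_ROOT.resolve()):
        raise PermissionError("path escapes the audio directory")
    return candidate
```

`Path.is_relative_to` needs Python 3.9+; on older versions the equivalent check is comparing against `candidate.parents`.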

NVD/CVE Database
09

In Other News: Palo Alto Recruiter Scam, Anti-Deepfake Chip, Google Sets 2029 Quantum Deadline

safety, industry
Mar 27, 2026

This article briefly mentions several security-related news items including a Heritage Bank data breach, a new State Department cyber threat unit, and LA Metro disruptions, along with stories about a Palo Alto recruiter scam, an anti-deepfake chip (technology designed to detect AI-generated fake videos), and Google's quantum computing deadline for 2029. The content provided is minimal and does not go into detail about any of these incidents.

SecurityWeek
10

Elon Musk’s Grok ordered to stop creating AI nudes by Dutch court as legal pressure mounts

safety, policy
Mar 27, 2026

A Dutch court has ordered Elon Musk's xAI and its chatbot Grok to stop creating non-consensual AI-generated sexual images of adults and children, with daily fines of 100,000 euros for non-compliance. The ruling came after the non-profit Offlimits reported that Grok generated an estimated three million sexualized images in about two weeks, including over 23,000 depicting children, and found that xAI's previous restrictions on creating such images were easily bypassed. The case adds to mounting legal pressure on xAI, with investigations underway in Europe and lawsuits filed in the United States.

Fix: xAI moved to block Grok from being able to create sexualized images of real people on X in January, with the restriction applying to all users, including paid subscribers. However, the source explicitly states this measure was found insufficient by the court, as Offlimits demonstrated the restrictions were easily bypassed.

CNBC Technology