aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

CVE-2026-33865: MLflow is vulnerable to Stored Cross-Site Scripting (XSS) caused by unsafe parsing of YAML-based MLmodel artifacts

security
Apr 7, 2026

MLflow has a stored XSS vulnerability (cross-site scripting, where malicious code hidden in data executes when viewed in a web browser) in how it handles YAML-based MLmodel artifact files. An authenticated attacker can upload a specially crafted MLmodel file that runs malicious code when another user views it in the web interface, potentially letting the attacker hijack sessions or perform actions as that user. This affects MLflow version 3.10.1 and earlier.
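The standard defense against this class of stored XSS is to escape every untrusted artifact field before it reaches the browser. The sketch below illustrates that pattern only; the field names are hypothetical and this is not MLflow's actual schema or rendering code.

```python
import html

def render_model_card(artifact: dict) -> str:
    """Render artifact metadata for a web UI, escaping every untrusted
    string so an embedded <script> payload stays inert text.
    (Field names here are illustrative, not MLflow's schema.)"""
    rows = []
    for key, value in artifact.items():
        rows.append(f"<tr><td>{html.escape(str(key))}</td>"
                    f"<td>{html.escape(str(value))}</td></tr>")
    return "<table>" + "".join(rows) + "</table>"

# A malicious MLmodel-style field carrying an XSS payload:
artifact = {"model_name": "<script>alert('xss')</script>"}
print(render_model_card(artifact))
```

The payload survives only as escaped entities (`&lt;script&gt;…`), so viewing the page executes nothing.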

NVD/CVE Database
02

Zero‑click Grafana AI attack can enable enterprise data exfiltration

security, safety
Apr 7, 2026

GrafanaGhost is a critical vulnerability in Grafana (a data visualization platform) that uses indirect prompt injection (tricking an AI by hiding malicious instructions in data it processes) to steal sensitive enterprise data without requiring user authentication or interaction. Attackers chain together multiple exploits, including bypassing URL validation and AI safety guardrails, to trick Grafana's AI into sending confidential information to attacker-controlled servers.

Fix: Grafana has rolled out a fix for this issue. Additionally, security experts recommend: identifying exposure by checking whether Grafana AI/LLM features are enabled, patching to the latest version, restricting "img-src" (image source permissions) to known domains, and applying egress controls (network rules that limit outbound data traffic).
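The "restrict img-src" and egress-control recommendations both reduce to the same check: only let content load from, or data flow to, an explicit allowlist of hosts. A minimal sketch of that check, with hypothetical hostnames:

```python
from urllib.parse import urlparse

# Hypothetical allowlist; in practice, the domains your deployment trusts.
ALLOWED_IMAGE_HOSTS = {"grafana.example.com", "assets.example.com"}

def is_allowed_image_url(url: str) -> bool:
    """Permit only https URLs whose exact host is on the allowlist,
    mirroring a restrictive `img-src` CSP rule or egress filter."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_IMAGE_HOSTS

assert is_allowed_image_url("https://grafana.example.com/logo.png")
# An injected image URL exfiltrating data to an attacker host is refused:
assert not is_allowed_image_url("https://attacker.example.net/x.png?data=secrets")
assert not is_allowed_image_url("http://grafana.example.com/logo.png")  # plaintext blocked
```

Exfiltration via prompt-injected image tags works precisely because a permissive `img-src` lets the browser make a request to any host the attacker names; the allowlist closes that channel.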

CSO Online
03

Over 1,000 Exposed ComfyUI Instances Targeted in Cryptomining Botnet Campaign

security
Apr 7, 2026

Attackers are targeting over 1,000 publicly accessible ComfyUI instances (a platform for running AI image generation) with an automated scanner that exploits a misconfiguration allowing unauthenticated remote code execution (the ability to run commands on a system without permission). Once compromised, these systems are enrolled in botnets (networks of infected computers controlled remotely) to mine cryptocurrency and serve as proxies.

The Hacker News
04

OpenAI encourages firms to trial four-day weeks to adapt to AI era

policy, industry
Apr 7, 2026

OpenAI has published policy proposals suggesting that companies should trial four-day work weeks as AI tools become more capable and potentially displace workers from jobs. The company argues that AI systems will soon complete projects in days that currently take months, and recommends employers offer benefits like reduced work hours without pay cuts, increased retirement contributions, and subsidized childcare to help workers adapt to this shift.

BBC Technology
05

Broadcom shares jump before the bell as chipmaker agrees Google and Anthropic deals

industry
Apr 7, 2026

Broadcom, a chip designer, announced new deals to produce AI chips for Google and expanded its partnership with Anthropic (an AI company), causing its stock price to rise 3.7% in premarket trading. The deals include revenue commitments and access to computing capacity, which analysts believe signal strong future demand for custom AI chips and may ease investor concerns about competition.

CNBC Technology
06

Gemini is making it faster for distressed users to reach mental health resources 

safety, policy
Apr 7, 2026

Google has redesigned Gemini's crisis response feature to make it faster for users in distress to access mental health resources. When the chatbot detects a conversation indicating potential suicide or self-harm risk, it now presents a streamlined 'Help is available' module that connects users to crisis resources like suicide hotlines or crisis text lines more quickly.

Fix: Google updated Gemini to streamline its crisis response into a 'one-touch' module. When the system detects a conversation indicating suicide or self-harm risk, it launches the 'Help is available' module to direct users to mental health crisis resources.

The Verge (AI)
07

The noisy tenants: Engineering fairness in multi-tenant SIEM solutions

security, research
Apr 7, 2026

Multi-tenant SIEM (security information and event management, a platform that collects and analyzes security data from many sources) solutions share physical resources like CPU and memory among different customers, creating a "noisy neighbor" problem where one customer's heavy workload can slow down threat detection for others and violate service promises. While vendors market cloud-based SIEM as efficient and reliable, most don't publicly discuss how they prevent this fairness issue, which requires sophisticated engineering strategies like fair-share scheduling (giving each customer a proportional share of resources) and intelligent queuing rather than simple rate-limiting.
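The fair-share idea can be illustrated with per-tenant queues drained round-robin: each pass takes at most one event per tenant, so a noisy tenant's backlog cannot starve everyone else. This is a toy sketch of the concept, not any vendor's implementation.

```python
from collections import deque, defaultdict

class FairShareQueue:
    """Per-tenant queues drained round-robin. Each pass through the
    tenants dequeues at most one event apiece, so a tenant with a
    huge backlog cannot delay the others' events behind its own."""
    def __init__(self):
        self.queues = defaultdict(deque)

    def submit(self, tenant: str, event: str):
        self.queues[tenant].append(event)

    def drain(self):
        order = []
        while any(self.queues.values()):
            for q_tenant, q in list(self.queues.items()):
                if q:
                    order.append((q_tenant, q.popleft()))
        return order

fsq = FairShareQueue()
for i in range(3):
    fsq.submit("noisy", f"n{i}")   # noisy tenant floods the queue first
fsq.submit("quiet", "q0")
print(fsq.drain())                 # quiet's event is processed on the first pass
```

With naive FIFO, the quiet tenant's event would wait behind the entire noisy backlog; here it is handled on the first pass, which is the fairness property the article describes.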

CSO Online
08

CVE-2026-1839: A vulnerability in the HuggingFace Transformers library, specifically in the `Trainer` class, allows for arbitrary code execution

security
Apr 7, 2026

A vulnerability in HuggingFace Transformers' `Trainer` class (a tool for training AI models) allows attackers to run arbitrary code by providing a malicious checkpoint file. The problem occurs because the `_load_rng_state()` method uses `torch.load()` without the `weights_only=True` parameter (a safety setting that restricts what code can run), leaving systems vulnerable when using PyTorch versions below 2.6.

Fix: The issue is resolved in version v5.0.0rc3.
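The underlying hazard is that `torch.load()` is pickle-based, and unrestricted unpickling will resolve and invoke arbitrary globals. The stdlib sketch below shows the same mitigation idea as `weights_only=True`: an unpickler that refuses any global not on an allowlist. This is an analogy for illustration, not PyTorch's actual implementation.

```python
import io
import pickle

# Allowlist of (module, name) pairs this unpickler may resolve.
SAFE_GLOBALS = {("builtins", "dict"), ("builtins", "list"),
                ("builtins", "str"), ("builtins", "int"),
                ("builtins", "float")}

class RestrictedUnpickler(pickle.Unpickler):
    """Block resolution of any global outside the allowlist, so a
    checkpoint cannot smuggle in callables like builtins.eval.
    Analogous in spirit to `torch.load(..., weights_only=True)`."""
    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

safe_loads(pickle.dumps({"lr": 0.01}))   # plain data round-trips fine
try:
    safe_loads(pickle.dumps(eval))       # a dangerous global is refused
except pickle.UnpicklingError as e:
    print("blocked:", e)
```

On PyTorch 2.6+, `weights_only=True` became the default for `torch.load()`, which is why the CVE text singles out older versions.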

NVD/CVE Database
09

Flowise AI Agent Builder Under Active CVSS 10.0 RCE Exploitation; 12,000+ Instances Exposed

security
Apr 7, 2026

Flowise, an open-source AI platform, has a maximum-severity vulnerability (CVE-2025-59528, CVSS score 10.0) in its CustomMCP node that allows attackers to execute arbitrary JavaScript code on the server without validation, potentially leading to full system compromise and data theft. The flaw requires only an API token to exploit and is being actively exploited in the wild against over 12,000 exposed Flowise instances.

Fix: The vulnerability was addressed in version 3.0.6 of the npm package. Users should upgrade to this version or later.

The Hacker News
10

Anthropic Claude Mythos Preview: The More Capable AI Becomes, the More Security It Needs

security, industry
Apr 7, 2026

As AI models become more powerful, they create both greater risks and opportunities for security. CrowdStrike argues that while companies like Anthropic build safer models, organizations also need deployment governance (security controls for how and where AI runs in a company) to protect data and systems when AI agents access databases, workflows, and sensitive information. CrowdStrike offers tools for discovering all AI applications in use, monitoring what data they access, and preventing sensitive information from being exposed through AI workflows.

CrowdStrike Blog