aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher.

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. This site is built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01. The Galaxy S26’s photo app can sloppify your memories

safety · Mar 31, 2026

Samsung's Galaxy S26 Photo Assist tool uses AI to let users edit photos with natural language requests, similar to Google's earlier photo editing features. However, the tool can be manipulated to generate misleading or harmful images, like fake disaster scenes, because its safety guardrails can be bypassed through prompt injection (tricking the AI by hiding instructions in user input).

The Verge (AI)
02. VRP 2025 Year in Review

security, industry · Mar 31, 2026

Google's Vulnerability Reward Program (VRP), which pays researchers to find security bugs in Google products, celebrated its 15th anniversary in 2025 by awarding over $17 million to more than 700 security researchers worldwide. Major 2025 developments included launching a dedicated AI VRP (a separate program focused specifically on AI security flaws), adding AI reward categories to Chrome VRP, and creating a patch rewards program for OSV-SCALIBR (an open source tool that scans software for vulnerabilities). Google also hosted multiple bugSWAT events (live hacking competitions) throughout the year, which generated hundreds of bug reports and distributed over $2.9 million in rewards.

Google Online Security Blog
03. CVE-2026-22561: Uncontrolled search path elements in Anthropic Claude for Windows installer (Claude Setup.exe) versions prior to 1.1.336

security · Mar 31, 2026

CVE-2026-22561 is a vulnerability in Anthropic Claude for Windows installer (Claude Setup.exe) versions before 1.1.336 that allows local privilege escalation through DLL search-order hijacking (a technique where an attacker places a malicious library file in a directory where the installer looks for code, causing it to run the attacker's code instead of the legitimate one). After the installer gains elevated permissions, it loads DLL files from its own directory, which means an attacker can plant a malicious DLL alongside the installer to execute arbitrary code.

Fix: Update to Claude for Windows installer version 1.1.336 or later.
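
The practical risk is any DLL an attacker can drop next to the downloaded installer. As a hedged illustration (the DLL names below are common Windows hijack targets in general, not a list confirmed for Claude Setup.exe), a simple pre-flight check can flag loose DLLs in the installer's directory:

```python
# Sketch: flag DLLs sitting next to an installer, which an elevated
# process may resolve ahead of system copies (search-order hijacking).
# The names below are common hijack targets generally, NOT confirmed
# for Claude Setup.exe.
from pathlib import Path

COMMONLY_HIJACKED = {"version.dll", "profapi.dll", "uxtheme.dll", "dwmapi.dll"}

def suspicious_dlls(installer: str) -> list[Path]:
    """Return DLLs in the installer's directory matching known hijack names."""
    return [p for p in Path(installer).resolve().parent.glob("*.dll")
            if p.name.lower() in COMMONLY_HIJACKED]

for dll in suspicious_dlls(r"C:\Users\me\Downloads\Claude Setup.exe"):
    print(f"warning: {dll} may be loaded by the elevated installer")
```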

NVD/CVE Database
04. Penguin to sue OpenAI over ChatGPT version of German children’s book

security, policy · Mar 31, 2026

Penguin Random House sued OpenAI, claiming that ChatGPT (an AI chatbot, or conversational AI system) violated copyright by reproducing content similar to their German children's book series, Coconut the Little Dragon. The lawsuit was filed in Munich court against OpenAI's European subsidiary after the publisher's legal team tested whether ChatGPT could generate stories matching the style of the original books.

The Guardian Technology
05. Landmark losses for Meta and YouTube as big tech misses the point

safety, policy · Mar 31, 2026

Meta and YouTube both lost landmark legal cases this week involving claims that their platforms cause social media addiction (compulsive use similar to drug dependency). While the cases don't settle whether social media is clinically addictive, courts have determined that the companies can be held legally responsible for the harm caused.

The Guardian Technology
06. CVE-2026-34163: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, FastGPT's MCP (Model Context Protocol) tools endpoints…

security · Mar 31, 2026

FastGPT, a platform for building AI agents, has a vulnerability in versions before 4.14.9.5 where two endpoints (/api/core/app/mcpTools/getTools and /api/core/app/mcpTools/runTool) accept URLs from users and make requests to them without checking if those URLs point to internal systems. This is called SSRF (server-side request forgery, where an attacker tricks a server into making requests to private networks on their behalf). Although FastGPT has a protective function called isInternalAddress() used elsewhere, these endpoints don't use it, allowing authenticated attackers to scan internal networks, access cloud metadata services, and interact with internal databases like MongoDB and Redis.

Fix: This issue has been patched in version 4.14.9.5.
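
The guard the vulnerable endpoints skipped is straightforward to sketch. The Python below illustrates what an isInternalAddress()-style check does; it is not FastGPT's actual implementation. The idea is to resolve the target host and reject private, loopback, and link-local ranges (the last covers cloud metadata at 169.254.169.254):

```python
# Illustration of an isInternalAddress()-style SSRF guard; not
# FastGPT's actual code.
import ipaddress
import socket
from urllib.parse import urlparse

def is_internal_address(url: str) -> bool:
    host = urlparse(url).hostname
    if host is None:
        return True  # reject unparseable URLs outright
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # treat unresolvable hosts as unsafe
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return True
    return False

print(is_internal_address("http://169.254.169.254/latest/meta-data/"))  # True
print(is_internal_address("http://localhost:27017/"))                   # True
```

A production guard would also pin the resolved IP for the actual outbound request, since a DNS-rebinding attacker can change the answer between the check and the fetch.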

NVD/CVE Database
07. CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/app/httpTools/runTool)…

security · Mar 31, 2026

FastGPT, an AI Agent building platform, has a vulnerability in versions before 4.14.9.5 where an HTTP tools testing endpoint (/api/core/app/httpTools/runTool) lacks authentication (missing access controls). This endpoint acts as a proxy that accepts user-supplied requests and makes server-side HTTP calls, potentially allowing unauthorized attackers to make requests on behalf of the FastGPT server.

Fix: Update FastGPT to version 4.14.9.5 or later, which patches this vulnerability.
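
A minimal sketch of the missing control, using Flask as a stand-in for FastGPT's actual stack and authentication scheme:

```python
# Sketch: gate a proxy-style endpoint behind a credential check.
# Illustrative Flask code; FastGPT's real stack and auth scheme differ.
from flask import Flask, abort, request

app = Flask(__name__)
VALID_KEYS = {"example-api-key"}  # stand-in for a real credential store

@app.post("/api/core/app/httpTools/runTool")
def run_tool():
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    if token not in VALID_KEYS:
        abort(401)  # the vulnerable versions performed no such check
    # ...authenticated (and SSRF-filtered) request handling goes here...
    return {"status": "ok"}
```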

NVD/CVE Database
08. CVE-2026-0596: A command injection vulnerability exists in mlflow/mlflow when serving a model with `enable_mlserver=True`. The `model_uri`…

security · Mar 31, 2026

MLflow (a machine learning model management tool) has a command injection vulnerability (a security flaw where an attacker can insert shell commands into input) when serving models with `enable_mlserver=True`. The vulnerability occurs because the `model_uri` (a file path or reference to a model) is directly placed into a shell command without filtering out dangerous characters like `$()` or backticks, allowing attackers to run unauthorized commands. This poses a serious risk if a high-privilege service loads models from a directory that lower-privilege users can access.
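
The pattern behind the CVE, sketched with `echo` standing in for the real serving command (this illustrates the flaw class, not MLflow's code, and assumes a POSIX shell):

```python
# Sketch of the flaw class, not MLflow's actual code. `echo` stands in
# for the real model-serving command; assumes a POSIX shell.
import shlex
import subprocess

model_uri = "model$(id).pkl"  # attacker-controlled value with shell metacharacters

# Vulnerable pattern: interpolating into a shell string executes $(id).
subprocess.run(f"echo serving {model_uri}", shell=True)

# Safer pattern: an argument list bypasses the shell, so the URI stays
# one literal argument and $() is never expanded.
subprocess.run(["echo", "serving", model_uri])

# If a shell string is truly unavoidable, quote each piece explicitly.
print(shlex.quote(model_uri))  # 'model$(id).pkl'
```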

NVD/CVE Database
09. Art schools are being torn apart by AI

policy · Mar 31, 2026

Art schools are changing their curricula to include generative AI (AI systems that create new images, animations, or designs based on descriptions), but students and creative professionals are concerned about how this affects job competition and the future of traditional artistic skills. The article highlights growing worry among art students that AI tools will make it harder to find postgraduate jobs in creative fields.

The Verge (AI)
10. CVE-2026-30310: In its design for automatic terminal command execution, Sixth offers two options: Execute safe commands and Execute all commands…

security · Mar 31, 2026

Sixth, an AI tool that can run terminal commands automatically, has a security flaw in its safety check feature. An attacker can use prompt injection (tricking the AI by hiding instructions in its input) to disguise harmful commands as safe ones, causing the AI to run them without asking the user for permission first.
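
The general mitigation, sketched below (this is not Sixth's actual design), is to treat the model's "safe" label as untrusted output and make the decision with a deterministic allowlist over the parsed command:

```python
# Sketch of the general mitigation, not Sixth's actual design: classify
# a command as safe with a deterministic allowlist, never with the
# model's own label, which prompt-injected input can steer.
import shlex

ALLOWED_PROGRAMS = {"ls", "cat", "git"}  # illustrative allowlist

def is_safe(command: str) -> bool:
    try:
        argv = shlex.split(command)
    except ValueError:
        return False  # unbalanced quotes etc. are rejected
    if not argv or argv[0] not in ALLOWED_PROGRAMS:
        return False
    # Refuse chaining/substitution even when the first program is allowed.
    return not any(tok in command for tok in (";", "|", "&", "`", "$("))

print(is_safe("ls -la"))                            # True
print(is_safe("rm -rf ~  # the model said: safe"))  # False
```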

NVD/CVE Database