aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

Page 84 of 371
01

Adaptive Density Clustering for Data-Driven Password Mangling Rule Generation

research · security
Apr 6, 2026

This research paper describes a method for automatically generating password mangling rules (transformations that modify passwords systematically) using adaptive density clustering (a technique that groups similar data points together based on how densely packed they are). The approach aims to improve password security by learning patterns from real password data to create more effective rules for testing password strength.

Elsevier Security Journals
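The "mangling rule" concept above can be illustrated with two hand-written transforms of the kind such systems learn from real password corpora. These examples are mine, not from the paper, which generates rules automatically via adaptive density clustering:

```python
# Two hand-written "mangling rules": systematic transforms applied to
# base passwords, hashcat-style. Illustrative examples only.

def capitalize_append_year(pw: str) -> str:
    # Rule: capitalize the first letter, then append a year suffix.
    return pw.capitalize() + "2026"

def leet(pw: str) -> str:
    # Rule: common "leetspeak" substitutions (a->4, e->3, o->0, s->5).
    return pw.translate(str.maketrans("aeos", "4305"))

print(capitalize_append_year("password"))  # Password2026
print(leet("password"))                    # p455w0rd
```

A clustering-based approach replaces guesswork like the above with rules mined from how people actually modify passwords.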
02

Broadcom agrees to expanded chip deals with Google, Anthropic

industry
Apr 6, 2026

Broadcom has agreed to produce AI chips for Google and signed an expanded deal with Anthropic, giving the AI startup access to about 3.5 gigawatts of computing capacity (the amount of processing power available at one time) using Google's custom processors called TPUs (tensor processing units, which are specialized chips designed to run AI models). This reflects growing demand for the computing infrastructure needed to run generative AI (AI systems that create new text, images, or other content) at scale.

CNBC Technology
03

OpenAI asks California, Delaware to investigate Musk's 'anti-competitive behavior' ahead of April trial

policy · industry
Apr 6, 2026

OpenAI has asked California and Delaware attorneys general to investigate what it calls 'anti-competitive behavior' by Elon Musk, claiming he is working to undermine the company through attacks and coordination with other rivals ahead of an April trial. OpenAI alleges that Musk has conducted opposition research on CEO Sam Altman, spread false allegations, and is using legal efforts to benefit his competing AI company xAI, which faces its own investigations for generating non-consensual explicit deepfake content.

CNBC Technology
04

CVE-2026-35022: Anthropic Claude Code CLI and Claude Agent SDK contain an OS command injection vulnerability in authentication helper ex

security
Apr 6, 2026

Anthropic's Claude Code CLI and Claude Agent SDK have a vulnerability where authentication helper settings are executed with shell=true (allowing shell commands to run) without checking the input first. An attacker who can change settings like apiKeyHelper or awsAuthRefresh could inject shell metacharacters (special characters that have meaning in command shells) to run arbitrary commands with the user's privileges, potentially stealing credentials or accessing environment variables.

NVD/CVE Database
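The vulnerable pattern described above can be sketched in a few lines. This is a hypothetical illustration of the shell=True anti-pattern in Python, not Anthropic's actual code; the helper functions are invented:

```python
import subprocess

def run_helper_unsafe(helper_cmd: str) -> str:
    # Vulnerable: the whole configured string is handed to a shell,
    # so metacharacters like ';' in the setting become new commands.
    return subprocess.run(helper_cmd, shell=True,
                          capture_output=True, text=True).stdout

def run_helper_safer(argv: list[str]) -> str:
    # Safer: no shell at all; arguments are passed verbatim.
    return subprocess.run(argv, capture_output=True, text=True).stdout

# A tampered setting both prints the key and runs the injected command:
out = run_helper_unsafe("echo KEY; echo INJECTED")
# out == "KEY\nINJECTED\n" -- the second command executed.
```

Running the helper as an argument list, or validating the setting before execution, closes this class of injection.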
05

CVE-2026-35021: Anthropic Claude Code CLI and Claude Agent SDK contain an OS command injection vulnerability in the prompt editor invoca

security
Apr 6, 2026

Anthropic's Claude Code CLI and Claude Agent SDK have a vulnerability where attackers can execute arbitrary commands (run any code they want) by inserting shell metacharacters (special characters like $() that tell the system to run commands) into file paths. Even though the code tries to protect these paths by wrapping them in double quotes, the POSIX shell (the command-line interface on Unix/Linux systems) still processes these injected expressions, giving attackers the same permissions as the user running the CLI.

NVD/CVE Database
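The quoting bypass is easy to demonstrate. A minimal sketch (not the actual CLI code) showing that double quotes do not stop POSIX command substitution, while single-quoting via shlex.quote does:

```python
import shlex
import subprocess

def echo_quoted_path(path: str) -> str:
    # Vulnerable: double quotes do NOT stop a POSIX shell from
    # expanding $(...) command substitution inside them.
    return subprocess.run(f'echo "{path}"', shell=True,
                          capture_output=True, text=True).stdout

def echo_safe_path(path: str) -> str:
    # shlex.quote single-quotes the value, so $(...) stays literal.
    return subprocess.run(f'echo {shlex.quote(path)}', shell=True,
                          capture_output=True, text=True).stdout

print(echo_quoted_path('/tmp/$(echo pwned).txt'))  # /tmp/pwned.txt
print(echo_safe_path('/tmp/$(echo pwned).txt'))    # /tmp/$(echo pwned).txt
```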
06

CVE-2026-35020: Anthropic Claude Code CLI and Claude Agent SDK contain an OS command injection vulnerability in the command lookup helpe

security
Apr 6, 2026

Anthropic's Claude Code CLI and Claude Agent SDK have a vulnerability where attackers can run arbitrary commands by manipulating the TERMINAL environment variable (a setting that controls which terminal program to use). When the software constructs shell commands, it doesn't properly sanitize the TERMINAL variable, allowing attackers to inject shell metacharacters (special characters that have meaning to command interpreters) that get executed with the user's privileges.

NVD/CVE Database
07

CVE-2026-35050: text-generation-webui is an open-source web interface for running Large Language Models. Prior to 4.1.1, users can save

security
Apr 6, 2026

text-generation-webui is an open-source web interface for running Large Language Models (AI systems that generate text). Before version 4.1.1, the application allowed users to save extension settings as Python files (code files that run on servers) in the main app directory, which could let attackers overwrite important Python files like 'download-model.py' and execute malicious code when users tried to download a new model.

Fix: Upgrade to version 4.1.1 or later.

NVD/CVE Database
08

GHSA-cjg8-h5qc-hrjv: kedro-datasets has a path traversal vulnerability in PartitionedDataset that allows arbitrary file write

security
Apr 6, 2026

PartitionedDataset in kedro-datasets had a path traversal vulnerability (a security flaw where an attacker uses ".." sequences to access files outside an intended directory) that allowed attackers to write files anywhere on a system by including ".." in partition IDs (identifiers for data sections). The flaw affected all users regardless of storage backend, local or cloud-based.

Fix: Upgrade to kedro-datasets version 9.3.0 or later. The patch normalizes paths using `posixpath.normpath` and validates that resolved paths stay within the dataset base directory before use, raising a `DatasetError` if the path escapes. For users unable to upgrade, manually validate partition IDs to ensure they do not contain ".." path components before passing them to PartitionedDataset.

GitHub Advisory Database
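The normalize-and-contain check described in the fix can be sketched as follows. This is an illustration of the technique, not kedro's actual implementation (which uses `posixpath.normpath` as above but raises `DatasetError` rather than `ValueError`; the base directory here is hypothetical):

```python
import posixpath

BASE = "data/partitions"  # hypothetical dataset base directory

def resolve_partition(partition_id: str) -> str:
    # Normalize the joined path, then require it to stay under BASE.
    candidate = posixpath.normpath(posixpath.join(BASE, partition_id))
    if candidate != BASE and not candidate.startswith(BASE + "/"):
        raise ValueError(f"partition id escapes base dir: {partition_id!r}")
    return candidate

print(resolve_partition("2026/04/06.csv"))  # data/partitions/2026/04/06.csv
# resolve_partition("../../etc/passwd") raises ValueError: the
# normalized path "etc/passwd" falls outside BASE.
```

Normalizing before the containment check matters: without it, "data/partitions/../../etc/passwd" would pass a naive prefix test on the unjoined input.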
09

The one piece of data that could actually shed light on your job and AI

policy · industry
Apr 6, 2026

Economists warn that current tools for predicting AI's impact on jobs are inadequate because they only measure "exposure" (whether AI could theoretically do a job's tasks), which doesn't account for whether employers will actually replace workers or increase productivity instead. Economist Alex Imas calls for collecting new data on how AI actually changes specific jobs and industries, since knowing a job is 28% exposed to AI tells us little about whether that job will disappear, be transformed, or become more productive.

MIT Technology Review
10

CVE-2026-34940: KubeAI is an AI inference operator for kubernetes. Prior to 0.23.2, the ollamaStartupProbeScript() function in internal/

security
Apr 6, 2026

KubeAI, a tool that runs AI models on Kubernetes (a system for managing containerized applications), has a vulnerability in versions before 0.23.2 where attackers can inject malicious shell commands (arbitrary code execution instructions) through Model resource creation. The flaw exists because the ollamaStartupProbeScript() function doesn't properly validate user input when building commands that run during startup checks.

Fix: Upgrade to version 0.23.2 or later, which fixes this vulnerability.

NVD/CVE Database