aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24h: 1 · Last 7d: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

Page 107 of 371
01

CVE-2026-27893: vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, vLLM is vulnerable to remote code execution via model files

security
Mar 26, 2026

vLLM (a tool that runs and serves large language models) has a vulnerability in versions 0.10.1 through 0.17.x where two model files ignore a user's security setting that disables remote code execution (the ability to run code from outside sources). This means attackers could run malicious code through model repositories even when the user explicitly turned off that capability.

Fix: Upgrade to version 0.18.0, which patches the issue.
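Given the advisory's affected range (0.10.1 through 0.17.x, fixed in 0.18.0), a stdlib-only sketch for checking whether an installed version falls in the vulnerable window (real tooling would likely use `packaging.version`, which also handles pre-release suffixes):

```python
def parse(v: str) -> tuple:
    """Turn a simple dotted version string into a comparable tuple.
    Note: plain int() parsing breaks on suffixes like '0.18.0rc1'."""
    return tuple(int(p) for p in v.split("."))

def is_affected(installed: str) -> bool:
    # Affected per the advisory: >= 0.10.1 and < 0.18.0
    return parse("0.10.1") <= parse(installed) < parse("0.18.0")

print(is_affected("0.17.2"))  # in the vulnerable range
print(is_affected("0.18.0"))  # patched release
```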

NVD/CVE Database
02

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

security
Mar 26, 2026

F5 BIG-IP APM (a network access management tool) contains an unspecified vulnerability that allows attackers to achieve remote code execution (the ability to run commands on a system they don't own). This vulnerability is actively being exploited by real attackers in the wild, making it an urgent security concern.

Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. Check for signs of potential compromise on all internet-accessible F5 products affected by this vulnerability. Consult F5's official guidelines and the referenced knowledge base articles at https://my.f5.com/manage/s/article/K000156741, https://my.f5.com/manage/s/article/K000160486, and https://my.f5.com/manage/s/article/K11438344 to assess exposure and mitigate risks.

CISA Known Exploited Vulnerabilities
03

Anthropic wins preliminary injunction in DOD fight as judge cites 'First Amendment retaliation'

policy
Mar 26, 2026

A federal judge granted Anthropic a preliminary injunction, blocking the Trump administration's ban on federal agencies using the company's Claude AI models and its Pentagon blacklisting as a supply chain risk (a designation claiming use of a company's technology threatens national security). The judge ruled the administration's actions constituted First Amendment retaliation for Anthropic publicly disagreeing with the government's contracting decisions, though a final verdict in the case could take months.

CNBC Technology
04

Federal judge sides with Anthropic in first round of standoff with Pentagon

policy
Mar 26, 2026

Anthropic won a temporary legal victory when a federal judge ordered a pause on the Department of Defense's punishment of the company, which had refused to let the military use its Claude AI model in autonomous weapons systems (systems that can make attack decisions without human control). Anthropic claimed the government violated its free speech rights by declaring it a supply chain risk (a company whose products could be exploited to harm national security) and blocking agencies from using its technology.

The Guardian Technology
05

Preparing for agentic AI: A financial services approach

security, policy
Mar 26, 2026

Financial institutions deploying agentic AI (autonomous AI systems that make decisions and take actions independently) must add AI-specific security controls beyond traditional frameworks like ISO 27001 and NIST, because these systems' autonomous nature and non-deterministic behavior introduce unique risks. The source recommends two critical capabilities: comprehensive observability (clear visibility into what AI agents do and why) and fine-grained access controls (limiting what tools and actions each agent can use), supported by seven design principles including human-AI security homology (applying human oversight rules to AI agents) and modular agent workflow architecture.

Fix: The source provides design principles and implementation guidance rather than explicit patches or updates. It recommends: (1) implementing agent identities with role and attribute-based permissions; (2) adding logging and behavioral monitoring; (3) requiring supervision for critical actions; (4) defining agent scope in workflows; (5) applying segregation of agent duties; (6) using maker-checker verification (where one agent proposes an action and another verifies it); and (7) implementing change and incident management. The source also advises to 'consult with your compliance and legal teams to determine specific requirements for your situation' and notes that 'regulatory requirements establish minimum baselines, but organizational risk considerations often require additional controls.'
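The maker-checker step above can be sketched in a few lines. This is an illustrative Python gate, not code from the AWS post; the names (`propose`, `verify`, `ALLOWED_TOOLS`) and the allowlist contents are assumptions for demonstration:

```python
# Maker-checker: one agent proposes an action, an independent checker must
# approve it against that agent's allowlist before anything executes.
ALLOWED_TOOLS = {"read_balance", "generate_report"}  # per-agent scope

def propose(agent_id: str, tool: str, args: dict) -> dict:
    """Maker: package an intended action for review (nothing runs yet)."""
    return {"agent": agent_id, "tool": tool, "args": args}

def verify(action: dict) -> bool:
    """Checker: approve only in-scope tools; everything else escalates."""
    return action["tool"] in ALLOWED_TOOLS

def execute(action: dict) -> str:
    if not verify(action):
        return f"BLOCKED: {action['tool']} (escalate to human review)"
    return f"EXECUTED: {action['tool']}"

print(execute(propose("agent-1", "generate_report", {})))
print(execute(propose("agent-1", "wire_transfer", {"amount": 1_000_000})))
```

The point of the pattern is that the verifier is configured independently of the proposer, so a single compromised or misbehaving agent cannot both choose and approve an out-of-scope action.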

AWS Security Blog
06

GHSA-7xr2-q9vf-x4r5: OpenClaw: Symlink Traversal via IDENTITY.md appendFile in agents.create/update (Incomplete Fix for CVE-2026-32013)

security
Mar 26, 2026

OpenClaw has a symlink traversal vulnerability (symlink: a file that points to another file) in two API handlers (`agents.create` and `agents.update`) that use `fs.appendFile` to write to an `IDENTITY.md` file without checking if it's a symlink. An attacker can place a symlink in the agent workspace pointing to a sensitive system file (like `/etc/crontab`), and when these handlers run, they will append attacker-controlled content to that sensitive file, potentially allowing remote code execution. This is an incomplete fix for CVE-2026-32013, which only patched two other handlers but missed these two.
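The missing guard is easy to picture: Node's `fs.appendFile` follows symlinks by default, so the write lands wherever the link points. A minimal Python sketch of the check (OpenClaw itself is Node.js; this illustrates the general pattern, not the project's actual fix):

```python
import os

def safe_append(path: str, data: str) -> None:
    """Append to `path`, refusing symlinks so a planted link cannot
    redirect the write to a sensitive file like /etc/crontab."""
    if os.path.islink(path):
        raise PermissionError(f"refusing to write through symlink: {path}")
    # O_NOFOLLOW (POSIX) makes the kernel reject symlinks as well,
    # closing the check-then-use race between islink() and open().
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT | os.O_NOFOLLOW, 0o600)
    with os.fdopen(fd, "a") as f:
        f.write(data)
```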

GitHub Advisory Database
07

Google is making it easier to import another AI’s memory into Gemini

industry
Mar 26, 2026

Google Gemini is adding new features that let users transfer their chat history and memory from other AI assistants into Gemini. The "Import Memory" tool works by copying a prompt from Gemini into your previous AI, then pasting the response back into Gemini, while "Import Chat History" lets you export all your past conversations from another AI and upload them to Gemini.

The Verge (AI)
08

Apple will reportedly allow other AI chatbots to plug into Siri

industry
Mar 26, 2026

Apple's upcoming iOS 27 update will let users choose which AI chatbot to connect with Siri (Apple's voice assistant), including options like Google's Gemini or Anthropic's Claude downloaded from the App Store. The new feature, called "Extensions," will allow users to enable or disable different chatbots across iPhones, iPads, and Macs, expanding beyond the current ChatGPT integration.

The Verge (AI)
09

CVE-2026-33623: PinchTab is a standalone HTTP server that gives AI agents direct control over a Chrome browser. PinchTab `v0.8.4` contains a command injection vulnerability on Windows

security
Mar 26, 2026

PinchTab v0.8.4, a tool that lets AI agents control Chrome browsers through an HTTP server, has a command injection vulnerability on Windows where attackers can run arbitrary PowerShell commands if they have administrative access to the server's API. The vulnerability exists because the cleanup routine doesn't properly escape PowerShell metacharacters (special characters that PowerShell interprets as commands) when building cleanup commands from profile names.

Fix: Version 0.8.5 contains a patch for the issue.
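The underlying mistake, building a shell command from an unvalidated name, can be mitigated generically by allowlisting the input before it ever reaches a shell string. A hedged Python sketch (the function and pattern names are illustrative, not PinchTab's actual code):

```python
import re

# Allowlist profile names so metacharacters such as ; | & $ ( ) ` " '
# can never reach PowerShell, where they would be parsed as commands.
SAFE_NAME = re.compile(r"[A-Za-z0-9._-]+")

def validate_profile_name(name: str) -> str:
    """Return `name` unchanged if it is shell-safe, else raise."""
    if not SAFE_NAME.fullmatch(name):
        raise ValueError(f"unsafe profile name rejected: {name!r}")
    return name

print(validate_profile_name("dev-profile.1"))
```

Passing arguments as a list to the process API (rather than interpolating into a command string) is the complementary defense, since the shell never parses the value at all.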

NVD/CVE Database
10

CVE-2026-33622: PinchTab is a standalone HTTP server that gives AI agents direct control over a Chrome browser. PinchTab `v0.8.3` through `v0.8.5` can execute JavaScript even when evaluation is disabled

security
Mar 26, 2026

PinchTab is an HTTP server that allows AI agents to control a Chrome browser, but versions 0.8.3 through 0.8.5 have a security flaw where two endpoints (POST /wait and POST /tabs/{id}/wait) can execute arbitrary JavaScript (run code of an attacker's choice in the browser) even when JavaScript evaluation is disabled by the operator. Unlike the properly protected POST /evaluate endpoint, these vulnerable endpoints don't check the security policy before running user-provided code, though an attacker still needs valid authentication credentials to exploit it.

Fix: The source states that 'the current worktree fixes this by applying the same policy boundary to `fn` mode in `/wait` that already exists on `/evaluate`, while preserving the non-code wait modes.' However, the source explicitly notes 'as of time of publication, a patched version is not yet available.'
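The described fix amounts to routing every code-executing mode through the same policy check that `/evaluate` already performs. A minimal sketch of that gate, with all endpoint, mode, and flag names illustrative rather than taken from PinchTab's source:

```python
class Policy:
    """Operator-set security policy shared by all endpoints."""
    def __init__(self, allow_js_eval: bool):
        self.allow_js_eval = allow_js_eval

def handle_wait(policy: Policy, mode: str, payload: str) -> str:
    if mode == "fn":  # code-execution mode must honor the policy
        if not policy.allow_js_eval:
            return "403: JavaScript evaluation disabled by operator"
        return f"evaluating: {payload}"
    # Non-code wait modes (e.g. waiting on a selector) are unaffected.
    return f"waiting on: {payload}"

locked = Policy(allow_js_eval=False)
print(handle_wait(locked, "fn", "window.secret"))
print(handle_wait(locked, "selector", "#done"))
```

The design lesson is that a security policy enforced per-endpoint invites exactly this class of bypass; centralizing the check means a newly added endpoint cannot silently skip it.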

NVD/CVE Database