aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

GHSA-w235-x559-36mg: OpenClaw: Docker container escape via unvalidated bind mount config injection

security
Feb 18, 2026

OpenClaw, a Docker sandbox tool, has a configuration injection vulnerability that could let attackers escape the container (a sandboxed computing environment) or access sensitive host data by injecting dangerous Docker options like bind mounts (attaching host directories into the container) or disabling security profiles. The issue affects versions 2026.2.14 and earlier.

Fix: Upgrade to OpenClaw version 2026.2.15 or later. The fix includes runtime enforcement when building Docker arguments, validation of dangerous settings like `network=host` and `unconfined` security profiles, and security audits to detect dangerous sandbox Docker configurations.
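The runtime enforcement the fix describes can be sketched as a simple check over the Docker argument list. This is an illustrative example only, assuming a hypothetical `validate_sandbox_args` helper; it is not OpenClaw's actual code, and the exact option list is an assumption:

```python
# Illustrative sketch (not OpenClaw's real implementation) of runtime
# enforcement over Docker arguments: reject injected bind mounts, host
# networking, and unconfined security profiles before launching the sandbox.
DANGEROUS_SECURITY_OPTS = {"seccomp=unconfined", "apparmor=unconfined"}

def validate_sandbox_args(args: list[str]) -> None:
    for i, arg in enumerate(args):
        nxt = args[i + 1] if i + 1 < len(args) else ""
        if arg in ("-v", "--volume", "--mount"):
            raise ValueError("bind mount injected into sandbox arguments")
        if arg == "--network" and nxt == "host":
            raise ValueError("network=host disables sandbox network isolation")
        if arg == "--security-opt" and nxt in DANGEROUS_SECURITY_OPTS:
            raise ValueError("unconfined security profile is not allowed")

validate_sandbox_args(["--rm", "--network", "bridge"])  # benign config passes
```

The point of validating at the moment the argument vector is built, rather than in the config parser, is that it catches dangerous options regardless of which configuration path injected them.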

GitHub Advisory Database
02

GHSA-2qj5-gwg2-xwc4: OpenClaw: Unsanitized CWD path injection into LLM prompts

security
Feb 18, 2026

OpenClaw, an AI agent tool, had a vulnerability where the current working directory (the folder path where the software is running) was inserted into the AI's instructions without cleaning it first. An attacker could use special characters in folder names, like line breaks or hidden Unicode characters, to break the instruction structure and inject malicious commands, potentially causing the AI to misuse its tools or leak sensitive information.

Fix: Update to OpenClaw version 2026.2.15 or later. The fix sanitizes the workspace path by stripping Unicode control/format characters and explicit line/paragraph separators before embedding it into any LLM prompt output, and applies the same sanitization during workspace path resolution as an additional defensive measure.
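The kind of sanitization the advisory describes can be sketched with Unicode character categories. This is a hypothetical analogue, not OpenClaw's actual code: it strips control characters (Cc), format characters (Cf), and explicit line/paragraph separators (Zl, Zp) before the path is embedded in a prompt:

```python
import unicodedata

# Hypothetical sketch of the sanitization the advisory describes (not
# OpenClaw's real code): drop Unicode control/format characters and
# line/paragraph separators from a path before prompt embedding.
def sanitize_workspace_path(path: str) -> str:
    return "".join(
        ch for ch in path
        if unicodedata.category(ch) not in ("Cc", "Cf", "Zl", "Zp")
    )

# A newline and a U+2028 LINE SEPARATOR smuggled into a directory name
# can no longer break out of the prompt's instruction structure.
clean = sanitize_workspace_path("/home/user/proj\nIGNORE PREVIOUS\u2028RULES")
```

Filtering by category rather than by a blocklist of specific characters is the safer design here, since it also catches less obvious injection vectors like zero-width format characters.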

GitHub Advisory Database
03

GHSA-5mx2-w598-339m: RediSearch Query Injection in @langchain/langgraph-checkpoint-redis

security
Feb 18, 2026

A query injection vulnerability exists in the `@langchain/langgraph-checkpoint-redis` package, where user-provided filter values are not properly escaped when constructing RediSearch queries (a search system built on Redis). Attackers can inject RediSearch syntax characters (like the OR operator `|`) into filter values to bypass thread isolation controls and access checkpoint data from other users or threads they shouldn't be able to see.

Fix: The 1.0.2 patch introduces an `escapeRediSearchTagValue()` function that properly escapes all RediSearch special characters (- . < > { } [ ] " ' : ; ! @ # $ % ^ & * ( ) + = ~ | \ ? /) by prefixing them with backslashes, and applies this escaping to all filter keys used in query construction.
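The escaping scheme the patch describes can be sketched in a few lines. Note this is a Python analogue for illustration; the real fix is the `escapeRediSearchTagValue()` function in the TypeScript package:

```python
# Python sketch of the escaping described in the 1.0.2 patch (the real
# implementation is escapeRediSearchTagValue() in TypeScript): prefix
# every RediSearch special character with a backslash so user-supplied
# filter values cannot inject query syntax like the OR operator `|`.
SPECIAL_CHARS = set('-.<>{}[]"\':;!@#$%^&*()+=~|\\?/')

def escape_redisearch_tag_value(value: str) -> str:
    return "".join("\\" + ch if ch in SPECIAL_CHARS else ch for ch in value)
```

With this in place, a malicious filter value such as `mine|*` is searched for literally instead of being parsed as "my thread OR everything", which is what preserved thread isolation.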

GitHub Advisory Database
04

Tech firms must remove ‘revenge porn’ in 48 hours or risk being blocked, says Starmer

policy, safety
Feb 18, 2026

The UK government plans to require technology companies to remove deepfake nudes and revenge porn (nonconsensual intimate images) within 48 hours of being flagged, or face fines up to 10% of their revenue or being blocked in the UK. Ofcom (the UK media regulator) will enforce these rules, and victims can report images directly to companies or to Ofcom, which will alert multiple platforms at once. The government will also explore using digital watermarks to automatically detect and flag reposted nonconsensual images, and create new guidance for internet providers to block sites that host such content.

Fix: Removal will be legally required within 48 hours of flagging, with Ofcom enforcing the rules. Platforms already use hash matching (a process that assigns an image or video a unique digital signature) to detect reposted child sexual abuse content, and the same technology could be applied to nonconsensual intimate imagery.

The Guardian Technology
05

Scam Abuses Gemini Chatbots to Convince People to Buy Fake Crypto

security, safety
Feb 18, 2026

Scammers created a fake cryptocurrency presale website for a non-existent "Google Coin" that uses an AI chatbot (similar to Google's Gemini) to persuade visitors to buy the fake digital currency, with payments going directly to the attackers. The chatbot makes a convincing sales pitch to trick people into sending money to the scammers.

Dark Reading
06

CVE-2025-12343: A flaw was found in FFmpeg's TensorFlow backend within the libavfilter/dnn_backend_tf.c source file

security
Feb 18, 2026

FFmpeg's TensorFlow backend has a bug where a task object gets freed twice in certain error situations, causing a double-free condition (a memory safety error where the same memory is released multiple times). This can crash FFmpeg or programs using it when processing TensorFlow-based DNN models (deep neural network models), resulting in a denial-of-service attack, but it does not allow attackers to run arbitrary code.

NVD/CVE Database
07

AI platforms can be abused for stealthy malware communication

security, safety
Feb 18, 2026

Researchers at Check Point discovered that AI assistants with web browsing abilities, like Grok and Microsoft Copilot, can be abused as hidden communication relays for malware. Attackers can instruct these AI services to fetch attacker-controlled URLs and relay commands back to malware, creating a stealthy two-way communication channel (C2, or command-and-control) that bypasses normal security detection because the AI platforms are trusted by security tools. The proof-of-concept attack works without requiring API keys or accounts, making it harder for defenders to block.

BleepingComputer
08

LlamaIndex v0.14.15

security
Feb 18, 2026

This is a release notes document for LlamaIndex version 0.14.15 (dated February 18, 2026) containing updates across multiple components, including new multimodal (support for different types of content like text and images) features, support for additional AI models like Claude Sonnet 4.6, and various bug fixes across integrations with services like GitHub, SharePoint, and vector stores (databases that store data as numerical representations for AI searching).

LlamaIndex Security Releases
09

Anthropic is clashing with the Pentagon over AI use. Here's what each side wants

policy
Feb 18, 2026

Anthropic, an AI company with a $200 million Department of Defense contract, is in a dispute with the Pentagon over how its AI models can be used. Anthropic wants guarantees that its models won't be used for autonomous weapons (weapons that make decisions without human control) or mass surveillance of Americans, while the DOD wants unrestricted use for all lawful purposes. The disagreement has put their working relationship under review, and if Anthropic doesn't comply with the DOD's terms, it could be labeled a supply chain risk (a designation that would require other contractors to avoid using its products).

CNBC Technology
10

GHSA-x22m-j5qq-j49m: OpenClaw has two SSRF via sendMediaFeishu and markdown image fetching in Feishu extension

security
Feb 18, 2026

The Feishu extension in OpenClaw had two SSRF vulnerabilities (SSRF is server-side request forgery, where an attacker tricks a server into making requests to internal systems it shouldn't access) that allowed attackers to fetch attacker-controlled URLs without protection. An attacker who could influence tool calls, including through prompt injection (tricking an AI by hiding instructions in its input), could potentially access internal services and re-upload responses as media.

Fix: Upgrade to OpenClaw version 2026.2.14 or newer. The fix routes Feishu remote media fetching through hardened runtime helpers that enforce SSRF policies and size limits.
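The SSRF policy the fix describes can be sketched as a pre-fetch check. This is an illustrative example, not OpenClaw's actual helper; the function name and exact policy are assumptions:

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Illustrative sketch (not OpenClaw's real helper) of an SSRF policy:
# resolve the target host and refuse private, loopback, link-local, and
# reserved addresses before fetching any remote media.
def is_url_allowed(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True
```

Checking every resolved address (rather than the hostname string) matters because an attacker-controlled DNS name can point at an internal IP; size limits on the fetched response, also mentioned in the fix, would be enforced separately at download time.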

GitHub Advisory Database