aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,727 · Last 24 hours: 41 · Last 7 days: 179
Daily Briefing: Wednesday, April 1, 2026

Attack Surface Management Tools Now Using AI Agents: A new buying guide highlights that Cyber Asset Attack Surface Management (CAASM) and External Attack Surface Management (EASM) tools are increasingly using agentic AI (AI systems that can take independent actions) to automatically find and reduce security risks across a company's digital resources.

Latest Intel

01

GHSA-chf7-jq6g-qrwv: OpenClaw: Telegram bot token exposure via logs

security
Feb 18, 2026

OpenClaw, an npm package, had a vulnerability where Telegram bot tokens (the credentials used to access Telegram's bot API) could leak into logs and error messages because the package didn't hide them when logging. An attacker who obtained a leaked token could impersonate the bot and take control of its API access.


Fix: Upgrade to openclaw >= 2026.2.15 when released. Additionally, rotate the Telegram bot token if it may have been exposed.

GitHub Advisory Database
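The bug class this entry describes, credentials leaking through log statements, is commonly mitigated by redacting token-shaped strings before they reach any log sink. A minimal Python sketch of that idea; the regex, threshold lengths, and helper name are assumptions for illustration, not OpenClaw's actual fix:

```python
import re

# Telegram bot tokens look like "<numeric id>:<long alphanumeric secret>".
# The exact pattern below is an assumption for this sketch.
TOKEN_RE = re.compile(r"\b\d{6,12}:[A-Za-z0-9_-]{30,}\b")

def redact(message: str) -> str:
    """Mask anything token-shaped before the message is logged."""
    return TOKEN_RE.sub("[REDACTED_BOT_TOKEN]", message)
```

Wrapping every logger call (or installing this as a logging filter) means a leaked error string no longer carries a usable credential; rotating the token remains necessary if exposure already happened.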
02

GHSA-w235-x559-36mg: OpenClaw: Docker container escape via unvalidated bind mount config injection

security
Feb 18, 2026

OpenClaw, a Docker sandbox tool, has a configuration injection vulnerability that could let attackers escape the container (a sandboxed computing environment) or access sensitive host data by injecting dangerous Docker options like bind mounts (attaching host directories into the container) or disabling security profiles. The issue affects versions 2026.2.14 and earlier.

Fix: Upgrade to OpenClaw version 2026.2.15 or later. The fix includes runtime enforcement when building Docker arguments, validation of dangerous settings like `network=host` and `unconfined` security profiles, and security audits to detect dangerous sandbox Docker configurations.

GitHub Advisory Database
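The mitigation described, runtime validation of Docker arguments before the sandbox launches, can be sketched as an audit pass over the argument list. This is a simplified illustration of the checks the advisory names (host networking, unconfined profiles, bind mounts), not OpenClaw's actual code; the function name and flag list are assumptions:

```python
def audit_docker_args(args: list[str]) -> list[str]:
    """Return a list of reasons the argument list is unsafe (empty = ok)."""
    problems = []
    for i, arg in enumerate(args):
        if arg == "--privileged":
            problems.append("privileged container defeats the sandbox")
        # Host networking removes network isolation.
        if arg in ("--network", "--net") and i + 1 < len(args) and args[i + 1] == "host":
            problems.append("host networking disables the network sandbox")
        if arg.startswith("--network=host"):
            problems.append("host networking disables the network sandbox")
        # Disabled seccomp/AppArmor profiles.
        if arg.startswith("--security-opt") and "unconfined" in arg:
            problems.append("unconfined security profile")
        # Bind mounts expose host directories inside the container.
        if arg in ("-v", "--volume", "--mount"):
            problems.append("bind mount injected into sandbox config")
    return problems
```

Rejecting (rather than silently stripping) flagged arguments keeps an injected config from degrading isolation unnoticed.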
03

GHSA-2qj5-gwg2-xwc4: OpenClaw: Unsanitized CWD path injection into LLM prompts

security
Feb 18, 2026

OpenClaw, an AI agent tool, had a vulnerability where the current working directory (the folder path where the software is running) was inserted into the AI's instructions without cleaning it first. An attacker could use special characters in folder names, like line breaks or hidden Unicode characters, to break the instruction structure and inject malicious commands, potentially causing the AI to misuse its tools or leak sensitive information.

Fix: Update to OpenClaw version 2026.2.15 or later. The fix sanitizes the workspace path by stripping Unicode control/format characters and explicit line/paragraph separators before embedding it into any LLM prompt output, and applies the same sanitization during workspace path resolution as an additional defensive measure.

GitHub Advisory Database
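The fix as described, stripping Unicode control/format characters and line/paragraph separators from the path before it reaches a prompt, maps directly onto Unicode general categories. A minimal sketch under that reading; the category choices mirror the advisory's wording, but the function name is invented:

```python
import unicodedata

def sanitize_path_for_prompt(path: str) -> str:
    """Drop control (Cc), format (Cf), and line/paragraph separator
    (Zl, Zp) characters so a hostile directory name cannot break out of
    the prompt line it is embedded in."""
    return "".join(
        ch for ch in path
        if unicodedata.category(ch) not in ("Cc", "Cf", "Zl", "Zp")
    )
```

A directory named with an embedded newline followed by attacker instructions would otherwise start a fresh line inside the system prompt, which is exactly the injection vector this entry describes.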
04

GHSA-5mx2-w598-339m: RediSearch Query Injection in @langchain/langgraph-checkpoint-redis

security
Feb 18, 2026

A query injection vulnerability exists in the `@langchain/langgraph-checkpoint-redis` package, where user-provided filter values are not properly escaped when constructing RediSearch queries (a search system built on Redis). Attackers can inject RediSearch syntax characters (like the OR operator `|`) into filter values to bypass thread isolation controls and access checkpoint data from other users or threads they shouldn't be able to see.

Fix: The 1.0.2 patch introduces an `escapeRediSearchTagValue()` function that properly escapes all RediSearch special characters (- . < > { } [ ] " ' : ; ! @ # $ % ^ & * ( ) + = ~ | \ ? /) by prefixing them with backslashes, and applies this escaping to all filter keys used in query construction.

GitHub Advisory Database
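Backslash-escaping each special character, as the 1.0.2 patch is described doing, is a one-pass transform. A sketch using the character set quoted in the advisory; the function name follows the advisory's `escapeRediSearchTagValue()` but this Python version is an illustration, not the package's code:

```python
# Character set copied from the advisory's list of RediSearch specials.
SPECIAL = set("-.<>{}[]\"':;!@#$%^&*()+=~|\\?/")

def escape_redisearch_tag_value(value: str) -> str:
    """Prefix RediSearch special characters with a backslash so a filter
    value like "a|b" is matched literally instead of acting as OR."""
    return "".join("\\" + ch if ch in SPECIAL else ch for ch in value)
```

With this applied, an attacker-supplied thread ID of `mine|victim` queries for the literal string rather than unioning in another thread's checkpoints.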
05

Tech firms must remove ‘revenge porn’ in 48 hours or risk being blocked, says Starmer

policy · safety
Feb 18, 2026

The UK government plans to require technology companies to remove deepfake nudes and revenge porn (nonconsensual intimate images) within 48 hours of being flagged, or face fines up to 10% of their revenue or being blocked in the UK. Ofcom (the UK media regulator) will enforce these rules, and victims can report images directly to companies or to Ofcom, which will alert multiple platforms at once. The government will also explore using digital watermarks to automatically detect and flag reposted nonconsensual images, and create new guidance for internet providers to block sites that host such content.

Fix: Companies will be legally required to remove nonconsensual intimate images no more than 48 hours after being flagged. Ofcom will explore ways to add digital watermarks to flagged images to allow automatic detection when reposted. Victims can report images either directly to tech firms or to Ofcom (which will trigger alerts across multiple platforms). Internet providers will receive new guidance on blocking hosting for sites specializing in nonconsensual real or AI-generated explicit content. Platforms already use hash matching (a process that assigns videos a unique digital signature) for child sexual abuse content, and this same technology could be applied to nonconsensual intimate imagery.

The Guardian Technology
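The hash-matching idea mentioned above can be illustrated with exact fingerprints: flag content once, then detect the identical bytes anywhere. This toy sketch uses SHA-256; production systems use perceptual hashes that survive re-encoding and cropping, which exact hashing does not, so treat this only as the flag-once, match-everywhere concept:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Exact digital signature of the content bytes."""
    return hashlib.sha256(content).hexdigest()

def is_flagged(content: bytes, blocklist: set[str]) -> bool:
    """True if this exact content was previously flagged."""
    return fingerprint(content) in blocklist

# Hypothetical shared blocklist distributed across platforms.
blocklist = {fingerprint(b"known-flagged-image-bytes")}
```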
06

Scam Abuses Gemini Chatbots to Convince People to Buy Fake Crypto

security · safety
Feb 18, 2026

Scammers created a fake cryptocurrency presale website for a non-existent "Google Coin" that uses an AI chatbot (similar to Google's Gemini) to persuade visitors to buy the fake digital currency, with payments going directly to the attackers. The chatbot makes a convincing sales pitch to trick people into sending money to the scammers.

Dark Reading
07

CVE-2025-12343: A flaw was found in FFmpeg’s TensorFlow backend within the libavfilter/dnn_backend_tf.c source file. The issue occurs in

security
Feb 18, 2026

FFmpeg's TensorFlow backend has a bug where a task object gets freed twice in certain error situations, causing a double-free condition (a memory safety error where the same memory is released multiple times). This can crash FFmpeg or programs using it when processing TensorFlow-based DNN models (deep neural network models), resulting in a denial-of-service attack, but it does not allow attackers to run arbitrary code.

NVD/CVE Database
08

AI platforms can be abused for stealthy malware communication

security · safety
Feb 18, 2026

Researchers at Check Point discovered that AI assistants with web browsing abilities, like Grok and Microsoft Copilot, can be abused as hidden communication relays for malware. Attackers can instruct these AI services to fetch attacker-controlled URLs and relay commands back to malware, creating a stealthy two-way communication channel (C2, or command-and-control) that bypasses normal security detection because the AI platforms are trusted by security tools. The proof-of-concept attack works without requiring API keys or accounts, making it harder for defenders to block.

BleepingComputer
09

v0.14.15

security
Feb 18, 2026

This is a release notes document for LlamaIndex version 0.14.15 (dated February 18, 2026) containing updates across multiple components, including new multimodal (support for different types of content like text and images) features, support for additional AI models like Claude Sonnet 4.6, and various bug fixes across integrations with services like GitHub, SharePoint, and vector stores (databases that store data as numerical representations for AI searching).

LlamaIndex Security Releases
10

Anthropic is clashing with the Pentagon over AI use. Here's what each side wants

policy
Feb 18, 2026

Anthropic, an AI company with a $200 million Department of Defense contract, is in a dispute with the Pentagon over how its AI models can be used. Anthropic wants guarantees that its models won't be used for autonomous weapons (weapons that make decisions without human control) or mass surveillance of Americans, while the DOD wants unrestricted use for all lawful purposes. The disagreement has put their working relationship under review, and if Anthropic doesn't comply with the DOD's terms, it could be labeled a supply chain risk (a designation that would require other contractors to avoid using its products).

CNBC Technology
Critical This Week

- CVE-2025-15379 (critical): A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_… — NVD/CVE Database, Mar 30, 2026
- CVE-2026-33873 (critical): Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis… — NVD/CVE Database, Mar 27, 2026
- Attackers exploit critical Langflow RCE within hours as CISA sounds alarm — CSO Online, Mar 27, 2026
- CVE-2025-53521 (critical): F5 BIG-IP Unspecified Vulnerability — CISA Known Exploited Vulnerabilities, Mar 26, 2026