aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,727 · Last 24 hours: 41 · Last 7 days: 179
Daily Briefing · Wednesday, April 1, 2026

Attack Surface Management Tools Now Using AI Agents: A new buying guide highlights that Cyber Asset Attack Surface Management (CAASM) and External Attack Surface Management (EASM) tools are increasingly using agentic AI (AI systems that can take independent actions) to automatically find and reduce security risks across a company's digital resources.

Latest Intel

01

GHSA-x22m-j5qq-j49m: OpenClaw has two SSRF vulnerabilities via sendMediaFeishu and markdown image fetching in the Feishu extension

security
Feb 18, 2026

The Feishu extension in OpenClaw had two SSRF vulnerabilities (SSRF is server-side request forgery, where an attacker tricks a server into making requests to internal systems it shouldn't access) that let the server fetch attacker-controlled URLs without restriction. An attacker who could influence tool calls, including through prompt injection (tricking an AI by hiding instructions in its input), could potentially reach internal services and re-upload the responses as media.

Fix: Upgrade to OpenClaw version 2026.2.14 or newer. The fix routes Feishu remote media fetching through hardened runtime helpers that enforce SSRF policies and size limits.

GitHub Advisory Database
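The kind of guard such a fix implies can be sketched as follows. This is a minimal, hypothetical helper (not OpenClaw's actual code): resolve the target hostname and reject anything that lands in a private, loopback, or link-local range before fetching.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to private, loopback, link-local, or reserved addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True

print(is_safe_url("http://127.0.0.1/admin"))   # False: loopback target rejected
print(is_safe_url("http://169.254.169.254/"))  # False: cloud metadata range rejected
```

Note that a pre-fetch check like this is only part of a real SSRF defense: redirects must be re-validated and the resolved address pinned at connect time, or a DNS-rebinding attacker can pass the check and still reach internal hosts.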
02

GHSA-7rcp-mxpq-72pj: OpenClaw Chutes manual OAuth state validation bypass can cause credential substitution

security
Feb 18, 2026

OpenClaw's manual OAuth login flow (a way to securely connect accounts using a third-party service) had a vulnerability where it didn't properly validate a security token called 'state', which could allow attackers to trick users into logging in with the wrong account. The automatic login flow was not affected by this issue.

Fix: The manual flow now requires the full redirect URL (must include both the authorization code and state parameter), validates the returned state against the expected value, and rejects code-only pastes. This fix is available in openclaw version 2026.2.14 and later (commit a99ad11a4107ba8eac58f54a3c1a8a0cf5686f47).

GitHub Advisory Database
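The validation the fix describes, requiring both code and state and comparing state against the expected value, can be sketched like this (a generic illustration with hypothetical helper names, not OpenClaw's code):

```python
import hmac
import secrets
from urllib.parse import urlparse, parse_qs

def start_login() -> str:
    # Generate an unguessable state token and remember it for this session.
    return secrets.token_urlsafe(32)

def finish_login(redirect_url: str, expected_state: str) -> str:
    """Accept only a full redirect URL carrying both code and state; reject code-only pastes."""
    params = parse_qs(urlparse(redirect_url).query)
    code = params.get("code", [None])[0]
    state = params.get("state", [None])[0]
    if code is None or state is None:
        raise ValueError("redirect URL must include both code and state")
    if not hmac.compare_digest(state, expected_state):
        raise ValueError("state mismatch: possible CSRF / credential substitution")
    return code
```

The constant-time comparison (`hmac.compare_digest`) is incidental here; the essential point is that a missing or mismatched state aborts the flow instead of silently accepting whatever account the authorization code belongs to.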
03

GHSA-4564-pvr2-qq4h: OpenClaw: Prevent shell injection in macOS keychain credential write

security
Feb 18, 2026

The OpenClaw CLI on macOS had a shell injection vulnerability (a security flaw where attackers can run arbitrary commands) in how it stored authentication tokens in the system keychain. The problem occurred because user-controlled OAuth tokens were inserted directly into shell command strings without escaping, allowing an attacker to break out of the intended command and execute malicious code.

Fix: Update to version 2026.2.14 or later. The fix avoids invoking a shell by using `execFileSync("security", argv)` and passing the updated keychain payload as a literal argument instead of constructing a shell command string.

GitHub Advisory Database
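The same principle as the fix's `execFileSync("security", argv)` can be shown in Python (a generic illustration, not the project's code): pass arguments as a list and never hand user-controlled data to a shell.

```python
import subprocess

# VULNERABLE pattern: interpolating the token into a shell string lets a
# crafted token such as  '" ; rm -rf ~ ; "'  break out of the quoted argument:
#   subprocess.run(f'security add-generic-password -w "{token}"', shell=True)

def run_literal(program: str, *argv: str) -> str:
    """Run a program with no shell: every argument is passed literally,
    so shell metacharacters in argv are treated as plain data."""
    result = subprocess.run([program, *argv],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Metacharacters survive untouched and nothing extra is executed:
hostile = '"; echo pwned; "'
print(run_literal("echo", hostile))  # prints the hostile string verbatim
```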
04

GHSA-xwjm-j929-xq7c: OpenClaw has a Path Traversal in Browser Download Functionality

security
Feb 18, 2026

OpenClaw's browser download feature had a path traversal vulnerability (a security flaw where an attacker can use special sequences like `../` to write files outside the intended folder) because it didn't validate the output file path. This vulnerability only affected users with authenticated access to the CLI or the gateway RPC token (a special permission token), not regular AI agent users.

Fix: Upgrade to `openclaw` version 2026.2.13 or later. The fix restricts the `path` parameter to the default download directory using `resolvePathWithinRoot` in the gateway browser control routes `/wait/download` and `/download`.

GitHub Advisory Database
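The containment check the fix describes can be sketched in a few lines (a hypothetical stand-in for the advisory's `resolvePathWithinRoot`, not OpenClaw's implementation): resolve the requested path against the download root and refuse anything that escapes it.

```python
from pathlib import Path

def resolve_within_root(root: str, user_path: str) -> Path:
    """Resolve user_path relative to root and refuse any result outside root."""
    base = Path(root).resolve()
    candidate = (base / user_path).resolve()
    # Joining an absolute user_path replaces base entirely, and "../" sequences
    # climb out of it; both cases fail the containment check below.
    if candidate != base and base not in candidate.parents:
        raise ValueError(f"path escapes download root: {user_path!r}")
    return candidate
```

A production helper would also worry about symlinks created between check and write; this sketch only shows the lexical containment step.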
05

Google DeepMind wants to know if chatbots are just virtue signaling

researchsafety
Feb 18, 2026

Researchers at Google DeepMind are investigating whether chatbots display genuine moral reasoning or are simply mimicking responses (virtue signaling). While studies show that large language models (LLMs, AI systems trained on massive amounts of text data) can give morally sound advice, the models are unreliable in practice because they often flip their answers when questioned, change responses based on how questions are formatted, and show sensitivity to tiny changes like swapping option labels from 'Case 1' to '(A)'. The researchers propose developing more rigorous evaluation methods to test whether moral behavior in LLMs is actually robust or just performative.

Fix: The source proposes a new line of research to develop more rigorous techniques for evaluating moral competence in LLMs. This would include tests designed to push models to change their responses to moral questions to reveal if they lack robust moral reasoning, and tests presenting models with variations of common moral problems to check whether they produce rote responses or more nuanced ones. However, the source notes this is "more a wish list than a set of ready-made solutions" and does not describe implemented fixes or updates.

MIT Technology Review
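One of the perturbation tests described above, swapping option labels and checking whether the model still picks the same underlying answer, can be sketched as a tiny harness. Everything here is hypothetical: `ask` stands in for any callable that sends a prompt to an LLM and returns the chosen label.

```python
def label_swap_consistent(ask, question: str, options: dict) -> bool:
    """Ask the same question twice with option labels swapped; a robust
    model should pick the same underlying option both times."""
    (l1, o1), (l2, o2) = options.items()
    fwd = ask(f"{question}\n{l1}: {o1}\n{l2}: {o2}")
    rev = ask(f"{question}\n{l1}: {o2}\n{l2}: {o1}")
    # Map the chosen label back to the option text before comparing.
    pick = lambda ans, a, b: a if ans == l1 else b
    return pick(fwd, o1, o2) == pick(rev, o2, o1)

# Toy "model" that always answers with the first label, whatever it says:
flaky = lambda prompt: "Case 1"
print(label_swap_consistent(flaky, "Is it OK to lie to protect a friend?",
                            {"Case 1": "Yes", "Case 2": "No"}))  # False
```

A model that tracks the content rather than the label passes the probe; a label-biased one, like the toy above, is flagged as inconsistent, which is exactly the failure mode the researchers describe.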
06

Google’s AI music maker is coming to the Gemini app

industry
Feb 18, 2026

Google has added Lyria 3, an AI music generation model from DeepMind, to its Gemini chatbot app, allowing users to create 30-second music tracks by describing genres, moods, or providing images and videos as input. The feature is now available in beta across multiple languages globally to users aged 18 and older.

The Verge (AI)
07

Google adds music-generation capabilities to the Gemini app

industry
Feb 18, 2026

Google has added music generation to its Gemini app using DeepMind's Lyria 3 model, which lets users create 30-second songs by describing what they want. The feature includes safeguards like SynthID watermarks (digital markers that identify AI-generated content) and filters to prevent mimicking existing artists, plus the ability for users to upload tracks and ask Gemini whether they are AI-generated.

Fix: Google has implemented SynthID watermarks to identify AI-generated music and added filters to check outputs against existing content to prevent artist mimicry. The company is also adding capabilities within Gemini to identify AI-generated music, allowing users to upload tracks and ask if they are AI-generated.

TechCrunch
08

Kana emerges from stealth with $15M to build flexible AI agents for marketers

industry
Feb 18, 2026

Kana, a new marketing AI startup, has raised $15 million to build AI agents (software systems that can independently perform tasks) that help marketers with data analysis, campaign management, and audience targeting. The platform uses "loosely coupled" agents (modular AI components that work independently but can be connected together) that can be customized in real time and integrated into existing marketing software, while keeping humans involved to approve and adjust the AI's actions.

TechCrunch
09

Microsoft says Office bug exposed customers’ confidential emails to Copilot AI

securityprivacy
Feb 18, 2026

Microsoft discovered a bug that allowed Copilot (an AI chat feature in Office software) to read and summarize customers' confidential emails without permission for several weeks, even when data loss prevention policies (rules meant to block sensitive information from being sent to AI systems) were in place. The bug affected emails labeled as confidential and was tracked internally as CW1226324.

Fix: Microsoft said it began rolling out a fix for the bug earlier in February.

TechCrunch (Security)
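The invariant a data loss prevention policy is meant to enforce, that content carrying a confidentiality label never reaches the AI, can be illustrated with a small sketch. This is purely hypothetical and is not how Microsoft's DLP or Copilot are implemented:

```python
# Labels that a DLP policy blocks from AI processing (illustrative values).
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

def allow_ai_access(message: dict) -> bool:
    """True only if the message carries no blocked sensitivity label."""
    return message.get("sensitivity_label") not in BLOCKED_LABELS

def summarize_for_user(message: dict, summarize) -> str:
    # The gate runs before any content is handed to the AI; the bug described
    # above amounts to this check being skipped for labeled emails.
    if not allow_ai_access(message):
        raise PermissionError("DLP policy: confidential content withheld from AI")
    return summarize(message["body"])
```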
10

OpenAI pushes into higher education as India seeks to scale AI skills

industry
Feb 18, 2026

OpenAI is partnering with six major Indian universities and academic institutions to integrate AI tools like ChatGPT into teaching and research, aiming to reach over 100,000 students, faculty, and staff within a year. The initiative focuses on embedding AI into core academic functions such as coding and research rather than just providing standalone tool access, and includes faculty training and responsible-use frameworks. This move reflects broader competition among AI companies to shape how AI is taught and adopted in India, one of the world's largest education systems and ChatGPT's second-largest user base after the U.S.

TechCrunch
Critical This Week (5 issues)

critical · CVE-2026-34162 (NVD/CVE Database, Mar 31, 2026): FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/…

critical · CVE-2025-15379 (NVD/CVE Database, Mar 30, 2026): A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_…

critical · CVE-2026-33873 (NVD/CVE Database, Mar 27, 2026): Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis…

critical · Attackers exploit critical Langflow RCE within hours as CISA sounds alarm (CSO Online, Mar 27, 2026)

critical · CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability (CISA Known Exploited Vulnerabilities, Mar 26, 2026)