aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

GHSA-7rcp-mxpq-72pj: OpenClaw Chutes manual OAuth state validation bypass can cause credential substitution

security
Feb 18, 2026

OpenClaw's manual OAuth login flow (a way to securely connect accounts using a third-party service) had a vulnerability where it didn't properly validate a security token called 'state', which could allow attackers to trick users into logging in with the wrong account. The automatic login flow was not affected by this issue.

Fix: The manual flow now requires the full redirect URL (must include both the authorization code and state parameter), validates the returned state against the expected value, and rejects code-only pastes. This fix is available in openclaw version 2026.2.14 and later (commit a99ad11a4107ba8eac58f54a3c1a8a0cf5686f47).
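A minimal Python sketch of the validation pattern the fix describes: parse the pasted redirect URL, require both `code` and `state`, and compare the returned state against the expected value in constant time. Function and parameter names here are illustrative, not the actual OpenClaw code.

```python
import secrets
from urllib.parse import urlparse, parse_qs

def validate_oauth_redirect(redirect_url: str, expected_state: str) -> str:
    """Extract the authorization code from a pasted redirect URL,
    rejecting it unless the state parameter matches the expected value."""
    params = parse_qs(urlparse(redirect_url).query)
    code = params.get("code", [None])[0]
    state = params.get("state", [None])[0]
    if code is None or state is None:
        # Reject code-only pastes: both parameters must be present.
        raise ValueError("redirect URL must include both code and state")
    if not secrets.compare_digest(state, expected_state):
        # A mismatched state suggests a CSRF / credential-substitution attempt.
        raise ValueError("state mismatch")
    return code
```

Binding the code to a validated state is what prevents an attacker from substituting their own authorization code and logging the victim into the attacker's account.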

GitHub Advisory Database
02

GHSA-4564-pvr2-qq4h: OpenClaw: Prevent shell injection in macOS keychain credential write

security
Feb 18, 2026

The OpenClaw CLI on macOS had a shell injection vulnerability (a security flaw where attackers can run arbitrary commands) in how it stored authentication tokens in the system keychain. The flaw arose because user-controlled OAuth tokens were interpolated directly into shell commands without escaping, allowing an attacker to break out of the intended command and execute malicious code.

Fix: Update to version 2026.2.14 or later. The fix avoids invoking a shell by using `execFileSync("security", argv)` and passing the updated keychain payload as a literal argument instead of constructing a shell command string.
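The same fix pattern can be sketched in Python with `subprocess` (a rough analogue of the advisory's `execFileSync("security", argv)` change; the command and arguments here are illustrative, not OpenClaw's actual code):

```python
import subprocess

def store_token_unsafe(token: str) -> None:
    # VULNERABLE pattern: the token is interpolated into a shell command
    # string, so a token like '"; rm -rf ~; echo "' escapes the quoting
    # and runs as its own command.
    subprocess.run(
        f'security add-generic-password -s demo -w "{token}"',
        shell=True, check=True,
    )

def store_token_safe(token: str) -> list[str]:
    # Safe pattern: build an argv list and invoke the binary directly.
    # No shell is involved, so the token stays a single literal argument
    # no matter what characters it contains.
    argv = ["security", "add-generic-password", "-s", "demo", "-w", token]
    # subprocess.run(argv, check=True)  # would execute without a shell
    return argv
```

Passing an argument vector instead of a command string removes the shell from the code path entirely, which is the general defense against this class of injection.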

GitHub Advisory Database
03

GHSA-xwjm-j929-xq7c: OpenClaw has a Path Traversal in Browser Download Functionality

security
Feb 18, 2026

OpenClaw had a path traversal vulnerability (a security flaw where an attacker can use special sequences like `../` to write files outside the intended folder) in its browser download feature because it didn't validate the output file path. The vulnerability only affected users with authenticated access to the CLI or a gateway RPC token (a special permission token), not regular AI agent users.

Fix: Upgrade to `openclaw` version 2026.2.13 or later. The fix restricts the `path` parameter to the default download directory using `resolvePathWithinRoot` in the gateway browser control routes `/wait/download` and `/download`.
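A minimal Python sketch of the containment check behind a helper like `resolvePathWithinRoot`: resolve the user-supplied path against the download root and refuse any result that escapes it. Names are illustrative, not OpenClaw's implementation.

```python
from pathlib import Path

def resolve_within_root(root: str, user_path: str) -> Path:
    """Resolve a user-supplied download path and refuse anything that
    escapes the download root, even via ../ sequences."""
    root_dir = Path(root).resolve()
    candidate = (root_dir / user_path).resolve()
    # is_relative_to (Python 3.9+) checks containment after normalization,
    # so "../etc/passwd" cannot sneak past a naive prefix comparison.
    if not candidate.is_relative_to(root_dir):
        raise ValueError(f"path escapes download root: {user_path}")
    return candidate
```

The key detail is resolving the path before the containment check, so traversal sequences are normalized away rather than compared as raw strings.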

GitHub Advisory Database
04

Google DeepMind wants to know if chatbots are just virtue signaling

researchsafety
Feb 18, 2026

Researchers at Google DeepMind are investigating whether chatbots display genuine moral reasoning or are simply mimicking responses (virtue signaling). While studies show that large language models (LLMs, AI systems trained on massive amounts of text data) can give morally sound advice, the models are unreliable in practice because they often flip their answers when questioned, change responses based on how questions are formatted, and show sensitivity to tiny changes like swapping option labels from 'Case 1' to '(A)'. The researchers propose developing more rigorous evaluation methods to test whether moral behavior in LLMs is actually robust or just performative.

Fix: The source proposes a new line of research to develop more rigorous techniques for evaluating moral competence in LLMs. This would include tests designed to push models to change their responses to moral questions to reveal if they lack robust moral reasoning, and tests presenting models with variations of common moral problems to check whether they produce rote responses or more nuanced ones. However, the source notes this is "more a wish list than a set of ready-made solutions" and does not describe implemented fixes or updates.

MIT Technology Review
05

Google’s AI music maker is coming to the Gemini app

industry
Feb 18, 2026

Google has added Lyria 3, an AI music generation model from DeepMind, to its Gemini chatbot app, allowing users to create 30-second music tracks by describing genres, moods, or providing images and videos as input. The feature is now available in beta across multiple languages globally to users aged 18 and older.

The Verge (AI)
06

Google adds music-generation capabilities to the Gemini app

industry
Feb 18, 2026

Google has added music generation to its Gemini app using DeepMind's Lyria 3 model, which lets users create 30-second songs by describing what they want. The feature includes safeguards like SynthID watermarks (digital markers that identify AI-generated content) and filters to prevent mimicking existing artists, plus the ability for users to upload tracks and ask Gemini whether they are AI-generated.

Fix: Google has implemented SynthID watermarks to identify AI-generated music and added filters to check outputs against existing content to prevent artist mimicry. The company is also adding capabilities within Gemini to identify AI-generated music, allowing users to upload tracks and ask if they are AI-generated.

TechCrunch
07

Kana emerges from stealth with $15M to build flexible AI agents for marketers

industry
Feb 18, 2026

Kana, a new marketing AI startup, has raised $15 million to build AI agents (software systems that can independently perform tasks) that help marketers with data analysis, campaign management, and audience targeting. The platform uses "loosely coupled" agents (modular AI components that work independently but can be connected together) that can be customized in real time and integrated into existing marketing software, while keeping humans involved to approve and adjust the AI's actions.

TechCrunch
08

Microsoft says Office bug exposed customers’ confidential emails to Copilot AI

securityprivacy
Feb 18, 2026

Microsoft discovered a bug that allowed Copilot (an AI chat feature in Office software) to read and summarize customers' confidential emails without permission for several weeks, even when data loss prevention policies (rules meant to block sensitive information from being sent to AI systems) were in place. The bug affected emails labeled as confidential and was tracked internally as CW1226324.

Fix: Microsoft said it began rolling out a fix for the bug earlier in February.

TechCrunch (Security)
09

OpenAI pushes into higher education as India seeks to scale AI skills

industry
Feb 18, 2026

OpenAI is partnering with six major Indian universities and academic institutions to integrate AI tools like ChatGPT into teaching and research, aiming to reach over 100,000 students, faculty, and staff within a year. The initiative focuses on embedding AI into core academic functions such as coding and research rather than just providing standalone tool access, and includes faculty training and responsible-use frameworks. This move reflects broader competition among AI companies to shape how AI is taught and adopted in India, one of the world's largest education systems and ChatGPT's second-largest user base after the U.S.

TechCrunch
10

CVE-2026-2654: A weakness has been identified in huggingface smolagents 1.24.0. Impacted are the functions requests.get/requests.post of the LocalPythonExecutor component

security
Feb 18, 2026

A vulnerability called server-side request forgery (SSRF, where an attacker tricks a server into making unwanted web requests) was found in Hugging Face's smolagents version 1.24.0, specifically in the LocalPythonExecutor component's requests.get and requests.post functions. An attacker can exploit this remotely, and the vulnerability has been publicly disclosed, though the vendor did not respond when contacted.
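A generic Python sketch of an SSRF guard for agent tools that fetch URLs (this is an illustrative mitigation pattern, not smolagents' code or an official patch): resolve the target host and reject private, loopback, link-local, and reserved addresses before making the request.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def check_url_for_ssrf(url: str) -> str:
    """Reject URLs whose host resolves to an internal address,
    so a tool-calling agent cannot be steered at internal services."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"blocked scheme: {parsed.scheme!r}")
    host = parsed.hostname
    if host is None:
        raise ValueError("URL has no host")
    # Check every address the host resolves to, not just the first.
    for info in socket.getaddrinfo(host, None):
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            raise ValueError(f"blocked address: {addr}")
    return url
```

Note that a resolve-then-fetch check can still be raced via DNS rebinding; a production guard would pin the vetted IP for the actual request.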

NVD/CVE Database