aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

to
Export CSV
3162 items

Cline CLI 2.3.0 Supply Chain Attack Installed OpenClaw on Developer Systems

highnews
security
Feb 20, 2026

Cline CLI version 2.3.0 was compromised in a supply chain attack (an attack on software before it reaches users) where an unauthorized party used a stolen npm publish token to add a postinstall script that automatically installed OpenClaw, an AI agent tool, on developer machines. The attack affected about 4,000 downloads over an eight-hour window on February 17, 2026, though the impact was considered low since OpenClaw itself is not malicious.

Fix: Cline maintainers released version 2.4.0 to fix the issue. Version 2.3.0 has been deprecated, the compromised token has been revoked, and the npm publishing mechanism was updated to support OpenID Connect (OIDC, a secure authentication standard) via GitHub Actions. Users are advised to update to the latest version, check their systems for unexpected OpenClaw installations, and remove it if not needed.

The Hacker News

OpenAI says 18 to 24-year-olds account for nearly 50% of ChatGPT usage in India

infonews
industry
Feb 20, 2026

OpenAI reports that users aged 18 to 24 make up nearly 50% of ChatGPT messages in India, with young Indians using the platform primarily for work tasks. Indian users particularly favor Codex (OpenAI's coding assistant), using it three times more than the global average, suggesting strong demand for AI tools that help with software development.

The OpenAI mafia: 18 startups founded by alumni

infonews
industry
Feb 20, 2026

OpenAI employees have founded at least 18 startups after leaving the company, creating what some call the 'OpenAI mafia' in Silicon Valley. Notable alumni-founded companies include Anthropic (a major rival that recently raised $30 billion), Adept AI Labs, Cresta, and Covariant, with some startups reaching billion-dollar valuations despite not yet launching products.

Urgent research needed to tackle AI threats, says Google AI boss

infonews
policysafety

PromptSpy Android Malware Abuses Gemini AI at Runtime for Persistence

mediumnews
securitysafety

10 Passwordless-Optionen für Unternehmen

infonews
security
Feb 19, 2026

This article discusses passwordless authentication, an alternative to traditional passwords that uses standards like FIDO2 and Passkeys (cryptographic keys stored on devices instead of passwords) to improve security and reduce administrative burden. The article explains that the FIDO Alliance manages these standards and lists ten commercial passwordless solutions from vendors like AuthID, Axiad, Beyond Identity, and CyberArk that offer features such as biometric authentication, risk-based evaluation of login attempts, and integration with existing identity management systems.

Nvidia is in talks to invest up to $30 billion in OpenAI, source says

infonews
industry
Feb 19, 2026

Nvidia is in talks to invest up to $30 billion in OpenAI as part of a funding round that could value the AI startup at $730 billion, separate from a previously announced $100 billion infrastructure agreement. This new investment is not tied to any specific deployment milestones, and the deal is still under negotiation with details subject to change.

Google’s new Gemini Pro model has record benchmark scores — again

infonews
industry
Feb 19, 2026

Google released Gemini Pro 3.1, a new large language model (LLM, an AI trained on vast amounts of text to understand and generate language), which achieved record scores on independent performance benchmarks like Humanity's Last Exam and APEX-Agents. The model is currently in preview and represents a major improvement over the previous Gemini 3 version, particularly for agentic work (tasks where the AI breaks down complex problems into multiple steps and executes them).

EFF’s Policy on LLM-Assisted Contributions to Our Open-Source Projects

infonews
policysafety

CVE-2025-49113: RoundCube Webmail Deserialization of Untrusted Data Vulnerability

infovulnerability
security
Feb 19, 2026
CVE-2025-49113EPSS: 90.4%🔥 Actively Exploited

CVE-2025-68461: RoundCube Webmail Cross-site Scripting Vulnerability

infovulnerability
security
Feb 19, 2026
CVE-2025-68461🔥 Actively Exploited

CVE-2026-26320: OpenClaw is a personal AI assistant. OpenClaw macOS desktop client registers the `openclaw://` URL scheme. For `openclaw

highvulnerability
security
Feb 19, 2026
CVE-2026-26320

OpenClaw is a personal AI assistant with a macOS desktop client that can be triggered through deep links (special URLs that open apps). In versions 2026.2.6 through 2026.2.13, attackers could hide malicious commands by padding messages with whitespace, so users would see only a harmless preview but the full hidden command would execute when they clicked 'Run'. This works because the app only displayed the first 240 characters in the confirmation dialog before executing the entire message.

PromptSpy is the first known Android malware to use generative AI at runtime

mediumnews
securitysafety

US dominance of agentic AI at the heart of new NIST initiative

infonews
policysafety

CVE-2026-26286: SillyTavern is a locally installed user interface that allows users to interact with text generation large language mode

highvulnerability
security
Feb 19, 2026
CVE-2026-26286

SillyTavern is a locally installed interface for interacting with text generation AI models and other AI tools. Versions before 1.16.0 had an SSRF vulnerability (server-side request forgery, where an attacker can make the server send requests to internal networks or services it shouldn't access), allowing authenticated users to read responses from internal services and private network resources through the asset download feature.

YouTube’s latest experiment brings its conversational AI tool to TVs

infonews
industry
Feb 19, 2026

YouTube is expanding its conversational AI tool to smart TVs, gaming consoles, and streaming devices, allowing users to ask questions about video content using an 'Ask' button or voice commands without pausing playback. The feature, currently available to select users over 18 in five languages, lets viewers get instant answers about things like recipe ingredients or song background information. This expansion reflects YouTube's growing dominance in TV viewing, with competitors like Amazon, Roku, and Netflix also developing their own conversational AI features for television.

GHSA-fh3f-q9qw-93j9: OpenClaw replaced a deprecated sandbox hash algorithm

mediumvulnerability
security
Feb 19, 2026

OpenClaw, an npm package, used SHA-1 (an outdated hashing algorithm with known weaknesses) to create identifiers for Docker and browser sandbox configurations. An attacker could exploit hash collisions (two different configurations producing the same hash) to trick the system into reusing the wrong sandbox, leading to cache poisoning (corrupting stored data) and unsafe sandbox reuse.

GHSA-xjw9-4gw8-4rqx: Microsoft Semantic Kernel InMemoryVectorStore filter functionality vulnerable to remote code execution

criticalvulnerability
security
Feb 19, 2026
CVE-2026-26030

Microsoft's Semantic Kernel Python SDK has an RCE vulnerability (remote code execution, where an attacker can run commands on a system they don't own) in the `InMemoryVectorStore` filter functionality, which allows attackers to execute arbitrary code. The vulnerability affects the library used for building AI applications with vector storage (a database that stores AI embeddings, which are numerical representations of data).

The AI security nightmare is here and it looks suspiciously like lobster

mediumnews
securitysafety

All the important news from the ongoing India AI Impact Summit

infonews
industry
Feb 19, 2026

India is hosting a major AI Impact Summit attracting executives from major AI companies and tech firms to drive investment and innovation in the country. The event showcases significant AI development activity, including new investments in Indian AI startups, partnerships between international AI companies and Indian firms, and announcements of local AI infrastructure projects, while also highlighting concerns about AI's potential impact on traditional IT services jobs.

Previous45 / 159Next
TechCrunch
TechCrunch
Feb 20, 2026

Google DeepMind's leader Sir Demis Hassabis told the BBC that more research is urgently needed to address AI threats, particularly the risk of bad actors misusing the technology and losing control of increasingly powerful autonomous systems (software that makes decisions without human input). While tech leaders and most countries at the AI Impact Summit called for stronger global governance and "smart regulation" of AI, the US rejected this approach, arguing that excessive rules would slow progress.

BBC Technology
Feb 20, 2026

PromptSpy is Android malware that uses Google's Gemini AI chatbot to maintain persistence on infected devices by sending UI information to Gemini, which then instructs the malware where to tap or swipe to add itself to recent apps. The malware also abuses Accessibility Services (a system feature that allows apps to interact with the device interface) to prevent users from uninstalling it by overlaying invisible blocks over removal buttons.

Fix: According to ESET researchers, victims can remove PromptSpy by rebooting the device into Safe Mode, where third-party apps are disabled and can be uninstalled normally.

SecurityWeek
CSO Online
CNBC Technology
TechCrunch
Feb 19, 2026

The Electronic Frontier Foundation (EFF) introduced a policy for open-source contributions that requires developers to understand any code they submit and to write comments and documentation themselves, even if they use LLMs (large language models, AI systems trained to generate human-like text) to help. While the EFF does not completely ban LLM-assisted code, they require disclosure of LLM use because AI-generated code can contain hidden bugs that scale poorly and create extra work for reviewers, especially in under-resourced teams.

Fix: The source explicitly states that contributors must disclose when they use LLM tools. The EFF's policy requires that: (1) contributors understand the code they submit, and (2) comments and documentation be authored by a human rather than generated by an LLM. No technical patch, update, or automated mitigation is discussed in the source.

EFF Deeplinks Blog

RoundCube Webmail has a deserialization of untrusted data vulnerability (a flaw where the program unsafely processes data from users, which can be exploited to run malicious code) in its settings upload feature because a URL parameter called _from is not properly checked. This allows authenticated users (those who have logged in) to execute remote code execution (run commands on the server without owning it), and it is currently being exploited by attackers in real-world attacks.

Fix: Apply security updates to RoundCube Webmail version 1.6.11 or version 1.5.10, according to vendor instructions at https://roundcube.net/news/2025/06/01/security-updates-1.6.11-and-1.5.10. Alternatively, follow applicable BOD 22-01 guidance for cloud services or discontinue use of the product if mitigations are unavailable.

CISA Known Exploited Vulnerabilities

RoundCube Webmail has a cross-site scripting vulnerability (XSS, a type of attack where malicious code is injected into a webpage to run in users' browsers) that can be triggered through the animate tag in SVG documents. This vulnerability is currently being actively exploited by attackers in the wild. Organizations using RoundCube Webmail need to take action by the March 13, 2026 deadline.

Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. Security updates are available in versions 1.6.12 and 1.5.12 (see vendor release notes at https://roundcube.net/news/2025/12/13/security-updates-1.6.12-and-1.5.12).

CISA Known Exploited Vulnerabilities

Fix: The issue is fixed in version 2026.2.14. The source also mentions mitigations: do not approve unexpected 'Run OpenClaw agent?' prompts triggered while browsing untrusted websites, and use deep links only with a valid authentication key for trusted personal automations.

NVD/CVE Database
Feb 19, 2026

Researchers discovered PromptSpy, the first known Android malware that uses generative AI (specifically Google's Gemini model) during its operation to help it persist on infected devices by adapting how it locks itself in the Recent Apps list across different Android manufacturers. Beyond this AI feature, PromptSpy functions as spyware with a VNC module (remote access tool) that allows attackers to view and control the device, intercept passwords, record screens, and capture installed apps. The malware also uses invisible UI overlays to block users from uninstalling it or disabling its permissions.

Fix: According to ESET, victims must reboot into Android Safe Mode so that third-party apps are disabled and cannot block the malware's uninstall.

BleepingComputer
Feb 19, 2026

NIST announced the AI Agent Standards Initiative to develop standards and safeguards for agentic AI (autonomous AI systems that can perform tasks independently), with the goal of building public confidence and ensuring safe adoption. The initiative faces criticism for moving too slowly, as real-world security incidents involving agentic AI (like the EchoLeak vulnerability in Microsoft 365 Copilot and the OpenClaw agent that can let attackers access user data) are already occurring faster than standards can be developed.

CSO Online

Fix: The vulnerability has been patched in version 1.16.0 by introducing a whitelist domain check for asset download requests. It can be reviewed and customized by editing the `whitelistImportDomains` array in the `config.yaml` file.

NVD/CVE Database
TechCrunch

Fix: Update to version 2026.2.15 or later. The fix replaces SHA-1 with SHA-256 (a stronger hashing algorithm with better collision resistance) for generating these sandbox identifiers.

GitHub Advisory Database

Fix: Upgrade to python-1.39.4 or higher. As a temporary workaround, avoid using `InMemoryVectorStore` for production scenarios.

GitHub Advisory Database
Feb 19, 2026

A hacker exploited a vulnerability in Cline, an open-source AI coding agent, to trick it into installing OpenClaw (a viral AI agent that can perform autonomous actions) across many systems. The vulnerability allowed attackers to use prompt injection (hidden malicious instructions embedded in input) to make Claude, the AI powering Cline, execute unintended commands, highlighting growing security risks as more people deploy autonomous software.

The Verge (AI)
TechCrunch