aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

to
Export CSV
3162 items

Hackers can turn Grok, Copilot into covert command-and-control channels, researchers warn

mediumnews
security
Feb 19, 2026

Researchers have discovered that attackers can abuse web-based AI assistants like Grok and Microsoft Copilot to create command-and-control channels (hidden communication paths between malware and attackers), hiding malicious traffic within normal AI service traffic that organizations typically allow through their networks without inspection. This technique works because many companies grant unrestricted access to popular AI platforms by default, allowing malware to receive instructions through the AI assistants while remaining undetected.

Fix: Security leaders should apply governance discipline similar to high-risk SaaS (software-as-a-service, cloud-based software) platforms. Specifically, organizations should start by creating a comprehensive inventory of all AI tools in use and establishing a clear policy framework for approving and enabling them. The source text is incomplete but indicates that implementing AI-specific controls was being recommended; however, the full recommendation is cut off and not available in the provided content.

CSO Online

Dueling PACs take center stage in midterm elections over AI regulation

inforegulatory
policy
Feb 19, 2026

Political action committees (PACs, organizations that raise money to support political candidates) backed by AI companies are spending millions of dollars to influence elections on AI regulation policy. Jobs and Democracy PAC, supported by Anthropic, is running ads for candidates who favor stronger AI regulation like New York's RAISE Act (which requires large AI developers to publish safety protocols and report serious misuse), while competing PACs backed by venture capitalists and other AI companies are running ads against these candidates.

Chinese tech companies progress 'remarkable,' OpenAI's Altman tells CNBC

infonews
industry
Feb 19, 2026

OpenAI's Sam Altman told CNBC that Chinese tech companies are making "remarkable" progress in developing artificial general intelligence (AGI, where AI systems match human capabilities), with some companies approaching the technological frontier while others still lag behind. OpenAI is exploring new revenue streams, including advertising within ChatGPT, with plans to initially test ads in the U.S. before expanding to other markets. The company remains focused on rapid growth rather than immediate profitability.

CVE-2026-25338: Missing Authorization vulnerability in Ays Pro AI ChatBot with ChatGPT and Content Generator by AYS ays-chatgpt-assistan

mediumvulnerability
security
Feb 19, 2026
CVE-2026-25338

CVE-2026-25338 is a missing authorization vulnerability in the Ays Pro AI ChatBot plugin (versions up to 2.7.4), meaning the software fails to properly check whether users have permission to access certain features. This security flaw allows attackers to exploit incorrectly configured access controls (the rules that decide who can do what in the software).

What it takes to make agentic AI work in retail

infonews
industry
Feb 19, 2026

This podcast discusses how a large US retail company uses agentic AI (AI systems that can take independent actions to complete tasks) across their software development process, including validating requirements, creating and reviewing test cases, and resolving issues faster. The organization emphasizes maintaining human oversight, strict governance rules, and measurable quality standards while deploying these AI agents.

Macron defends EU AI rules and vows crackdown on child ‘digital abuse’

infonews
policysafety

OpenAI taps Tata for 100MW AI data center capacity in India, eyes 1GW

infonews
industry
Feb 19, 2026

OpenAI has partnered with India's Tata Group to build AI data center capacity starting with 100 megawatts and scaling to 1 gigawatt, allowing OpenAI to run advanced models within India while meeting local data residency and compliance requirements. The partnership includes deploying ChatGPT Enterprise across Tata's workforce and using OpenAI's tools for AI-native software development. This expansion supports OpenAI's growth in India, where it has over 100 million weekly users, and helps enterprises that must process sensitive data locally.

OpenAI deepens India push with Pine Labs fintech partnership

infonews
industry
Feb 18, 2026

OpenAI has partnered with Pine Labs, an Indian fintech company, to integrate OpenAI's APIs (application programming interfaces, which are software tools that let companies connect AI into their existing systems) into Pine Labs' payments and commerce platform. The partnership aims to automate financial workflows like settlement, invoicing, and reconciliation, with Pine Labs already using AI internally to reduce daily settlement processing from hours to minutes. OpenAI is expanding its presence in India beyond ChatGPT by embedding its technology into enterprise and infrastructure systems across the country's large developer base.

GHSA-xxvh-5hwj-42pp: OpenClaw's sandbox config hash sorted primitive arrays and suppressed needed container recreation

mediumvulnerability
security
Feb 18, 2026
CVE-2026-27007

OpenClaw's sandbox configuration had a bug where the `normalizeForHash` function (a process that converts configuration settings into a unique identifier) was sorting arrays containing simple values, causing different array orders to produce identical hashes. This meant that sandbox containers (isolated software environments) weren't being recreated when only the order of configuration settings like DNS or file bindings changed, potentially leaving stale containers in use.

GHSA-6hf3-mhgc-cm65: OpenClaw session tool visibility hardening and Telegram webhook secret fallback

mediumvulnerability
security
Feb 18, 2026
CVE-2026-27004

OpenClaw, a session management tool, had a visibility issue in shared multi-user environments where session tools (like `sessions_list` and `sessions_history`) could give users access to other people's session data when they shouldn't have it. Additionally, Telegram webhook mode didn't properly use account-level secret settings as a fallback. The risk is mainly in environments where multiple people share the same agent and don't fully trust each other.

GHSA-chf7-jq6g-qrwv: OpenClaw: Telegram bot token exposure via logs

mediumvulnerability
security
Feb 18, 2026
CVE-2026-27003

OpenClaw, an npm package, had a vulnerability where Telegram bot tokens (the credentials used to access Telegram's bot API) could leak into logs and error messages because the package didn't hide them when logging. An attacker who obtained a leaked token could impersonate the bot and take control of its API access.

GHSA-w235-x559-36mg: OpenClaw: Docker container escape via unvalidated bind mount config injection

highvulnerability
security
Feb 18, 2026
CVE-2026-27002

OpenClaw, a Docker sandbox tool, has a configuration injection vulnerability that could let attackers escape the container (a sandboxed computing environment) or access sensitive host data by injecting dangerous Docker options like bind mounts (attaching host directories into the container) or disabling security profiles. The issue affects versions 2026.2.14 and earlier.

GHSA-2qj5-gwg2-xwc4: OpenClaw: Unsanitized CWD path injection into LLM prompts

highvulnerability
security
Feb 18, 2026
CVE-2026-27001

OpenClaw, an AI agent tool, had a vulnerability where the current working directory (the folder path where the software is running) was inserted into the AI's instructions without cleaning it first. An attacker could use special characters in folder names, like line breaks or hidden Unicode characters, to break the instruction structure and inject malicious commands, potentially causing the AI to misuse its tools or leak sensitive information.

GHSA-5mx2-w598-339m: RediSearch Query Injection in @langchain/langgraph-checkpoint-redis

mediumvulnerability
security
Feb 18, 2026
CVE-2026-27022

A query injection vulnerability exists in the `@langchain/langgraph-checkpoint-redis` package, where user-provided filter values are not properly escaped when constructing RediSearch queries (a search system built on Redis). Attackers can inject RediSearch syntax characters (like the OR operator `|`) into filter values to bypass thread isolation controls and access checkpoint data from other users or threads they shouldn't be able to see.

Tech firms must remove ‘revenge porn’ in 48 hours or risk being blocked, says Starmer

infonews
policysafety

GHSA-w52v-v783-gw97: Ghost has a SQL injection in Content API

criticalvulnerability
security
Feb 18, 2026
CVE-2026-26980

Ghost's Content API had a SQL injection vulnerability (a flaw where attackers can insert malicious database commands into user input) that let unauthenticated attackers read any data from the database. The vulnerability affected Ghost versions 3.24.0 through 6.19.0.

Scam Abuses Gemini Chatbots to Convince People to Buy Fake Crypto

infonews
securitysafety

CVE-2025-12343: A flaw was found in FFmpeg’s TensorFlow backend within the libavfilter/dnn_backend_tf.c source file. The issue occurs in

lowvulnerability
security
Feb 18, 2026
CVE-2025-12343

FFmpeg's TensorFlow backend has a bug where a task object gets freed twice in certain error situations, causing a double-free condition (a memory safety error where the same memory is released multiple times). This can crash FFmpeg or programs using it when processing TensorFlow-based DNN models (deep neural network models), resulting in a denial-of-service attack, but it does not allow attackers to run arbitrary code.

Is your startup’s check engine light on? Google Cloud’s VP explains what to do

infonews
industry
Feb 18, 2026

This article discusses challenges startup founders face when building AI applications on cloud platforms, including managing costs, making early infrastructure decisions, and scaling beyond free trial periods. Google Cloud's VP of startups explains how founders can balance the speed needed to show progress with the long-term consequences of their technology choices.

AI platforms can be abused for stealthy malware communication

highnews
securitysafety
Previous47 / 159Next
CNBC Technology
CNBC Technology
NVD/CVE Database
MIT Technology Review
Feb 19, 2026

French President Emmanuel Macron defended Europe's AI regulations and pledged stronger protections for children from digital abuse, citing concerns about AI chatbots being misused to create harmful content involving minors and about a small number of companies controlling most AI technology. His comments came after global criticism of Elon Musk's Grok chatbot being used to generate tens of thousands of sexualized images of children.

The Guardian Technology
TechCrunch
TechCrunch

Fix: Update OpenClaw to version 2026.2.15 or later. The fix preserves array ordering during hash normalization, so only object key ordering remains normalized. This ensures that configuration changes affecting array order are properly detected and containers are recreated as needed.

GitHub Advisory Database

Fix: Update to OpenClaw version 2026.2.15 or later. The fix implements: (1) Add and enforce `tools.sessions.visibility` configuration with options `self`, `tree`, `agent`, or `all`, defaulting to `tree` to limit what sessions users can see. (2) Keep sandbox clamping behavior to restrict sandboxed runs to spawned/session-tree visibility. (3) Resolve Telegram webhook secret from account config fallback in monitor webhook startup. See commit `c6c53437f7da033b94a01d492e904974e7bda74c`.

GitHub Advisory Database

Fix: Upgrade to openclaw >= 2026.2.15 when released. Additionally, rotate the Telegram bot token if it may have been exposed.

GitHub Advisory Database

Fix: Upgrade to OpenClaw version 2026.2.15 or later. The fix includes runtime enforcement when building Docker arguments, validation of dangerous settings like `network=host` and `unconfined` security profiles, and security audits to detect dangerous sandbox Docker configurations.

GitHub Advisory Database

Fix: Update to OpenClaw version 2026.2.15 or later. The fix sanitizes the workspace path by stripping Unicode control/format characters and explicit line/paragraph separators before embedding it into any LLM prompt output, and applies the same sanitization during workspace path resolution as an additional defensive measure.

GitHub Advisory Database

Fix: The 1.0.2 patch introduces an `escapeRediSearchTagValue()` function that properly escapes all RediSearch special characters (- . < > { } [ ] " ' : ; ! @ # $ % ^ & * ( ) + = ~ | \ ? /) by prefixing them with backslashes, and applies this escaping to all filter keys used in query construction.

GitHub Advisory Database
Feb 18, 2026

The UK government plans to require technology companies to remove deepfake nudes and revenge porn (nonconsensual intimate images) within 48 hours of being flagged, or face fines up to 10% of their revenue or being blocked in the UK. Ofcom (the UK media regulator) will enforce these rules, and victims can report images directly to companies or to Ofcom, which will alert multiple platforms at once. The government will also explore using digital watermarks to automatically detect and flag reposted nonconsensual images, and create new guidance for internet providers to block sites that host such content.

Fix: Companies will be legally required to remove nonconsensual intimate images no more than 48 hours after being flagged. Ofcom will explore ways to add digital watermarks to flagged images to allow automatic detection when reposted. Victims can report images either directly to tech firms or to Ofcom (which will trigger alerts across multiple platforms). Internet providers will receive new guidance on blocking hosting for sites specializing in nonconsensual real or AI-generated explicit content. Platforms already use hash matching (a process that assigns videos a unique digital signature) for child sexual abuse content, and this same technology could be applied to nonconsensual intimate imagery.

The Guardian Technology

Fix: Update to Ghost v6.19.1, which contains the fix. As a temporary workaround, a reverse proxy or WAF (web application firewall, a security tool that filters incoming requests) rule can block Content API requests containing `slug%3A%5B` or `slug:[` in the query string filter parameter, though this may break legitimate slug filter functionality.

GitHub Advisory Database
Feb 18, 2026

Scammers created a fake cryptocurrency presale website for a non-existent "Google Coin" that uses an AI chatbot (similar to Google's Gemini) to persuade visitors to buy the fake digital currency, with payments going directly to the attackers. The chatbot makes a convincing sales pitch to trick people into sending money to the scammers.

Dark Reading
NVD/CVE Database
TechCrunch
Feb 18, 2026

Researchers at Check Point discovered that AI assistants with web browsing abilities, like Grok and Microsoft Copilot, can be abused as hidden communication relays for malware. Attackers can instruct these AI services to fetch attacker-controlled URLs and relay commands back to malware, creating a stealthy two-way communication channel (C2, or command-and-control) that bypasses normal security detection because the AI platforms are trusted by security tools. The proof-of-concept attack works without requiring API keys or accounts, making it harder for defenders to block.

BleepingComputer