aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch, built by an information systems security researcher, helps security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,727 · Last 24 hours: 42 · Last 7 days: 181
Daily Briefing: Wednesday, April 1, 2026

Attack Surface Management Tools Now Using AI Agents: A new buying guide highlights that Cyber Asset Attack Surface Management (CAASM) and External Attack Surface Management (EASM) tools are increasingly using agentic AI (AI systems that can take independent actions) to automatically find and reduce security risks across a company's digital resources.

Latest Intel

01

Hackers can turn Grok, Copilot into covert command-and-control channels, researchers warn

security
Feb 19, 2026

Researchers have discovered that attackers can abuse web-based AI assistants like Grok and Microsoft Copilot to create command-and-control channels (hidden communication paths between malware and attackers), hiding malicious traffic within normal AI service traffic that organizations typically allow through their networks without inspection. This technique works because many companies grant unrestricted access to popular AI platforms by default, allowing malware to receive instructions through the AI assistants while remaining undetected.
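The blanket-allow pattern described above can be countered at the egress layer. A minimal sketch, assuming a hypothetical policy function and an illustrative (not exhaustive) domain list, of routing AI-assistant traffic to inspection instead of exempting it:

```python
# Illustrative sketch: flag outbound requests to AI-assistant domains for
# TLS inspection instead of blanket-allowing them. The domain list below is
# an example assumption, not a vetted inventory of AI services.
AI_ASSISTANT_DOMAINS = {"grok.com", "copilot.microsoft.com", "chatgpt.com"}

def egress_policy(host: str, inspected: bool) -> str:
    """Return 'allow' or 'inspect' for an outbound connection to `host`."""
    matches_ai = any(
        host == d or host.endswith("." + d) for d in AI_ASSISTANT_DOMAINS
    )
    if matches_ai and not inspected:
        # C2 abuse hides inside trusted AI traffic, so it must be inspected.
        return "inspect"
    return "allow"
```

The design point is simply that "popular AI platform" should not imply "exempt from inspection"; the same traffic path the malware relies on is the one to instrument.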

Critical This Week (5 issues)

critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162 · NVD/CVE Database · Mar 31, 2026

Fix (for the AI-assistant command-and-control technique above): Security leaders should apply governance discipline similar to that used for high-risk SaaS (software-as-a-service, cloud-based software) platforms. Specifically, start by creating a comprehensive inventory of all AI tools in use and establishing a clear policy framework for approving and enabling them. The source article also recommends implementing AI-specific controls, but the full recommendation is truncated in the available text.

CSO Online
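The inventory-and-approval step in the fix above can be sketched as a simple shadow-AI check. The tool names here are hypothetical examples, not a recommended approval list:

```python
# Hypothetical sketch: compare AI tools observed on the network against an
# approved inventory and surface unapproved ("shadow AI") tools for review.
APPROVED_AI_TOOLS = {"chatgpt-enterprise", "copilot"}  # example names only

def shadow_ai(observed_tools):
    """Return the sorted list of observed AI tools with no approval record."""
    return sorted(set(observed_tools) - APPROVED_AI_TOOLS)
```

In practice the observed-tools feed would come from egress logs or CASB telemetry; the point is that approval is an explicit allowlist decision, not a default.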
02

Dueling PACs take center stage in midterm elections over AI regulation

policy
Feb 19, 2026

Political action committees (PACs, organizations that raise money to support political candidates) backed by AI companies are spending millions of dollars to influence elections on AI regulation policy. Jobs and Democracy PAC, supported by Anthropic, is running ads for candidates who favor stronger AI regulation like New York's RAISE Act (which requires large AI developers to publish safety protocols and report serious misuse), while competing PACs backed by venture capitalists and other AI companies are running ads against these candidates.

CNBC Technology
03

Chinese tech companies progress 'remarkable,' OpenAI's Altman tells CNBC

industry
Feb 19, 2026

OpenAI's Sam Altman told CNBC that Chinese tech companies are making "remarkable" progress in developing artificial general intelligence (AGI, where AI systems match human capabilities), with some companies approaching the technological frontier while others still lag behind. OpenAI is exploring new revenue streams, including advertising within ChatGPT, with plans to initially test ads in the U.S. before expanding to other markets. The company remains focused on rapid growth rather than immediate profitability.

CNBC Technology
04

CVE-2026-25338: Missing Authorization vulnerability in Ays Pro AI ChatBot with ChatGPT and Content Generator by AYS ays-chatgpt-assistan

security
Feb 19, 2026

CVE-2026-25338 is a missing authorization vulnerability in the Ays Pro AI ChatBot plugin (versions up to 2.7.4), meaning the software fails to properly check whether users have permission to access certain features. This security flaw allows attackers to exploit incorrectly configured access controls (the rules that decide who can do what in the software).

NVD/CVE Database
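As a generic illustration of the missing-authorization flaw class (not the plugin's actual code), the fix amounts to adding a capability check before the privileged action. The capability name follows WordPress conventions, but the function itself is hypothetical:

```python
# Generic sketch of a missing-authorization fix: a privileged action must
# verify the caller's capability before running. In the vulnerable pattern,
# the check below is simply absent.
class AuthorizationError(Exception):
    pass

def update_chatbot_settings(user_capabilities, new_settings):
    """Apply settings only if the caller holds the required capability."""
    if "manage_options" not in user_capabilities:
        raise AuthorizationError("caller may not change chatbot settings")
    return {"status": "updated", **new_settings}
```

The lesson generalizes beyond WordPress: every state-changing endpoint needs its own server-side authorization check, because hiding a button in the UI is not access control.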
05

What it takes to make agentic AI work in retail

industry
Feb 19, 2026

This podcast discusses how a large US retail company uses agentic AI (AI systems that can take independent actions to complete tasks) across their software development process, including validating requirements, creating and reviewing test cases, and resolving issues faster. The organization emphasizes maintaining human oversight, strict governance rules, and measurable quality standards while deploying these AI agents.

MIT Technology Review
06

Macron defends EU AI rules and vows crackdown on child ‘digital abuse’

policy · safety
Feb 19, 2026

French President Emmanuel Macron defended Europe's AI regulations and pledged stronger protections for children from digital abuse, citing concerns about AI chatbots being misused to create harmful content involving minors and about a small number of companies controlling most AI technology. His comments came after global criticism of Elon Musk's Grok chatbot being used to generate tens of thousands of sexualized images of children.

The Guardian Technology
07

OpenAI taps Tata for 100MW AI data center capacity in India, eyes 1GW

industry
Feb 19, 2026

OpenAI has partnered with India's Tata Group to build AI data center capacity starting with 100 megawatts and scaling to 1 gigawatt, allowing OpenAI to run advanced models within India while meeting local data residency and compliance requirements. The partnership includes deploying ChatGPT Enterprise across Tata's workforce and using OpenAI's tools for AI-native software development. This expansion supports OpenAI's growth in India, where it has over 100 million weekly users, and helps enterprises that must process sensitive data locally.

TechCrunch
08

OpenAI deepens India push with Pine Labs fintech partnership

industry
Feb 18, 2026

OpenAI has partnered with Pine Labs, an Indian fintech company, to integrate OpenAI's APIs (application programming interfaces, which are software tools that let companies connect AI into their existing systems) into Pine Labs' payments and commerce platform. The partnership aims to automate financial workflows like settlement, invoicing, and reconciliation, with Pine Labs already using AI internally to reduce daily settlement processing from hours to minutes. OpenAI is expanding its presence in India beyond ChatGPT by embedding its technology into enterprise and infrastructure systems across the country's large developer base.

TechCrunch
09

GHSA-xxvh-5hwj-42pp: OpenClaw's sandbox config hash sorted primitive arrays and suppressed needed container recreation

security
Feb 18, 2026

OpenClaw's sandbox configuration had a bug where the `normalizeForHash` function (a process that converts configuration settings into a unique identifier) was sorting arrays containing simple values, causing different array orders to produce identical hashes. This meant that sandbox containers (isolated software environments) weren't being recreated when only the order of configuration settings like DNS or file bindings changed, potentially leaving stale containers in use.

Fix: Update OpenClaw to version 2026.2.15 or later. The fix preserves array ordering during hash normalization, so only object key ordering remains normalized. This ensures that configuration changes affecting array order are properly detected and containers are recreated as needed.

GitHub Advisory Database
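The hashing bug described above is easy to reproduce in miniature. A hedged sketch (not OpenClaw's actual `normalizeForHash` code) showing how sorting primitive arrays collapses distinct configurations into one hash, and how preserving array order restores the distinction:

```python
import hashlib
import json

def config_hash(cfg, sort_arrays):
    """Hash a config dict; sort_arrays=True reproduces the buggy behavior."""
    def norm(v):
        if isinstance(v, dict):
            # Sorting dict keys is correct: key order carries no meaning.
            return {k: norm(v[k]) for k in sorted(v)}
        if isinstance(v, list):
            items = [norm(x) for x in v]
            # Bug: sorting the list erases a meaningful ordering difference.
            return sorted(items) if sort_arrays else items
        return v
    return hashlib.sha256(json.dumps(norm(cfg)).encode()).hexdigest()

a = {"dns": ["1.1.1.1", "8.8.8.8"]}
b = {"dns": ["8.8.8.8", "1.1.1.1"]}
# Buggy mode: a and b hash identically, so the container is never recreated.
# Fixed mode: the reordered DNS list produces a different hash.
```

This matches the advisory's description of the fix: only object key ordering remains normalized, while array ordering now feeds into the hash.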
10

GHSA-6hf3-mhgc-cm65: OpenClaw session tool visibility hardening and Telegram webhook secret fallback

security
Feb 18, 2026

OpenClaw, a session management tool, had a visibility issue in shared multi-user environments where session tools (like `sessions_list` and `sessions_history`) could give users access to other people's session data when they shouldn't have it. Additionally, Telegram webhook mode didn't properly use account-level secret settings as a fallback. The risk is mainly in environments where multiple people share the same agent and don't fully trust each other.

Fix: Update to OpenClaw version 2026.2.15 or later. The fix implements: (1) Add and enforce `tools.sessions.visibility` configuration with options `self`, `tree`, `agent`, or `all`, defaulting to `tree` to limit what sessions users can see. (2) Keep sandbox clamping behavior to restrict sandboxed runs to spawned/session-tree visibility. (3) Resolve Telegram webhook secret from account config fallback in monitor webhook startup. See commit `c6c53437f7da033b94a01d492e904974e7bda74c`.

GitHub Advisory Database
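The visibility modes above can be sketched with a hypothetical session model (an owner plus a parent pointer); the `agent` mode is omitted here because its semantics aren't described in the summary:

```python
# Hypothetical session model: each session records its id, owner, and the
# parent session that spawned it (None for a root session). The mode
# semantics are assumptions based on the advisory's option names.
def visible_sessions(sessions, caller, mode):
    """Filter the sessions a caller may see under a visibility mode."""
    if mode == "all":
        return list(sessions)
    if mode == "self":
        return [s for s in sessions if s["owner"] == caller]
    if mode == "tree":
        # Own sessions plus anything spawned (transitively) from them.
        by_id = {s["id"]: s for s in sessions}
        def root_owner(s):
            while s.get("parent") is not None:
                s = by_id[s["parent"]]
            return s["owner"]
        return [s for s in sessions if root_owner(s) == caller]
    raise ValueError(f"unsupported visibility mode in this sketch: {mode}")
```

Defaulting to `tree` (as the fix does) keeps spawned sub-sessions visible to their originator while hiding other users' trees, which is the sensible baseline for shared multi-user deployments.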
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026

critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026

critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026

critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026