aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,718 · Last 24 hours: 39 · Last 7 days: 174
Daily Briefing: Tuesday, March 31, 2026

OpenAI Closes Record $122 Billion Funding Round: OpenAI raised $122 billion at an $852 billion valuation with backing from SoftBank, Amazon, and Nvidia, now serving 900 million weekly users and generating $2 billion monthly revenue as it prepares for a potential IPO despite not yet being profitable.


Multiple Critical FastGPT Vulnerabilities Disclosed: FastGPT versions before 4.14.9.5 contain three high-severity flaws including CVE-2026-34162 (unauthenticated proxy endpoint allowing unauthorized server-side requests), CVE-2026-34163 (SSRF vulnerability letting attackers scan internal networks and access cloud metadata), and issues with MCP tools endpoints that accept user URLs without validation.
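SSRF flaws like the FastGPT ones above stem from forwarding user-supplied URLs without checking where they resolve. A minimal sketch of the missing guard (an illustrative pattern, not FastGPT's actual patch; the function name is ours):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_outbound_url(url: str) -> bool:
    """Reject URLs that resolve to private, loopback, or link-local
    addresses -- the classic SSRF guard missing from endpoints that
    accept user URLs without validation."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve every address the hostname maps to (A and AAAA records).
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # Block RFC 1918 ranges, 127.0.0.0/8, 169.254.0.0/16 (cloud
        # metadata lives at 169.254.169.254), and other reserved space.
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True
```

Resolving first and then checking the resulting addresses also defeats DNS tricks where a harmless-looking hostname points at an internal IP.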


Latest Intel

01

Nvidia is in talks to invest up to $30 billion in OpenAI, source says

industry
Feb 19, 2026

Nvidia is in talks to invest up to $30 billion in OpenAI as part of a funding round that could value the AI startup at $730 billion, separate from a previously announced $100 billion infrastructure agreement. This new investment is not tied to any specific deployment milestones, and the deal is still under negotiation with details subject to change.

Critical This Week (5 issues)
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162 · NVD/CVE Database · Mar 31, 2026

Claude SDK Filesystem Sandbox Escapes: Both TypeScript (CVE-2026-34451) and Python (CVE-2026-34452) versions of Claude SDK had vulnerabilities in their filesystem memory tools where attackers could use prompt injection or symlinks to access files outside intended sandbox directories, potentially reading or modifying sensitive data they shouldn't access.
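Symlink escapes of the kind described above are usually fixed by resolving paths before the containment check. A generic sketch of that pattern (our own illustration, not the Claude SDK's actual fix):

```python
import os

def resolve_in_sandbox(sandbox_root: str, user_path: str) -> str:
    """Resolve a requested path and verify it stays inside the sandbox.
    os.path.realpath follows symlinks, so a link pointing outside the
    sandbox is caught after resolution rather than slipping through a
    naive string-prefix check."""
    root = os.path.realpath(sandbox_root)
    target = os.path.realpath(os.path.join(root, user_path))
    if os.path.commonpath([root, target]) != root:
        raise PermissionError(f"path escapes sandbox: {user_path}")
    return target
```

Checking with `os.path.commonpath` rather than `str.startswith` also avoids the `/sandbox` vs `/sandbox-evil` prefix confusion.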


Axios npm Supply Chain Attack Impacts Millions: Attackers compromised the npm account of Axios' lead maintainer and published malicious versions containing a remote access trojan (malware that gives attackers control over infected systems). The library is downloaded 100 million times per week and used in 80% of cloud environments; the malicious versions were detected and removed within hours.


Claude AI Discovers RCE Bugs in Vim and Emacs: Claude AI helped identify remote code execution vulnerabilities (where attackers can run commands on systems they don't own) in Vim and GNU Emacs text editors that trigger simply by opening a malicious file, exploiting modeline handling in Vim and automatic Git operations in Emacs.

CNBC Technology
02

Google’s new Gemini Pro model has record benchmark scores — again

industry
Feb 19, 2026

Google released Gemini Pro 3.1, a new large language model (LLM, an AI trained on vast amounts of text to understand and generate language), which achieved record scores on independent performance benchmarks like Humanity's Last Exam and APEX-Agents. The model is currently in preview and represents a major improvement over the previous Gemini 3 version, particularly for agentic work (tasks where the AI breaks down complex problems into multiple steps and executes them).

TechCrunch
03

EFF’s Policy on LLM-Assisted Contributions to Our Open-Source Projects

policy · safety
Feb 19, 2026

The Electronic Frontier Foundation (EFF) introduced a policy for open-source contributions that requires developers to understand any code they submit and to write comments and documentation themselves, even if they use LLMs (large language models, AI systems trained to generate human-like text) to help. While the EFF does not completely ban LLM-assisted code, they require disclosure of LLM use because AI-generated code can contain hidden bugs that scale poorly and create extra work for reviewers, especially in under-resourced teams.

Fix: No technical patch applies; the policy itself is the mitigation. Contributors must disclose when they use LLM tools, must understand the code they submit, and must author comments and documentation themselves rather than generating them with an LLM.

EFF Deeplinks Blog
04

CVE-2026-26320: OpenClaw is a personal AI assistant. OpenClaw macOS desktop client registers the `openclaw://` URL scheme. For `openclaw

security
Feb 19, 2026

OpenClaw is a personal AI assistant with a macOS desktop client that can be triggered through deep links (special URLs that open apps). In versions 2026.2.6 through 2026.2.13, attackers could hide malicious commands by padding messages with whitespace, so users would see only a harmless preview but the full hidden command would execute when they clicked 'Run'. This works because the app only displayed the first 240 characters in the confirmation dialog before executing the entire message.

Fix: The issue is fixed in version 2026.2.14. The source also mentions mitigations: do not approve unexpected 'Run OpenClaw agent?' prompts triggered while browsing untrusted websites, and use deep links only with a valid authentication key for trusted personal automations.

NVD/CVE Database
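The OpenClaw dialog bug above is a general UI pitfall: truncating a message for display while executing it in full. A minimal sketch of the flaw and one possible hardening (names and the exact mitigation are our assumptions, not OpenClaw's code):

```python
PREVIEW_LIMIT = 240  # characters shown in the confirmation dialog

def vulnerable_preview(message: str) -> str:
    # Flawed pattern: truncate for display, then execute the FULL message.
    # Padding the first 240 chars with whitespace hides everything after.
    return message[:PREVIEW_LIMIT]

def safer_preview(message: str) -> str:
    # Collapse whitespace runs before truncating, and flag any remainder,
    # so padding can no longer push a payload out of the visible window.
    collapsed = " ".join(message.split())
    if len(collapsed) > PREVIEW_LIMIT:
        return collapsed[:PREVIEW_LIMIT] + " [...truncated]"
    return collapsed
```

With the flawed version, `"summarize my inbox" + " " * 300 + "&& curl ..."` previews as the benign text alone; the collapsed preview keeps the payload visible.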
05

PromptSpy is the first known Android malware to use generative AI at runtime

security · safety
Feb 19, 2026

Researchers discovered PromptSpy, the first known Android malware that uses generative AI (specifically Google's Gemini model) during its operation to help it persist on infected devices by adapting how it locks itself in the Recent Apps list across different Android manufacturers. Beyond this AI feature, PromptSpy functions as spyware with a VNC module (remote access tool) that allows attackers to view and control the device, intercept passwords, record screens, and capture installed apps. The malware also uses invisible UI overlays to block users from uninstalling it or disabling its permissions.

Fix: According to ESET, victims must reboot into Android Safe Mode so that third-party apps are disabled and cannot block the malware's uninstall.

BleepingComputer
06

US dominance of agentic AI at the heart of new NIST initiative

policy · safety
Feb 19, 2026

NIST announced the AI Agent Standards Initiative to develop standards and safeguards for agentic AI (autonomous AI systems that can perform tasks independently), with the goal of building public confidence and ensuring safe adoption. The initiative faces criticism for moving too slowly, as real-world security incidents involving agentic AI (like the EchoLeak vulnerability in Microsoft 365 Copilot and the OpenClaw agent that can let attackers access user data) are already occurring faster than standards can be developed.

CSO Online
07

CVE-2026-26286: SillyTavern is a locally installed user interface that allows users to interact with text generation large language mode

security
Feb 19, 2026

SillyTavern is a locally installed interface for interacting with text generation AI models and other AI tools. Versions before 1.16.0 had an SSRF vulnerability (server-side request forgery, where an attacker can make the server send requests to internal networks or services it shouldn't access), allowing authenticated users to read responses from internal services and private network resources through the asset download feature.

Fix: The vulnerability has been patched in version 1.16.0 by introducing a whitelist domain check for asset download requests. It can be reviewed and customized by editing the `whitelistImportDomains` array in the `config.yaml` file.

NVD/CVE Database
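A domain-whitelist check like the one SillyTavern 1.16.0 introduced can be sketched as below. The domain list and exact matching rule are our assumptions for illustration; the real check lives behind `whitelistImportDomains` in `config.yaml`:

```python
from urllib.parse import urlparse

# Hypothetical stand-in for the `whitelistImportDomains` list in config.yaml.
WHITELIST_IMPORT_DOMAINS = ["raw.githubusercontent.com", "example-cdn.net"]

def is_whitelisted_asset_url(url: str) -> bool:
    """Allow an asset download only when the host matches a whitelisted
    domain exactly or as a subdomain. Matching on the full hostname
    (not a substring) blocks lookalikes such as
    raw.githubusercontent.com.evil.example."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in WHITELIST_IMPORT_DOMAINS)
```

An allowlist is the right shape here: SSRF denylists (blocking known-internal IPs only) are easy to bypass via redirects and DNS rebinding, while an allowlist fails closed.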
08

YouTube’s latest experiment brings its conversational AI tool to TVs

industry
Feb 19, 2026

YouTube is expanding its conversational AI tool to smart TVs, gaming consoles, and streaming devices, allowing users to ask questions about video content using an 'Ask' button or voice commands without pausing playback. The feature, currently available to select users over 18 in five languages, lets viewers get instant answers about things like recipe ingredients or song background information. This expansion reflects YouTube's growing dominance in TV viewing, with competitors like Amazon, Roku, and Netflix also developing their own conversational AI features for television.

TechCrunch
09

GHSA-fh3f-q9qw-93j9: OpenClaw replaced a deprecated sandbox hash algorithm

security
Feb 19, 2026

OpenClaw, an npm package, used SHA-1 (an outdated hashing algorithm with known weaknesses) to create identifiers for Docker and browser sandbox configurations. An attacker could exploit hash collisions (two different configurations producing the same hash) to trick the system into reusing the wrong sandbox, leading to cache poisoning (corrupting stored data) and unsafe sandbox reuse.

Fix: Update to version 2026.2.15 or later. The fix replaces SHA-1 with SHA-256 (a stronger hashing algorithm with better collision resistance) for generating these sandbox identifiers.

GitHub Advisory Database
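Deriving a cache key from a configuration hash, as in the advisory above, can be sketched like this. It is an illustrative pattern only; OpenClaw's actual key derivation may differ:

```python
import hashlib
import json

def sandbox_id(config: dict) -> str:
    """Derive a sandbox cache identifier from its configuration.
    Canonical JSON (sorted keys, fixed separators) makes the hash
    deterministic, and SHA-256 provides the collision resistance
    SHA-1 lacks, so two different configs cannot be engineered to
    map to the same cached sandbox."""
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Canonicalizing before hashing matters as much as the algorithm: without sorted keys, the same config serialized in two key orders would produce two different identifiers.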
10

GHSA-xjw9-4gw8-4rqx: Microsoft Semantic Kernel InMemoryVectorStore filter functionality vulnerable to remote code execution

security
Feb 19, 2026

Microsoft's Semantic Kernel Python SDK has an RCE vulnerability (remote code execution, where an attacker can run commands on a system they don't own) in the `InMemoryVectorStore` filter functionality, which allows attackers to execute arbitrary code. The vulnerability affects the library used for building AI applications with vector storage (a database that stores AI embeddings, which are numerical representations of data).

Fix: Upgrade to python-1.39.4 or higher. As a temporary workaround, avoid using `InMemoryVectorStore` for production scenarios.

GitHub Advisory Database
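The advisory does not detail the mechanism, but filter-expression features commonly become RCE when a user-supplied string is handed to `eval()`. A generic contrast of the dangerous and safe designs (our illustration, not Semantic Kernel's actual code):

```python
def unsafe_filter(records, expr: str):
    # DANGEROUS pattern: an expression string like
    # "__import__('os').system('id')" executes arbitrary code.
    return [r for r in records if eval(expr, {}, {"r": r})]

def safe_filter(records, field: str, value):
    # Safer design: accept structured criteria (field + value),
    # never an executable string from the caller.
    return [r for r in records if r.get(field) == value]
```

Restricting `eval`'s globals is not a real sandbox; the robust fix is to never evaluate caller-controlled strings at all, which is why the workaround above is simply to avoid the affected component in production.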
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026