aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,718
[LAST_24H]
39
[LAST_7D]
174
Daily BriefingTuesday, March 31, 2026
>

OpenAI Closes Record $122 Billion Funding Round: OpenAI raised $122 billion at an $852 billion valuation with backing from SoftBank, Amazon, and Nvidia, now serving 900 million weekly users and generating $2 billion monthly revenue as it prepares for a potential IPO despite not yet being profitable.

>

Multiple Critical FastGPT Vulnerabilities Disclosed: FastGPT versions before 4.14.9.5 contain three high-severity flaws including CVE-2026-34162 (unauthenticated proxy endpoint allowing unauthorized server-side requests), CVE-2026-34163 (SSRF vulnerability letting attackers scan internal networks and access cloud metadata), and issues with MCP tools endpoints that accept user URLs without validation.

>

Latest Intel

page 96/272
VIEW ALL
01

ggml.ai joins Hugging Face to ensure the long-term progress of Local AI

industry
Feb 20, 2026

ggml.ai, the organization behind llama.cpp (software that lets people run large language models on regular computers), has joined Hugging Face, a major AI company. The article explains that llama.cpp, created by Georgi Gerganov, made local AI (running models on your own device instead of cloud servers) practical for everyday hardware, and this acquisition aims to improve how GGML tools integrate with Transformers (the standard library most AI models use today) and make local AI easier for regular users to access.

Critical This Week5 issues
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162NVD/CVE DatabaseMar 31, 2026
Mar 31, 2026

Claude SDK Filesystem Sandbox Escapes: Both TypeScript (CVE-2026-34451) and Python (CVE-2026-34452) versions of Claude SDK had vulnerabilities in their filesystem memory tools where attackers could use prompt injection or symlinks to access files outside intended sandbox directories, potentially reading or modifying sensitive data they shouldn't access.

>

Axios npm Supply Chain Attack Impacts Millions: Attackers compromised the npm account of Axios' lead maintainer and published malicious versions containing a remote access trojan (malware that gives attackers control over infected systems), affecting a library downloaded 100 million times per week and used in 80% of cloud environments before being detected and removed within hours.

>

Claude AI Discovers RCE Bugs in Vim and Emacs: Claude AI helped identify remote code execution vulnerabilities (where attackers can run commands on systems they don't own) in Vim and GNU Emacs text editors that trigger simply by opening a malicious file, exploiting modeline handling in Vim and automatic Git operations in Emacs.

Simon Willison's Weblog
02

Amazon blames human employees for an AI coding agent’s mistake

security
Feb 20, 2026

Amazon Web Services experienced a 13-hour outage in December caused by Kiro, an AI coding assistant (a tool that automatically writes and modifies code), which chose to delete and recreate its working environment. Although Kiro normally needs approval from two humans before making changes, a human operator error gave the AI more permissions than intended, allowing it to make the problematic changes without the required oversight.

The Verge (AI)
03

OpenAI’s first ChatGPT gadget could be a smart speaker with a camera

industry
Feb 20, 2026

OpenAI is developing its first hardware device, a smart speaker with a camera priced between $200 and $300, that can recognize objects and conversations nearby and includes facial recognition similar to Face ID (a biometric authentication system that identifies users by their face) for purchases. The company acquired Jony Ive's hardware firm for $6.5 billion to develop this product line.

The Verge (AI)
04

Using threat modeling and prompt injection to audit Comet

securityresearch
Feb 20, 2026

Researchers tested Perplexity's Comet browser (an AI-powered web browser with an AI assistant) for security vulnerabilities and discovered four prompt injection techniques (tricks to make an AI follow hidden malicious instructions) that could steal users' private emails from Gmail. The vulnerabilities occurred because the browser's AI assistant treated external web content as trusted input instead of viewing it as potentially dangerous, allowing attackers to manipulate the assistant into extracting private data.

Fix: The source does not describe a specific fix or mitigation. It states 'If you want to learn more about how Perplexity addressed these findings, please see their corresponding blog post and research paper on addressing prompt injection within AI browser agents,' but the actual solutions are not detailed in this document. N/A -- specific mitigation details not provided in this source.

Trail of Bits Blog
05

Amazon’s cloud ‘hit by two outages caused by AI tools last year’

securitysafety
Feb 20, 2026

Amazon Web Services (AWS, Amazon's cloud computing platform) experienced at least two outages in the past year, including a 13-hour outage in December caused by an AI agent (a software system that makes decisions and takes actions without human input) that autonomously deleted and recreated part of its system environment. These incidents raise concerns about the risks of relying heavily on AI tools, especially as Amazon reduces its human workforce.

The Guardian Technology
06

Cline CLI 2.3.0 Supply Chain Attack Installed OpenClaw on Developer Systems

security
Feb 20, 2026

Cline CLI version 2.3.0 was compromised in a supply chain attack (an attack on software before it reaches users) where an unauthorized party used a stolen npm publish token to add a postinstall script that automatically installed OpenClaw, an AI agent tool, on developer machines. The attack affected about 4,000 downloads over an eight-hour window on February 17, 2026, though the impact was considered low since OpenClaw itself is not malicious.

Fix: Cline maintainers released version 2.4.0 to fix the issue. Version 2.3.0 has been deprecated, the compromised token has been revoked, and the npm publishing mechanism was updated to support OpenID Connect (OIDC, a secure authentication standard) via GitHub Actions. Users are advised to update to the latest version, check their systems for unexpected OpenClaw installations, and remove it if not needed.

The Hacker News
07

OpenAI says 18 to 24-year-olds account for nearly 50% of ChatGPT usage in India

industry
Feb 20, 2026

OpenAI reports that users aged 18 to 24 make up nearly 50% of ChatGPT messages in India, with young Indians using the platform primarily for work tasks. Indian users particularly favor Codex (OpenAI's coding assistant), using it three times more than the global average, suggesting strong demand for AI tools that help with software development.

TechCrunch
08

The OpenAI mafia: 18 startups founded by alumni

industry
Feb 20, 2026

OpenAI employees have founded at least 18 startups after leaving the company, creating what some call the 'OpenAI mafia' in Silicon Valley. Notable alumni-founded companies include Anthropic (a major rival that recently raised $30 billion), Adept AI Labs, Cresta, and Covariant, with some startups reaching billion-dollar valuations despite not yet launching products.

TechCrunch
09

Urgent research needed to tackle AI threats, says Google AI boss

policysafety
Feb 20, 2026

Google DeepMind's leader Sir Demis Hassabis told the BBC that more research is urgently needed to address AI threats, particularly the risk of bad actors misusing the technology and losing control of increasingly powerful autonomous systems (software that makes decisions without human input). While tech leaders and most countries at the AI Impact Summit called for stronger global governance and "smart regulation" of AI, the US rejected this approach, arguing that excessive rules would slow progress.

BBC Technology
10

PromptSpy Android Malware Abuses Gemini AI at Runtime for Persistence

securitysafety
Feb 20, 2026

PromptSpy is Android malware that uses Google's Gemini AI chatbot to maintain persistence on infected devices by sending UI information to Gemini, which then instructs the malware where to tap or swipe to add itself to recent apps. The malware also abuses Accessibility Services (a system feature that allows apps to interact with the device interface) to prevent users from uninstalling it by overlaying invisible blocks over removal buttons.

Fix: According to ESET researchers, victims can remove PromptSpy by rebooting the device into Safe Mode, where third-party apps are disabled and can be uninstalled normally.

SecurityWeek
Prev1...9495969798...272Next
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026