aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,736
[LAST_24H]
39
[LAST_7D]
179
Daily BriefingWednesday, April 1, 2026
>

Claude Code Source Leaked via npm Packaging Error: Anthropic confirmed that nearly 2,000 TypeScript files (over 512,000 lines of code) from Claude Code were accidentally exposed through a JavaScript package repository, revealing internal features and allowing attackers to study how to bypass safeguards. Users who downloaded the affected package during a specific window on March 31, 2026 may have also received malware-infected software.

>

Google Addresses Vertex AI Security Issues After Weaponization Demo: Palo Alto Networks researchers demonstrated how to weaponize AI agents (autonomous programs that perform tasks with minimal human input) on Google Cloud's Vertex AI platform, prompting Google to begin addressing the disclosed security problems.

>

Latest Intel

page 161/274
VIEW ALL
01

ZombAI Exploit with OpenHands: Prompt Injection To Remote Code Execution

security
Aug 10, 2025

OpenHands, a popular AI agent from All Hands AI that can now run as a cloud service, is vulnerable to prompt injection (tricking an AI by hiding instructions in its input) when processing untrusted data like content from websites. This vulnerability allows attackers to hijack the system and compromise its confidentiality, integrity, and availability, potentially leading to full system compromise.

Critical This Week5 issues
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162NVD/CVE DatabaseMar 31, 2026
Mar 31, 2026

Meta Smartglasses Raise Privacy Concerns with Covert Recording: Meta's smartglasses feature a built-in camera and AI assistant that can describe surroundings and answer questions, but raise significant privacy issues because they can record video of others without knowledge or consent.

Embrace The Red
02

OpenHands and the Lethal Trifecta: How Prompt Injection Can Leak Access Tokens

securitysafety
Aug 9, 2025

OpenHands, an AI agent tool created by All-Hands AI, has a vulnerability where it can render images in chat conversations, which attackers can exploit through prompt injection (tricking an AI by hiding instructions in its input) to leak access tokens (security credentials that grant permission to use services) without requiring user interaction. This type of attack has been called the 'Lethal Trifecta' and represents a significant data exfiltration (unauthorized data theft) risk.

Embrace The Red
03

Strengthening AI Security with Protect AI Recon & Dataiku Guard Services

securitysafety
Aug 8, 2025

This content discusses security challenges in agentic AI (AI systems that can act autonomously and use tools), emphasizing that generic jailbreak testing (attempts to trick AI into ignoring safety guidelines) misses real operational risks like tool misuse and data theft. The articles highlight that enterprises need contextual red teaming (security testing that simulates realistic attack scenarios relevant to how the AI will actually be used) and governance frameworks like identity controls and boundaries to secure autonomous AI systems.

Protect AI Blog
04

AI Kill Chain in Action: Devin AI Exposes Ports to the Internet with Prompt Injection

securitysafety
Aug 8, 2025

Devin AI has a tool called expose_port that can publish local computer ports to the public internet, intended for testing websites during development. However, attackers can use prompt injection (tricking an AI by hiding instructions in its input) to manipulate Devin into exposing sensitive files and creating backdoor access without human approval, as demonstrated through a multi-stage attack that gradually steers the AI toward malicious actions.

Embrace The Red
05

CVE-2025-54886: skops is a Python library which helps users share and ship their scikit-learn based models. In versions 0.12.0 and below

security
Aug 8, 2025

The skops Python library (used for sharing scikit-learn machine learning models) has a security flaw in versions 0.12.0 and earlier where the Card.get_model function can accidentally use joblib (a less secure loading method) instead of skops' safer approach. Joblib allows arbitrary code execution (running any code during model loading), which could let attackers run malicious code if they trick users into loading a specially crafted model file. This bypasses the security checks that skops normally provides.

Fix: This issue is fixed in version 0.13.0. Users should upgrade to skops version 0.13.0 or later.

NVD/CVE Database
06

CVE-2025-53767: Azure OpenAI Elevation of Privilege Vulnerability

security
Aug 7, 2025

CVE-2025-53767 is a vulnerability in Azure OpenAI that allows elevation of privilege, which means an attacker could gain higher-level access than they should have. The vulnerability stems from server-side request forgery (SSRF, a flaw where an attacker tricks a server into making unintended requests on their behalf). The CVSS severity score and detailed impact information have not yet been assessed by NIST.

NVD/CVE Database
07

CVE-2025-53787: Microsoft 365 Copilot BizChat Information Disclosure Vulnerability

security
Aug 7, 2025

CVE-2025-53787 is an information disclosure vulnerability in Microsoft 365 Copilot BizChat that stems from improper neutralization of special elements used in commands (command injection, where attackers manipulate input to execute unintended commands). The vulnerability allows unauthorized access to sensitive information, though specific attack details are not provided in this source.

NVD/CVE Database
08

CVE-2025-53774: Microsoft 365 Copilot BizChat Information Disclosure Vulnerability

security
Aug 7, 2025

CVE-2025-53774 is an information disclosure vulnerability in Microsoft 365 Copilot BizChat caused by improper neutralization of special elements used in commands (command injection, where attackers craft malicious input to execute unintended commands). The vulnerability allows unauthorized access to sensitive information, though the severity rating has not yet been assigned by the National Institute of Standards and Technology.

NVD/CVE Database
09

CVE-2025-44779: An issue in Ollama v0.1.33 allows attackers to delete arbitrary files via sending a crafted packet to the endpoint /api/

security
Aug 7, 2025

Ollama v0.1.33 has a vulnerability (CVE-2025-44779) that allows attackers to delete arbitrary files (any files on a system) by sending a specially crafted request to the /api/pull endpoint. The vulnerability stems from improper input validation (the software not properly checking user input for malicious content) and overly permissive file access settings.

NVD/CVE Database
10

How Devin AI Can Leak Your Secrets via Multiple Means

securityresearch
Aug 7, 2025

Devin AI can be tricked into leaking sensitive information to attackers through multiple methods, including using its Shell tool to run data-stealing commands, using its Browser tool to send secrets to attacker-controlled websites, rendering images from untrusted domains, and posting hidden data to connected services like Slack. These attacks work because Devin has unrestricted internet access and can be manipulated through indirect prompt injection (tricking an AI by hiding malicious instructions in its input), where attackers embed instructions in places like GitHub issues that Devin investigates.

Embrace The Red
Prev1...159160161162163...274Next
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026