aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,727 · Last 24 hours: 46 · Last 7 days: 183
Daily Briefing: Wednesday, April 1, 2026

Attack Surface Management Tools Now Using AI Agents: A new buying guide highlights that Cyber Asset Attack Surface Management (CAASM) and External Attack Surface Management (EASM) tools are increasingly using agentic AI (AI systems that can take independent actions) to automatically find and reduce security risks across a company's digital resources.

Latest Intel

01

CVE-2025-49150: Cursor is a code editor built for programming with AI. Prior to 0.51.0, by default, the setting json.schemaDownload.enab

security
Jun 11, 2025

Cursor, a code editor designed for AI-assisted programming, had a security flaw in versions before 0.51.0 where JSON files could automatically trigger web requests without user approval. An attacker could exploit this, especially after a prompt injection attack (tricking the AI with hidden instructions in its input), to make the AI agent send data to a malicious website.
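The fix turned this automatic download behavior off by default. On affected versions, a manual workaround is to disable it yourself. Assuming Cursor inherits VS Code's `json.schemaDownload.enable` setting (the truncated headline suggests this key), the entry in settings.json would be:

```jsonc
{
  // Stop JSON files from automatically fetching remote schemas,
  // closing the no-approval web request channel described above.
  "json.schemaDownload.enable": false
}
```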

Fix: The vulnerability is fixed in version 0.51.0. Users should update to this version or later.

NVD/CVE Database

Critical This Week · 5 issues

critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

NVD/CVE Database · Mar 31, 2026
02

CVE-2025-32711: AI command injection in M365 Copilot allows an unauthorized attacker to disclose information over a network.

security
Jun 11, 2025

CVE-2025-32711 is a command injection vulnerability (a weakness where an attacker tricks a program into running unintended commands) in Microsoft 365 Copilot that allows an unauthorized attacker to disclose information over a network. The vulnerability has a CVSS severity score of 4.0 (a moderate rating on a 0-10 scale where 10 is most severe). Microsoft has published information about this vulnerability, but the provided source does not contain specific technical details about the attack or its impact.

NVD/CVE Database
03

CVE-2025-49131: FastGPT is an open-source project that provides a platform for building, deploying, and operating AI-driven workflows an

security
Jun 9, 2025

FastGPT is an open-source platform for building AI workflows and chatbots that uses a sandbox (an isolated container designed to safely run untrusted code). Versions before 4.9.11 had weak isolation that allowed attackers to escape the sandbox by using overly permissive syscalls (system calls, which are requests programs make to the operating system), letting them read files, modify files, and bypass security restrictions. The vulnerability is fixed in version 4.9.11 by limiting which system calls are allowed to a safer set.

Fix: Update to version 4.9.11 or later. According to the source, this version patches the vulnerability by restricting the allowed system calls to a safer subset and adding additional descriptive error messaging.
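The mitigation class here is syscall allowlisting. Below is a minimal, stdlib-only sketch of the policy idea with illustrative names (not FastGPT's actual code; real sandboxes enforce this in the kernel, e.g. via seccomp-BPF):

```python
# Illustrative sketch of syscall allowlisting, the mitigation class the
# FastGPT fix applies. Real sandboxes enforce this in the kernel; here
# we only model the policy decision itself.

# A conservative allowlist: enough to compute and print results, nothing
# that touches the wider filesystem or process tree.
ALLOWED_SYSCALLS = {"read", "write", "exit", "brk", "mmap", "munmap"}

def check_syscall(name: str) -> bool:
    """Return True if the syscall is permitted by the sandbox policy."""
    return name in ALLOWED_SYSCALLS

def filter_trace(trace: list[str]) -> list[str]:
    """Return the syscalls in a trace that the policy would block."""
    return [s for s in trace if not check_syscall(s)]

# A trace containing an escape attempt: 'openat' and 'ptrace' would let
# sandboxed code read arbitrary files or attach to other processes.
blocked = filter_trace(["read", "openat", "write", "ptrace"])
print(blocked)  # ['openat', 'ptrace']
```

The key design choice, mirrored in the fix described above, is default-deny: anything not explicitly on the list is refused, so newly dangerous syscalls are blocked without a code change.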

NVD/CVE Database
04

Promises and Perils of Generative AI in Cybersecurity

security · research
Jun 9, 2025

Generative AI (AI systems that create new text, code, or images) is a double-edged sword in cybersecurity, helping both defenders and attackers. The case study of a fictional insurance company shows how GenAI can be used to launch cyberattacks (malicious attempts to breach computer systems) and also to defend against them, creating a difficult choice for IT leaders about whether to use AI as a defensive tool or risk falling behind attackers who already have it.

AIS eLibrary (Journal of AIS, CAIS, etc.)
05

How to Operationalize Responsible Use of Artificial Intelligence

policy · research
Jun 9, 2025

As AI development has grown rapidly, organizations struggle with how to actually put responsible AI practices into action beyond just making promises about it. This article describes how two organizations created a five-phase process to embed responsibility pledges (formal commitments to use AI ethically) into their daily practices using a systems approach (treating responsibility as interconnected parts of the whole organization rather than isolated efforts).

AIS eLibrary (Journal of AIS, CAIS, etc.)
06

Hosting COM Servers with an MCP Server

security
Jun 9, 2025

The mcp-com-server is a tool that connects the Model Context Protocol (MCP, a standard for AI systems to interact with external tools) to COM (Component Object Model, Microsoft's decades-old system for sharing functionality across programs on Windows). This allows an AI like Claude to automate Windows and Office tasks, such as creating Excel files and sending emails, by dynamically discovering and controlling COM objects. The main security risk is that COM can access dangerous operations like file system access, so the server uses an allowlist (a list of approved COM objects that are permitted to run) to restrict which COM objects can be instantiated.

Fix: The source explicitly mentions two mitigations: (1) An Allow List for CLSIDs and ProgIDs, where 'the MCP server will instantiate allow listed COM objects' and notes this 'could be expanded to include specific interfaces/methods as well,' and (2) 'Confirmation Dialogs' where 'Claude shows an Allow / Deny button before invoking custom tools by default' to 'make sure a human remains in the loop,' though the source notes this 'can be disabled, but also re-enabled in the Claude Settings per MCP tool.'
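The two gates compose naturally: check the allowlist first, then put a human decision in front of the actual instantiation. A compact sketch with hypothetical names (`ALLOWED_PROGIDS`, `create_com_object` are illustrative, not the server's actual API):

```python
# Sketch of the two mitigations described above: a ProgID allowlist plus
# a human-in-the-loop confirmation before instantiating a COM object.
# Names are illustrative; this is not mcp-com-server's actual code.

ALLOWED_PROGIDS = {"Excel.Application", "Outlook.Application"}

def confirm(prompt: str) -> bool:
    """Stand-in for the Allow/Deny dialog shown before a tool call."""
    return input(f"{prompt} [allow/deny] ").strip().lower() == "allow"

def create_com_object(prog_id: str, ask_user=confirm):
    """Gate COM instantiation behind the allowlist and a confirmation."""
    if prog_id not in ALLOWED_PROGIDS:
        raise PermissionError(f"{prog_id} is not on the allowlist")
    if not ask_user(f"Instantiate {prog_id}?"):
        raise PermissionError(f"User denied instantiation of {prog_id}")
    # On Windows, this is where win32com.client.Dispatch(prog_id)
    # would actually create the object.
    return f"<COM object {prog_id}>"

# Demo, bypassing the interactive dialog with an auto-approve callback.
print(create_com_object("Excel.Application", ask_user=lambda _: True))
```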

Embrace The Red
07

CVE-2025-49619: Skyvern through 0.1.85 is vulnerable to server-side template injection (SSTI) in the Prompt field of workflow blocks suc

security
Jun 7, 2025

Skyvern through version 0.1.85 has a vulnerability where attackers can inject malicious code into the Prompt field of workflow blocks through SSTI (server-side template injection, where untrusted input is processed as code by the server's template engine). Authenticated users can craft special expressions in Jinja2 templates (a template system that evaluates code on the server) that aren't properly sanitized, allowing them to execute commands on the server without direct feedback, a capability known as blind RCE (remote code execution).

Fix: A fix is referenced in the GitHub commit db856cd8433a204c8b45979c70a4da1e119d949d in the Skyvern repository, but the source text does not explicitly describe what the fix does or provide a specific patched version number to upgrade to.
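SSTI arises when untrusted input is rendered *as* a template rather than substituted *into* one. A generic stdlib sketch of the mitigation class (not Skyvern's actual fix, which the source does not describe) using Python's inert `string.Template`:

```python
# The safe pattern: the template is developer-controlled and fixed;
# only the value is untrusted, and it is treated as plain data.
from string import Template

def render_prompt(user_input: str) -> str:
    """Substitute untrusted input into a fixed template as inert data."""
    template = Template("Summarize the page: $user_text")
    return template.safe_substitute(user_text=user_input)

# A Jinja2-style injection payload passes through verbatim instead of
# being evaluated by a template engine.
payload = "{{ self.__init__.__globals__ }}"
print(render_prompt(payload))
# Summarize the page: {{ self.__init__.__globals__ }}
```

Equivalently, engines like Jinja2 offer sandboxed environments, but keeping untrusted strings out of the template text entirely removes the attack surface.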

NVD/CVE Database
08

CVE-2025-5018: The Hive Support plugin for WordPress is vulnerable to unauthorized access and modification of data due to a missing cap

security
Jun 6, 2025

The Hive Support plugin for WordPress has a security flaw in versions up to 1.2.4 where two functions lack capability checks (security checks that verify user permissions). This allows attackers with basic Subscriber-level accounts to read and change the site's OpenAI API key, inspect data, and modify how the AI chatbot behaves.
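The flaw is a missing authorization check, not a missing login check: authenticated Subscribers reach handlers meant for administrators. A generic sketch of the fix pattern (Hive Support is a PHP/WordPress plugin; this Python version, with hypothetical names, only illustrates the idea of a capability check):

```python
# Capability-check decorator: every sensitive handler verifies the
# caller's capabilities, not merely that the caller is logged in.
from functools import wraps

def require_capability(capability: str):
    """Reject callers whose account lacks the required capability."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(user, *args, **kwargs):
            if capability not in user.get("capabilities", set()):
                raise PermissionError(f"{user['role']} lacks {capability}")
            return handler(user, *args, **kwargs)
        return wrapper
    return decorator

@require_capability("manage_options")
def update_api_key(user, new_key: str) -> str:
    # Placeholder for persisting the OpenAI API key setting.
    return f"API key updated by {user['role']}"

admin = {"role": "administrator", "capabilities": {"manage_options"}}
subscriber = {"role": "subscriber", "capabilities": set()}
print(update_api_key(admin, "sk-..."))  # API key updated by administrator
# update_api_key(subscriber, "sk-...")  -> PermissionError
```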

NVD/CVE Database
09

Balancing Velocity and Vulnerability with llamafile

security · safety
Jun 4, 2025

This content is a collection of blog post titles and announcements from Palo Alto Networks about AI security, covering topics like agentic AI (AI systems that can autonomously take actions), container security, and operational technology (OT, the systems that control physical infrastructure) security. The posts discuss vulnerabilities in autonomous AI systems, the need for contextual red teaming (security testing tailored to specific use cases), and various security products like Prisma AIRS.

Protect AI Blog
10

CVE-2025-48957: AstrBot is a large language model chatbot and development framework. A path traversal vulnerability present in versions

security
Jun 2, 2025

AstrBot, a chatbot and development framework powered by large language models (LLMs, AI systems trained on large amounts of text data), has a path traversal vulnerability (a flaw that lets attackers access files they shouldn't be able to reach) in versions 3.4.4 through 3.5.12 that could expose sensitive information like API keys (credentials used to access external services) and passwords. The vulnerability was fixed in version 3.5.13.

Fix: Upgrade to version 3.5.13 or later. As a temporary workaround, users can edit the `cmd_config.json` file to disable the dashboard feature.
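Path traversal defenses generally resolve the requested path and verify it stays inside an approved base directory. A stdlib sketch of that mitigation class (illustrative names, not AstrBot's actual dashboard code):

```python
# Resolve the requested path under a base directory and reject any
# result that escapes it, which catches '../' sequences after
# normalization and symlink resolution.
from pathlib import Path

def safe_resolve(base_dir: str, requested: str) -> Path:
    """Resolve `requested` under `base_dir`, rejecting traversal escapes."""
    base = Path(base_dir).resolve()
    target = (base / requested).resolve()
    # Path.is_relative_to (Python 3.9+) checks containment post-resolution.
    if not target.is_relative_to(base):
        raise PermissionError(f"path escapes {base}: {requested}")
    return target

# "../../etc/passwd" resolves outside the data directory and is rejected.
try:
    safe_resolve("/srv/astrbot/data", "../../etc/passwd")
except PermissionError as e:
    print("blocked:", e)
```

Checking the *resolved* path matters: naive substring checks on the raw input miss encodings and `..` sequences that only collapse after normalization.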

NVD/CVE Database
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

NVD/CVE Database · Mar 30, 2026

critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

NVD/CVE Database · Mar 27, 2026

critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026

critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CISA Known Exploited Vulnerabilities · Mar 26, 2026