aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,687
Last 24 hours: 25
Last 7 days: 166
Daily Briefing: Tuesday, March 31, 2026

- Critical Vulnerability in OpenAI Codex Allowed GitHub Token Compromise: Researchers discovered a critical vulnerability in OpenAI Codex (an AI system that generates code) that could have allowed attackers to steal GitHub tokens (secret credentials used to access GitHub accounts), potentially granting unauthorized access to code repositories and projects.
- Google Cloud Vertex AI 'Double Agents' Vulnerability Exposed: Researchers found that AI agents on Google Cloud Platform's Vertex AI could be weaponized to secretly compromise systems because of excessive default permissions granted to service agents (special accounts that let cloud services access resources), enabling attackers to steal data and gain unauthorized control of infrastructure. Google responded by revising its documentation to better explain resource and account usage.

Latest Intel

01. CVE-2024-42474 (security, Aug 12, 2024)

Streamlit (a Python framework for building data applications, maintained by Snowflake) had a path traversal vulnerability (a flaw that lets attackers access files outside their intended directory) in its static file sharing feature on Windows. An attacker could exploit this to steal the password hash (a scrambled, non-reversible form of a password) of the Windows user running Streamlit.
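Path traversal flaws of this kind are usually mitigated by resolving the requested path and checking containment before serving anything. A minimal sketch of that check (the `STATIC_ROOT` and `resolve_static` names are illustrative, not Streamlit's actual internals; `Path.is_relative_to` needs Python 3.9+):

```python
from pathlib import Path

# Hypothetical static-file root; in a real app this would be configurable.
STATIC_ROOT = Path("/srv/app/static").resolve()

def resolve_static(requested: str) -> Path:
    """Resolve a user-supplied relative path, refusing anything that
    escapes STATIC_ROOT (e.g. via '../' sequences)."""
    candidate = (STATIC_ROOT / requested).resolve()
    if not candidate.is_relative_to(STATIC_ROOT):
        raise PermissionError(f"path escapes static root: {requested}")
    return candidate
```

Resolving before comparing is the important part: a naive string-prefix check on the raw input misses `..` sequences and mixed separators.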

Fix: The vulnerability was patched on Jul 25, 2024, as part of Streamlit open source version 1.37.0.

NVD/CVE Database

Critical This Week (5 issues)

- CVE-2025-15379 (critical): A command injection vulnerability in MLflow's model serving container initialization code (NVD/CVE Database, Mar 30, 2026)

Daily Briefing (continued)

- EU AI Act Enforcement Begins August 2026: The EU AI Act requires providers of general-purpose AI models (GPAI, meaning large AI systems that can be adapted for many uses) to follow specific development and documentation rules starting August 2, 2025, with the European Commission beginning enforcement, including potential fines, one year later on August 2, 2026.
- Prompt Injection Bypasses Chatbot Safety in 1millionbot Millie: A prompt injection vulnerability (a technique where attackers hide malicious instructions in their input to trick an AI) in the 1millionbot Millie chatbot allows users to bypass safety restrictions using Boolean logic tricks, potentially enabling extraction of sensitive information or access to blocked features (CVE-2026-4399, high severity).
02. CVE-2024-6706 (security, Aug 7, 2024)

CVE-2024-6706 is a vulnerability where attackers can craft malicious prompts that trick a language model into running arbitrary JavaScript (code that executes in a web browser) on a webpage. This is a form of cross-site scripting (XSS), where untrusted input is not properly sanitized before being displayed on a web page, allowing attackers to inject malicious code.
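The standard defense is to treat model output like any other untrusted input and escape it before it reaches the page. A minimal server-side sketch using Python's standard library (real applications should also lean on their template engine's auto-escaping and a Content Security Policy):

```python
import html

def render_model_output(raw: str) -> str:
    """Escape HTML metacharacters in LLM output so a prompt-injected
    payload renders as inert text instead of executing as markup."""
    return html.escape(raw)

# A coerced model response containing a script payload is neutralized:
safe = render_model_output('<script>alert(1)</script>')
```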

NVD/CVE Database
03. CVE-2024-38206 (security, Aug 6, 2024)

CVE-2024-38206 is a vulnerability in Microsoft Copilot Studio where an authenticated attacker (someone with valid login credentials) can bypass SSRF protection (security that prevents a server from being tricked into making unwanted network requests) to leak sensitive information over a network.
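SSRF protections of the kind bypassed here typically validate the URL scheme, allow-list destination hosts, and reject targets that resolve to internal addresses. A sketch under those assumptions (the `ALLOWED_HOSTS` set and function name are hypothetical, not Copilot Studio's implementation):

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example.com"}  # hypothetical outbound allow-list

def check_outbound_url(url: str) -> str:
    """Validate a URL before the server fetches it on a user's behalf."""
    parts = urlparse(url)
    if parts.scheme not in ("http", "https"):
        raise ValueError(f"unsupported scheme: {parts.scheme!r}")
    host = parts.hostname or ""
    if host not in ALLOWED_HOSTS:
        raise ValueError(f"host not allow-listed: {host!r}")
    # Resolve and reject private/loopback/link-local targets so that an
    # allow-listed name cannot be pointed at internal infrastructure.
    for info in socket.getaddrinfo(host, None):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            raise ValueError(f"resolves to internal address: {addr}")
    return url
```

Note that resolve-then-fetch is still racy (DNS rebinding); robust deployments pin the resolved address when making the actual request.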

Fix: Patch available from Microsoft Corporation at https://msrc.microsoft.com/update-guide/vulnerability/CVE-2024-38206

NVD/CVE Database
04. CVE-2024-6331 (security, Aug 3, 2024)

A vulnerability in the stitionai/devika AI project (main branch as of commit cdfb782b0e634b773b10963c8034dc9207ba1f9f) allows attackers to read sensitive files on a computer through prompt injection (tricking an AI by hiding malicious instructions in its input). The problem occurs because Google Gemini's safety filters, which normally prevent harmful outputs, were disabled, leaving the system open to commands such as reading `/etc/passwd` (a file containing user account information).
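Beyond re-enabling safety filters, agent frameworks usually constrain file tools so a prompt-injected request cannot reach arbitrary paths. A hypothetical guard sketch (the `READABLE_ROOTS` workspace list is illustrative, not devika's actual code):

```python
from pathlib import Path

# Hypothetical allow-list of directories the agent's file tool may read.
READABLE_ROOTS = (Path("/workspace").resolve(),)

def agent_read_file(path: str) -> str:
    """File-read tool that treats the model-chosen path as untrusted:
    containment is checked before any bytes are returned."""
    target = Path(path).resolve()
    if not any(target.is_relative_to(root) for root in READABLE_ROOTS):
        raise PermissionError(f"refusing to read outside workspace: {path}")
    return target.read_text()
```

Enforcing the boundary in the tool itself matters because model-level filters, as this CVE shows, can be turned off or talked around.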

NVD/CVE Database
05. CVE-2024-38791 (security, Aug 1, 2024)

CVE-2024-38791 is a server-side request forgery (SSRF, a flaw where an attacker tricks a server into making unwanted requests to other systems) vulnerability in the Jordy Meow AI Engine: ChatGPT Chatbot plugin, affecting versions up to 2.4.7. Attackers can manipulate the plugin's server requests to perform unauthorized actions.

NVD/CVE Database
06. CVE-2024-41950 (security, Jul 31, 2024)

Haystack is a framework for building applications with LLMs (large language models) and AI tools, but versions before 2.3.1 have a critical vulnerability: attackers who can create and render Jinja2 templates (a template engine that generates dynamic text) can execute arbitrary code. This affects Haystack clients that let users create and run Pipelines, workflows that process data through multiple steps.
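The fix is to upgrade, but the underlying lesson is that rendering untrusted template text with a full expression engine amounts to code execution. When user-supplied templates are unavoidable, one alternative (a stdlib illustration of the principle, not Haystack's actual fix) is plain placeholder substitution, which evaluates no expressions at all:

```python
from string import Template

def render_user_template(tmpl: str, values: dict) -> str:
    """Substitute $placeholders without an expression engine: unlike an
    unsandboxed Jinja2 environment, template text here cannot reach
    Python objects or execute code. Unknown placeholders are left as-is."""
    return Template(tmpl).safe_substitute(values)
```

Where Jinja2 features are genuinely needed for untrusted input, its `SandboxedEnvironment` is the intended mitigation; the sketch above simply removes the attack surface entirely.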

Fix: The vulnerability has been fixed in Haystack version 2.3.1. Users should upgrade to this version or later.

NVD/CVE Database
07. CVE-2023-33976 (security, Jul 30, 2024)

A bug in TensorFlow (an open source platform for building machine learning models) causes a segfault (a crash where the program accesses memory it shouldn't) when the `array_ops.upper_bound` function receives input that is not a rank-2 tensor (a two-dimensional array of numbers).
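The general pattern behind such fixes is validating tensor rank at the API boundary so native code never sees malformed input. A NumPy-based sketch of that guard (illustrative, not TensorFlow's actual patch):

```python
import numpy as np

def require_rank2(x) -> np.ndarray:
    """Reject input that is not a rank-2 tensor before handing it to a
    rank-sensitive op, turning a would-be segfault into a clean error."""
    arr = np.asarray(x)
    if arr.ndim != 2:
        raise ValueError(f"expected a rank-2 tensor, got rank {arr.ndim}")
    return arr
```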

Fix: The fix is included in TensorFlow 2.13 and has also been applied to TensorFlow 2.12 through a cherrypick commit (applying a specific code change to an older version).

NVD/CVE Database
08. CVE-2024-7297 (security, Jul 30, 2024)

Langflow versions before 1.0.13 have a privilege escalation vulnerability (a security flaw where an attacker gains higher access rights than intended) that lets a remote attacker with low privileges become a super admin by sending a specially crafted request to the `/api/v1/users` endpoint using mass assignment (exploiting how the application maps request fields onto object attributes to modify fields the attacker should not control).
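The usual defense against mass assignment is an explicit field allow-list: copy only approved keys from the request payload instead of splatting the whole body onto the model. A minimal sketch (field names are hypothetical, not Langflow's schema):

```python
# Fields a regular user is allowed to change about their own account.
UPDATABLE_FIELDS = {"username", "email"}

def apply_user_update(user: dict, payload: dict) -> dict:
    """Apply only allow-listed fields from an untrusted payload, so keys
    like 'is_superuser' in the request body are silently ignored."""
    updated = dict(user)
    for field in payload.keys() & UPDATABLE_FIELDS:
        updated[field] = payload[field]
    return updated
```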

Fix: Upgrade Langflow to version 1.0.13 or later.

NVD/CVE Database
09. Protect Your Copilots: Preventing Data Leaks in Copilot Studio (security, Jul 30, 2024)

Microsoft's Copilot Studio is a low-code platform that lets employees build chatbots, but misconfigured Copilots carry security risks including data leaks and unauthorized access. The post warns that external attackers can find and interact with improperly set up Copilots, and discusses how to protect organizational data using security controls.

Fix: Enable Data Loss Prevention (DLP, a security feature that prevents sensitive information from being shared), which is currently off by default in Copilot Studio.

Embrace The Red
10. CVE-2024-41120 (security, Jul 26, 2024)

CVE-2024-41120 is a vulnerability in streamlit-geospatial, a Streamlit multipage app for geospatial data analysis, where user input to a URL field is not validated before being passed to a file-reading function. This lets attackers make the server send requests to any destination they choose, a technique called SSRF (server-side request forgery, where an attacker tricks a server into making unwanted requests to other systems). The vulnerability affects code before the patching commit.

Fix: Commit c4f81d9616d40c60584e36abb15300853a66e489 fixes this issue. Users should update to the version containing this commit.

NVD/CVE Database
Critical This Week (continued)

- CVE-2026-33873 (critical): Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis... (NVD/CVE Database, Mar 27, 2026)
- Attackers exploit critical Langflow RCE within hours as CISA sounds alarm (critical; CSO Online, Mar 27, 2026)
- CVE-2025-53521 (critical): F5 BIG-IP Unspecified Vulnerability (CISA Known Exploited Vulnerabilities, Mar 26, 2026)
- CISA: New Langflow flaw actively exploited to hijack AI workflows (critical; BleepingComputer, Mar 26, 2026)