aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing: Friday, May 8, 2026

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical and high-severity flaws in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271 and CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).

ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions such as exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk, compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.

Latest Intel

01

CVE-2024-2221: qdrant/qdrant is vulnerable to a path traversal and arbitrary file upload vulnerability via the `/collections/{COLLECTIO

security
Apr 10, 2024

Qdrant (a vector database) has a vulnerability in its snapshot upload endpoint that allows attackers to upload files to any location on the server's filesystem through path traversal (using special file path sequences to access directories they shouldn't). This could let attackers execute arbitrary code on the server and compromise the system's integrity and availability.

Fix: A patch is available at https://github.com/qdrant/qdrant/commit/e6411907f0ecf3c2f8ba44ab704b9e4597d9705d

NVD/CVE Database
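Path traversal bugs like the Qdrant snapshot issue are typically fixed by resolving the user-supplied filename against an allowed base directory and rejecting any result that escapes it. A minimal Python sketch of that check (the directory name and function are illustrative, not Qdrant's actual code, which is written in Rust):

```python
import os

SNAPSHOT_DIR = "/var/lib/qdrant/snapshots"  # hypothetical base directory

def resolve_upload_path(filename: str) -> str:
    """Join the user-supplied filename to the snapshot directory and
    refuse any path that escapes it (e.g. '../../etc/cron.d/job')."""
    base = os.path.realpath(SNAPSHOT_DIR)
    # realpath collapses '..' segments, so traversal sequences are resolved
    # before the containment check instead of being compared as raw text
    candidate = os.path.realpath(os.path.join(base, filename))
    if os.path.commonpath([candidate, base]) != base:
        raise ValueError(f"path traversal attempt: {filename!r}")
    return candidate
```

Checking the resolved path, rather than scanning the raw string for `..`, also catches absolute-path inputs and encoded variants that string filters tend to miss.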
02

CVE-2024-1728: gradio-app/gradio is vulnerable to a local file inclusion vulnerability due to improper validation of user-supplied inpu

security
Apr 10, 2024

Gradio (a framework for building AI interfaces) has a vulnerability in its UploadButton component where it doesn't properly validate (check) user input, allowing attackers to read any file on the server by manipulating file paths sent to the `/queue/join` endpoint. This could let attackers steal sensitive files like SSH keys (credentials used for secure server access) and potentially execute arbitrary code on the system.

NVD/CVE Database
03

CVE-2024-3098: A vulnerability was identified in the `exec_utils` class of the `llama_index` package, specifically within the `safe_eva

security
Apr 10, 2024

A vulnerability was found in the `safe_eval` function of the `llama_index` package that allows prompt injection (tricking an AI by hiding instructions in its input) to execute arbitrary code (running code an attacker chooses). The flaw exists because the input validation is insufficient, meaning the package doesn't properly check what data is being passed in, allowing attackers to bypass safety restrictions that were meant to prevent this type of attack.

NVD/CVE Database
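The root cause here is a common one: evaluating model-generated text as code. A hedged sketch of the safer pattern, using Python's `ast.literal_eval`, which parses plain literals but refuses anything executable (illustrative, not llama_index's actual patch):

```python
import ast

untrusted = "__import__('os').system('id')"  # attacker-controlled model output

def parse_literal(text: str):
    """eval(text) would run the command above. ast.literal_eval only
    accepts Python literals (numbers, strings, lists, dicts, ...) and
    raises on anything with side effects."""
    try:
        return ast.literal_eval(text)
    except (ValueError, SyntaxError):
        return None

print(parse_literal("[1, 2, 3]"))  # a harmless literal parses fine
print(parse_literal(untrusted))    # executable code is rejected, returns None
```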
04

CVE-2024-28224: Ollama before 0.1.29 has a DNS rebinding vulnerability that can inadvertently allow remote access to the full API, there

security
Apr 8, 2024

Ollama before version 0.1.29 has a DNS rebinding vulnerability (a technique where an attacker tricks a system into connecting to a malicious server by manipulating how domain names are translated into addresses), which allows unauthorized remote access to its full API. This vulnerability could let an attacker interact with the language model, remove models, or cause a denial of service (making a system unavailable by overloading it with requests).

Fix: Update Ollama to version 0.1.29 or later.

NVD/CVE Database
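DNS rebinding succeeds because the victim's browser resolves an attacker's domain to 127.0.0.1 and then sends requests to the local API, but the request still carries the attacker's domain in its Host header. A common mitigation, sketched here as an assumption rather than Ollama's exact fix, is to allowlist expected Host values:

```python
ALLOWED_HOSTS = {"localhost", "127.0.0.1"}  # hosts the local API expects

def host_is_trusted(host_header: str) -> bool:
    """Reject requests whose Host header names an attacker's domain.
    After DNS rebinding, the browser still sends 'Host: evil.example',
    so this check blocks the rebound request. (Simplified: does not
    handle bracketed IPv6 literals.)"""
    hostname = host_header.split(":", 1)[0].lower()  # strip optional port
    return hostname in ALLOWED_HOSTS
```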
05

CVE-2024-31224: GPT Academic provides interactive interfaces for large language models. A vulnerability was found in gpt_academic versio

security
Apr 8, 2024

GPT Academic is a tool that provides interactive interfaces for large language models. Versions 3.64 through 3.73 have a vulnerability where the server deserializes untrusted data (processes data from users without verifying it's safe), which could allow attackers to execute code remotely on any exposed server. Any device running these vulnerable versions and accessible over the internet is at risk.

Fix: Upgrade to version 3.74, which contains a patch for the issue. The source states: 'There are no known workarounds aside from upgrading to a patched version.'

NVD/CVE Database
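Deserializing untrusted data is dangerous because some formats let the payload dictate what code runs during loading. A minimal Python illustration of the bug class (the `Payload` class is hypothetical, not GPT Academic's actual code path), contrasting `pickle` with `json`:

```python
import json
import pickle

class Payload:
    def __reduce__(self):
        # pickle calls this during loading; an attacker can make it
        # return any callable plus arguments
        return (print, ("arbitrary code ran during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # the side effect fires: deserialization becomes execution

# json.loads can only produce plain data structures, never invoke code
data = json.loads('{"model": "gpt-4", "temp": 0.7}')
```

This is why the usual remediation is to switch to a data-only format like JSON, or to authenticate serialized blobs before loading them.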
06

Google AI Studio Data Exfiltration via Prompt Injection - Possible Regression and Fix

security
Apr 7, 2024

Google AI Studio had a vulnerability that allowed attackers to steal data through prompt injection (tricking an AI by hiding malicious instructions in its input), where a malicious file could trick the AI into exfiltrating other uploaded files to an attacker's server via image tags. The vulnerability appeared in a recent update but was fixed within 12 days of being reported to Google on February 17, 2024.

Fix: Google fixed the issue; it no longer reproduced by the time Google responded to the report 12 days later (approximately February 29, 2024). The ticket was closed as 'Duplicate' on March 3, 2024, suggesting the vulnerability may also have been caught through internal testing.

Embrace The Red
07

The dangers of AI agents unfurling hyperlinks and what to do about it

securitysafety
Apr 3, 2024

Unfurling is when an application automatically expands hyperlinks to show previews, which can be exploited in AI chatbots to leak data. When an attacker uses prompt injection (tricking an AI by hiding instructions in its input) to make the chatbot generate a link containing sensitive information from earlier conversations, the unfurling feature automatically sends that data to a third-party server, potentially exposing private information.

Fix: To disable unfurling in Slack Apps, include the unfurl settings in the JSON object when creating the message, setting "unfurl_links": False and "unfurl_media": False, as shown in the example code:

    def create_message(text):
        message = {
            "text": text,
            "unfurl_links": False,
            "unfurl_media": False
        }
        return json.dumps(message)

Embrace The Red
08

CVE-2024-3078: A vulnerability was found in Qdrant up to 1.6.1/1.7.4/1.8.2 and classified as critical. This issue affects some unknown

security
Mar 29, 2024

A critical vulnerability was discovered in Qdrant (a vector database system) versions up to 1.6.1, 1.7.4, and 1.8.2 that allows path traversal (a technique where attackers access files outside intended directories) through the Full Snapshot REST API (a web interface for creating system backups). This flaw could let attackers manipulate file paths to access unauthorized files on the system.

Fix: Upgrade to Qdrant version 1.8.3 or later. The specific patch is identified as 3ab5172e9c8f14fa1f7b24e7147eac74e2412b62.

NVD/CVE Database
09

CVE-2024-1729: A timing attack vulnerability exists in the gradio-app/gradio repository, specifically within the login function in rout

security
Mar 29, 2024

CVE-2024-1729 is a timing attack vulnerability (where an attacker guesses a password by measuring how long the system takes to reject it) in the Gradio application's login function. The vulnerability exists because the code directly compares the entered password with the stored password using a simple equality check, which can leak information through response time differences, potentially allowing attackers to bypass authentication and gain unauthorized access.

Fix: A patch is available at https://github.com/gradio-app/gradio/commit/e329f1fd38935213fe0e73962e8cbd5d3af6e87b. Additionally, a bounty reference with more details is provided at https://huntr.com/bounties/f6a10a8d-f538-4cb7-9bb2-85d9f5708124.

NVD/CVE Database
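The standard fix for this class of bug is a constant-time comparison. A sketch using Python's `hmac.compare_digest` (the stored secret here is illustrative; Gradio's actual change is in the patch commit above):

```python
import hmac

STORED_PASSWORD = "correct horse battery staple"  # illustrative secret

def check_password(attempt: str) -> bool:
    # A plain '==' short-circuits at the first differing byte, so rejection
    # time leaks how many leading characters matched. compare_digest takes
    # the same time regardless of where the inputs differ.
    return hmac.compare_digest(attempt.encode(), STORED_PASSWORD.encode())
```

In practice the stored value should be a password hash rather than plaintext, with the comparison done on the hashes.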
10

CVE-2024-29100: Unrestricted Upload of File with Dangerous Type vulnerability in Jordy Meow AI Engine: ChatGPT Chatbot.This issue affect

security
Mar 28, 2024

CVE-2024-29100 is an unrestricted file upload vulnerability (a security flaw that allows attackers to upload harmful files without proper checks) in the Jordy Meow AI Engine: ChatGPT Chatbot plugin for WordPress, affecting versions up to 2.1.4. This could allow attackers to upload dangerous files to a website using the plugin.

NVD/CVE Database
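Unrestricted upload flaws like this are usually addressed by allowlisting a small set of safe extensions (and validating content types server-side). A minimal sketch with an illustrative allowlist, not the plugin's actual fix:

```python
import os

ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf", ".txt"}  # illustrative allowlist

def upload_is_allowed(filename: str) -> bool:
    """Allowlist rather than blocklist: variants like '.phtml' and '.php5'
    make blocklists easy to bypass on typical WordPress hosts."""
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED_EXTENSIONS
```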