aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3313 items

CVE-2023-46302: Apache Software Foundation Apache Submarine has a bug when serializing against yaml. The bug is caused by snakeyaml.

critical · vulnerability
security
Nov 20, 2023
CVE-2023-46302

Apache Submarine has a security vulnerability in how it handles YAML (a data format language) requests because it uses an unsafe library called snakeyaml. When users send YAML data to the application through its REST API (a system for receiving web requests), the unsafe handling could allow attackers to execute malicious code.

Fix: Users should upgrade to Apache Submarine version 0.8.0, which fixes this issue by replacing snakeyaml with jackson-dataformat-yaml. If upgrading is not possible, users can cherry-pick (apply a specific code fix from) PR https://github.com/apache/submarine/pull/1054 and rebuild the submarine-server image.

NVD/CVE Database
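The flaw here is a general one: deserializing untrusted input with a loader that can construct arbitrary objects. snakeyaml is a Java library, so purely as an illustration of the same bug class, here is the analogous unsafe-deserialization pitfall in Python's standard-library pickle module (not Submarine's actual code):

```python
import pickle

class Payload:
    """A malicious object: unpickling it runs attacker-chosen code."""
    def __reduce__(self):
        # pickle calls eval("7 * 6") during loads(); a real attacker would
        # substitute os.system or any other callable.
        return (eval, ("7 * 6",))

blob = pickle.dumps(Payload())   # the bytes an attacker would send
result = pickle.loads(blob)      # deserializing executes the payload
print(result)                    # 42
```

Safe alternatives constrain what deserialization may construct — yaml.safe_load in PyYAML, SafeConstructor in snakeyaml, or, as in the Submarine fix, a different library such as jackson-dataformat-yaml.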

CVE-2023-6020: LFI in Ray's /static/ directory allows attackers to read any file on the server without authentication.

high · vulnerability
security
Nov 16, 2023
CVE-2023-6020 · EPSS: 81.4%

CVE-2023-6020 is a local file inclusion (LFI, a vulnerability that lets attackers read files they shouldn't access) in Ray's /static/ directory that allows attackers to read any file on the server without needing to log in. The vulnerability stems from missing authorization checks (the system doesn't verify whether a user should have access before serving files).
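LFI bugs of this shape usually come from joining a user-supplied path onto a base directory without checking that the result stays inside it. A minimal sketch of the missing containment check (illustrative only, with a hypothetical STATIC_ROOT — not Ray's actual code; Path.is_relative_to requires Python 3.9+):

```python
from pathlib import Path

STATIC_ROOT = Path("/srv/app/static")   # hypothetical static directory

def safe_resolve(user_path: str) -> Path:
    """Resolve user_path under STATIC_ROOT and reject traversal."""
    candidate = (STATIC_ROOT / user_path).resolve()
    if not candidate.is_relative_to(STATIC_ROOT.resolve()):
        raise PermissionError(f"path escapes static root: {user_path}")
    return candidate

# safe_resolve("css/site.css") is served;
# safe_resolve("../../etc/passwd") raises PermissionError.
```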

CVE-2023-6014: An attacker is able to arbitrarily create an account in MLflow, bypassing any authentication requirement.

critical · vulnerability
security
Nov 16, 2023
CVE-2023-6014

CVE-2023-6014 is a vulnerability in MLflow (a machine learning experiment tracking platform) that allows attackers to create user accounts without proper authentication (the process of verifying someone's identity). The vulnerability has a CVSS score (a 0-10 rating of how severe a vulnerability is) of 4.0, indicating moderate severity.

CVE-2023-6021: LFI in Ray's log API endpoint allows attackers to read any file on the server without authentication. The issue is fixed in version 2.8.1.

high · vulnerability
security
Nov 16, 2023
CVE-2023-6021 · EPSS: 87.3%

CVE-2023-6018: An attacker can overwrite any file on the server hosting MLflow without any authentication.

critical · vulnerability
security
Nov 16, 2023
CVE-2023-6018 · EPSS: 91.3%

CVE-2023-6018 is a vulnerability in MLflow (an open-source machine learning platform) that allows an attacker to overwrite any file on the server without needing to log in or authenticate. The vulnerability is caused by OS command injection (a flaw where special characters in user input are not properly filtered before being executed as system commands), which gives attackers the ability to run unauthorized commands on the server.

CVE-2023-6015: MLflow allowed arbitrary files to be PUT onto the server.

high · vulnerability
security
Nov 16, 2023
CVE-2023-6015

CVE-2023-6015 is a vulnerability in MLflow that allows attackers to upload arbitrary files to the server using PUT requests. This is a path traversal vulnerability (CWE-22, where an attacker can write files outside the intended directory by manipulating file paths), with a CVSS severity score of 4.0 (a moderate-level security issue on a 0-10 scale).

CVE-2023-5245: FileUtil.extract() enumerates all zip file entries and extracts each file without validating whether file paths in the archive stay within the intended directory.

high · vulnerability
security
Nov 15, 2023
CVE-2023-5245

CVE-2023-5245 is a vulnerability in FileUtil.extract() where zip file extraction does not check if file paths are outside the intended directory, allowing attackers to create files anywhere and potentially execute code when TensorflowModel processes a saved model. This is called path traversal (a technique where an attacker uses file paths like '../../../' to escape a restricted folder).

Hacking Google Bard - From Prompt Injection to Data Exfiltration

medium · news
security · safety

CVE-2023-46315: The zanllp sd-webui-infinite-image-browsing (aka Infinite Image Browsing) extension before commit 977815a for stable-diffusion-webui allows attackers to read arbitrary files.

high · vulnerability
security
Oct 22, 2023
CVE-2023-46315

The Infinite Image Browsing extension for Stable Diffusion web UI (a tool for generating images with AI) has a security flaw that allows attackers to read any file on a computer if Gradio authentication is enabled without a secret key configuration. Attackers can exploit this by manipulating URLs with /file?path= to access sensitive files, such as environment variables that might contain login credentials.

CVE-2023-32786: In Langchain through 0.0.155, prompt injection allows an attacker to force the service to retrieve data from an arbitrary URL.

high · vulnerability
security
Oct 20, 2023
CVE-2023-32786

CVE-2023-32786 is a prompt injection vulnerability (tricking an AI by hiding instructions in its input) in Langchain version 0.0.155 and earlier that allows attackers to force the service to retrieve data from any URL they choose. This could lead to SSRF (server-side request forgery, where an attacker makes a server request data from unintended locations) and potentially inject harmful content into tasks that use the retrieved data.
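A common mitigation for this class of issue is to validate any model- or user-influenced URL against an explicit allowlist before fetching it. A minimal sketch (the host names here are hypothetical):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"docs.example.com"}   # hypothetical allowlist

def is_fetch_allowed(url: str) -> bool:
    """Permit only http(s) URLs whose host is explicitly allowlisted."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS
```

This blocks both non-HTTP schemes (file://, gopher://) and requests to unexpected hosts such as cloud metadata endpoints.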

Google Cloud Vertex AI - Data Exfiltration Vulnerability Fixed in Generative AI Studio

medium · news
security
Oct 19, 2023

Google Cloud's Vertex AI Generative AI Studio had a data exfiltration vulnerability caused by image markdown injection (a technique where attackers embed hidden commands in image references to steal data). The vulnerability was responsibly disclosed to Google and has been fixed.

CVE-2023-46229: LangChain before 0.0.317 allows SSRF via document_loaders/recursive_url_loader.py because crawling can proceed from an external server to an internal server.

high · vulnerability
security
Oct 19, 2023
CVE-2023-46229

LangChain versions before 0.0.317 have a vulnerability called SSRF (server-side request forgery, where an attacker tricks the application into making requests to unintended servers) in its recursive URL loader component. The flaw allows web crawling to move from an external server to an internal server that should not be accessible.
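Alongside host allowlists, a crawler can refuse to fetch address literals in internal ranges. A sketch using Python's standard ipaddress module (illustrative only; real code must also resolve DNS names and re-check the resulting addresses, which is omitted here):

```python
import ipaddress

def is_internal_ip(host: str) -> bool:
    """True if host is an IP literal in a private, loopback, or link-local range."""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # a DNS name: real code must resolve it and re-check
    return addr.is_private or addr.is_loopback or addr.is_link_local
```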

CVE-2023-45063: Cross-Site Request Forgery (CSRF) vulnerability in the ReCorp AI Content Writing Assistant plugin for WordPress.

medium · vulnerability
security
Oct 12, 2023
CVE-2023-45063

A CSRF vulnerability (cross-site request forgery, where an attacker tricks a user into performing unwanted actions on a website they're logged into) was found in the ReCorp AI Content Writing Assistant plugin for WordPress in versions 1.1.5 and earlier. This flaw could allow attackers to exploit users of the plugin without their knowledge.
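The standard CSRF defense is a per-session token that a forged cross-site request cannot supply. A generic sketch of token generation and constant-time verification (illustrative only, not the plugin's code):

```python
import hashlib
import hmac

def make_csrf_token(secret: bytes, session_id: str) -> str:
    """Derive a per-session CSRF token from a server-side secret."""
    return hmac.new(secret, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(secret: bytes, session_id: str, token: str) -> bool:
    # compare_digest avoids leaking match progress via timing differences
    return hmac.compare_digest(make_csrf_token(secret, session_id), token)
```

The server embeds the token in its own forms and rejects state-changing requests that arrive without a token matching the caller's session.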

CVE-2023-44467: langchain_experimental (aka LangChain Experimental) in LangChain before 0.0.306 allows an attacker to bypass an earlier security fix and execute arbitrary code.

critical · vulnerability
security
Oct 9, 2023
CVE-2023-44467

CVE-2023-44467 is a vulnerability in LangChain Experimental (a library for building AI applications) before version 0.0.306 that allows attackers to bypass a previous security fix and run arbitrary code (unauthorized commands) on a system using the __import__ function in Python, which the pal_chain/base.py file failed to block.
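The patched code rejects dangerous builtins such as __import__ by name. A toy AST-based version of that kind of check (illustrative; the names and scope here are assumptions, not LangChain's implementation):

```python
import ast

BANNED_NAMES = {"__import__", "exec", "eval"}   # illustrative, incomplete denylist

def uses_banned_name(source: str) -> bool:
    """Reject generated code that references a denylisted builtin by name."""
    return any(
        isinstance(node, ast.Name) and node.id in BANNED_NAMES
        for node in ast.walk(ast.parse(source))
    )
```

The caveat is built into this CVE itself: name denylists are brittle (attribute tricks like getattr can evade them), and CVE-2023-44467 was a bypass of just such an earlier denylist.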

Microsoft Fixes Data Exfiltration Vulnerability in Azure AI Playground

medium · news
security
Sep 29, 2023

LLM applications like chatbots are vulnerable to data exfiltration (unauthorized data theft) through image markdown injection, a technique where attackers embed hidden instructions in untrusted data to make the AI generate image tags that leak information. Microsoft patched this vulnerability in Azure AI Playground, though the source does not describe the specific technical details of their fix.

CVE-2023-43654: TorchServe is a tool for serving and scaling PyTorch models in production. TorchServe's default configuration lacks proper input validation.

critical · vulnerability
security
Sep 28, 2023
CVE-2023-43654 · EPSS: 91.6%

TorchServe (a tool for running PyTorch machine learning models as web services) has a vulnerability in its default configuration that fails to validate user inputs properly, allowing attackers to download files from any URL and save them to the server's disk. This could let attackers damage the system or steal sensitive information, affecting versions 0.1.0 through 0.8.1.

Advanced Data Exfiltration Techniques with ChatGPT

medium · news
security
Sep 28, 2023

An indirect prompt injection attack (tricking an AI into following hidden instructions in its input) can allow an attacker to steal chat data from ChatGPT users by either having the AI embed information into image URLs (image markdown injection, which embeds data into web links displayed as images) or convincing users to click malicious links. ChatGPT Plugins, which are add-ons that extend ChatGPT's functionality, create additional exfiltration risks because they have minimal security review before being deployed.
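The exfiltration channel works because clients auto-fetch whatever image URLs the model emits, so an attacker has the model append stolen text to a URL query string. One mitigation applications apply is stripping (or proxying) image markdown that points at untrusted origins. A hedged regex sketch (illustrative; trusted_prefix and the URLs are assumptions):

```python
import re

IMAGE_MD = re.compile(r"!\[[^\]]*\]\((https?://[^)\s]+)\)")

def strip_untrusted_images(markdown: str, trusted_prefix: str) -> str:
    """Replace image markdown whose URL is not under a trusted prefix."""
    def repl(match: re.Match) -> str:
        url = match.group(1)
        return match.group(0) if url.startswith(trusted_prefix) else "[image removed]"
    return IMAGE_MD.sub(repl, markdown)
```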

HITCON CMT 2023 - LLM Security Presentation and Trip Report

info · news
security · research

LLM Apps: Don't Get Stuck in an Infinite Loop! 💵💰

medium · news
security · safety

CVE-2023-41626: Gradio v3.27.0 was discovered to contain an arbitrary file upload vulnerability via the /upload interface.

medium · vulnerability
security
Sep 15, 2023
CVE-2023-41626

Gradio version 3.27.0 has a security flaw that allows attackers to upload any type of file through the /upload interface without proper restrictions (CWE-434, unrestricted file upload with dangerous type). This means someone could potentially upload malicious files to a system running this vulnerable version.
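The usual mitigation for unrestricted upload (CWE-434) is an extension allowlist plus filename sanitation. A minimal illustration (not Gradio's actual fix; the allowed extensions are assumptions):

```python
from pathlib import Path

ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".txt"}   # illustrative allowlist

def is_upload_allowed(filename: str) -> bool:
    """Allowlist upload extensions and reject path separators in names."""
    if "/" in filename or "\\" in filename or filename.startswith("."):
        return False
    return Path(filename).suffix.lower() in ALLOWED_EXTENSIONS
```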

CVE-2023-6021 is a local file inclusion (LFI, a vulnerability where an attacker can read files from a server by manipulating file paths) in Ray's log API endpoint that allows attackers to read any file on the server without needing authentication. The vulnerability affects Ray versions before 2.8.1.

Fix: The issue is fixed in version 2.8.1; users should upgrade to Ray 2.8.1 or later.

Nov 3, 2023

Google Bard's new Extensions feature allows it to access personal data like YouTube videos, Google Drive files, Gmail, and Google Docs. Because Bard analyzes this untrusted data, it is vulnerable to indirect prompt injection (a technique where hidden instructions in documents trick an AI into performing unintended actions), which a researcher demonstrated by getting Bard to summarize videos and documents.

Embrace The Red

Fix (CVE-2023-46315): Update to commit 977815a or later. The patch is available at https://github.com/zanllp/sd-webui-infinite-image-browsing/pull/368/commits/977815a2b28ad953c10ef0114c365f698c4b8f19

Fix (CVE-2023-46229): Update LangChain to version 0.0.317 or later. Patches are available at https://github.com/langchain-ai/langchain/commit/9ecb7240a480720ec9d739b3877a52f76098a2b8 and https://github.com/langchain-ai/langchain/pull/11925.

Fix (CVE-2023-44467): Upgrade LangChain to version 0.0.306 or later. A patch is available at https://github.com/langchain-ai/langchain/commit/4c97a10bd0d9385cfee234a63b5bd826a295e483.

Fix (CVE-2023-43654): Upgrade to TorchServe release 0.8.2 or later, which includes a warning when the default value for allowed_urls is used. Users should also configure the allowed_urls setting and specify which model URLs are permitted.

Sep 18, 2023

This article is a trip report from HITCON CMT 2023, a security conference in Taiwan, where the author attended talks on various topics including LLM security, reverse engineering with AI, and application exploits. Key presentations covered indirect prompt injections (attacks where malicious instructions are hidden in data fed to an AI system), Electron app vulnerabilities, and PHP security issues. The author gave a talk on indirect prompt injections and notes this technique could become a significant attack vector for AI-integrated applications like chatbots.

Embrace The Red

Sep 16, 2023

An attacker can use indirect prompt injection (tricking an AI by hiding malicious instructions in data it reads) to make an LLM call its own tools or plugins repeatedly in a loop, potentially increasing costs or disrupting service. While ChatGPT users are mostly protected by subscription pricing, call limits, and a manual stop button, this technique demonstrates a real vulnerability in how LLM applications handle recursive tool calls.

Embrace The Red