All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
New York's legislature passed the RAISE Act (Responsible AI Safety and Education Act), which would regulate frontier AI systems (the largest, most powerful AI models) if signed into law. The act requires developers of expensive AI models to publish safety plans, withhold unreasonably risky models from release, report safety incidents within 72 hours, and face penalties up to $10 million for violations.
The European Commission is recruiting up to 60 independent experts for a scientific panel to advise on general-purpose AI (GPAI, large AI models designed for many tasks) under the EU AI Act. The panel will assess systemic risks (widespread dangers affecting multiple countries or many users), classify AI models, and issue alerts when AI systems pose significant dangers to Europe. Applicants need a PhD in a relevant field, proven AI research experience, and independence from AI companies, with the deadline set for September 14th.
Cursor, a code editor designed for AI-assisted programming, had a security flaw in versions before 0.51.0 where JSON files could automatically trigger web requests without user approval. An attacker could exploit this, especially after a prompt injection attack (tricking the AI with hidden instructions in its input), to make the AI agent send data to a malicious website.
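The defect class here, automation that fires outbound requests from file contents without a human click-through, can be sketched in a few lines. Everything below (the function name, the allowlist entries, the URLs) is illustrative, not Cursor's actual implementation:

```python
# Minimal sketch of gating automatic outbound requests behind explicit user
# approval, the behavior the Cursor 0.51.0 fix restores. All names are
# hypothetical and stand in for the editor's internal request path.
from urllib.parse import urlparse

APPROVED_HOSTS = {"api.example-schemas.com"}  # hypothetical trusted hosts

def fetch_schema(url: str, user_approved: bool = False) -> bool:
    """Return True only if the request is allowed to proceed."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_HOSTS:
        return True          # known-good host: no prompt needed
    return user_approved     # anything else needs an explicit approval

# A JSON file pointing at an attacker's server is refused by default:
assert fetch_schema("https://evil.example/exfil?d=secrets") is False
```

The key design point is that the default is deny: a prompt-injected agent can suggest a request, but cannot approve it.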
CVE-2025-32711 is a command injection vulnerability (a weakness where an attacker tricks a program into running unintended commands) in Microsoft 365 Copilot that allows an unauthorized attacker to disclose information over a network. The vulnerability has a CVSS severity score of 9.3 (a critical rating on a 0-10 scale where 10 is most severe). Microsoft has published information about this vulnerability, but the provided source does not contain specific technical details about the attack or its impact.
A deserialization of untrusted data vulnerability (CWE-502, a weakness where an application processes data from an untrusted source without checking it first) was found in The Fashion - Model Agency One Page Beauty Theme for WordPress, affecting versions up to 1.4.4. This vulnerability allows object injection (inserting malicious objects into the application), which could let attackers execute unintended code or actions.
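The affected theme is PHP, where this bug class enables PHP object injection; the same CWE-502 pattern can be shown in Python with `pickle`, whose deserializer will run attacker-chosen code. This is an analogous sketch, not the theme's code:

```python
# CWE-502 in miniature: pickle executes attacker-controlled logic during
# deserialization via __reduce__. The eval payload here is deliberately
# harmless; a real attacker would run arbitrary commands instead.
import pickle

class Exploit:
    def __reduce__(self):
        # On unpickling, this tells pickle to call eval("1 + 1").
        return (eval, ("1 + 1",))

payload = pickle.dumps(Exploit())
result = pickle.loads(payload)   # "deserializing untrusted data"
# The attacker's expression ran: result is 2, not an Exploit instance.
```

The fix in any language is the same: never feed untrusted bytes to a general-purpose deserializer; use a data-only format such as JSON with validation.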
FastGPT is an open-source platform for building AI workflows and chatbots that uses a sandbox (an isolated container designed to safely run untrusted code). Versions before 4.9.11 had weak isolation that allowed attackers to escape the sandbox by using overly permissive syscalls (system calls, which are requests programs make to the operating system), letting them read files, modify files, and bypass security restrictions. The vulnerability is fixed in version 4.9.11 by limiting which system calls are allowed to a safer set.
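Syscall allowlisting of the kind the fix describes can be demonstrated on Linux with seccomp "strict" mode, which permits only read, write, _exit, and sigreturn. This is a conceptual sketch of the mechanism, not FastGPT's actual (more permissive, curated) allowlist:

```python
# Sketch of syscall allowlisting on Linux via seccomp strict mode. A forked
# child confines itself, then attempts a forbidden syscall (open) and is
# killed by the kernel with SIGKILL. Linux-only.
import ctypes, os, signal

PR_SET_SECCOMP = 22       # from <linux/prctl.h>
SECCOMP_MODE_STRICT = 1   # allow only read/write/_exit/sigreturn

def run_confined() -> bool:
    """Fork a child, confine it, and report whether the kernel killed it."""
    pid = os.fork()
    if pid == 0:  # child
        libc = ctypes.CDLL(None, use_errno=True)
        libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0)
        os.open("/etc/hostname", os.O_RDONLY)  # disallowed syscall -> SIGKILL
        os._exit(0)                            # never reached if confinement works
    _, status = os.waitpid(pid, 0)
    return os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGKILL

killed = run_confined()
```

Real sandboxes use seccomp filter mode (BPF programs) to allow a curated subset rather than the strict four, which is what "restricting allowed system calls to a safer set" amounts to.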
The mcp-com-server is a tool that connects the Model Context Protocol (MCP, a standard for AI systems to interact with external tools) to COM (Component Object Model, Microsoft's decades-old system for sharing functionality across programs on Windows). This allows an AI like Claude to automate Windows and Office tasks, such as creating Excel files and sending emails, by dynamically discovering and controlling COM objects. The main security risk is that COM can access dangerous operations like file system access, so the server uses an allowlist (a list of approved COM objects that are permitted to run) to restrict which COM objects can be instantiated.
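The allowlist gate can be sketched as a factory that refuses to instantiate anything not explicitly approved. The ProgIDs and the `dispatch` helper below are illustrative; on Windows the real instantiation would go through the COM runtime (e.g. pywin32's `win32com.client.Dispatch`):

```python
# Sketch of an allowlist gate in front of COM object creation. Names are
# illustrative, not the mcp-com-server's actual configuration.
ALLOWED_PROGIDS = {"Excel.Application", "Outlook.Application"}  # example entries

def dispatch(prog_id: str) -> str:
    if prog_id not in ALLOWED_PROGIDS:
        raise PermissionError(f"COM class {prog_id!r} is not allow-listed")
    return f"<COM object {prog_id}>"   # stand-in for the real Dispatch call

obj = dispatch("Excel.Application")    # permitted: office automation
try:
    dispatch("WScript.Shell")          # would allow command execution: refused
    blocked = False
except PermissionError:
    blocked = True
```

Default-deny is what makes this safe: a prompt-injected model can request any ProgID, but only the approved ones are ever instantiated.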
The Hive Support plugin for WordPress has a security flaw in versions up to 1.2.4 where two functions lack capability checks (security checks that verify user permissions). This allows attackers with basic Subscriber-level accounts to read and change the site's OpenAI API key, inspect data, and modify how the AI chatbot behaves.
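The missing guard is WordPress's capability check, in PHP typically `current_user_can('manage_options')`. A Python sketch of the same idea (the `User` model and function names are illustrative):

```python
# Capability check sketched in Python; the actual plugin is PHP, where the
# equivalent guard is current_user_can('manage_options').
from dataclasses import dataclass, field

@dataclass
class User:
    capabilities: set = field(default_factory=set)

def update_openai_key(user: User, new_key: str) -> bool:
    """Only users with admin-level capability may change the stored API key."""
    if "manage_options" not in user.capabilities:
        return False          # Subscriber-level accounts are rejected
    # ... persist new_key here ...
    return True

subscriber = User(capabilities={"read"})              # what a Subscriber holds
admin = User(capabilities={"read", "manage_options"})
```

The Hive Support bug is simply the absence of this two-line check in two AJAX handlers.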
AstrBot, a chatbot and development framework powered by large language models (LLMs, AI systems trained on large amounts of text data), has a path traversal vulnerability (a flaw that lets attackers access files they shouldn't be able to reach) in versions 3.4.4 through 3.5.12 that could expose sensitive information like API keys (credentials used to access external services) and passwords. The vulnerability was fixed in version 3.5.13.
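The standard defense against path traversal is to resolve the requested path and verify it still lies inside the intended base directory. A generic sketch (the base directory is hypothetical, not AstrBot's actual layout):

```python
# Canonical path-traversal guard: resolve, then containment-check.
from pathlib import Path
from typing import Optional

BASE = Path("/srv/astrbot/static").resolve()  # hypothetical content root

def safe_resolve(requested: str) -> Optional[Path]:
    candidate = (BASE / requested).resolve()
    # is_relative_to (Python 3.9+) rejects ../.. escapes after resolution.
    return candidate if candidate.is_relative_to(BASE) else None

ok = safe_resolve("css/site.css")            # stays inside BASE
blocked = safe_resolve("../../etc/passwd")   # traversal attempt
```

Checking the string for `..` before resolution is not enough; resolving first defeats encodings and mixed separators.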
vLLM (a system for running and serving large language models) versions 0.8.0 through 0.9.0 have a vulnerability where the /v1/chat/completions API endpoint doesn't properly check user input in the 'pattern' and 'type' fields when the tools feature is used, allowing a single malformed request to crash the inference worker (the part that actually runs the model) until someone restarts it.
CVE-2025-48943 is a Denial of Service vulnerability (a type of attack that crashes a system) in vLLM versions 0.8.0 through 0.8.x that causes the server to crash when given an invalid regex (a pattern used to match text). This happens specifically when using the structured output feature, which lets the AI format responses in a specific way.
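The fix amounts to validating client-supplied patterns at the API boundary instead of letting `re.error` propagate and kill the worker. A sketch in that spirit (the function name is illustrative, not vLLM's API):

```python
# Validate a client-supplied regex before use: compilation failures become a
# structured error (an HTTP 400 in a real server), never a crash.
import re

def validate_guided_regex(pattern: str):
    try:
        re.compile(pattern)
        return True, "ok"
    except re.error as exc:
        return False, f"invalid regex: {exc}"

ok, _ = validate_guided_regex(r"[A-Z]{2}\d{4}")   # well-formed pattern
bad, msg = validate_guided_regex(r"[unclosed")    # malformed: rejected cleanly
```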
vLLM (an inference and serving engine for large language models) versions 0.8.0 through 0.8.x have a vulnerability where sending an invalid JSON schema as a parameter to the /v1/completions API endpoint causes the server to crash. This happens because the application doesn't properly handle (catch) exceptions that occur when processing malformed input.
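The same lesson applies to the schema parameter: parse inside a try/except at the request boundary so malformed input yields an error object rather than an unhandled exception. Plain `json` parsing stands in here for vLLM's real JSON Schema handling:

```python
# Catch parse failures on client input instead of letting them crash the server.
import json

def parse_schema_param(raw: str) -> dict:
    try:
        return {"schema": json.loads(raw), "error": None}
    except json.JSONDecodeError as exc:
        return {"schema": None, "error": f"malformed JSON schema: {exc}"}

good = parse_schema_param('{"type": "object"}')
bad = parse_schema_param('{"type": object}')   # unquoted value: invalid JSON
```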
vLLM, a software system that runs and serves large language models, has a vulnerability in how it parses tool commands that can be exploited to crash or slow down the service. The problem comes from using an overly complex pattern-matching rule (regular expression with nested quantifiers, optional groups, and inner repetitions) that can cause the system to get stuck processing certain inputs, leading to severe performance problems.
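The danger of nested quantifiers can be shown with the textbook ReDoS pattern `(a+)+$`: on a non-matching input, the backtracking engine tries exponentially many ways to split a run of characters between the inner and outer repetition. The safe rewrite collapses the nesting without changing what is matched (patterns here are the classic teaching example, not vLLM's actual regex):

```python
# Nested quantifier vs. its linear rewrite. Both accept exactly the same
# strings; only their worst-case failure cost differs (exponential vs linear).
import re

vulnerable = re.compile(r"(a+)+$")   # catastrophic backtracking on failure
safe = re.compile(r"a+$")            # equivalent language, linear behavior

# Agreement on small inputs (kept small on purpose; long non-matching input
# against the vulnerable pattern would take exponential time):
agree = all(
    bool(vulnerable.match(t)) == bool(safe.match(t))
    for t in ("aaaa", "aaab", "", "baaa")
)
```

Other mitigations when the pattern cannot be simplified: bound input length before matching, or use a linear-time engine such as RE2.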
Gradio is an open-source Python package for building machine learning demos and web applications. Before version 5.31.0, a vulnerability in its flagging feature let unauthenticated attackers copy any readable file from the server's filesystem, which could cause DoS (denial of service, where a system becomes unavailable) by copying massive files to fill up disk space, though attackers couldn't actually read the copied files.
CVE-2025-48491 is a vulnerability in Project AI, a platform for creating AI agents, where a hardcoded API key (a secret credential stored directly in the code rather than kept separate) was exposed in versions before the pre-beta release. This means attackers could potentially find and misuse this key to access the system without proper authorization.
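The standard remediation for hardcoded credentials is to read them from the environment (or a secrets manager) at startup and fail fast when absent. The variable name below is illustrative, not Project AI's actual configuration:

```python
# Load a secret from the environment instead of embedding it in source code.
import os

def load_api_key() -> str:
    key = os.environ.get("PROJECT_AI_API_KEY")   # hypothetical variable name
    if not key:
        raise RuntimeError("PROJECT_AI_API_KEY is not set; refusing to start")
    return key

# Demo only: a real deployment sets this outside the codebase.
os.environ["PROJECT_AI_API_KEY"] = "sk-demo-not-a-real-key"
key = load_api_key()
```

Once a key has shipped in code, rotation is mandatory: removing it from the repository does not revoke copies already cloned.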
This article compares traditional application security (AppSec) practices with AI security, noting that familiar principles like input validation and authentication apply to both, but AI systems introduce unique risks. New attack types specific to AI, such as prompt injection (tricking an AI by hiding instructions in its input), model poisoning (tampering with training data), and membership inference attacks (determining if specific data was in training), require security engineers to develop new defensive strategies beyond traditional code-level vulnerability management.
Fix (Cursor): The vulnerability is fixed in version 0.51.0; users should update to this version or later.
Fix (FastGPT): Update to version 4.9.11 or later. According to the source, this version patches the vulnerability by restricting the allowed system calls to a safer subset and adding more descriptive error messaging. (Source: NVD/CVE Database.)
Generative AI (AI systems that create new text, code, or images) is a double-edged sword in cybersecurity, helping both defenders and attackers. The case study of a fictional insurance company shows how GenAI can be used both to launch cyberattacks (malicious attempts to breach computer systems) and to defend against them, leaving IT leaders a difficult choice: adopt AI as a defensive tool, or risk falling behind attackers who already have it.
As AI development has grown rapidly, organizations struggle with how to actually put responsible AI practices into action beyond just making promises about it. This article describes how two organizations created a five-phase process to embed responsibility pledges (formal commitments to use AI ethically) into their daily practices using a systems approach (treating responsibility as interconnected parts of the whole organization rather than isolated efforts).
Fix (mcp-com-server): The source describes two mitigations: (1) an allowlist of CLSIDs and ProgIDs, where 'the MCP server will instantiate allow listed COM objects,' which 'could be expanded to include specific interfaces/methods as well'; and (2) confirmation dialogs, where 'Claude shows an Allow / Deny button before invoking custom tools by default' to 'make sure a human remains in the loop,' though this 'can be disabled, but also re-enabled in the Claude Settings per MCP tool.' (Source: Embrace The Red.)
Skyvern through version 0.1.85 has a vulnerability where attackers can inject malicious code into the Prompt field of workflow blocks through SSTI (server-side template injection, where untrusted input is evaluated as code by the server's template engine). Authenticated users can craft Jinja2 expressions (Jinja2 is a template system that evaluates code on the server) that are not properly sanitized, allowing them to execute commands on the server without seeing direct output, a capability known as blind RCE (remote code execution).
Fix (Skyvern): A fix is referenced in commit db856cd8433a204c8b45979c70a4da1e119d949d in the Skyvern GitHub repository, but the source does not describe what the fix does or give a specific patched version to upgrade to.
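The core SSTI mistake, letting untrusted text become the template rather than a value substituted into one, can be shown with Python's `str.format` standing in for Jinja2 (an analogous sketch, not Skyvern's code; the `Config` class is hypothetical):

```python
# SSTI in miniature: when the attacker controls the template, placeholders can
# walk object attributes the author never intended to expose.
class Config:
    SECRET = "s3cr3t-token"   # hypothetical server-side secret

cfg = Config()

# Attacker controls the *template* itself:
malicious_template = "Hello {c.SECRET}"
leaked = malicious_template.format(c=cfg)        # engine obeys the input

# Mitigation: keep the template fixed and treat user text as data only.
fixed_template = "Hello {name}"
safe = fixed_template.format(name="{c.SECRET}")  # braces stay inert as data
```

Jinja2's sandboxed environment exists for the cases where user-authored templates are unavoidable, but the default posture should be that users supply values, never templates.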
This content is a collection of blog post titles and announcements from Palo Alto Networks about AI security, covering topics like agentic AI (AI systems that can autonomously take actions), container security, and operational technology (OT, the systems that control physical infrastructure) security. The posts discuss vulnerabilities in autonomous AI systems, the need for contextual red teaming (security testing tailored to specific use cases), and various security products like Prisma AIRS.
Fix (AstrBot): Upgrade to version 3.5.13 or later. As a temporary workaround, users can edit the `cmd_config.json` file to disable the dashboard feature.
Fix (vLLM): Update to version 0.9.0 or later, which fixes the issue.
Fix (vLLM): Upgrade to version 0.9.0, which fixes the issue. A patch is available at https://github.com/vllm-project/vllm/commit/08bf7840780980c7568c573c70a6a8db94fd45ff.
Fix: Update to vLLM version 0.9.0 or later, which fixes the issue.
Fix (vLLM): Update to version 0.9.0 or later, which contains a patch for the issue.
Fix: Update to Gradio version 5.31.0 or later, where this issue has been patched.