aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,700
[LAST_24H]
25
[LAST_7D]
171
Daily BriefingTuesday, March 31, 2026
>

FastGPT Authentication Bypass Enables Server-Side Proxying: FastGPT versions before 4.14.9.5 have a critical vulnerability (CVE-2026-34162) where an HTTP testing endpoint lacks authentication and acts as an open proxy, letting unauthenticated attackers make requests on behalf of the FastGPT server. A separate high-severity SSRF vulnerability (CVE-2026-34163) in the same platform's MCP tools endpoints allows authenticated attackers to trick the server into scanning internal networks and accessing cloud metadata services.

>

Command Injection Flaws Hit MLflow and OpenAI Codex: MLflow's model serving feature has a high-severity command injection vulnerability (CVE-2026-0596) where attackers can insert shell commands through unsanitized model paths when `enable_mlserver=True`. Separately, researchers found a critical vulnerability in OpenAI Codex that could have allowed attackers to steal GitHub tokens (secret credentials for accessing repositories), which OpenAI has since patched.

Latest Intel

page 84/270
VIEW ALL
01

Hacker knackt 600 Firewalls in einem Monat – mit KI

security
Feb 24, 2026

Between January and February 2026, a Russian-speaking hacker compromised over 600 Fortigate firewalls (network security devices that filter traffic) by first targeting ones with weak passwords, then using an AI tool based on Google Gemini to access other devices on the same networks. Security researchers at AWS found that the attacker's reconnaissance tools (software used to gather information about a system) were written in Go and Python and showed signs of AI-generated code, suggesting threat actors are increasingly using AI to automate and scale their attacks.

Critical This Week5 issues
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162NVD/CVE DatabaseMar 31, 2026
Mar 31, 2026
>

Prompt Injection Bypasses Safety Controls in Multiple AI Tools: Multiple AI systems are vulnerable to prompt injection attacks (where attackers hide malicious instructions in input to trick the AI): the 1millionbot Millie chatbot (CVE-2026-4399) can be tricked using Boolean logic to bypass restrictions, Sixth's AI terminal tool (CVE-2026-30310) can be fooled into running dangerous commands without user approval, and CrewAI framework vulnerabilities allow attackers to chain exploits and escape sandboxes (restricted environments meant to contain AI actions).

>

Google Cloud Vertex AI Service Agents Had Excessive Default Permissions: Researchers found that AI agents running on Google Cloud's Vertex AI platform could be weaponized as "double agents" because the default service agent accounts (special accounts that run AI services) had excessive permissions, allowing attackers to steal credentials, access private code repositories, and reach internal infrastructure. Google responded by updating their documentation to better explain how Vertex AI uses resources and accounts.

Fix: According to AWS security experts, the best protection against such attacks is to use strong passwords and enable Multi-Factor Authentication (MFA, a security method requiring multiple verification steps to prove identity). The report notes that the attacker repeatedly failed when attempting to compromise patched or hardened systems (computers updated with security fixes and configured defensively), so he targeted easier victims instead.

CSO Online
02

So verändert KI Ihre GRC-Strategie

policysecurity
Feb 24, 2026

As companies adopt generative and agentic AI (AI systems that can take actions autonomously), they need to update their GRC (Governance, Risk & Compliance, the framework for managing rules, risks, and regulatory requirements) programs to account for AI-related risks. According to a 2025 security report, about 1 in 80 requests from company devices to AI services poses a high risk of exposing sensitive data, yet only 24% of companies have implemented comprehensive AI-GRC policies.

Fix: The source text recommends several explicit approaches: (1) Foster broad organizational acceptance of risk management across the company by promoting cooperation so all employees understand they must work together; (2) Develop both strategic and tactical approaches to define different types of AI tools, assess their relative risks, and weigh their potential benefits; (3) Use tactical measures including Secure-by-Design approaches (building security into AI tools from the start), initiatives to detect shadow AI (unauthorized AI use), and risk-based AI inventory and classification to focus resources on highest-impact risks without creating burdensome processes; (4) Make risks of specific AI measures transparent to business leadership rather than simply approving or rejecting AI use.

CSO Online
03

CVE-2026-27609: Parse Dashboard is a standalone dashboard for managing Parse Server apps. In versions 7.3.0-alpha.42 through 9.0.0-alpha

security
Feb 24, 2026

Parse Dashboard versions 7.3.0-alpha.42 through 9.0.0-alpha.7 have a CSRF vulnerability (cross-site request forgery, where an attacker tricks a logged-in user into unknowingly sending requests to a website). An attacker can create a malicious webpage that, when visited by someone authenticated to Parse Dashboard, forces their browser to send unwanted requests to the AI Agent API endpoint without their knowledge. This vulnerability is fixed in version 9.0.0-alpha.8 and later.

Fix: Update to version 9.0.0-alpha.8 or later, which adds CSRF middleware (code that checks requests are legitimate) to the agent endpoint and embeds a CSRF token (a secret code) in the dashboard page. Alternatively, remove the `agent` configuration block from your dashboard configuration file as a temporary workaround.

NVD/CVE Database
04

CVE-2026-27608: Parse Dashboard is a standalone dashboard for managing Parse Server apps. In versions 7.3.0-alpha.42 through 9.0.0-alpha

security
Feb 24, 2026

Parse Dashboard versions 7.3.0-alpha.42 through 9.0.0-alpha.7 have a security flaw in the AI Agent API endpoint (a feature for managing Parse Server apps) where authorization checks are missing, allowing authenticated users to access other apps' data and read-only users to perform write and delete operations they shouldn't be allowed to do. Only dashboards with the agent feature enabled are vulnerable to this issue.

Fix: Update to version 9.0.0-alpha.8 or later, which adds authorization checks and restricts read-only users to a limited key with write permissions removed server-side (the server prevents writes even if requested). As a temporary workaround, remove the `agent` configuration block from your dashboard configuration file.

NVD/CVE Database
05

CVE-2026-27595: Parse Dashboard is a standalone dashboard for managing Parse Server apps. In versions 7.3.0-alpha.42 through 9.0.0-alpha

security
Feb 24, 2026

Parse Dashboard versions 7.3.0-alpha.42 through 9.0.0-alpha.7 have security vulnerabilities in the AI Agent API endpoint that allow unauthenticated attackers to read and write data from any connected database using the master key (a special admin credential that grants full access). The agent feature must be enabled to be vulnerable, so dashboards without it are safe.

Fix: Upgrade to version 9.0.0-alpha.8 or later, which adds authentication, CSRF validation (protection against forged requests), and per-app authorization middleware to the agent endpoint. Alternatively, remove or comment out the agent configuration block from your Parse Dashboard configuration file as a temporary workaround.

NVD/CVE Database
06

India’s AI boom pushes firms to trade near-term revenue for users

industry
Feb 24, 2026

India has become the world's largest market for generative AI (artificial intelligence systems that can create text, images, and other content) app downloads in 2025, with installs jumping 207% year-over-year, but major AI companies like OpenAI and Google are now ending free promotional offers to convert users into paying subscribers. Despite India driving roughly 20% of global GenAI app downloads, it accounts for only about 1% of in-app purchases, and revenue has actually declined in recent months as companies rolled out cheaper or free options like ChatGPT Go. The challenge reflects a tension between rapid user growth and actual monetization (converting users into paying customers) in a price-sensitive market.

TechCrunch
07

Tech Companies Shouldn’t Be Bullied Into Doing Surveillance

policysafety
Feb 24, 2026

The U.S. Department of Defense is pressuring Anthropic, an AI company, to allow their technology to be used for surveillance and autonomous weapons systems (weapons that operate without human control) by threatening to label them a 'supply chain risk' that would prevent other defense contractors from using their AI. Anthropic has publicly stated these are 'bright red lines' they will not cross, and the article argues they should maintain this position rather than give in to government pressure.

EFF Deeplinks Blog
08

Spanish ‘soonicorn’ Multiverse Computing releases free compressed AI model

industry
Feb 24, 2026

Multiverse Computing, a Spanish startup, has released a free compressed AI model called HyperNova 60B 2602 that reduces the size of large language models (AI systems trained on massive amounts of text) to make them cheaper and faster to use. The company uses CompactifAI, a compression technology inspired by quantum computing (using principles from quantum mechanics to process information), to create models that are roughly half the size of the original while maintaining similar performance and accuracy. The model is now available for free on Hugging Face (a platform where developers share AI models) and includes improved support for tool calling and agentic coding (where AI systems can use external tools or plan sequences of actions).

TechCrunch
09

OpenAI defeats xAI’s trade secrets lawsuit

policy
Feb 24, 2026

OpenAI won a legal case against xAI, which had sued claiming that OpenAI stole its trade secrets (confidential information that gives a company a competitive advantage) and hired away its employees. The judge ruled that xAI failed to prove OpenAI actually did anything wrong, noting that while eight former xAI employees did move to OpenAI, there was no evidence that OpenAI directed them to steal anything.

The Verge (AI)
10

US threatens Anthropic with deadline in dispute on AI safeguards

policysafety
Feb 24, 2026

The US Pentagon is threatening to remove AI company Anthropic from its supply chain and invoke the Defense Production Act (a law allowing the government to compel companies to produce goods for national security) unless Anthropic allows unrestricted use of its Claude AI chatbot for military applications by Friday evening. Anthropic has refused to allow its technology for certain uses, including autonomous kinetic operations (AI making final targeting decisions without human input) and mass domestic surveillance, citing safety concerns.

BBC Technology
Prev1...8283848586...270Next
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026