aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

Vulnerabilities · News · Research · Digest Archive · Newsletter Archive · Subscribe · Data Sources · Statistics · Dataset · API · Integrations · Widget · RSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

4488 items

CVE-2026-41269: Flowise is a drag & drop user interface to build a customized large language model flow. Prior to 3.1.0, the Chatflow co…

high · vulnerability
security
Apr 23, 2026
CVE-2026-41269

Flowise, a tool with a drag-and-drop interface for building customized AI workflows, had a vulnerability before version 3.1.0 where attackers could upload malicious JavaScript files by changing file type settings, even though the user interface normally blocks such uploads. These uploaded files could act as web shells (programs that give attackers control over the server), potentially allowing remote code execution (RCE, where an attacker runs commands on a system they don't own).

Fix: Update Flowise to version 3.1.0 or later, where this vulnerability is fixed.

NVD/CVE Database
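The flaw pattern here, trusting a client-controlled file-type setting, is typically mitigated by validating uploads server-side against an allow-list. A minimal sketch under that assumption (names are illustrative, not Flowise's actual code):

```python
import os

# Illustrative allow-list; a real deployment tunes this per feature.
ALLOWED_EXTENSIONS = {".txt", ".csv", ".pdf", ".json"}

def is_upload_allowed(filename: str, declared_type: str) -> bool:
    """Decide server-side whether an upload may be stored.

    The client-declared type is treated as untrusted metadata and ignored;
    only the extension checked against a server-side allow-list matters,
    so relabelling a .js payload cannot slip it through.
    """
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED_EXTENSIONS
```

The point of the sketch is deny-by-default: anything not explicitly allowed, including JavaScript, is rejected regardless of what the client claims.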

CVE-2026-41268: Flowise is a drag & drop user interface to build a customized large language model flow. Prior to 3.1.0, Flowise is vuln…

critical · vulnerability
security
Apr 23, 2026
CVE-2026-41268

Flowise, a tool that lets users visually design custom AI workflows, has a critical vulnerability in versions before 3.1.0 that allows attackers to run any system commands they want without logging in. An attacker can exploit this by using a special keyword (FILE-STORAGE::) and injecting code into an environment variable (NODE_OPTIONS) through a single web request, gaining full control of the Flowise system.

CVE-2026-41267: Flowise is a drag & drop user interface to build a customized large language model flow. Prior to 3.1.0, an improper mas…

high · vulnerability
security
Apr 23, 2026
CVE-2026-41267

Flowise, a tool for building customized AI workflows through a drag-and-drop interface, had a security flaw in versions before 3.1.0 where attackers could inject malicious data during account registration. This JSON injection vulnerability (smuggling attacker-controlled fields into structured data) allowed unauthenticated users to manipulate important metadata like ownership and user roles, potentially breaking security boundaries in multi-tenant deployments that host several separate organizations.

CVE-2026-41266: Flowise is a drag & drop user interface to build a customized large language model flow. Prior to 3.1.0, /api/v1/public-…

high · vulnerability
security
Apr 23, 2026
CVE-2026-41266

Flowise, a tool for building customized LLM (large language model) flows through a visual drag-and-drop interface, has a vulnerability in versions before 3.1.0 where an API endpoint exposes sensitive data like API keys and authorization headers without requiring authentication. An attacker who knows only a chatflow UUID (a unique identifier) can steal credentials and other sensitive information from the system.
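The underlying pattern, a public endpoint serving a full internal configuration object, is usually addressed by returning a redacted view instead. A hypothetical sketch (the key names are assumptions, not Flowise's schema):

```python
# Key names that suggest a credential; illustrative, not exhaustive.
SENSITIVE_KEYS = {"apikey", "api_key", "authorization", "password", "token"}

def redact(config: dict) -> dict:
    """Recursively mask values stored under credential-like keys,
    so a public endpoint can serve config without leaking secrets."""
    out = {}
    for key, value in config.items():
        if key.lower() in SENSITIVE_KEYS:
            out[key] = "***"
        elif isinstance(value, dict):
            out[key] = redact(value)
        else:
            out[key] = value
    return out
```

Redaction complements, but does not replace, putting authentication in front of the endpoint.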

CVE-2026-41265: Flowise is a drag & drop user interface to build a customized large language model flow. Prior to 3.1.0, the specific fl…

critical · vulnerability
security
Apr 23, 2026
CVE-2026-41265

Flowise is a tool with a visual interface for building customized AI workflows. Before version 3.1.0, the Airtable_Agents component had a security flaw where it ran Python code generated by an AI without proper sandboxing (isolation to prevent unauthorized access). An attacker could use prompt injection (tricking the AI by hiding instructions in user input) to make the AI generate malicious code that runs on the Flowise server.

CVE-2026-41138: Flowise is a drag & drop user interface to build a customized large language model flow. Prior to 3.1.0, there is a remo…

high · vulnerability
security
Apr 23, 2026
CVE-2026-41138

Flowise is a tool with a drag-and-drop interface for building customized large language model flows. Before version 3.1.0, it had a remote code execution vulnerability (RCE, where an attacker can run commands on a system they don't own) in AirtableAgent.ts because user input was directly inserted into Python code without sanitization (cleaning to remove harmful content), allowing attackers to inject malicious code through the question parameter.

CVE-2026-41137: Flowise is a drag & drop user interface to build a customized large language model flow. Prior to 3.1.0, The CSVAgent al…

critical · vulnerability
security
Apr 23, 2026
CVE-2026-41137

Flowise is a drag-and-drop interface for building customized large language model workflows. Versions before 3.1.0 have a code injection vulnerability (where attackers can execute arbitrary commands) in the CSVAgent feature: it fails to properly filter user-provided Pandas CSV-reading code, allowing attackers to run malicious commands on the server.
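The root issue described above is executing code built from user input. The usual remedy is to never run user-derived strings at all and instead map requests onto a fixed set of vetted operations; a toy sketch of that pattern (illustrative, not the actual Flowise patch):

```python
# Map user-selectable operation names to safe, pre-written functions
# instead of exec()-ing strings built from user input.
SAFE_OPERATIONS = {
    "row_count": lambda rows: len(rows),
    "column_names": lambda rows: sorted(rows[0]) if rows else [],
}

def run_csv_operation(rows: list[dict], op_name: str):
    """Execute only vetted operations; unknown names are rejected outright."""
    if op_name not in SAFE_OPERATIONS:
        raise ValueError(f"unsupported operation: {op_name!r}")
    return SAFE_OPERATIONS[op_name](rows)
```

Because the user selects only a name, never code, there is nothing to inject into.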

A pelican for GPT-5.5 via the semi-official Codex backdoor API

info · news
security
Apr 23, 2026

GPT-5.5 is a new AI model from OpenAI that is now available through Codex (a code-focused AI tool) and ChatGPT subscriptions, though the standard API is not yet available. The author created a tool called llm-openai-via-codex that lets users access GPT-5.5 through their existing Codex subscription by reverse-engineering how authentication tokens work, rather than waiting for the official API release.

llm-openai-via-codex 0.1a0

info · news
industry
Apr 23, 2026

This is a brief announcement about llm-openai-via-codex version 0.1a0, a tool that connects OpenAI's services with the llm command-line interface. The post appears to be from Simon Willison's monthly briefing on LLM developments from April 2026.

Anthropic’s Mythos breach was humiliating

high · news
security · safety

OpenAI announces GPT-5.5, its latest artificial intelligence model

info · news
industry
Apr 23, 2026

OpenAI released GPT-5.5, a new AI model that performs better at coding, using computers, and research with less guidance from users. The model meets OpenAI's "High" cybersecurity risk classification, meaning it could amplify existing pathways to harm, though it does not reach the "Critical" threshold. The company conducted third-party testing and red teaming (adversarial testing where security experts try to break the system) and iterated on cyber safeguards for months before release.

Enabling trust and learner agency in lifelong learning: A dual-chain, privacy-preserving credential architecture

info · research · Peer-Reviewed
security

OpenAI says its new GPT-5.5 model is more efficient and better at coding

info · news
industry
Apr 23, 2026

OpenAI released GPT-5.5, a new AI model designed to be more efficient and better at coding tasks than its predecessor GPT-5.4. The model can handle complex, multi-step tasks by planning its own approach, using available tools, and checking its own work without requiring users to carefully direct every action.

The Guardian view on Anthropic’s Claude Mythos: when AI finds every flaw, who controls the internet? | Editorial

info · news
security · safety

GHSA-c57f-mm3j-27q9: Astro: Cache Poisoning due to incorrect error handling when if-match header is malformed

medium · vulnerability
security
Apr 23, 2026
CVE-2026-41322

Astro 5.14.1 and Node 9.4.4 have a cache poisoning vulnerability: sending a malformed `if-match` header (a request validation header) to static JavaScript or CSS files causes the server to return a 500 error with a one-year cache lifetime instead of the correct 412 error with no cache headers. All subsequent requests for that file then receive the cached error response, breaking the application until the cache expires.
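The bug class here is attaching a long-lived Cache-Control header to an error response. The standard defence is to derive cache headers from the final status code, so precondition failures like 412 are never stored; a minimal sketch of that rule (illustrative, not Astro's patch):

```python
def cache_headers(status: int, max_age: int = 31536000) -> dict[str, str]:
    """Only successful responses for immutable assets may be cached.

    Error responses (4xx/5xx), including the 412 that a failed
    if-match precondition should produce, must not be stored, or a
    single malformed request poisons the cache for every later client.
    """
    if 200 <= status < 300:
        return {"Cache-Control": f"public, max-age={max_age}, immutable"}
    return {"Cache-Control": "no-store"}
```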

GHSA-pfm2-2mhg-8wpx: n8n-MCP Logs Sensitive Request Data on Unauthorized /mcp Requests

medium · vulnerability
security
Apr 23, 2026
CVE-2026-41495

n8n-mcp (a tool that connects n8n automation software to external services) was logging sensitive information like bearer tokens and API keys when it received unauthorized requests to its HTTP endpoint, even though it correctly rejected those requests. This happened because the logs captured request metadata before checking authentication, which could expose secrets if logs were shared or stored outside secure boundaries.
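Logging request metadata before the auth check is only safe if credential-bearing headers are masked first. A hedged sketch of that redaction step (names illustrative, not n8n-mcp's code):

```python
# Header names whose values are secrets and must never reach the logs.
REDACTED_HEADERS = {"authorization", "x-api-key", "cookie", "proxy-authorization"}

def loggable_headers(headers: dict[str, str]) -> dict[str, str]:
    """Copy request headers with credential-bearing values masked,
    so even a rejected request can be logged without leaking secrets."""
    return {
        name: ("[REDACTED]" if name.lower() in REDACTED_HEADERS else value)
        for name, value in headers.items()
    }
```

Running redaction before any log call, rather than after authentication, closes exactly the window this advisory describes.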

Bad Memories Still Haunt AI Agents

medium · news
security
Apr 23, 2026

Cisco discovered a serious vulnerability in how Anthropic (an AI company) stores and manages memories, which are pieces of information that AI systems keep between conversations. While Anthropic fixed this particular issue, security experts warn that poorly managed memory files remain a widespread risk to AI systems.

THE PEOPLE DO NOT YEARN FOR AUTOMATION

info · news
policy · industry

You’re about to feel the AI money squeeze

info · news
industry
Apr 23, 2026

Anthropic, an AI company, has severely restricted OpenClaw, a popular AI agent tool (software that uses AI to perform tasks autonomously), requiring users to pay significantly more to continue using it. The restriction was implemented because Anthropic needed to reduce strain on its systems and increase profitability, as the tool's usage patterns weren't sustainable under their existing subscription model.

R-FLoRA: Residual-Statistic-Gated Low-Rank Adaptation for Single-Image Face Morphing Attack Detection

info · research · Peer-Reviewed
research

Fixes and sources

CVE-2026-41268
Fix: Upgrade Flowise to version 3.1.0 or later, where this vulnerability is fixed.
NVD/CVE Database

CVE-2026-41267
Fix: Update to Flowise version 3.1.0 or later, where the vulnerability is fixed.
NVD/CVE Database

CVE-2026-41266
Fix: Update to Flowise version 3.1.0, where this vulnerability is fixed.
NVD/CVE Database

CVE-2026-41265
Fix: Update to version 3.1.0 or later.
NVD/CVE Database

CVE-2026-41138
Fix: Update Flowise to version 3.1.0 or later, where this vulnerability is fixed.
NVD/CVE Database

CVE-2026-41137
Fix: Update to Flowise version 3.1.0 or later, where this vulnerability is fixed.
NVD/CVE Database

A pelican for GPT-5.5 via the semi-official Codex backdoor API
Simon Willison's Weblog

llm-openai-via-codex 0.1a0
Simon Willison's Weblog

Anthropic’s Mythos breach was humiliating
Apr 23, 2026
Anthropic's Claude Mythos model, which the company claimed was too dangerous to release publicly because of its advanced cybersecurity capabilities, had been accessible to unauthorized users since the day the company announced it would share the model with selected companies for testing. The breach undermines Anthropic's reputation as a company focused on AI safety.
The Verge (AI)

OpenAI announces GPT-5.5, its latest artificial intelligence model
CNBC Technology

Enabling trust and learner agency in lifelong learning: A dual-chain, privacy-preserving credential architecture
privacy
Apr 23, 2026
This academic paper proposes a dual-chain, privacy-preserving credential architecture designed to enable trust and learner agency in lifelong learning systems. The work focuses on secure credential management that protects learner privacy while maintaining verifiable educational records across multiple institutions and learning contexts.
Elsevier Security Journals

OpenAI says its new GPT-5.5 model is more efficient and better at coding
The Verge (AI)

The Guardian view on Anthropic’s Claude Mythos: when AI finds every flaw, who controls the internet? | Editorial
Apr 23, 2026
Anthropic created Claude Mythos, an AI model that can autonomously find and exploit zero-day vulnerabilities (previously unknown security flaws), write code to exploit them, and potentially take over major operating systems and web browsers; the company chose not to release it publicly because of these risks. To address the threat, Anthropic launched Project Glasswing, partnering with 40 organizations to help them "patch" (fix) vulnerabilities before attackers can exploit them, though all current partners are American companies.
Fix: Anthropic has named 40 organisations as partners under Project Glasswing to help mount a defence by asking them to "patch" vulnerabilities before hackers get a chance to exploit them.
The Guardian Technology

GHSA-c57f-mm3j-27q9: Astro cache poisoning
GitHub Advisory Database

GHSA-pfm2-2mhg-8wpx: n8n-MCP sensitive request logging
Fix: Upgrade to n8n-mcp v2.47.11 or later using 'npx n8n-mcp@latest' for npm or 'docker pull ghcr.io/czlonkowski/n8n-mcp:latest' for Docker. If immediate upgrade is not possible, restrict network access to the HTTP port using a firewall or reverse proxy, or switch to stdio transport mode by setting MCP_MODE=stdio.
GitHub Advisory Database

Bad Memories Still Haunt AI Agents
Fix: Anthropic fixed the vulnerability that Cisco found. The source does not provide additional details about the specific fix, version numbers, or other mitigation steps.
Dark Reading

THE PEOPLE DO NOT YEARN FOR AUTOMATION
Apr 23, 2026
This article discusses "software brain," a way of thinking that sees everything through algorithms and automation and has been amplified by AI development. Despite widespread enthusiasm from tech executives, polling shows that most Americans, particularly Gen Z, are increasingly skeptical of or angry about AI, with only 35 percent excited about it and over 80 percent concerned about potential harms.
The Verge (AI)

You’re about to feel the AI money squeeze
The Verge (AI)

R-FLoRA: Residual-Statistic-Gated Low-Rank Adaptation for Single-Image Face Morphing Attack Detection
security
Apr 23, 2026
Face morphing attacks (blending two faces together to fool facial recognition systems) threaten security systems used at borders and for digital identity checks, and detecting them from a single image is difficult because there is no trusted reference image to compare against. This paper presents R-FLoRA, a new detection method that combines high-frequency image analysis (looking at fine details) with a frozen, large-scale vision transformer (a type of AI model trained on images) to spot morphing artifacts while keeping the overall understanding of the face intact. The method outperforms nine other detection approaches on multiple test datasets and works efficiently in real-world biometric verification systems.
IEEE Xplore (Security & AI Journals)