All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Flowise, a tool with a drag-and-drop interface for building customized AI workflows, had a vulnerability before version 3.1.0 where attackers could upload malicious JavaScript files by altering the declared file type in the upload request, bypassing the restriction the user interface normally enforces. These uploaded files could act as web shells (programs that give attackers control over the server), potentially enabling remote code execution (RCE, where an attacker runs commands on a system they don't own).
Fix: Update Flowise to version 3.1.0 or later, where this vulnerability is fixed. (Source: NVD/CVE Database)
Flowise, a tool that lets users visually design custom AI workflows, has a critical vulnerability in versions before 3.1.0 that allows unauthenticated attackers to run arbitrary system commands. An attacker can exploit it through a single web request that uses a special keyword (FILE-STORAGE::) and injects code into an environment variable (NODE_OPTIONS), gaining full control of the Flowise system.
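The NODE_OPTIONS vector generalizes beyond Flowise: NODE_OPTIONS is honored by every Node.js process at startup, so any service that forwards attacker-influenced variables into a child process's environment hands over code execution (for example, NODE_OPTIONS="--require /tmp/evil.js" loads arbitrary code before the application runs). A minimal Python sketch of the allowlist defense; the variable names are illustrative, not Flowise's actual code:

```python
# Build a child-process environment from an explicit allowlist instead
# of passing attacker-influenced variables through unchanged.
SAFE_ENV_VARS = {"PATH", "HOME", "LANG"}  # hypothetical allowlist

def build_child_env(requested: dict) -> dict:
    """Return an environment for a child process, dropping anything
    not on the allowlist (NODE_OPTIONS in particular)."""
    return {k: v for k, v in requested.items() if k in SAFE_ENV_VARS}

tainted = {"PATH": "/usr/bin", "NODE_OPTIONS": "--require /tmp/evil.js"}
clean = build_child_env(tainted)
assert "NODE_OPTIONS" not in clean  # injected variable never reaches the child
```

The same pattern applies to other interpreter-controlling variables such as LD_PRELOAD or PYTHONSTARTUP: deny by default, pass through only what the child demonstrably needs.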
Flowise, a tool for building customized AI workflows through a drag-and-drop interface, had a security flaw in versions before 3.1.0 where attackers could inject malicious data during account registration. This JSON injection (inserting unauthorized code into data fields) vulnerability allowed unauthenticated users to manipulate important metadata like ownership and user roles, potentially breaking security boundaries in systems that host multiple separate organizations.
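This class of JSON injection is often called "mass assignment": the server copies whole client-supplied objects into its data model, so extra fields like a role or owner ID ride along. The standard defense is to copy only an explicit set of fields and assign all authorization metadata server-side. A minimal sketch, with field names that are illustrative rather than Flowise's actual schema:

```python
# Copy only expected registration fields; authorization metadata is
# always set by the server, never taken from the request body.
ALLOWED_FIELDS = {"email", "password", "display_name"}

def sanitize_registration(payload: dict) -> dict:
    user = {k: v for k, v in payload.items() if k in ALLOWED_FIELDS}
    user["role"] = "member"          # never trusted from the client
    user["organization_id"] = None   # assigned later by the server
    return user

attack = {"email": "a@b.c", "password": "x", "role": "admin", "owner": True}
clean = sanitize_registration(attack)
assert clean["role"] == "member" and "owner" not in clean
```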
Flowise, a tool for building customized LLM (large language model) flows through a visual drag-and-drop interface, has a vulnerability in versions before 3.1.0 where an API endpoint exposes sensitive data like API keys and authorization headers without requiring authentication. An attacker who knows only a chatflow UUID (a unique identifier) can steal credentials and other sensitive information from the system.
Flowise is a tool with a visual interface for building customized AI workflows. Before version 3.1.0, the Airtable_Agents component had a security flaw where it ran Python code generated by an AI without proper sandboxing (isolation to prevent unauthorized access). An attacker could use prompt injection (tricking the AI by hiding instructions in user input) to make the AI generate malicious code that runs on the Flowise server.
Flowise is a tool with a drag-and-drop interface for building customized large language model flows. Before version 3.1.0, it had a remote code execution vulnerability (RCE, where an attacker can run commands on a system they don't own) in AirtableAgent.ts because user input was directly inserted into Python code without sanitization (cleaning to remove harmful content), allowing attackers to inject malicious code through the question parameter.
Flowise is a drag-and-drop interface for building customized large language model workflows. Versions before 3.1.0 have a command injection vulnerability (code injection, where attackers can execute arbitrary commands) in the CSVAgent feature because it fails to properly filter user-provided Pandas CSV reading code, allowing attackers to run malicious commands on the server.
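The three agent flaws above share one root cause: user text is spliced directly into a Python source string that is then executed. A sketch of the unsafe pattern next to a safer shape that keeps the input as data; `analyze` is a hypothetical stand-in, not a real Flowise function:

```python
def unsafe(question: str) -> str:
    # The user's text becomes part of the source code itself, so input
    # like  "); __import__("os").system("id"); ("  escapes the string
    # literal and runs as Python.
    return f'answer = analyze("{question}")'

def safer(question: str) -> tuple:
    # The generated code refers to a fixed variable name; the user's
    # text travels alongside as a value and is never parsed as Python.
    return "answer = analyze(question)", {"question": question}

code, env = safer('"); import os; ("')
assert "import os" not in code  # payload stays data, not code
```

When generated code must run at all, it should additionally be confined to a sandboxed interpreter or container, since string hygiene alone is brittle.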
GPT-5.5 is a new AI model from OpenAI that is now available through Codex (a code-focused AI tool) and ChatGPT subscriptions, though the standard API is not yet available. The author created a tool called llm-openai-via-codex that lets users access GPT-5.5 through their existing Codex subscription by reverse-engineering how authentication tokens work, rather than waiting for the official API release.
This is a brief announcement about llm-openai-via-codex version 0.1a0, a tool that connects OpenAI's services with the llm command-line interface. The post appears to be from Simon Willison's monthly briefing on LLM developments from April 2026.
OpenAI released GPT-5.5, a new AI model that performs better at coding, using computers, and research with less guidance from users. The model meets OpenAI's "High" cybersecurity risk classification, meaning it could amplify existing pathways to harm, though it does not reach the "Critical" threshold. The company conducted third-party testing and red teaming (adversarial testing where security experts try to break the system) and iterated on cyber safeguards for months before release.
OpenAI released GPT-5.5, a new AI model designed to be more efficient and better at coding tasks than its predecessor GPT-5.4. The model can handle complex, multi-step tasks by planning its own approach, using available tools, and checking its own work without requiring users to carefully direct every action.
Astro version 5.14.1 with the Node adapter version 9.4.4 has a cache poisoning vulnerability: sending a malformed `if-match` header (a conditional-request header) to a static JavaScript or CSS file causes the server to return a 500 error with a one-year cache duration instead of the correct 412 response with no cache headers. All subsequent requests for that file then receive the cached error response, breaking the application until the cache expires.
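The correct behavior the advisory implies has two parts: a malformed or failed `If-Match` precondition must yield a 412, never an uncaught exception turned into a 500, and long-lived Cache-Control must only ever be attached to successful responses. A minimal sketch of both rules; this is not Astro's actual implementation:

```python
def check_if_match(header_value, current_etag: str) -> int:
    """Evaluate If-Match defensively: malformed or missing input is a
    precondition failure (412), not a crash."""
    try:
        requested = [t.strip() for t in header_value.split(",")]
    except AttributeError:  # missing or non-string header value
        return 412
    return 200 if "*" in requested or current_etag in requested else 412

def cache_headers(status: int) -> dict:
    """Attach the one-year immutable cache policy only to 200s, so
    error responses can never be poisoned into long-lived cache entries."""
    if status != 200:
        return {}
    return {"Cache-Control": "public, max-age=31536000, immutable"}

assert cache_headers(check_if_match(None, '"abc"')) == {}
```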
n8n-mcp (a tool that connects n8n automation software to external services) was logging sensitive information like bearer tokens and API keys when it received unauthorized requests to its HTTP endpoint, even though it correctly rejected those requests. This happened because the logs captured request metadata before checking authentication, which could expose secrets if logs were shared or stored outside secure boundaries.
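The general defense is to authenticate before logging request metadata, and to redact credential-bearing headers before anything reaches a log sink. A minimal sketch, assuming standard header names; this is illustrative, not n8n-mcp's actual code:

```python
# Redact well-known credential headers before they can reach logs.
SENSITIVE_HEADERS = {"authorization", "x-api-key", "cookie"}

def redact_headers(headers: dict) -> dict:
    """Return a copy of request headers safe to write to logs."""
    return {k: ("[REDACTED]" if k.lower() in SENSITIVE_HEADERS else v)
            for k, v in headers.items()}

logged = redact_headers({"Authorization": "Bearer abc123", "Accept": "*/*"})
assert logged["Authorization"] == "[REDACTED]"
assert logged["Accept"] == "*/*"
```

Redacting at the logging boundary also protects against the exact failure mode described here, where a rejected request is still recorded with its token intact.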
Cisco discovered a serious vulnerability in how Anthropic (an AI company) stores and manages memories, which are pieces of information that AI systems keep between conversations. While Anthropic fixed this particular issue, security experts warn that poorly managed memory files remain a widespread risk to AI systems.
Anthropic, an AI company, has severely restricted OpenClaw, a popular AI agent tool (software that uses AI to perform tasks autonomously), requiring users to pay significantly more to continue using it. The restriction was implemented because Anthropic needed to reduce strain on its systems and increase profitability, as the tool's usage patterns weren't sustainable under their existing subscription model.
Fix: Upgrade Flowise to version 3.1.0 or later, where this vulnerability is fixed. (Source: NVD/CVE Database)
Fix: Update to Flowise version 3.1.0 or later, where the vulnerability is fixed. (Source: NVD/CVE Database)
Fix: Update to Flowise version 3.1.0, where this vulnerability is fixed. (Source: NVD/CVE Database)
Fix: Update to version 3.1.0 or later. (Source: NVD/CVE Database)
Fix: Update Flowise to version 3.1.0 or later, where this vulnerability is fixed. (Source: NVD/CVE Database)
Fix: Update to Flowise version 3.1.0 or later, where this vulnerability is fixed. (Source: NVD/CVE Database)
Anthropic's Claude Mythos model, which the company claimed was too dangerous to release publicly due to its advanced cybersecurity capabilities, had been accessed by unauthorized users since the day the company announced it would share the model with selected companies for testing. The breach undermines Anthropic's reputation as a company focused on AI safety.
This academic paper proposes a dual-chain, privacy-preserving credential architecture designed to enable trust and learner agency in lifelong learning systems. The work focuses on creating secure credential management that protects learner privacy while maintaining verifiable educational records across multiple institutions and learning contexts.
Anthropic created Claude Mythos, an AI model that can autonomously find and exploit zero-day vulnerabilities (security flaws unknown to the software's maker, for which no fix yet exists), write exploit code, and potentially take over major operating systems and web browsers, but the company chose not to release it publicly due to these risks. To address the threat, Anthropic launched Project Glasswing, partnering with 40 organizations to help them "patch" (fix) vulnerabilities before attackers can exploit them, though all current partners are American companies.
Fix: Anthropic has named 40 organisations as partners under Project Glasswing to help mount a defence, asking them to "patch" vulnerabilities before hackers get a chance to exploit them. (Source: The Guardian Technology)
Fix: Upgrade to n8n-mcp v2.47.11 or later using 'npx n8n-mcp@latest' for npm or 'docker pull ghcr.io/czlonkowski/n8n-mcp:latest' for Docker. If an immediate upgrade is not possible, restrict network access to the HTTP port using a firewall or reverse proxy, or switch to stdio transport mode by setting MCP_MODE=stdio. (Source: GitHub Advisory Database)
Fix: Anthropic fixed the vulnerability that Cisco found. The source does not provide additional details about the specific fix, version numbers, or other mitigation steps. (Source: Dark Reading)
This article discusses 'software brain,' a way of thinking that sees everything through algorithms and automation, which has been amplified by AI development. Despite widespread enthusiasm from tech executives, polling shows that most Americans, particularly Gen Z, are increasingly skeptical of or angry about AI, with only 35 percent excited about it and over 80 percent concerned about potential harms.
Face morphing attacks (blending two faces together to fool facial recognition systems) threaten security systems used at borders and for digital identity checks, and detecting them from a single image is difficult because there's no trusted reference image to compare against. This paper presents R-FLoRA, a new detection method that combines high-frequency image analysis (looking at fine details) with a frozen, large-scale vision transformer (a type of AI model trained on images) to spot morphing artifacts while keeping the overall understanding of the face intact. The method outperforms nine other detection approaches on multiple test datasets and works efficiently in real-world biometric verification systems.