All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
The AWS API MCP Server (a tool that lets AI assistants interact with AWS services) has a vulnerability in versions 0.2.14 through 1.3.8 that lets attackers bypass file access restrictions and read files they shouldn't be able to access. The bypass works even when the server is configured to block file operations or limit them to a specific directory.
Fix: Upgrade to version 1.3.9 or later.
GitHub Advisory Database
AVideo's LiveLinks proxy endpoint validates URLs to block requests to internal networks, but only checks the initial URL. When a URL redirects (sends back a `Location` header pointing elsewhere), the code follows the redirect without re-validating the new target, letting attackers reach internal services like cloud metadata or private networks. The endpoint is also completely unauthenticated, so anyone can access it.
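The redirect flaw generalizes to any URL-fetching proxy: the target must be re-validated at every hop, not just on the first request. A minimal Python sketch of the safe pattern (function names are illustrative, not AVideo's code):

```python
import ipaddress
import socket
import urllib.error
import urllib.request
from urllib.parse import urljoin, urlparse

def is_public_url(url: str) -> bool:
    """Allow only http(s) URLs whose host resolves exclusively to public IPs."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if not addr.is_global:  # rejects loopback, private, link-local, reserved
            return False
    return True

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, *args, **kwargs):
        return None  # surface 3xx responses as errors instead of following them

def fetch(url: str, max_redirects: int = 5) -> bytes:
    """Follow redirects manually, re-validating the target at every hop."""
    opener = urllib.request.build_opener(_NoRedirect())
    for _ in range(max_redirects + 1):
        if not is_public_url(url):
            raise ValueError(f"blocked non-public target: {url}")
        try:
            with opener.open(url, timeout=10) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code in (301, 302, 303, 307, 308) and "Location" in err.headers:
                url = urljoin(url, err.headers["Location"])  # re-checked next loop
                continue
            raise
    raise ValueError("too many redirects")
```

Note that this sketch still has a resolve-then-connect race (DNS rebinding); a hardened proxy would pin the validated IP for the actual connection.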
Langflow has an unauthenticated remote code execution vulnerability in its public flow build endpoint. The endpoint is designed to be public but incorrectly accepts attacker-supplied flow data containing arbitrary Python code, which gets executed without sandboxing when the flow is built. An attacker only needs to know a public flow's ID and can exploit this to run any code on the server.
AVideo's web installer endpoint (`install/checkConfiguration.php`) allows unauthenticated attackers to fully set up the application on fresh deployments by sending POST requests with attacker-controlled database credentials, admin passwords, and configuration values. Since the only protection is checking if a configuration file exists, attackers can take over uninitialized instances by pointing them to an attacker-controlled database and creating admin accounts with attacker-chosen passwords.
OpenAI released two new smaller AI models, GPT-5.4 mini and GPT-5.4 nano, that are cheaper and faster than previous versions. GPT-5.4 nano is particularly affordable at $0.20 per million input tokens, making it economical for tasks like image description, where describing 76,000 photos would cost around $52.
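The pricing arithmetic can be checked directly; the per-image token count below is my assumption, chosen to match the article's figures:

```python
PRICE_PER_MILLION_INPUT = 0.20   # GPT-5.4 nano input price, USD per million tokens
TOKENS_PER_PHOTO = 3_400         # assumed average image + prompt tokens (illustrative)
PHOTOS = 76_000

total_tokens = PHOTOS * TOKENS_PER_PHOTO
cost_usd = total_tokens / 1_000_000 * PRICE_PER_MILLION_INPUT
print(f"~${cost_usd:.2f}")  # ~$51.68, in line with the article's ~$52 estimate
```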
Kiro IDE, an AI-powered development environment for building autonomous software agents, has a vulnerability (CVE-2026-4295) that allows arbitrary code execution (running unintended commands on a system) when users open malicious project files. The flaw exists in versions before 0.8.0 due to improper trust boundary enforcement (failing to verify that data comes from a safe source).
The EU AI Act, effective August 2, 2026, classifies AI systems used in hiring and employment decisions (such as candidate screening, ranking, and performance monitoring) as high-risk and requires businesses that deploy them to conduct risk assessments, perform bias testing, maintain human oversight, and provide transparency disclosures. Staffing companies, recruitment platforms, and workforce intermediaries are responsible for compliance even if they did not build the technology, and this obligation applies globally if the AI system affects anyone in the EU.
Researchers discovered that Amazon Bedrock AgentCore Code Interpreter allows outbound DNS queries (the system that translates website names to IP addresses) even when configured with no network access, letting attackers steal data and run commands by using DNS as a secret communication channel. Amazon says this is intended functionality and recommends users switch to VPC mode (a virtual private cloud configuration) instead of sandbox mode for better isolation. Separately, a flaw in LangSmith (a tool for managing AI language model workflows) allows attackers to steal user login tokens through URL parameter injection (inserting malicious data into web addresses).
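To see why DNS alone is enough for exfiltration: code inside the sandbox can encode data into the labels of lookups against a domain whose authoritative nameserver the attacker controls. A schematic sketch (the zone name is a placeholder and no network traffic is generated here):

```python
import base64

def encode_for_dns(payload: bytes, attacker_zone: str, chunk: int = 50) -> list[str]:
    """Encode a payload as base32 subdomain labels of an attacker-controlled
    zone; resolving each resulting name would leak data through an ordinary
    DNS lookup. (DNS labels max out at 63 characters, full names at 253.)"""
    b32 = base64.b32encode(payload).decode().rstrip("=").lower()
    return [f"{b32[i:i + chunk]}.{attacker_zone}" for i in range(0, len(b32), chunk)]

print(encode_for_dns(b"secret", "exfil.example.com"))
# ['onswg4tfoq.exfil.example.com']
```

Resolving each name (e.g. via `socket.getaddrinfo`) delivers the labels to the attacker's nameserver, which is why egress-level controls such as a DNS firewall or VPC mode are the meaningful mitigation.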
Google has expanded access to its Personal Intelligence feature, which connects various Google apps (like YouTube, Gmail, and Google Photos) to give Gemini (Google's AI assistant) more context for better responses. Previously available only to paid subscribers, this feature is now accessible to free-tier users in the US through Search, Chrome, and the Gemini app, though it remains limited to personal accounts and not business or education accounts.
Microsoft is reorganizing its AI leadership, moving Jacob Andreou into a new executive role overseeing both consumer and commercial Copilot assistants, while freeing up Mustafa Suleyman to focus on building new AI models as part of Microsoft's superintelligence (advanced AI systems aiming toward human-level reasoning) efforts. This restructuring comes as Microsoft's Copilot adoption lags significantly behind competitors like ChatGPT and Gemini, and as investors pressure the company to show returns on its AI investments.
Microsoft is reorganizing its leadership to unify its Copilot assistant (an AI tool that helps users with tasks) across consumer and business products, which have been developed separately. The AI CEO Mustafa Suleyman will now focus on building Microsoft's own AI models rather than directly managing Copilot's features for individual users.
AI coding tools like Claude Code are changing how software development works, with more people able to write code and experienced developers spending less time writing code themselves and more time managing AI agents (programs that can act somewhat autonomously) and projects. The article explores what these rapid changes mean for both the code being produced and the people who create it.
Surf AI, a company building an agentic security operations platform (software that uses AI agents, or autonomous programs that take actions without human intervention, to handle security tasks), has announced its launch with $57 million in funding from major investors. The article focuses on the company's funding announcement rather than a specific security issue or problem.
Microsoft has temporarily stopped automatically installing the Microsoft 365 Copilot app (an AI assistant integrated with productivity software like Word and Excel) on Windows devices outside the European Economic Area, though the company has not explained why the rollout was halted. When the automatic installation resumes, IT administrators will be able to disable it through the Microsoft 365 Apps admin center by unchecking the automatic installation setting.
Researchers discovered that AWS Bedrock's Sandbox mode for AI agents isn't as isolated as promised because it allows outbound DNS queries (requests to translate domain names into IP addresses), which attackers can exploit to secretly communicate with external servers, steal data, or run remote commands. AWS acknowledged the issue but decided not to patch it, calling DNS resolution an 'intended functionality' needed for the system to work properly, and instead updated their documentation to clarify this behavior.
OpenClaw, a framework for running AI agents (autonomous programs that can take actions) locally on devices rather than in the cloud, has faced security concerns since its rapid rise in early 2026. Nvidia announced NemoClaw, which addresses these vulnerabilities by using OpenShell, a security layer that includes kernel-level sandboxing (isolating programs from the core system) and a privacy router that monitors and blocks unauthorized data transfers by OpenClaw.
Fix: NemoClaw's OpenShell runtime isolates OpenClaw using kernel-level sandboxing and a 'privacy router' that monitors OpenClaw's behavior and communication with other systems, stepping in to block actions if it detects OpenClaw sending sensitive data somewhere it shouldn't. OpenShell is fully open source.
CSO Online
Fix: For Amazon Bedrock: migrate from Sandbox mode to VPC mode, implement a DNS firewall to filter outbound DNS traffic, audit IAM roles to follow the principle of least privilege (giving services only the minimum permissions they need), and use strict security groups and network ACLs. For LangSmith: update to version 0.12.71 or later (released December 2025), which addresses the token theft vulnerability.
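A DNS firewall rule set can start from simple heuristics: encoded payloads tend to produce long, high-entropy query labels. A rough sketch of such a filter (thresholds are illustrative assumptions, not vendor guidance):

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy of a label, in bits per character."""
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in Counter(label).values())

def looks_like_tunnel(qname: str, max_label: int = 30, max_entropy: float = 4.0) -> bool:
    """Flag query names whose labels look like encoded payloads."""
    return any(
        len(label) > max_label
        or (len(label) >= 16 and label_entropy(label) > max_entropy)
        for label in qname.rstrip(".").split(".")
    )
```

Real deployments would pair heuristics like these with allow-lists of expected resolver destinations; tunneling tools can slow down and shorten labels to evade entropy checks alone.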
The Hacker News
Five major technology companies (Anthropic, AWS, Google, Microsoft, and OpenAI) have collectively invested $12.5 million into the Linux Foundation (a nonprofit organization that maintains critical open source software) to support long-term security improvements in open source projects. This funding aims to strengthen the security of widely-used software that many other programs depend on.
AI agents are autonomous software systems that can plan, decide, and act independently across connected systems, often without human oversight, creating significant security risks that traditional guardrails (like prompt filtering) cannot adequately address. The article argues that identity-based access control, rather than prompt restrictions or network controls, is the foundation for securing AI agents. CISOs must treat AI agents as first-class identities, shift from guardrails to strict access control, and eliminate shadow AI (unauthorized agents) through continuous discovery and visibility of agent identities.
Researchers discovered a font-rendering attack that hides malicious commands from AI assistants by using custom fonts and CSS styling to display one message to users while keeping harmless text visible to AI tools analyzing the webpage's HTML. The attack successfully tricked multiple popular AI assistants (like ChatGPT, Claude, and Copilot) into giving false safety assessments, exploiting the gap between what an AI reads in code and what a user actually sees rendered in their browser.
Fix: Microsoft was the only vendor that fully accepted and addressed the issue. LayerX recommends that AI assistants should analyze both the rendered visual page and the underlying code together and compare them to better evaluate safety. Additional recommendations to AI vendors include treating fonts as a potential attack surface, extending code parsers to scan for foreground/background color matches, near-zero opacity text, and abnormally small fonts.
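The parser-side checks suggested above can be prototyped with a few inline-style heuristics; this Python sketch (thresholds and function name are my assumptions) flags the three patterns named in the recommendations:

```python
import re

def flag_hidden_text(html: str) -> list[str]:
    """Flag inline styles that hide text from human readers while leaving it
    present in the HTML. Thresholds are illustrative, not LayerX's values."""
    findings = []
    for style in re.findall(r'style\s*=\s*"([^"]*)"', html, re.IGNORECASE):
        decls = {}
        for decl in style.split(";"):
            key, _, val = decl.partition(":")
            if val:
                decls[key.strip().lower()] = val.strip().lower()
        opacity = re.match(r"[\d.]+", decls.get("opacity", ""))
        if opacity and float(opacity.group()) < 0.05:
            findings.append(f"near-zero opacity: {style}")
        size = re.match(r"([\d.]+)px", decls.get("font-size", "").replace(" ", ""))
        if size and float(size.group(1)) < 4:
            findings.append(f"abnormally small font: {style}")
        if decls.get("color") and decls["color"] == decls.get("background-color"):
            findings.append(f"foreground matches background: {style}")
    return findings
```

A production check would also render the page (LayerX's core recommendation) and diff the visible text against the parsed text; matching inline styles alone misses external stylesheets and @font-face glyph remapping, which is the attack's actual mechanism.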
BleepingComputer
Fix: According to the source, when automatic installation resumes, IT administrators can opt out by: signing into the Microsoft 365 Apps admin center, navigating to Customization > Device Configuration > Modern App Settings, selecting the Microsoft 365 Copilot app, and clearing the 'Enable automatic installation of Microsoft 365 Copilot app' checkbox. Additionally, the source mentions that Microsoft is testing a new policy called RemoveMicrosoftCopilotApp that would allow IT admins to uninstall Copilot from devices managed via Microsoft Intune or System Center Configuration Manager (SCCM, software for managing large numbers of computers).
BleepingComputer
This newsletter roundup covers multiple AI-related developments, including OpenAI's partnership with the US military (potentially for applications like selecting strike targets), xAI's Grok facing a lawsuit over generating non-consensual intimate images (deepfakes, or synthetic media created to impersonate real people), and China approving the world's first commercial brain chip (a BCI, or brain-computer interface that reads signals from the brain) for medical use. The piece also highlights concerns from AI safety experts, including OpenAI's own wellbeing team opposing a new 'adult mode' feature.
Fix: AWS updated documentation to clarify that Sandbox mode permits DNS resolution. Security teams should inventory all active AgentCore Code Interpreter instances and migrate to VPC mode (a more restricted network environment).
CSO Online