All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
A vulnerability (CVE-2026-7061) was found in Toowiredd chatgpt-mcp-server version 0.1.0 that allows OS command injection (running unauthorized system commands on a server through malicious input) in the MCP/HTTP component. The flaw can be exploited remotely by attackers, and public exploit code is already available, but the developers have not yet responded to the security report.
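The advisory does not include the vulnerable code, but the flaw belongs to a well-understood class. Below is a minimal Python sketch of the pattern and its standard fix; the `ping` example and all names are illustrative, not taken from the chatgpt-mcp-server project.

```python
import re
import subprocess

HOST_RE = re.compile(r"^[A-Za-z0-9.\-]+$")  # allowlist: hostname characters only

def ping_vulnerable(host: str) -> str:
    # Vulnerable pattern: user input spliced into a shell string, so
    # host = "example.com; cat /etc/passwd" executes a second command.
    return subprocess.run(f"ping -c 1 {host}", shell=True,
                          capture_output=True, text=True).stdout

def ping_safe(host: str) -> str:
    # Safer pattern: validate against an allowlist, then pass an argv
    # list with the default shell=False so the input is never parsed
    # as shell syntax.
    if not HOST_RE.match(host):
        raise ValueError(f"rejected host: {host!r}")
    return subprocess.run(["ping", "-c", "1", host],
                          capture_output=True, text=True).stdout
```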
Elon Musk is suing Sam Altman and OpenAI in court, claiming that Altman broke the company's original founding agreement. The lawsuit focuses on OpenAI's early years when it was started as a nonprofit, and the trial could influence the direction of AI development in the tech industry.
The Cannes Film Festival banned AI-generated content from its main competition (the Palme d'Or), arguing that AI cannot create emotionally meaningful work. However, a new World AI Film Festival (WAIFF) launched at the same event and showcased AI-generated films, attracting investment from major tech companies and Hollywood studios, suggesting a growing movement to create cinema with generative AI (artificial intelligence systems that can produce images, text, or video).
A security flaw called CVE-2026-7020 was found in Ollama versions up to 0.20.2 that allows path traversal (an attack where someone manipulates file paths to access files they shouldn't be able to reach) through the digestToPath function in the Tensor Model Transfer Handler component. An attacker can exploit this remotely, though the attack is considered difficult to carry out (high attack complexity), and the vulnerability details have been publicly disclosed.
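Ollama is written in Go and the advisory does not quote the affected function, so the following Python sketch only illustrates the standard containment check for this class of bug; the directory and function names are hypothetical.

```python
from pathlib import Path

MODELS_DIR = Path("/var/lib/models").resolve()

def digest_to_path(digest: str) -> Path:
    # A digest like "../../etc/passwd" escapes the models directory if
    # it is joined to the base path without any check.
    candidate = (MODELS_DIR / digest).resolve()
    # Containment check (Python 3.9+): after resolving ".." segments
    # and symlinks, the result must still sit under MODELS_DIR.
    if not candidate.is_relative_to(MODELS_DIR):
        raise ValueError(f"path traversal attempt: {digest!r}")
    return candidate
```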
LiteLLM had a security flaw in two test endpoints (`POST /mcp-rest/test/connection` and `POST /mcp-rest/test/tools/list`) that allowed authenticated users to run arbitrary commands on the server. These endpoints accepted server configurations including command and arguments, and would execute them as subprocesses with the proxy's privileges, even for users with low-level permissions.
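The fix entry below notes that the patch gates these endpoints on an admin-only role. A FastAPI-style sketch of that kind of guard (LiteLLM's proxy is built on FastAPI, but the key-to-role table and handler here are hypothetical, not the project's actual code):

```python
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

# Hypothetical key-to-role table; a real proxy resolves the caller's
# role from its authenticated session, not from a raw header.
ROLES = {"sk-admin": "PROXY_ADMIN", "sk-user": "internal_user"}

def require_proxy_admin(authorization: str = Header(...)) -> None:
    # The endpoint spawns subprocesses from the request body, so any
    # authenticated caller without the admin role must be rejected.
    if ROLES.get(authorization) != "PROXY_ADMIN":
        raise HTTPException(status_code=403, detail="PROXY_ADMIN role required")

@app.post("/mcp-rest/test/connection")
def test_connection(server_config: dict,
                    _: None = Depends(require_proxy_admin)) -> dict:
    # Only reachable with the admin role; server_config may name a
    # command and arguments to execute as a connectivity test.
    return {"status": "ok"}
```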
Top software executives from companies like Salesforce, Snowflake, and Datadog are being recruited by AI companies OpenAI and Anthropic with large compensation packages, because these AI giants want their expertise in selling to enterprise customers (large organizations). This talent drain is part of a broader shift where AI companies are prioritizing business growth in the enterprise segment, which is more profitable, while traditional software companies are struggling with concerns that AI tools will disrupt their business models.
Tesla and other automakers are integrating AI chatbots like Grok (xAI's conversational AI assistant) into vehicles to provide hands-free information access, but safety experts warn these tools create dangerous distractions for drivers. A Tesla owner demonstrated that engaging with Grok while driving, even with Tesla's partially automated driving system (FSD, or Full Self-Driving Supervised) active, caused him to lose attention to the road, raising concerns about a form of driver distraction that isn't yet well understood.
OpenAI has released a prompting guide for GPT-5.5 (a new version of their language model), which includes tips for improving user experience and migrating existing code. One key recommendation is to send brief status updates to users before starting multi-step tasks, so long-running operations don't appear frozen. The guide also advises treating GPT-5.5 as a new model family rather than a drop-in replacement, suggesting developers start fresh with minimal prompts (instructions given to the AI) and gradually tune them for the new model instead of reusing old ones.
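As a concrete illustration of the status-update advice, here is one way to encode it in a system prompt using the standard chat-messages format; the wording is an assumption, not text from OpenAI's guide.

```python
# Illustrative system prompt implementing the guide's status-update
# advice; the phrasing below is ours, not OpenAI's own text.
messages = [
    {
        "role": "system",
        "content": (
            "Before starting any task that requires multiple steps or "
            "tool calls, send the user a one-sentence status update "
            "describing what you are about to do, and post a brief "
            "update between long-running steps so the session never "
            "appears frozen."
        ),
    },
    {"role": "user", "content": "Refactor the billing module and add tests."},
]
```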
LLM version 0.31 adds support for the new GPT-5.5 model and introduces two new command-line options: one to control text verbosity (how much detail the AI outputs) for GPT-5+ models, and another to set image detail levels for images sent to OpenAI models. The release also registers models from a configuration file (extra-openai-models.yaml) as asynchronous (able to run multiple requests without waiting for each to finish).
DeepSeek released V4, an open-source AI model (software available for anyone to download and modify) that can process much longer text inputs than previous versions and offers performance comparable to top commercial models at significantly lower costs. The model comes in two versions: V4-Pro for complex coding tasks and V4-Flash for faster, cheaper operation, with both offering reasoning modes (where the model shows its step-by-step thinking). This release matters because it demonstrates that open-source models can compete with expensive commercial alternatives, potentially allowing developers to access advanced AI capabilities without high costs.
LangChain (a framework for building AI agents and applications powered by large language models) versions before 1.1.14 had a TOCTOU vulnerability (time-of-check-time-of-use, where a security check and an action happen at different times with a gap in between) in its image token counting feature. An attacker could trick the system by making a hostname first resolve to a safe public IP address during a security check, then resolve to a private or localhost IP address during the actual network request, bypassing security protections.
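The usual defense against this gap is to resolve the hostname once, validate the answer, and connect to that exact IP so no second lookup can occur. A minimal Python sketch of the idea (not langchain-openai's actual patch); production code pins at the transport layer so HTTPS certificate checks still see the hostname.

```python
import ipaddress
import socket
from urllib.parse import urlparse

import requests

def fetch_pinned(url: str) -> requests.Response:
    # Resolve ONCE: the same answer is both validated and connected to,
    # so an attacker cannot swap DNS records between check and use.
    parsed = urlparse(url)
    ip = ipaddress.ip_address(socket.gethostbyname(parsed.hostname))
    if ip.is_private or ip.is_loopback or ip.is_link_local:
        raise ValueError(f"refusing internal address {ip}")
    port = f":{parsed.port}" if parsed.port else ""
    pinned = parsed._replace(netloc=f"{ip}{port}").geturl()
    # Connect by IP, carrying the original hostname in the Host header
    # so the server still routes the request (plain HTTP for brevity).
    return requests.get(pinned, headers={"Host": parsed.hostname}, timeout=10)
```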
LangChain's HTMLHeaderTextSplitter had a security flaw where it validated URLs initially but then followed redirects (automatic forwarding to different URLs) without rechecking them, allowing attackers to redirect requests to internal or sensitive servers and potentially leak data. This SSRF vulnerability (server-side request forgery, where an attacker tricks a server into making requests to unintended locations) was fixed in version 1.1.2.
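The corresponding fix pattern is to disable automatic redirects and re-validate every hop before following it. A Python sketch of the idea, illustrative rather than the library's actual patch:

```python
import ipaddress
import socket
from urllib.parse import urljoin, urlparse

import requests

def is_internal(url: str) -> bool:
    # The same destination check applied to the original URL must be
    # re-applied to every redirect target.
    ip = ipaddress.ip_address(socket.gethostbyname(urlparse(url).hostname))
    return ip.is_private or ip.is_loopback or ip.is_link_local

def fetch_rechecking_redirects(url: str, max_hops: int = 5) -> requests.Response:
    for _ in range(max_hops):
        if is_internal(url):
            raise ValueError(f"blocked internal destination: {url}")
        resp = requests.get(url, allow_redirects=False, timeout=10)
        if resp.is_redirect or resp.is_permanent_redirect:
            # Follow the hop manually so the loop re-validates it.
            url = urljoin(url, resp.headers["Location"])
            continue
        return resp
    raise ValueError("too many redirects")
```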
US House Republicans introduced two privacy bills (SECURE Data Act and GUARD Financial Data Act) that would create national privacy standards but weaken enforcement by eliminating private lawsuits and overriding stronger state privacy laws like California's. Privacy advocates criticize the bills as inadequate because their data minimization rules (the principle that companies should collect only necessary data and retain it only as long as needed) tie collection limits to what companies voluntarily disclose rather than imposing stricter necessity requirements.
Gemini CLI had two security vulnerabilities that could allow remote code execution (running malicious code on a system). First, in headless mode (non-interactive environments like CI/CD pipelines), the tool automatically trusted workspace folders and loaded configuration files without verification, which could be exploited through malicious environment variables. Second, the `--yolo` flag bypassed tool allowlisting (restrictions on what commands can run), allowing unrestricted command execution via prompt injection (tricking the AI by hiding instructions in its input). Version 0.39.1 and later now require explicit folder trust and enforce tool allowlisting even in `--yolo` mode.
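What "enforce tool allowlisting even in `--yolo` mode" means in practice: the allowlist check runs unconditionally, and the flag only skips interactive confirmation. A minimal Python sketch; the function and tool names are ours, not Gemini CLI internals.

```python
ALLOWED_TOOLS = {"read_file", "grep", "run_tests"}

def confirm_with_user(name: str, args: dict) -> None:
    if input(f"allow {name}({args})? [y/N] ").strip().lower() != "y":
        raise PermissionError("user declined")

def execute(name: str, args: dict) -> None:
    print(f"running {name} with {args}")  # stand-in for the real dispatcher

def dispatch_tool(name: str, args: dict, yolo: bool = False) -> None:
    # Patched behavior: the allowlist check is unconditional. The yolo
    # flag only skips interactive confirmation; it can no longer be used
    # by a prompt-injected instruction to run arbitrary tools.
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} not in allowlist")
    if not yolo:
        confirm_with_user(name, args)
    execute(name, args)
```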
Scattered Spider is a criminal gang that hacks into company computer systems to steal virtual currency, using social engineering attacks (tricks that manipulate people into revealing information) like SMS phishing (fake text messages with malicious links) and impersonating employees to deceive help desks. Despite several arrests in 2024, some members remain active and continue attacking businesses, so security leaders are being warned to stay alert.
This research paper evaluates whether multiple AI agents working together can effectively help identify privacy threats in software systems using LINDDUN GO, a structured methodology for privacy threat modeling (a process of identifying ways a system could leak or misuse personal data). The study, published in July 2026, examines whether collaborative multi-agent LLM (large language model) systems can improve the quality and completeness of privacy threat identification compared to single AI agents or human analysis.
n8n-mcp (a tool for connecting AI systems to external services) was logging sensitive information like passwords and API keys when running in HTTP mode (a way to communicate over the internet). When authenticated users made requests to call tools, their secret credentials were written to server logs before being hidden, which could expose them if logs were shared or accessed by unauthorized people. The issue only affected HTTP mode and required authentication, so it couldn't be exploited by random internet users.
Fix: Upgrade to n8n-mcp v2.47.13 or later using either `npx n8n-mcp@latest` (npm) or `docker pull ghcr.io/czlonkowski/n8n-mcp:latest` (Docker). The patch changes how tool arguments are logged by using a `summarizeToolCallArgs` function that records only the structure and size of data, never the actual secret values. As a temporary workaround if you cannot upgrade immediately: restrict HTTP port access through firewall or VPN, limit who can read server logs, or switch to stdio transport mode (`MCP_MODE=stdio`).
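n8n-mcp is a TypeScript project, so the following is only a Python analogue of the behavior the advisory describes for `summarizeToolCallArgs`: log each argument's type and size, never its value.

```python
def summarize_tool_call_args(args: dict) -> dict:
    # Record structure and size only; the secret values themselves
    # never reach the log line.
    summary = {}
    for key, value in args.items():
        entry = {"type": type(value).__name__}
        if isinstance(value, (str, bytes, list, dict)):
            entry["size"] = len(value)
        summary[key] = entry
    return summary

# summarize_tool_call_args({"api_key": "sk-live-secret", "retries": 3})
# -> {'api_key': {'type': 'str', 'size': 14}, 'retries': {'type': 'int'}}
```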
Fix: LiteLLM fixed this in version 1.83.7. Both test endpoints now require the `PROXY_ADMIN` role (a permission level for administrators only). As a temporary workaround, developers should block `POST /mcp-rest/test/connection` and `POST /mcp-rest/test/tools/list` at their reverse proxy or API gateway (the server that sits between users and the application to filter traffic).
A group of Discord users gained unauthorized access to Anthropic's Mythos Preview (a restricted AI model designed to find security vulnerabilities) by examining data from a breach of Mercor (an AI training startup) and making an educated guess about the model's online location based on Anthropic's known URL patterns. They exploited this access to build simple websites rather than conduct more harmful activities, potentially avoiding detection by Anthropic.
Fix: OpenAI recommends running the command `$openai-docs migrate this project to gpt-5.5` in Codex to upgrade existing code. For manual migration, OpenAI advises beginning with a fresh baseline instead of carrying over every instruction from older prompts: start with the smallest prompt that preserves the product contract, then tune reasoning effort, verbosity, tool descriptions, and output format against representative examples.
OpenAI's leader Sam Altman apologized for not reporting a ChatGPT account to police before a mass shooting in Canada killed eight people in January, even though the company had identified and banned the account for problematic usage. OpenAI stated it did not alert law enforcement because the account activity did not meet the company's threshold for showing a credible or imminent plan for serious physical harm. The company now faces lawsuits and a criminal investigation related to this incident and another shooting.
Fix: OpenAI has said it will strengthen its safety measures and continue working with all levels of government to help prevent similar incidents.
Fix: For the LangChain TOCTOU vulnerability, update langchain-openai to version 1.1.14 or later.
Fix: For the HTMLHeaderTextSplitter SSRF, update langchain-text-splitters to version 1.1.2 or later, where this vulnerability is fixed.
Fix: Update to Gemini CLI version 0.39.1 or 0.40.0-preview.3. For workflows running on trusted inputs, set the environment variable `GEMINI_TRUST_WORKSPACE: 'true'` in your GitHub Actions workflow. For workflows processing untrusted inputs, review the guidance at https://github.com/google-github-actions/run-gemini-cli to harden your workflow against malicious content, and set the same environment variable only after implementing appropriate security measures. If you have pinned a specific version of gemini_cli, upgrade to one of the patched versions and audit your workflow settings.
Anthropic's Claude Mythos, an AI model designed to find bugs in software, has been distributed to select government agencies and industry groups through a program called Project Glasswing, but the US cybersecurity agency CISA does not have access yet. Unauthorized users from a private Discord community have also gained access to Mythos and have been using it regularly, raising concerns since the model could potentially be used to discover and exploit software vulnerabilities.