All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Advanced AI models like Anthropic's Claude Mythos can now quickly identify vulnerabilities (weaknesses in software) in code, chain them into working attack paths, and generate functional exploits (code that takes advantage of those weaknesses) with minimal effort. This represents a major shift in cybersecurity threats: tasks that previously required expert knowledge and significant time can now be executed rapidly and at scale across many systems.
A critical vulnerability called Bleeding Llama (CVE-2026-7482, CVSS score 9.3) affects Ollama, an open source tool for running large language models (LLMs, AI systems trained on massive amounts of text) on local machines. An attacker can exploit a heap out-of-bounds read (a bug where the program accesses memory it shouldn't) to steal sensitive data like API keys, passwords, and user messages from approximately 300,000 internet-exposed Ollama deployments without needing any authentication.
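Why an out-of-bounds read leaks secrets can be sketched in a few lines: a parser trusts an attacker-supplied length field and reads past the end of the request into adjacent memory. This is a conceptual illustration only (Python standing in for native memory), not the actual Ollama code; all names and data are made up.

```python
# Conceptual sketch of an out-of-bounds read: the handler trusts a
# claimed length and slices past the request into adjacent data.
# Illustrative only -- not the Ollama codebase.

def handle_request(memory: bytes, req_start: int, claimed_len: int) -> bytes:
    # BUG: no check that req_start + claimed_len stays inside the request.
    return memory[req_start:req_start + claimed_len]

# Process memory: a 4-byte request with a secret stored right next to it.
memory = b"ping" + b"API_KEY=sk-123"

# An honest request reads only its own 4 bytes.
assert handle_request(memory, 0, 4) == b"ping"

# A malicious request over-reads and leaks the adjacent secret.
leaked = handle_request(memory, 0, 18)
assert b"API_KEY" in leaked
```

The real bug reads heap memory, so what leaks depends on what happens to sit next to the request buffer, which is why API keys and user messages are all in scope.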
A scan of over 1 million exposed AI services found that self-hosted AI infrastructure has worse security than any other class of software previously investigated. Major problems include authentication disabled by default; freely accessible chatbots that expose user conversations and can be abused to bypass safety guardrails (restrictions built into AI models to prevent harmful outputs); and exposed agent management platforms (tools like n8n and Flowise that automate AI workflows) that reveal business logic, API keys (secret credentials for accessing external services), and access to connected third-party systems. These misconfigurations leave real user data and company tools vulnerable to attackers, with consequences ranging from reputational damage to full system compromise.
Google DeepMind employees have voted to unionize, asking management to recognize their union representatives in an effort to prevent the company's AI technology from being used by the Israeli and US militaries. The unionization effort reflects employee concerns that their AI models may be complicit in international law violations, particularly regarding the Israeli-Palestinian conflict.
GPT-5.5 Instant is OpenAI's latest fast-response AI model. It uses safety methods similar to previous versions, but it is the first Instant model classified as having high capability under OpenAI's preparedness evaluations for cybersecurity and biological/chemical risks, so it ships with additional safeguards. The document clarifies naming conventions to avoid confusion: GPT-5.5 Instant (also called gpt-5.5-instant) should be compared to GPT-5.3 Instant, and the full GPT-5.5 model is referred to as GPT-5.5 Thinking.
OpenAI and partners (AMD, Broadcom, Intel, Microsoft, NVIDIA) developed MRC (Multipath Reliable Connection), a new networking protocol that improves data transfer speed and reliability in supercomputer clusters used for AI model training. MRC addresses key challenges in large-scale AI training by reducing network congestion through adaptive packet spraying (distributing data across multiple paths), enabling redundancy to tolerate failures, and using static source routing (predetermined paths that bypass failed connections) to prevent training jobs from crashing when network failures occur.
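Two of the ideas above, packet spraying and routing around failures, can be illustrated with a toy simulation. This is a sketch of the concept only; the path model and function names are made up and do not reflect the MRC specification.

```python
# Toy sketch of "packet spraying" (spreading one flow's packets across
# several network paths) plus failover around a dead path.
# Illustrative only -- not the MRC wire protocol.

def spray(packets, paths, failed=frozenset()):
    """Assign each packet to a healthy path, round-robin."""
    healthy = [p for p in paths if p not in failed]
    if not healthy:
        raise RuntimeError("no healthy paths")
    return {pkt: healthy[i % len(healthy)] for i, pkt in enumerate(packets)}

packets = [f"pkt{i}" for i in range(6)]
paths = ["path-A", "path-B", "path-C"]

# All paths healthy: load spreads evenly, easing congestion on any one link.
mapping = spray(packets, paths)
assert {mapping[p] for p in packets} == set(paths)

# path-B fails: traffic continues on the remaining paths instead of the
# whole training job stalling on one broken route.
mapping = spray(packets, paths, failed={"path-B"})
assert all(v != "path-B" for v in mapping.values())
```

The real protocol adds the hard parts this sketch omits: reordering tolerance at the receiver, redundancy, and static source routes computed ahead of time.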
OpenAI has released GPT-5.5 Instant, an updated version of ChatGPT's default model that aims to provide smarter, more accurate answers with clearer language and better personalization based on your conversation history. The new model produces 52.5% fewer hallucinated claims (false or made-up statements) compared to the previous version on high-stakes topics like medicine and law, and includes a new 'memory sources' feature that shows you what past context was used to personalize your responses, giving you control to edit or delete outdated information.
Cybersecurity leaders face a critical shortage of skilled workers, with 95% of organizations reporting at least one security skills gap and AI identified as the most pressing skill need. While some companies address this by investing in in-house training to develop employees from other technical fields into security roles (a process taking up to two years), AI both helps automate some defensive tasks and simultaneously worsens the problem by enabling attackers to operate at larger scales, increasing overall demand for skilled defenders.
Workers at Google DeepMind's UK laboratory voted to form a union, citing concerns about a recently announced deal between Google and the US military. The workers, represented by two unions, worry that the military partnership raises ethical questions about the company's responsibility in developing AI technology.
The GeekyBot WordPress plugin (up to version 1.2.0) has a SQL injection vulnerability (a type of attack where hackers insert malicious database commands into user input) in the 'attributekey' parameter. Because the plugin doesn't properly clean user input or secure its database queries, unauthenticated attackers can add extra SQL commands to extract sensitive data from the site's database.
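The plugin itself is PHP/WordPress, but the bug class is language-independent and easy to demonstrate. Below is a minimal sketch using Python's sqlite3: string-built SQL lets crafted input inject extra commands, while a parameterized query treats the same input as plain data. Table and column contents are invented for illustration.

```python
import sqlite3

# Sketch of the SQL injection class described above (illustrative schema,
# not the plugin's actual database).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE settings (attributekey TEXT, value TEXT)")
db.execute("INSERT INTO settings VALUES ('color', 'blue')")
db.execute("CREATE TABLE secrets (api_key TEXT)")
db.execute("INSERT INTO secrets VALUES ('sk-123')")

payload = "x' UNION SELECT api_key FROM secrets --"

# Vulnerable: user input concatenated directly into the query.
rows = db.execute(
    f"SELECT value FROM settings WHERE attributekey = '{payload}'"
).fetchall()
assert ("sk-123",) in rows  # the injected UNION leaked the secret

# Fixed: placeholder binding; the payload is just a literal string now.
rows = db.execute(
    "SELECT value FROM settings WHERE attributekey = ?", (payload,)
).fetchall()
assert rows == []
```

In WordPress the equivalent fix is preparing the query (e.g. via the platform's prepared-statement API) instead of interpolating `attributekey` into the SQL string.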
Datasette-llm 0.1a7 is a plugin (a software add-on) that lets other plugins use AI models in a coordinated way. The release adds a feature to set default options for specific models, such as specifying which model to use for enrichment operations (adding data to existing information) and adjusting its temperature parameter (a setting that controls how creative or random the AI's responses are).
llm-echo 0.5a0 is a debug plugin (a tool that helps developers test code) for LLM that provides a fake AI model called "echo" for testing purposes instead of running a real LLM. The new version adds a "-o thinking 1" option to simulate reasoning blocks (the internal steps an AI uses to work through problems) and is compatible with LLM 0.32a0 and higher.
Between February and April 2026, the ogham-mcp package accidentally published 22 versions on PyPI (the Python package repository) with embedded credentials, including database passwords for Neon postgres (a database service) and a Voyage AI API key (a token that grants access to an AI service). No evidence of actual misuse was found, and all credentials have been rotated by the maintainers.
OpenAI is expanding its ChatGPT advertising pilot by introducing new tools that make it easier for businesses to create and buy ads. Advertisers can now use a beta self-serve Ads Manager (a tool for setting up and managing ad campaigns) or work through partners, and can choose between cost-per-click (CPC, paying only when someone clicks an ad) or cost-per-mille (CPM, paying per 1,000 ad views) bidding options. The platform includes measurement tools that let advertisers see campaign performance without accessing user conversations, maintaining privacy.
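The difference between the two bidding options comes down to simple arithmetic: CPC charges per click, CPM per 1,000 impressions. A quick sketch with made-up campaign figures:

```python
# CPC vs CPM cost, per the definitions above. All numbers are invented
# for illustration, not OpenAI pricing.

def campaign_cost(impressions, clicks, cpc=None, cpm=None):
    if cpc is not None:
        return clicks * cpc          # pay only when someone clicks
    return impressions / 1000 * cpm  # pay per 1,000 ad views

# 100,000 views, 500 clicks (a 0.5% click-through rate):
assert campaign_cost(100_000, 500, cpc=2.00) == 1000.0  # $2 per click
assert campaign_cost(100_000, 500, cpm=8.00) == 800.0   # $8 per 1,000 views
```

Which option is cheaper depends entirely on the click-through rate, which is why platforms typically offer both.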
This article covers legal testimony from OpenAI president Greg Brockman in Elon Musk's lawsuit against OpenAI, focusing on his evasive responses and pedantic corrections during cross-examination. The piece suggests Brockman's journal entries are key evidence in the case, while highlighting his reluctance to directly answer questions.
James Dyett, a senior sales leader at OpenAI who managed enterprise and API (application programming interface, a set of tools that lets different software communicate) sales, is leaving the company to join venture capital firm Thrive Capital. His departure is the latest in a series of leadership changes at OpenAI, following exits by several other executives in recent months.
OpenAI and PwC are collaborating to help finance teams use AI agents (software programs that can autonomously perform tasks) to automate workflows, reduce manual work, and improve decision-making in finance departments. The partnership is building these agents based on real-world experience from OpenAI's own finance organization, where they have already seen results like processing 5 times more contracts with the same team size.
A nil pointer dereference (accessing data at a null memory address) in Argo Workflows v4.0.4 causes the server to crash with an HTTP 500 error for SSO (single sign-on) users when RBAC delegation (role-based access control rules delegated to namespaces) is enabled. This happens specifically when a user's SSO claims match a namespace-level RBAC rule but not an SSO-namespace rule, causing a permanent denial of service (inability to use the system) for affected users.
Fix: The vulnerability was addressed in Ollama version 0.17.1. Organizations should apply this fix as soon as possible, restrict network access to their deployments, deploy an authentication proxy (a middleman service that requires login), use network segmentation (isolating systems from the internet), and audit running instances for internet exposure. Any instance accessible from the internet should be considered compromised.
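One of the mitigations above, an authentication proxy, ultimately reduces to a token check enforced in front of the unauthenticated backend. A minimal sketch of that check (header name and token are placeholders, not a specific product's API):

```python
import hmac

# Minimal sketch of the gate an authentication proxy would enforce before
# forwarding a request to an Ollama-style backend. Illustrative only.

EXPECTED_TOKEN = "replace-with-a-real-secret"

def allow_request(headers: dict) -> bool:
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    # compare_digest avoids leaking the token through timing differences
    return hmac.compare_digest(token, EXPECTED_TOKEN)

assert allow_request({"Authorization": "Bearer replace-with-a-real-secret"})
assert not allow_request({"Authorization": "Bearer wrong"})
assert not allow_request({})
```

In practice this logic lives in a reverse proxy (nginx, Caddy, etc.) so the backend itself never receives unauthenticated traffic.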
SecurityWeek
This article explains two security bugs found in C/C++ code samples: a Linux ping program vulnerable to command injection because inet_ntoa (a function that converts IP addresses to text) returns a pointer to a global buffer that gets overwritten by subsequent calls, allowing an attacker to bypass IP validation checks; and a Windows driver with a registry type confusion vulnerability where missing validation flags let an attacker escalate from a local denial of service to kernel write access (the ability to modify system memory).
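The inet_ntoa pitfall is that every call returns the same shared static buffer, so a later call silently rewrites an earlier result after it has been validated. Python strings are immutable, so the sketch below mimics the C behavior with a shared bytearray; the function name and data are illustrative.

```python
# Simulation of C's inet_ntoa static-buffer hazard: every call returns
# the SAME shared buffer, so later calls overwrite "validated" results.
# Illustrative stand-in, not real address-conversion code.

_shared_buf = bytearray(16)

def fake_inet_ntoa(ip_text: str) -> bytearray:
    _shared_buf[:] = ip_text.encode().ljust(16, b"\0")
    return _shared_buf  # caller gets an alias of the shared buffer

validated = fake_inet_ntoa("127.0.0.1")   # passes an allow-list check here
assert bytes(validated).rstrip(b"\0") == b"127.0.0.1"

fake_inet_ntoa("10.6.6.6")                # unrelated later call
# The previously validated value changed behind the code's back:
assert bytes(validated).rstrip(b"\0") == b"10.6.6.6"
```

The C fix is to copy the string out immediately (or use inet_ntop with a caller-supplied buffer) before any further call can clobber it.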
Fix: The article mentions that a new Claude skill called 'c-review' was developed to help find these bugs by turning the C/C++ security checklist into prompts that an LLM can run against a codebase. However, no explicit code fixes, patches, or specific mitigation steps for the vulnerabilities themselves are provided in the source text.
Trail of Bits Blog
Fix: MRC has been released through the Open Compute Project (OCP) as an open standard for the industry to use. The specification extends RDMA over Converged Ethernet (RoCE, a hardware-accelerated data transfer standard) and incorporates SRv6-based source routing to support large-scale AI networking fabrics.
OpenAI Blog
Fix: The source mentions the following controls and mitigations for personalization concerns: Users can delete chats they no longer want cited, delete or change items in saved memories through settings, or use temporary chats that don't use or update memory. When a response is personalized, users can see what context was used in 'memory sources' and delete or correct outdated information. Memory sources are not shown to others if you share a chat. The source also notes that 'memory sources are designed to make personalization easier to understand' and OpenAI plans to make this feature 'more comprehensive over time.'
OpenAI Blog
Fix: Some CISOs address skills gaps through in-house training and development: hiring people with solid technical foundations in areas like networking, server administration, or software development, then transitioning them into security roles over approximately two years. Additionally, security leaders are encouraging their teams to leverage AI tools and examine how vendors are using AI, recognizing that AI competency will be essential in cybersecurity's future.
CSO Online
The Trump administration is considering requiring advanced AI models to be reviewed before public release, particularly those capable of helping users find software vulnerabilities (weaknesses in code that attackers can exploit). This discussion was prompted by Anthropic's Mythos model, which can identify thousands of high-severity vulnerabilities and does so better than most human programmers. The company has not released it publicly, instead creating Project Glasswing to give selected companies access for defensive purposes (finding and fixing vulnerabilities before attackers do).
Fix: Upgrade to v0.11.1 immediately by running `pip install --upgrade "ogham-mcp>=0.11.1"`. This version removes the leaked credentials and adds automated scanning to prevent future credential leaks. Users do not need to rotate credentials on their own end, as the exposed credentials belonged to the project maintainers, not to users.
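The "automated scanning" the fix mentions typically amounts to grepping release artifacts for known secret formats before publishing. A hedged sketch of that idea is below; the patterns are invented approximations (real scanners such as detect-secrets or gitleaks ship far richer rule sets), and the key format shown is hypothetical.

```python
import re

# Sketch of a pre-publish credential scan. Patterns are illustrative
# approximations of the two credential types named in the advisory.
SECRET_PATTERNS = [
    re.compile(r"postgres(?:ql)?://\S+:\S+@\S+"),  # DB URL embedding a password
    re.compile(r"\bpa-[A-Za-z0-9_-]{10,}\b"),      # hypothetical API-key shape
]

def find_secrets(text: str) -> list[str]:
    return [m.group(0) for pat in SECRET_PATTERNS for m in pat.finditer(text)]

clean = "DATABASE_URL = os.environ['DATABASE_URL']"
leaky = "DATABASE_URL = 'postgresql://app:hunter2@db.example.com/prod'"

assert find_secrets(clean) == []
assert find_secrets(leaky) != []
```

Wiring a check like this into CI so a release fails when the scan finds a match is what turns it from a one-off grep into the preventive control described above.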
GitHub Advisory Database
Fix: The source suggests adding a nil check: `if loginAccount == nil || precedence(namespaceAccount) > precedence(loginAccount)` at line 304 in gatekeeper.go to prevent the nil pointer dereference.
GitHub Advisory Database