All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
This paper describes a new encryption method called FDXT that helps protect data privacy when searching encrypted files on untrusted servers. Previous methods like ODXT and SDSSE-CQ leaked information: by analyzing search patterns and file sizes during multi-keyword (conjunctive) searches, attackers could infer details about the underlying data. FDXT closes these leaks while maintaining similar or better performance.
Microsoft is releasing Agent Mode (previously called 'vibe working') in Office applications like Word, Excel, and PowerPoint, which is a more advanced version of Copilot (an AI assistant) that can actively perform tasks in documents rather than just answer questions. Previously, the AI models weren't powerful enough to let Copilot directly control applications, so it could only provide passive help like answering user questions.
OpenAI released GPT-5.5, a more intelligent AI model that can handle complex, multi-step tasks like coding, research, and data analysis with less human guidance than previous versions. The model matches the speed of its predecessor while performing at a higher level and using fewer tokens (individual pieces of text that the AI processes). OpenAI says it tested GPT-5.5 with safety experts and external reviewers before release to reduce misuse risks.
GPT-5.5 is a new AI model from OpenAI designed to handle complex work tasks like coding, research, and document creation with less user guidance than previous models. OpenAI conducted extensive safety testing including red-teaming (simulated attacks by security experts to find vulnerabilities) and feedback from nearly 200 early partners before release, and deployed it with what they describe as their strongest safeguards to date.
Anthropic is hiring for a senior role to negotiate data center deals in Europe to support its AI expansion, as the company secures major infrastructure commitments like a $100+ billion spending plan with Amazon Web Services and capacity deals with Broadcom. The company is specifically targeting data center capacity in major European hubs (Frankfurt, London, Amsterdam, Paris, Dublin) and regions like the Nordics, where cheap energy makes AI infrastructure more affordable. This move reflects a broader industry trend, with Microsoft, OpenAI, and other AI companies also expanding their European data center operations.
Paperclip is a Node.js server (a JavaScript runtime that runs outside web browsers) with a React UI (a framework for building user interfaces) that manages multiple AI agents to automate business tasks. Before version 2026.416.0, an attacker without any login credentials could gain full remote code execution (the ability to run arbitrary commands on the target system) on any publicly accessible Paperclip instance using its default settings, simply by knowing the server's address and making six automated API calls (requests to the server's functions).
Paperclip is a Node.js server and React UI that manages multiple AI agents to run a business. Versions before 2026.416.0 have a privilege escalation vulnerability where an attacker with an agent API key (a credential that identifies an agent) can trick the system into running arbitrary OS commands (unauthorized instructions executed on the computer) on the Paperclip server by injecting malicious commands into a configuration field that the server later executes.
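The advisory doesn't include Paperclip's code, but the underlying bug class, shell command injection through a stored configuration value, is well known. A minimal Python sketch of the pattern, with a hypothetical `repo_url` config field standing in for whatever field Paperclip later executes:

```python
def build_shell_command_unsafe(repo_url: str) -> str:
    # Vulnerable pattern: a config value spliced into a shell string.
    # If repo_url is "x; touch /tmp/pwned", the shell later parses the
    # semicolon and runs the injected second command.
    return f"git clone {repo_url}"

def build_argv_safe(repo_url: str) -> list[str]:
    # Safer pattern: build an argv list and run it with shell=False,
    # e.g. subprocess.run(argv); the config value stays one inert
    # argument and is never re-parsed by a shell.
    return ["git", "clone", repo_url]

payload = "https://example.com/x.git; touch /tmp/pwned"
unsafe = build_shell_command_unsafe(payload)  # contains an injected command
safe = build_argv_safe(payload)               # payload stays a single argument
```

The same principle applies on a Node.js server like Paperclip's: prefer `child_process.execFile` with an argument array over `exec` with an interpolated string.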
A vulnerability (CVE-2026-6874) was found in ericc-ch copilot-api version 0.7.0 and earlier that affects the /token file's Header Handler component. An attacker can manipulate the Host argument to exploit reliance on reverse DNS resolution (looking up a domain name from an IP address), potentially allowing remote access to systems where the attacker has login credentials.
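The CVE entry is terse, but the flaw class, basing a trust decision on reverse DNS, which the client's network owner controls, can be illustrated in Python. The function names and trust list below are hypothetical, not copilot-api's actual code:

```python
import ipaddress
import socket

TRUSTED_NAMES = {"localhost"}  # hypothetical trust list

def is_trusted_unsafe(client_ip: str) -> bool:
    # Vulnerable pattern: reverse DNS (the PTR record) is set by whoever
    # controls the client's IP range, so a remote attacker can make their
    # address resolve to "localhost" and pass this check.
    try:
        name, _, _ = socket.gethostbyaddr(client_ip)
    except OSError:
        return False
    return name in TRUSTED_NAMES

def is_trusted_safe(client_ip: str) -> bool:
    # Safer pattern: check the IP itself, which cannot be spoofed on an
    # established TCP connection.
    return ipaddress.ip_address(client_ip).is_loopback
```

The safe variant never consults DNS at all, which is the usual remediation for this class of issue.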
OpenAI has released workspace agents (AI systems that can independently perform tasks) for users on Business, Enterprise, Edu, and Teachers plans within ChatGPT. These agents can handle business tasks like gathering product feedback and drafting emails, building on growing interest in autonomous AI agents across the industry.
A Chinese cybersecurity company called 360 Digital Security Group claims to have discovered 1,000 vulnerabilities (weaknesses in software that attackers can exploit) using AI tools, including some vulnerabilities found at the Tianfu Cup hacking contest. The article compares these claims to those made about Anthropic's Claude Mythos model, suggesting skepticism about the actual capabilities being reported.
Google announced new AI agents and security tools designed to help security teams defend against AI-based attacks, particularly in response to threats like Anthropic Mythos. The company introduced three new agents within Google Security Operations to automate threat detection and response, expanded the Wiz platform to provide visibility across multiple cloud environments and AI development tools, and created new security measures like AI-BOM (a system that catalogs all AI components used in an organization) and Agent Gateway to govern how AI agents interact with each other and enforce security policies.
Fix: Google's explicit mitigations include: (1) Three new AI agents in Google Security Operations for threat hunting, detection engineering, and third-party context enrichment, now in or entering preview; (2) Wiz expansion supporting AWS, Azure, Databricks, AWS Agentcore, Gemini Enterprise Agent Platform, Microsoft Azure Copilot Studio, and Salesforce Agentforce with inline scanning of AI-generated code and AI-BOM inventory; (3) Agent Identity and Agent Gateway for governance and policy enforcement; (4) Deeper integrations for Model Armor to mitigate prompt injection (tricking an AI by hiding instructions in its input) and data leakage; (5) Reworked bot and fraud detection through Google Cloud Fraud Defense to distinguish between humans, bots, and AI agents. (Source: CSO Online)
Google announced new AI agents and security tools designed to help security teams keep pace with the increasing number of vulnerabilities and cyber threats. The company introduced three new agents embedded in Google Security Operations (for threat hunting, detection engineering, and gathering external intelligence), expanded the Wiz security platform to monitor AI development across multiple clouds, and created tools like AI-BOM (AI bill of materials, an inventory of all AI components used in an organization) and Agent Gateway to secure interactions between AI agents. These moves represent a shift toward automated, agent-based defense rather than relying solely on human analysts.
Fix: Google's announced solutions include: three new AI agents in Google Security Operations for threat hunting and detection engineering (in preview); a threat intelligence enrichment agent (entering preview); expanded Wiz integration supporting AWS, Azure, Databricks, and agent studios like Gemini Enterprise Agent Platform; inline scanning of AI-generated code; AI-BOM for inventorying AI components to address shadow AI; Agent Identity and Agent Gateway for governance and policy enforcement; and deeper Model Armor integrations to mitigate prompt injection (tricking an AI by hiding instructions in its input) and data leakage risks. (Source: CSO Online)
Trailmark is an open-source library that converts source code into a queryable call graph (a visual map of how functions and classes connect to each other) that AI systems like Claude can analyze directly. Rather than examining code as flat lists of findings, Trailmark lets AI reason about code structure as a graph, making it better at identifying security risks like whether untrusted input can reach vulnerable code.
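Trailmark's actual API isn't shown here, but the core idea, turning code into a call graph and querying reachability, can be sketched with Python's standard `ast` module (all names below are illustrative):

```python
import ast
from collections import defaultdict

def build_call_graph(source: str) -> dict[str, set[str]]:
    """Map each function to the plain names it calls (toy sketch)."""
    tree = ast.parse(source)
    graph: dict[str, set[str]] = defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for call in ast.walk(node):
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                    graph[node.name].add(call.func.id)
    return dict(graph)

def reachable(graph: dict[str, set[str]], start: str, target: str) -> bool:
    """Depth-first search: can `target` be reached from `start`?"""
    seen, stack = set(), [start]
    while stack:
        fn = stack.pop()
        if fn == target:
            return True
        if fn in seen:
            continue
        seen.add(fn)
        stack.extend(graph.get(fn, ()))
    return False

code = """
def handle_request(data):
    parsed = parse(data)
    process(parsed)

def process(x):
    run_shell(x)

def parse(d):
    return d.strip()
"""
graph = build_call_graph(code)
```

A reachability query over this graph answers exactly the kind of question the article describes: `reachable(graph, "handle_request", "run_shell")` asks whether untrusted request data can flow to a shell-execution sink.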
Anthropic's Project Glasswing uses an AI model called Mythos that is extraordinarily effective at finding software vulnerabilities, discovering bugs that humans missed for decades and even chaining multiple bugs together into working exploits. However, the critical problem is that fewer than 1% of vulnerabilities Mythos finds are actually patched, revealing a massive gap between how fast AI can discover security flaws (machine speed) and how fast human teams can fix them (calendar speed, typically four days per cycle).
Researchers at Palo Alto Networks built an autonomous multi-agent AI system called Zealot to test whether AI could independently perform cloud attacks. The system successfully chained together multiple exploitation techniques (SSRF, credential theft, and data theft) against a test Google Cloud environment, demonstrating that AI acts as a force multiplier for known cloud misconfigurations rather than creating entirely new vulnerabilities.
Microsoft is integrating Anthropic's Mythos, an advanced AI model, into its Security Development Lifecycle to help find software vulnerabilities (security flaws in code) and strengthen code earlier in development. While this move signals that AI is becoming central to how major software companies build secure products, analysts note that powerful AI models like Mythos could also make it faster for attackers to find and exploit vulnerabilities, raising concerns about the dual-use nature of these tools.
Fix: Update to version 2026.416.0, which patches the vulnerability. (Source: NVD/CVE Database)
Fix: @paperclipai/server version 2026.416.0 fixes the issue. (Source: NVD/CVE Database)
Claude Mythos, an AI model from Anthropic, discovered 271 vulnerabilities in Firefox 148, more than ten times what previous AI tools found, demonstrating AI's growing ability to uncover security bugs at scale. All 271 flaws were fixed in Firefox 150's release. While the AI isn't finding entirely new types of bugs, it's closing gaps in vulnerability detection that fuzzing (automated testing that feeds programs malformed inputs to trigger bugs) and human teams had previously missed, potentially shifting the balance in favor of defenders.
Fix: All 271 vulnerabilities discovered in Firefox 148 have been fixed in Firefox 150. (Source: CSO Online)
Marimo has a pre-authentication remote code execution vulnerability (RCE, where an attacker can run arbitrary commands on a system they don't control) that allows unauthenticated attackers to gain shell access and execute arbitrary commands without logging in first. This vulnerability is actively being exploited in real-world attacks.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. (Source: CISA Known Exploited Vulnerabilities)
OpenAI is running a bug bounty program called the Bio Bug Bounty for GPT-5.5, inviting security researchers to find universal jailbreaks (methods to bypass safety restrictions with a single prompt) that can defeat five biology safety questions. The program offers $25,000 for the first successful universal jailbreak and smaller awards for partial results, with applications open from April 23 to June 22, 2026 and testing running through July 27, 2026.
IBM CEO Arvind Krishna stated that geopolitical uncertainty, particularly the Iran conflict, is causing the company to provide cautious financial guidance despite beating first-quarter earnings expectations. He also expressed concerns about potential economic slowdowns affecting consumer spending and European growth, though he noted IBM's Middle East business performed well. Additionally, Krishna discussed how new AI models like Anthropic's Mythos, which can find security vulnerabilities at unprecedented speed, will likely be replicated by competitors and pose significant cybersecurity concerns that have caught the attention of U.S. government officials.