All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
The Pentagon has signed agreements with OpenAI, Google, Microsoft, Amazon, Nvidia, xAI, and Reflection to use their AI tools in classified military settings, but excluded Anthropic after labeling it a supply-chain risk (a potential weak point in security). This expands earlier deals that allowed some companies like OpenAI and xAI to provide AI systems for authorized military use.
This article covers Elon Musk's lawsuit against OpenAI (an AI company), in which he claims the company stole a nonprofit organization and that he was the main force behind its success. During his courtroom testimony, Musk struggled, sparring with opposing lawyers and changing his statements, and indications suggest he is unlikely to win the case.
Gig workers on platforms like Fiverr are increasingly using generative AI (artificial intelligence systems that create text, images, or video) to quickly produce cheap content for clients, particularly AI-generated Bible story animations shared on social media. This represents a shift from the platform's original purpose of connecting clients with skilled freelancers who developed their expertise over years.
Microsoft is launching a new AI agent within Word that is designed specifically for legal teams to help with tasks like reviewing contracts and managing document edits. Unlike general AI models, the Legal Agent follows structured workflows (predetermined sets of steps) based on actual legal practices, handling specific repeatable tasks like reviewing contract clauses against a predefined playbook (a set of rules or guidelines).
Business email compromise (BEC, a scam where attackers trick employees into sending money by impersonating trusted contacts) continues to succeed even when organizations use MFA (multi-factor authentication, a security method requiring multiple forms of ID to access accounts) because attackers exploit human behavior and business processes rather than stealing credentials. Real attacks like the Toyota case (where an employee transferred $30 million based on a fake urgent email) and the Arup case (where deepfake technology impersonated a manager) show that the weakest point is often the human decision-maker approving payments, not the technical security controls.
OT (operational technology, the systems that control physical industrial processes like power plants or factories) cyber risk requires a different management approach than IT security because OT systems have long lifecycles, limited patching windows, and third-party dependencies that create unique vulnerabilities. The article argues that managing OT risk at scale is fundamentally a leadership and governance challenge rather than a purely technical problem, requiring consistent decision-making across all sites and clear accountability structures.
AI is changing how software is developed by affecting coding practices, tools, developer roles, and the overall development process across all stages, from initial planning through maintenance. The article discusses how AI agents are being integrated throughout the software development life cycle (the complete process of creating and maintaining software, from concept to deployment).
Threat actors are abusing AI distribution platforms like Hugging Face and ClawHub to spread malware by uploading trojanized files (files containing hidden malicious code) that trick users into downloading them through social engineering. The attackers use indirect prompt injection (embedding hidden instructions in data that AI systems read and execute without the user knowing) to make AI agents automatically download and run malware on users' computers, with hundreds of malicious files identified across both platforms.
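The indirect-prompt-injection vector described above relies on hidden instructions embedded in data an agent reads. As a minimal illustration (not a real defense), a naive keyword scan over such files might look like the following; the patterns and function name are assumptions for the sketch, and real detection requires far more than keyword matching:

```python
import re

# Illustrative phrases only; attackers routinely obfuscate such strings.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"download and (run|execute)",
    r"curl .*\|\s*(sh|bash)",
]

def flag_suspicious(text: str) -> list[str]:
    """Return the patterns that match the given document text."""
    lowered = text.lower()
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, lowered)]
```

A scan like this could run over README files or model cards before an agent is allowed to act on their contents, but it should be treated as one weak signal, not a control.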
A serious vulnerability called Copy Fail (CVE-2026-31431) in the Linux kernel allows unprivileged users to gain root access (the highest permission level) through a simple exploit, affecting virtually all Linux systems since 2017. With root access, attackers can steal or delete data. Until Linux distributions release patches, the main defense is monitoring for unauthorized privilege escalation attempts.
N/A -- The provided content is a metadata header and navigation element from a web page, not an actual article or analysis. It contains only a title, date, author attribution, topic tags, and sponsorship information with no substantive technical content about GPT-5.5, cyber capabilities, or any security findings to summarize.
IBM Langflow Desktop versions 1.0.0 through 1.8.4 contain a code injection vulnerability (CWE-94, a flaw where attackers can insert and execute their own code) that allows attackers to run arbitrary commands (any commands an attacker chooses) with the same permissions as the Langflow application. This could let attackers steal sensitive information like API keys and database passwords, modify files, or attack other systems on the network.
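CWE-94 typically arises when attacker-controlled text reaches an evaluator such as eval(). The sketch below shows the vulnerable pattern next to a common remediation, an operation allow-list; none of this is Langflow's actual code, just a hedged illustration:

```python
def run_unsafe(user_input: str):
    # Vulnerable pattern: attacker-controlled text reaches eval(), so
    # arbitrary Python runs with the application's permissions.
    return eval(user_input)

# Remediation: dispatch through an explicit allow-list of operations
# instead of evaluating attacker-controlled strings.
SAFE_OPS = {"upper": str.upper, "lower": str.lower}

def run_safe(op: str, value: str) -> str:
    if op not in SAFE_OPS:
        raise ValueError(f"unsupported operation: {op}")
    return SAFE_OPS[op](value)
```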
IBM Langflow OSS (open-source software) versions 1.0.0 through 1.8.4 have a vulnerability where any user can view and delete other users' data by supplying a flow_id (a reference number for a workflow). This happens because the system doesn't properly check who should be allowed to access certain information, allowing unauthorized access to transaction logs and build data.
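The flaw described is a classic missing-authorization check: the server trusts any supplied flow_id. A minimal sketch of the pattern and its fix, using hypothetical names and data rather than Langflow's real code:

```python
# Hypothetical in-memory store standing in for the application's database.
FLOWS = {
    "flow-1": {"owner": "alice", "transactions": ["tx-100"]},
    "flow-2": {"owner": "bob", "transactions": ["tx-200"]},
}

def get_transactions_vulnerable(flow_id: str):
    # Vulnerable pattern: any caller who supplies a flow_id gets the data.
    return FLOWS[flow_id]["transactions"]

def get_transactions_fixed(flow_id: str, current_user: str):
    # Fixed pattern: verify the requesting user owns the flow first.
    flow = FLOWS.get(flow_id)
    if flow is None or flow["owner"] != current_user:
        raise PermissionError("not authorized for this flow_id")
    return flow["transactions"]
```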
CVE-2026-40687 is a vulnerability in Exim email software (before version 4.99.2) where the SPA authentication driver (a method for verifying user identity) can be exploited with a malicious SPA resource to cause an out-of-bounds write (writing data to memory locations outside the intended area), which crashes the email connection or exposes uninitialized heap memory data (unused memory that may contain sensitive information).
IBM Langflow Desktop version 1.8.4 and earlier has a path traversal vulnerability (CWE-22, a flaw that lets attackers access files outside intended directories) that allows remote attackers to view arbitrary files on a system by sending specially crafted URLs containing "dot dot" sequences (/../), which trick the system into navigating to restricted folders.
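Path traversal of this kind is usually blocked by resolving the user-supplied path and confirming it stays under the intended base directory. A small sketch, assuming a hypothetical base directory (requires Python 3.9+ for Path.is_relative_to); this is a generic defense, not Langflow's patch:

```python
from pathlib import Path

# Hypothetical directory the application is meant to serve files from.
BASE_DIR = Path("/srv/app/files").resolve()

def safe_resolve(requested: str) -> Path:
    # Join the user-supplied path onto the base directory, resolve any
    # "../" sequences and symlinks, then confirm the result is still
    # inside BASE_DIR before using it.
    candidate = (BASE_DIR / requested).resolve()
    if not candidate.is_relative_to(BASE_DIR):
        raise ValueError("path traversal attempt blocked")
    return candidate
```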
The Pentagon's chief technology officer stated that Anthropic remains classified as a supply chain risk (a designation meaning the company's technology threatens U.S. national security), but Anthropic's Mythos AI model, which has advanced capabilities for finding and fixing cyber vulnerabilities, is being treated as a separate urgent national security issue requiring the Department of Defense to strengthen its networks. The DOD has blacklisted Anthropic from working with defense contractors, though the agency is reportedly using Mythos internally and is open to negotiations about safeguards (called guardrails, or restrictions on how the AI can be used) if Anthropic agrees to terms similar to those negotiated with other AI companies.
Goodfire, a San Francisco startup, released Silico, a tool that uses mechanistic interpretability (a technique for understanding how AI models work by mapping their internal neurons and connections) to let researchers see inside AI models and adjust their parameters during training. The tool aims to give developers more control over AI behavior by exposing internal 'knobs and dials' so they can reduce unwanted outputs, making AI development more like traditional software engineering rather than trial-and-error.
Fix: The source describes Silico as the solution itself—it uses mechanistic interpretability to map neurons and pathways inside a model and lets developers tweak them to reduce unwanted behaviors or steer outputs. No additional mitigation steps or fixes beyond using this tool are mentioned in the text.
MIT Technology Review

CISA and international cybersecurity partners released guidance for organizations adopting agentic AI (AI systems that can take actions autonomously on behalf of users). The guidance identifies security challenges with these systems and provides steps for safely designing, deploying, and operating them while connecting AI risk management to existing cybersecurity practices.
Organizations often use AI models from online repositories like Hugging Face without tracking their changes, verifications, or vulnerabilities, which can lead to security risks if models are poisoned (containing hidden malicious code) or contain training biases. Cisco released the Model Provenance Kit, an open source Python-based tool that creates a unique 'fingerprint' for each model using metadata and other signals, allowing organizations to compare models and trace their origins to address these tracking and accountability problems.
Fix: The Model Provenance Kit from Cisco is available on GitHub. The tool has two modes: 'compare' mode enables users to compare two models to identify shared lineage, and 'scan' mode attempts to find the closest lineage for a given model by comparing its fingerprint against Cisco's database of fingerprints. Cisco's dataset of base model fingerprints is also available on Hugging Face.
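The source says the kit fingerprints models from "metadata and other signals" but does not describe the scheme itself. The general idea of a canonicalized-metadata fingerprint can be sketched as follows; the field names, hashing choice, and lineage rule are illustrative assumptions, not the kit's actual algorithm:

```python
import hashlib
import json

def fingerprint(metadata: dict) -> str:
    # Canonicalize the metadata (sorted keys, stable separators) so the
    # same model always yields the same digest, then hash it.
    canonical = json.dumps(metadata, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def same_lineage(meta_a: dict, meta_b: dict,
                 keys=("architecture", "tokenizer")) -> bool:
    # Compare fingerprints of lineage-relevant fields only, so fine-tunes
    # of the same base model can still match while full fingerprints differ.
    subset = lambda m: {k: m.get(k) for k in keys}
    return fingerprint(subset(meta_a)) == fingerprint(subset(meta_b))
```

In 'compare' mode terms, two models whose lineage fields hash identically would be flagged as sharing an ancestor even if their full fingerprints diverge.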
SecurityWeek

Fix: The source explicitly recommends: (1) redesigning approval workflows so high-value transactions require multi-step verification including out-of-band calls (verification methods using a separate communication channel, like a phone call to confirm an email request); (2) simulating BEC scenarios in realistic exercises to identify gaps in response and decision-making; (3) embedding security awareness into daily routines using micro-learning and real incident reviews; (4) empowering teams to challenge unusual requests without fear of reprisal; (5) sharing instances of successful attacks with employees who distribute invoices and oversee financial decisions; and (6) explicitly defining what constitutes high-risk requests, such as first-time payments, changes to vendor banking details, sudden payment requests from executives, or requests that bypass standard procedures.
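Item (6)'s high-risk definitions lend themselves to being codified, so a payment workflow can automatically route flagged requests into the multi-step verification from item (1). A minimal sketch, with the request fields assumed for illustration:

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    vendor_known: bool               # False for first-time payments
    banking_details_changed: bool    # vendor banking details modified
    marked_urgent_by_executive: bool # sudden payment request from an exec
    bypasses_standard_procedure: bool

def is_high_risk(req: PaymentRequest) -> bool:
    # Any single trigger from the article's list is enough to require
    # multi-step, out-of-band verification before payment.
    return any([
        not req.vendor_known,
        req.banking_details_changed,
        req.marked_urgent_by_executive,
        req.bypasses_standard_procedure,
    ])
```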
CSO Online

Fix: Apply kernel patches from your Linux distribution as soon as they are released, and reboot systems after patching. According to the source, 'As soon as patches are available for what's been dubbed the Copy Fail logic bug... As of midday Thursday, only Arch Linux had released a patch,' but other distributions are expected to release patches within days. For Debian, Ubuntu, and Debian-based systems, the exploitable code can be disabled via kernel commands before patches are available, though this option is not feasible in large environments according to the source.
CSO Online

The Linux kernel has a vulnerability where system resources are incorrectly transferred between different security zones, potentially allowing an attacker to gain elevated privileges (privilege escalation, meaning they can perform actions normally restricted to administrators). This vulnerability is currently being exploited by attackers in the wild.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.
CISA Known Exploited Vulnerabilities

Australia's financial regulator (APRA) warns that advanced AI models like Claude Mythos could give attackers powerful tools to find security flaws faster than banks can fix them, threatening the banking sector. The regulator found that banks treat AI as just another technology and lack proper processes to identify and patch vulnerabilities quickly enough to keep up with AI-assisted attacks. APRA calls for urgent overhauls to governance, vulnerability testing, and security assessment of AI platforms.
Fix: APRA identifies the following areas for improvement: (1) urgent need to more rapidly identify and remediate vulnerabilities through major process overhaul, (2) robust security testing across AI-generated code, software components, and libraries, and (3) deeper assessment of major AI platforms and services. The source also notes that regulators are requesting access to Claude Mythos itself so financial institutions can use it to defend against the cyberattacks it could enable.
CSO Online