All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
OpenAI CEO Sam Altman acknowledged that the company rushed into a deal with the U.S. Department of Defense, calling it "opportunistic and sloppy," after public backlash over the timing and terms. The company plans to amend the contract to add safeguards, including language stating that "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals," and will work with the Pentagon on technical protections for its AI tools.
Fix: OpenAI will amend the contract to include new language stating that "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals." The company also stated it would work with the Pentagon on technical safeguards, and Altman affirmed that the Defense Department had confirmed OpenAI's tools would not be used by intelligence agencies such as the NSA.
Source: CNBC Technology

The OpenClaw macOS beta onboarding flow had a security flaw where it exposed a PKCE code_verifier (a secret token used in OAuth, a system for secure login) by putting it in the OAuth state parameter, which could be seen in URLs. This vulnerability only affected the macOS beta app's login process, not other parts of the software.
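For context, a minimal sketch of correct PKCE handling (per RFC 7636, not OpenClaw's actual code): the code_verifier stays secret on the client, only its SHA-256 code_challenge travels in the authorization URL, and the state parameter is an independent random value that must never carry the verifier.

```python
import base64, hashlib, secrets

def make_pkce_pair():
    # Verifier: high-entropy secret that stays on the client.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # Challenge: one-way hash of the verifier; safe to expose in URLs.
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
state = secrets.token_urlsafe(16)   # random CSRF token, unrelated to the verifier
auth_params = {
    "code_challenge": challenge,
    "code_challenge_method": "S256",
    "state": state,                 # safe: reveals nothing secret
}
# The flaw described above amounts to putting `verifier` into `state`,
# which exposes the secret anywhere the URL is logged or visible.
```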
OpenClaw had a security flaw in its hook authentication rate limiter (the system that limits how many times someone can try to log in) where IPv4 addresses and IPv4-mapped IPv6 addresses (the newer internet protocol format that can represent older addresses like ::ffff:1.2.3.4) of the same client were counted separately, allowing attackers to double their brute-force attempts from 20 to 40 per minute by using both address forms.
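The canonicalization the fix calls for can be sketched as follows (illustrative, not OpenClaw's code): map IPv4-mapped IPv6 addresses onto their plain IPv4 form so both spellings of one client share a single rate-limit bucket.

```python
import ipaddress

def canonical_client_ip(raw: str) -> str:
    # Normalize ::ffff:a.b.c.d to a.b.c.d so both forms key one bucket.
    addr = ipaddress.ip_address(raw)
    if isinstance(addr, ipaddress.IPv6Address) and addr.ipv4_mapped:
        addr = addr.ipv4_mapped
    return str(addr)
```

With this in place, `canonical_client_ip("1.2.3.4")` and `canonical_client_ip("::ffff:1.2.3.4")` produce the same key, closing the doubled-budget loophole.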
A WordPress plugin called 'AI ChatBot with ChatGPT and Content Generator by AYS' has a security flaw in versions up to 2.7.5 where missing authorization checks (verification that a user has permission to perform an action) allow attackers without accounts to view, modify, or delete the plugin's ChatGPT API key (a secret code needed to use OpenAI's service). The vulnerability was partially fixed in version 2.7.5 and fully fixed in version 2.7.6.
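A sketch of the bug class in Python (the plugin itself is PHP, and all names here are hypothetical): the handler must verify the caller's capability before reading or writing the stored secret, which is exactly the check that was missing.

```python
def get_chatgpt_api_key(current_user: dict, stored_key: str) -> str:
    # The missing authorization check: only admins may see the secret.
    # "manage_options" mirrors WordPress's admin capability name.
    if "manage_options" not in current_user.get("capabilities", ()):
        raise PermissionError("administrator capability required")
    return stored_key
```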
Hackers are using CyberStrikeAI, an open-source AI security testing platform, to automate attacks against network devices like firewalls. The tool combines over 100 security utilities with an AI decision engine (compatible with GPT, Claude, and DeepSeek models) to automatically scan networks, find vulnerabilities, and execute attacks with minimal hacker skill required. Researchers warn this represents a growing threat as adversaries adopt AI-powered orchestration engines (systems that coordinate multiple tools automatically) to target exposed network equipment.
ChatGPT's mobile app uninstalls surged 295% after OpenAI announced a partnership with the U.S. Department of Defense, while competitor Anthropic's Claude app saw downloads jump 37-51% after publicly declining a similar defense partnership over concerns about AI being used for surveillance and autonomous weapons. The shift in user preference was reflected in app store rankings, with Claude reaching the number one position and ChatGPT receiving a sharp increase in negative reviews.
OpenClaw Gateway had two security flaws that could let an attacker with a valid token escalate their access: the HTTP endpoint (`POST /tools/invoke`, a web interface for running tools) didn't block dangerous tools like session spawning by default, and the permission system could auto-approve risky operations without enough user confirmation. Together, these could allow an attacker to execute commands or control sessions if they reach the Gateway.
Stripe released a preview feature that helps AI startups automatically bill their customers for AI model usage (tokens, which are units of text that AI models process) and add a profit margin on top of the underlying costs. For example, a startup can charge customers 30% more than what it pays to access models from providers like OpenAI or Google, with Stripe automating the tracking and billing process across multiple AI models and third-party gateways.
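The markup arithmetic in the example works out as sketched below; the rates and function name are illustrative, and this is not Stripe's billing API.

```python
def billed_amount(tokens: int, provider_rate_per_1k: float, margin: float = 0.30) -> float:
    # Underlying provider cost, then the startup's margin on top.
    cost = tokens / 1000 * provider_rate_per_1k
    return round(cost * (1 + margin), 6)

# Example: 50,000 tokens at $0.002 per 1K costs $0.10; with a 30% margin
# the customer is billed $0.13.
```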
OpenAI won a Pentagon contract that Anthropic refused, sparking public backlash over concerns about the company's involvement in mass surveillance and automated weaponry. The situation highlights that as AI companies become part of national security infrastructure, neither the companies nor the government appear ready to manage the ethical and policy challenges this creates, particularly around who should have power over these decisions.
A critical vulnerability in OpenClaw, a popular AI tool used by developers, has been discovered and patched. The flaw is part of a pattern of security problems affecting this rapidly-adopted AI agent (a software system that can perform tasks autonomously).
OpenClaw's canvas tool contains a path traversal vulnerability (a security flaw that allows reading files outside intended directories) in its `a2ui_push` action. An authenticated attacker can supply any filesystem path to the `jsonlPath` parameter, and the gateway reads the file without validation and forwards its contents to connected nodes, potentially exposing sensitive files like credentials or SSH keys.
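A hedged sketch of the containment check the gateway lacked: resolve the requested path and require it to stay under an allowed root before reading. The parameter name mirrors the advisory's `jsonlPath`, but the root directory and implementation are assumptions.

```python
from pathlib import Path

ALLOWED_ROOT = Path("/var/lib/openclaw/canvas").resolve()  # hypothetical root

def safe_read_jsonl(jsonl_path: str) -> bytes:
    # Resolve symlinks and '..' segments, then enforce containment.
    candidate = (ALLOWED_ROOT / jsonl_path).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):  # Python 3.9+
        raise ValueError(f"path escapes allowed root: {jsonl_path}")
    return candidate.read_bytes()
```

The vulnerable pattern skips the `is_relative_to` check, so values like `../../../etc/passwd` or an absolute path to an SSH key are read and forwarded verbatim.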
Anthropic has updated Claude to make switching from other AI chatbots easier by adding memory features to the free plan and creating tools to import user data from competitors like ChatGPT and Gemini. These updates let users transfer the context and conversation history their previous AI already knows about them, so they don't have to re-teach Claude the same information.
OpenChatBI has a path traversal vulnerability (a security flaw where attackers can access files outside intended directories) in its save_report tool because it doesn't properly validate the file_format parameter, allowing attackers to use sequences like '/../' to write files to arbitrary locations and potentially execute malicious code.
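One standard remedy for this class of bug is strict allow-list validation of the parameter before it ever touches a path, sketched below; the format names and function are illustrative, not OpenChatBI's code.

```python
ALLOWED_FORMATS = {"csv", "json", "html"}  # assumed set, for illustration

def build_report_filename(report_id: str, file_format: str) -> str:
    # Reject anything outside the allow-list, so sequences like '/../'
    # can never be spliced into the output path.
    if file_format not in ALLOWED_FORMATS:
        raise ValueError(f"unsupported file_format: {file_format!r}")
    return f"{report_id}.{file_format}"
```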
CVE-2026-2256 is a command injection vulnerability (a flaw where an attacker tricks a program into running unwanted operating system commands) in ModelScope's ms-agent software versions v1.6.0rc1 and earlier. An attacker can exploit this by sending specially crafted prompts to execute arbitrary commands on the affected system.
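A generic illustration of the command injection class (not ms-agent's actual code): if attacker-influenced text is spliced into a shell string, metacharacters like `;` become live syntax, whereas passing an argument list, or quoting with `shlex`, keeps the value inert.

```python
import shlex

untrusted = "report.txt; rm -rf ~"   # e.g. text steered by a crafted prompt

# Dangerous: subprocess.run(f"cat {untrusted}", shell=True) would let the
# shell execute the injected 'rm -rf ~'.

# Safe: one literal argument, no shell parsing involved.
safe_cmd = ["cat", untrusted]        # for subprocess.run(safe_cmd)

# If a shell string is unavoidable, quote the value first:
quoted = shlex.quote(untrusted)
```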
Claude, an AI model made by Anthropic, became more popular after Anthropic declined a Pentagon partnership over ethics concerns and the Pentagon chose OpenAI's ChatGPT instead for classified military networks. Claude reached the top spot on Apple's US App Store chart shortly after the decision, showing that public interest in the model increased following the controversy.
Apple is exploring using Google's servers to store data for an upgraded version of Siri that runs on Google's Gemini AI models (a large language model created by Google). This represents a deeper partnership between Apple and Google than previously announced, as Apple works to catch up in AI capabilities while maintaining its privacy standards.
Many users are switching from ChatGPT to Claude, an AI assistant made by Anthropic, following controversies over OpenAI's partnership with the Pentagon for potential military use. Claude has surged in popularity, with the company reporting record sign-ups and a 60% jump in free users since January. The article provides a guide for switching, including how to export your ChatGPT data, import it into Claude, and permanently delete your ChatGPT account.
Fix: OpenClaw removed Anthropic OAuth sign-in from macOS onboarding and replaced it with setup-token-only authentication. The fix is available in patched version 2026.2.25.
Source: GitHub Advisory Database

Fix: The fix centralizes client-IP normalization into a single canonical routine for auth rate-limiting and uses that normalized IP as the key for hook auth throttling. This issue is patched in version 2026.2.22 of the openclaw npm package (fix commit 3284d2eb227e7b6536d543bcf5c3e320bc9d13c5).
Source: GitHub Advisory Database

Fix: Update the plugin to version 2.7.6 or later, where the vulnerability was fully fixed.
Source: NVD/CVE Database

Multiple Qualcomm chipsets have a memory corruption vulnerability (a bug where data in computer memory gets corrupted or overwritten) that occurs during memory alignment operations, which are critical for how systems organize data. This vulnerability is actively being exploited by attackers in real-world attacks.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.
Source: CISA Known Exploited Vulnerabilities

Broadcom VMware Aria Operations contains a command injection vulnerability (a flaw that lets attackers insert malicious commands into the software) that allows unauthenticated attackers (those without login credentials) to execute arbitrary commands and potentially gain remote code execution (the ability to run any code on the system from a distance) during product migration support. This vulnerability is currently being actively exploited by attackers.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.
Source: CISA Known Exploited Vulnerabilities

Fix: Update to OpenClaw version 2026.2.14 or later. The fix includes: denying high-risk tools over HTTP by default (with configuration overrides available via `gateway.tools.{allow,deny}`), requiring explicit prompts for any non-read/search permissions in the ACP (access control permission) system, adding security warnings when high-risk tools are re-enabled, and making permission matching stricter to prevent accidental auto-approvals. Additionally, keep the Gateway loopback-only (only accessible locally) by setting `gateway.bind="loopback"` or using `openclaw gateway run --bind loopback`, and avoid exposing it directly to the internet without using an SSH tunnel or Tailscale.
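The hardening steps above can be sketched as a config fragment. The dotted key names (`gateway.bind`, `gateway.tools.deny`) come from the fix notes, but the JSON file layout and the `sessions_spawn` tool name are assumptions for illustration:

```json
{
  "gateway": {
    "bind": "loopback",
    "tools": {
      "deny": ["sessions_spawn"]
    }
  }
}
```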
Source: GitHub Advisory Database

Fix: The vulnerability has been patched. No specific version number or patching instructions are provided in the source text.
Source: Dark Reading

Fix: Upgrade to version 0.2.2 or later, which includes the fix from PR #12.
Source: GitHub Advisory Database

Fix: To transfer your data from ChatGPT to Claude:
1. In ChatGPT, go to Settings > Personalization > Memory > Manage to review and copy your stored preferences, or go to Settings > Data Controls > Export Data to download your chat history as text or JSON files.
2. In Claude, go to Settings > Capabilities and turn on Memory.
3. Start a new conversation and paste your information using a prompt like "Here's some important context I'd like you to remember. Update your memory about me with this." For exported chat files, ask Claude to "Review this and summarize my key preferences."
4. To delete your ChatGPT account completely: go to Settings > Personalization > Memory and delete stored memory, type "Delete all my memory and personalized data" in a final chat, then navigate to account management settings to delete your account entirely.
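For step 3, a hypothetical helper that condenses an exported history into a paste-able summary. The export layout assumed here (a JSON list of objects with a "title" field) is an illustration, not OpenAI's documented schema; adjust field names to match your actual export files.

```python
import json

def summarize_export(raw: str, limit: int = 20) -> str:
    # Parse the export and collect up to `limit` conversation titles.
    conversations = json.loads(raw)
    titles = [c.get("title", "(untitled)") for c in conversations[:limit]]
    return ("Here's some important context I'd like you to remember. "
            "Past conversation topics: " + "; ".join(titles))

sample = json.dumps([{"title": "Trip planning"}, {"title": "Python help"}])
```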
Source: TechCrunch

OpenAI announced a deal allowing the US military to use its AI technology in classified settings, claiming it includes protections against autonomous weapons and mass surveillance, unlike Anthropic's rejected negotiations. However, legal experts note that OpenAI's agreement relies on the assumption that the government will follow existing laws and policies, rather than giving the Pentagon explicit prohibitions like Anthropic had proposed, meaning the military can still use the technology for any lawful purpose.