New tools, products, platforms, funding rounds, and company developments in AI security.
OpenAI is modifying its contract with the US Department of Defense after CEO Sam Altman acknowledged the original deal appeared poorly planned. The company will now explicitly prohibit its AI technology from being used for mass surveillance (monitoring large groups of people without their knowledge) or by intelligence agencies like the NSA (National Security Agency, which gathers foreign intelligence for the US).
Web-based indirect prompt injection (IDPI) is an attack where adversaries hide malicious instructions in website content that AI systems later read and unknowingly execute, such as through webpage summarization or content analysis features. Researchers found real-world examples of these attacks being used for ad fraud evasion, phishing promotion, data destruction, unauthorized transactions, and information theft, showing that IDPI is no longer just theoretical but actively weaponized. Unlike direct prompt injection (where attackers directly submit malicious input to an AI), IDPI exploits the normal behavior of AI systems processing benign-looking web content.
A vulnerability in the MS-Agent AI Framework allows attackers to compromise an entire system by exploiting the Shell tool through improper input sanitization (failure to clean and validate user input). Attackers can use this flaw to modify system files and steal data.
This article describes 13 essential security tools that companies need to protect against cyber threats, including XDR (extended detection and response, an AI-powered system that identifies threats across networks and devices), MFA (multifactor authentication, requiring users to verify their identity multiple ways), NAC (network access control, which checks devices before allowing network access), and DLP (data loss prevention, which monitors for sensitive data being sent outside the company). The article explains why each tool is important but does not discuss any specific fixes, patches, or solutions to existing security problems.
OpenAI CEO Sam Altman acknowledged that the company rushed into a deal with the U.S. Department of Defense, calling it "opportunistic and sloppy," after public backlash over the timing and terms. The company plans to amend the contract to add safeguards, including language stating that "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals," and will work with the Pentagon on technical protections for their AI tools.
Hackers are using CyberStrikeAI, an open-source AI security testing platform, to automate attacks against network devices like firewalls. The tool combines over 100 security utilities with an AI decision engine (compatible with GPT, Claude, and DeepSeek models) to automatically scan networks, find vulnerabilities, and execute attacks with minimal hacker skill required. Researchers warn this represents a growing threat as adversaries adopt AI-powered orchestration engines (systems that coordinate multiple tools automatically) to target exposed network equipment.
ChatGPT's mobile app uninstalls surged 295% after OpenAI announced a partnership with the U.S. Department of Defense, while competitor Anthropic's Claude app saw downloads jump 37-51% after publicly declining a similar defense partnership over concerns about AI being used for surveillance and autonomous weapons. The shift in user preference was reflected in app store rankings, with Claude reaching the number one position and ChatGPT receiving a sharp increase in negative reviews.
Stripe released a preview feature that helps AI startups automatically bill their customers for AI model usage (tokens, which are units of text that AI models process) and add a profit margin on top of the underlying costs. For example, a startup can charge customers 30% more than what it pays to access models from providers like OpenAI or Google, with Stripe automating the tracking and billing process across multiple AI models and third-party gateways.
OpenAI won a Pentagon contract that Anthropic refused, sparking public backlash over concerns about the company's involvement in mass surveillance and automated weaponry. The situation highlights that as AI companies become part of national security infrastructure, neither the companies nor the government appear ready to manage the ethical and policy challenges this creates, particularly around who should have power over these decisions.
A critical vulnerability in OpenClaw, a popular AI tool used by developers, has been discovered and patched. The flaw is part of a pattern of security problems affecting this rapidly-adopted AI agent (a software system that can perform tasks autonomously).
Anthropic has updated Claude to make switching from other AI chatbots easier by adding memory features to the free plan and creating tools to import user data from competitors like ChatGPT and Gemini. These updates let users transfer the context and conversation history their previous AI already knows about them, so they don't have to re-teach Claude the same information.
Claude, an AI model made by Anthropic, became more popular after the Pentagon rejected it due to ethics concerns and chose OpenAI's ChatGPT instead for classified military networks. Claude reached the top spot on Apple's US app store chart shortly after this decision, showing that public interest in the model increased following the military conflict.
Apple is exploring using Google's servers to store data for an upgraded version of Siri that runs on Google's Gemini AI models (a large language model created by Google). This represents a deeper partnership between Apple and Google than previously announced, as Apple works to catch up in AI capabilities while maintaining its privacy standards.
Many users are switching from ChatGPT to Claude, an AI assistant made by Anthropic, following controversies over OpenAI's partnership with the Pentagon for potential military use. Claude has surged in popularity, with the company reporting record sign-ups and a 60% jump in free users since January. The article provides a guide for switching, including how to export your ChatGPT data, import it into Claude, and permanently delete your ChatGPT account.
Google Chrome had a security flaw (CVE-2026-0628, a CVSS score of 8.8, which measures vulnerability severity from 0-10) that allowed malicious browser extensions to gain unauthorized access to the Gemini Live panel, a built-in AI assistant, and perform privileged actions like accessing cameras, microphones, and local files. The vulnerability was caused by insufficient policy enforcement in the WebView tag (a component that displays web content), which let attackers inject malicious code into pages that should have been protected.
Nvidia is investing $4 billion total ($2 billion each) into two companies, Lumentum and Coherent, that develop photonics technology (devices like optical transceivers and lasers that move data using light). These technologies could make AI data centers more energy-efficient and allow faster data transfer between components, building on Nvidia's previous acquisition of Mellanox to strengthen its networking capabilities.
AI agents using the Model Context Protocol (MCP, a system that lets AI connect to apps and data to automate business tasks) are rapidly being deployed in enterprises but operate as 'identity dark matter' - invisible to traditional access control systems that track who can do what in a company. These agents tend to seek the easiest path to complete tasks, gravitating toward weak security shortcuts like old credentials and long-lived tokens, which creates risks both from accidental misuse and potential abuse at machine speed across multiple systems.
Fix: The source mentions that Palo Alto Networks offers these defensive capabilities: Advanced DNS Security, Advanced URL Filtering, Prisma AIRS, Prisma Browser, and the Unit 42 AI Security Assessment service to help protect against web-based IDPI threats. The source also notes that defenders need 'proactive, web-scale capabilities to detect IDPI, distinguish benign and malicious prompts, and identify underlying attacker intent,' though specific implementation details are not provided.
Palo Alto Unit 42The US military reportedly used Anthropic's Claude AI model to help plan attacks on Iran, enabling bombing campaigns faster than human decision-making can occur by shortening the "kill chain" (the process from identifying a target to getting legal approval and launching a strike). Experts worry this technology could push human decision-makers out of the loop entirely.
Fix: OpenAI will amend the contract to include new language stating that "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals." The company also stated it would work with the Pentagon on technical safeguards, and Altman affirmed that the Defense Department had confirmed OpenAI's tools would not be used by intelligence agencies such as the NSA.
CNBC TechnologyFix: The vulnerability has been patched. No specific version number or patching instructions are provided in the source text.
Dark ReadingFix: To transfer your data from ChatGPT to Claude: (1) In ChatGPT Settings, go to Personalization > Memory > Manage to review and copy your stored preferences, or go to Settings > Data Controls > Export Data to download your chat history as text or JSON files. (2) In Claude, go to Settings > Capabilities and turn on Memory. (3) Start a new conversation and paste your information using a prompt like 'Here's some important context I'd like you to remember. Update your memory about me with this.' or ask Claude to 'Review this and summarize my key preferences' for exported chat files. (4) To delete your ChatGPT account completely: go to Settings > Personalization > Memory and delete stored memory, type 'Delete all my memory and personalized data' in a final chat command, then navigate to account management settings to delete your account entirely.
TechCrunchOpenAI announced a deal allowing the US military to use its AI technology in classified settings, claiming it includes protections against autonomous weapons and mass surveillance, unlike Anthropic's rejected negotiations. However, legal experts note that OpenAI's agreement relies on the assumption that the government will follow existing laws and policies, rather than giving the Pentagon explicit prohibitions like Anthropic had proposed, meaning the military can still use the technology for any lawful purpose.
The Department of Defense has designated Anthropic (an AI company) as a "supply-chain risk" after the company refused to give the military unrestricted access to its AI systems, specifically declining to allow mass surveillance of Americans or autonomous weapons that can fire without human oversight. Hundreds of tech workers from major firms have signed an open letter opposing this designation, arguing it punishes the company for declining a contract and sets a dangerous precedent that could force other companies to accept government demands or face retaliation. The designation is not yet final, as the government must complete a risk assessment and notify Congress before it takes effect, and Anthropic says it will challenge the designation in court.
Fix: Google patched the vulnerability in Chrome version 143.0.7499.192/.193 for Windows/Mac and 143.0.7499.192 for Linux in early January 2026.
The Hacker News