New tools, products, platforms, funding rounds, and company developments in AI security.
OpenClaw is an open-source AI assistant platform created by Peter Steinberger that has gained popularity in the tech industry. The article describes a fan convention called ClawCon held in Manhattan to celebrate the platform and its community.
London Mayor Sadiq Khan invited AI company Anthropic to expand in the city after the U.S. Pentagon designated it a supply chain risk (a label meaning the government views the company as too insecure to work with). The Pentagon applied the label after Anthropic refused to give defense agencies unrestricted access to its AI tools and raised concerns about its Claude model being used for mass surveillance or autonomous military targeting. The company plans to challenge the designation in court, and Microsoft announced it would continue offering Anthropic's technology to all customers except the U.S. Department of Defense.
The U.S. Department of Defense designated Anthropic (maker of Claude AI) as a supply-chain risk after the company refused to provide unrestricted access for military applications like mass surveillance and autonomous weapons. Microsoft, Google, and AWS confirmed that Claude will remain available to non-defense customers through their platforms, and the designation only restricts direct Department of Defense use, not broader commercial applications.
The article discusses how generative AI (AI systems that can create new text, images, or other content) is being rapidly integrated into many areas of life, but both supporters and critics use exaggerated language that makes it hard to understand what AI actually does and how it works. A documentary called 'The AI Doc' attempts to clarify the current state of AI development by examining both the optimistic and the pessimistic viewpoints.
Claude, an AI chatbot made by Anthropic, is gaining users rapidly on mobile devices after the company's leadership refused to let the Pentagon use it for mass surveillance or autonomous weapons. Claude's daily active users on phones reached 11.3 million in early March, up 183% since the start of the year, and the app became the top-ranked app in the U.S. and 15 other countries, with over 1 million new sign-ups per day.
The Pentagon's chief technology officer described a disagreement with AI company Anthropic over autonomous warfare (military systems that can make decisions and take actions with minimal human control). The military is developing procedures that allow varying degrees of autonomy depending on the level of risk in a given situation.
Anthropic used Claude Opus 4.6 (a large language model, or LLM, which is an AI trained on vast amounts of text to understand and generate language) to find 22 security vulnerabilities in Firefox, including 14 classified as high-severity. The AI model discovered these bugs by scanning nearly 6,000 C++ files in just two weeks, demonstrating that AI can be effective at identifying security flaws in complex software.
Fix: Most issues have been fixed in Firefox 148, with the remainder due in upcoming releases. Anthropic also developed Claude Code Security, which uses an AI agent to automatically generate patches for vulnerabilities; the company uses task verifiers (tools that check whether a proposed fix actually works) to gain confidence that a patch fixes the specific vulnerability while preserving the program's normal functionality.
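For a sense of what a scan like this can look like in practice, here is a minimal sketch of an LLM-driven review loop over a C++ tree, using Anthropic's Python SDK. The model ID, prompt, and checkout path are illustrative assumptions; Anthropic has not published the pipeline it ran against Firefox.

```python
# Minimal sketch of an LLM-driven vulnerability scan over C++ sources.
# Illustrative only: the model ID, prompt, and paths are assumptions,
# not Anthropic's actual Firefox pipeline.
from pathlib import Path

import anthropic  # official Anthropic Python SDK

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = (
    "Review the following C++ source for memory-safety bugs "
    "(use-after-free, out-of-bounds access, integer overflow). "
    "Reply with a JSON list of {line, severity, description} findings."
)

def scan_file(path: Path) -> str:
    """Send one source file to the model and return its raw findings."""
    source = path.read_text(errors="replace")
    response = client.messages.create(
        model="claude-opus-4-6",  # assumed ID for the article's Opus 4.6
        max_tokens=2048,
        messages=[{"role": "user", "content": f"{PROMPT}\n\n{source}"}],
    )
    return response.content[0].text

if __name__ == "__main__":
    for cpp_file in Path("mozilla-central").rglob("*.cpp"):  # hypothetical checkout
        print(f"--- {cpp_file} ---")
        print(scan_file(cpp_file))
```

A production scan would chunk files that exceed the context window and parse the JSON findings for triage; this sketch only shows the per-file loop.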
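The task-verifier idea is also straightforward to sketch. Assuming a hypothetical proof-of-concept script and test runner (Claude Code Security's internals are not public), a verifier accepts a candidate patch only if the PoC stops reproducing and the existing tests still pass:

```python
# Minimal sketch of a task verifier for AI-generated security patches:
# accept a patch only if the proof-of-concept no longer reproduces AND
# the regression tests still pass. All scripts and paths are hypothetical;
# Claude Code Security's actual verifiers are not public.
import subprocess

def run(cmd: list[str]) -> bool:
    """Run a command, returning True on exit code 0."""
    return subprocess.run(cmd, capture_output=True).returncode == 0

def verify_patch(patch_file: str) -> bool:
    if not run(["git", "apply", "--check", patch_file]):
        return False  # patch does not apply cleanly
    run(["git", "apply", patch_file])
    try:
        built = run(["./build.sh"])            # hypothetical build step
        poc_fixed = not run(["./run_poc.sh"])  # the exploit must now fail
        tests_ok = run(["./run_tests.sh"])     # normal behavior must survive
        return built and poc_fixed and tests_ok
    finally:
        run(["git", "apply", "-R", patch_file])  # verifier only judges; revert
```

A real verifier would presumably also check that the PoC fails for the right reason (a clean rejection rather than a different crash), which is where most of the engineering effort would go.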
The Hacker News: The Trump administration released a cybersecurity strategy that emphasizes offensive cyber operations (proactive attacks on adversary networks rather than waiting to respond to attacks), deregulation of industry rules, and AI adoption. The strategy outlines six pillars, including disrupting adversaries, reducing regulations, modernizing government networks with zero-trust architecture (a security model that doesn't automatically trust any user or device), and securing critical infrastructure like power grids and hospitals.
Palantir's stock rallied 15% this week after the U.S. attacked Iran, because the company relies on government spending for about 60% of its revenue and works heavily with military and intelligence agencies. Wall Street showed little concern about the U.S. government blacklisting Anthropic (an AI company that had partnered with Palantir on defense projects), as analysts noted there are alternative AI models available and that replacing Anthropic's systems will take time but is manageable.
Amazon announced that AWS customers can continue using Anthropic's Claude AI models for all work except Department of Defense projects, after the federal government labeled Anthropic a "supply chain risk." Anthropic says it will challenge this designation in court, and major cloud providers (Amazon, Microsoft, and Google) are helping customers transition to alternative AI models for defense-related work.
Google and Microsoft announced they will continue offering Anthropic's Claude AI models to their cloud customers for non-defense work, after the U.S. Defense Department designated Anthropic as a supply chain risk (a designation for suppliers viewed as posing security or operational threats to government systems). The announcements came after the Trump administration instructed federal agencies to stop using Anthropic's technology, but the companies determined that non-defense projects are still permitted under the designation.
The Pentagon and AI companies are in a dispute over whether existing U.S. law allows the government to use AI to analyze bulk commercial data collected from Americans for surveillance purposes. Legal experts point out that current law has a major gap: public information, commercial data (like location and browsing records), and information accidentally collected during foreign surveillance are not legally considered "surveillance," so the government can use them without warrants or court orders, even as AI makes this surveillance much more powerful than before.
Anthropic used Claude Opus 4.6 (an advanced AI model) to test Firefox's code and discovered 22 vulnerabilities, including 14 severe ones, over two weeks. Most of these bugs have already been fixed in Firefox 148 released in February, though some fixes will come in a later update. The AI was much better at finding security problems than creating working exploits to demonstrate them.
Fix: Most vulnerabilities have been fixed in Firefox 148 (released in February). A few remaining fixes will land in the next release.
TechCrunch (Security): Anthropic and the Pentagon failed to agree on how much control the military should have over Anthropic's AI models, particularly regarding use in autonomous weapons and mass surveillance, causing a $200 million contract to fall apart and leading the Pentagon to designate Anthropic a supply-chain risk (a category indicating potential security or reliability concerns). The Department of Defense then turned to OpenAI instead, which accepted the contract, though this decision led to a significant surge in ChatGPT uninstalls. The situation raises an important question about balancing national security needs with responsible AI deployment.
The UN and AI companies are debating who should control how artificial intelligence is used in military contexts, especially after the US military's use of AI in the Iran crisis. AI company Anthropic refused to remove safeguards (safety features built into their AI) that would prevent the US Department of Defense from using its technology for mass surveillance or autonomous lethal weapons (weapons that can select and fire at targets without human control), while OpenAI later agreed to work with the Pentagon despite similar concerns. The article emphasizes that decisions about military AI use raise urgent questions about democratic oversight and international controls, rather than leaving these choices solely to companies or governments.
CISOs (chief information security officers, the executives responsible for an organization's cybersecurity) and corporate boards spend only about 30 minutes per quarter discussing cyber risk, and these conversations lack depth and strategic engagement. The report found that while 95% of CISOs report to their boards regularly, most discussions are brief check-ins rather than collaborative problem-solving, and boards want better insight into emerging threats like AI-driven attacks (attacks powered by artificial intelligence).
Anthropic and other major AI companies are competing in a market where their AI models have similar performance levels, with only small quality improvements appearing every few months. In this environment, Anthropic is trying to stand out by branding itself as the most ethical and trustworthy AI provider, a differentiator that carries weight with both individual users and large organizations.
Anthropic lost a US Department of Defense contract after refusing to let the Pentagon use its AI models for mass surveillance or fully autonomous weapons (systems that make kill decisions without human input), while OpenAI secured the contract by agreeing to supply AI for classified government systems. The article argues this outcome may benefit Anthropic by reinforcing its brand as a trustworthy, ethical AI provider in a competitive market where different AI models perform similarly.
Threat actors are using AI and language models as operational tools to speed up cyberattacks across all stages, from creating phishing emails to generating malware code, while human attackers maintain control over targeting and deployment decisions. Emerging experiments with agentic AI (where models make iterative decisions with minimal human input) suggest attackers may develop more adaptive and harder-to-detect tactics in the future. Microsoft reports disrupting thousands of fraudulent accounts and partnering with industry to counter AI-enabled threats through technical protections and responsible AI practices.
OpenAI signed a deal with the U.S. Department of Defense to provide AI tools after rival Anthropic refused, sparking criticism and a 300% spike in ChatGPT uninstalls. The company added contract language stating the AI won't be used for domestic surveillance of U.S. citizens, but critics argue the agreement contains vague 'weasel words' (deliberately ambiguous phrases that allow one side to avoid accountability) like 'intentionally,' 'deliberately,' and 'unconstrained' that the government can interpret loosely to justify mass surveillance anyway.