New tools, products, platforms, funding rounds, and company developments in AI security.
Organizations are struggling to implement AI governance (rules and controls for AI use) because they lack clear requirements for evaluating solutions. A new RFP (request for proposal, a document used to ask vendors what they can do) Guide for Evaluating AI Usage Control and AI Governance Solutions aims to help security leaders shift from trying to track every AI app to instead monitoring AI interactions (the moments when employees use AI tools). The guide recommends an eight-pillar framework for evaluating vendors, covering AI Discovery & Coverage, Contextual Awareness, Policy Governance, Real-Time Enforcement, Auditability, Architecture Fit, Deployment & Management, and Vendor Futureproofing, rather than relying on legacy security tools that lack interaction-level visibility.
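For teams applying the guide, one concrete way to compare RFP responses is a weighted scorecard across the eight pillars. The sketch below is illustrative only: the pillar names come from the guide, but the weights, ratings, and vendor are hypothetical assumptions, not part of the guide itself.

```python
# Hypothetical vendor scorecard built on the RFP guide's eight pillars.
# Pillar names come from the guide; weights and ratings are made up.
PILLARS = {
    "AI Discovery & Coverage": 0.20,
    "Contextual Awareness": 0.15,
    "Policy Governance": 0.15,
    "Real-Time Enforcement": 0.20,
    "Auditability": 0.10,
    "Architecture Fit": 0.08,
    "Deployment & Management": 0.07,
    "Vendor Futureproofing": 0.05,
}  # weights sum to 1.00

def weighted_score(ratings: dict[str, int]) -> float:
    """Combine 1-5 ratings per pillar into one weighted score."""
    return sum(PILLARS[p] * ratings[p] for p in PILLARS)

# Example: ratings pulled from a vendor's RFP response (illustrative values).
vendor_a = dict(zip(PILLARS, [5, 4, 4, 5, 3, 4, 4, 3]))
print(f"Vendor A: {weighted_score(vendor_a):.2f} / 5.00")
```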
Xiaomi plans to release a new smartphone processor chip (a specialized circuit that powers devices) every year, starting with its XRing O1 chip, and is developing its own AI assistant for overseas markets to compete with companies like Apple and Samsung. The company aims to combine its custom chip, HyperOS operating system (software that manages the phone), and AI assistant into devices launching in China this year before expanding internationally, though it may partner with Google's Gemini models for the overseas AI assistant.
This article argues that people should cancel their ChatGPT subscriptions as part of a grassroots boycott called QuitGPT, which the author claims is one of the most significant consumer boycotts in recent history. OpenAI, the company behind ChatGPT, is losing billions of dollars and its CEO has admitted to product failures, according to the article. The author encourages Europeans to join the over one million people who have already cancelled their subscriptions to send a signal to Silicon Valley.
This article discusses how to identify qualified Chief Security Officers (CSOs, top-level security leaders in organizations) and avoid hiring inexperienced people for the role. A real CSO needs skills in technology, business strategy, and clear communication, and understands that their job is to manage risk intelligently rather than simply say 'no' to everything. Hiring the wrong CSO creates false confidence in security and can leave companies vulnerable despite spending large budgets on security tools.
OpenAI CEO Sam Altman told employees that the company cannot make decisions about how the Department of Defense uses its AI technology, saying those choices rest with military leadership. Altman acknowledged the announcement of OpenAI's deal to deploy AI models on classified Pentagon networks looked "opportunistic and sloppy," but defended the partnership by noting the Pentagon respects safety concerns and wants to work collaboratively with the company.
Google released Gemini 3.1 Flash-Lite, an updated version of their affordable AI model that costs one-eighth the price of Gemini 3.1 Pro at $0.25 per million input tokens and $1.50 per million output tokens. The model includes four different thinking levels, which appear to control how deeply the AI reasons through problems.
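At the quoted rates, per-request cost is a simple function of token counts; the sketch below works through the arithmetic. The rates come from the announcement, while the workload numbers are hypothetical.

```python
# Cost arithmetic at the quoted Gemini 3.1 Flash-Lite rates.
INPUT_RATE = 0.25 / 1_000_000   # $0.25 per million input tokens
OUTPUT_RATE = 1.50 / 1_000_000  # $1.50 per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the published per-token rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical workload: 20k input tokens, 2k output tokens per request.
print(f"${request_cost(20_000, 2_000):.4f} per request")  # $0.0080
```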
AI companies and billionaires are funding a super PAC called Leading the Future that has spent at least $10 million in ads attacking New York politician Alex Bores, who is running for Congress and has sponsored AI regulation laws like the RAISE Act (which requires large AI labs to publicly disclose safety plans). The PAC, backed by Palantir co-founder Joe Lonsdale, OpenAI President Greg Brockman, and others, is targeting Bores and other candidates who support state-level AI regulation, viewing them as threats to the industry's preferred light-touch approach.
ChatGPT users complained that the GPT-5.2 Instant model used overly reassuring and condescending language, such as telling them to 'calm down' when they were just asking for factual information or opening responses with phrases like 'First of all, you're not broken,' which made them feel infantilized and led some to cancel subscriptions. OpenAI's new GPT-5.3 Instant model aims to fix this by reducing the 'cringe' and preachy disclaimers, instead acknowledging difficulties without making assumptions about the user's mental state; the release notes emphasize improved tone, relevance, and conversational flow.
Anthropic is rolling out Voice Mode for Claude Code, its AI coding assistant, allowing developers to use spoken commands instead of typing. The feature, which lets users type /voice to toggle it on and then speak requests like 'refactor the authentication middleware,' is currently live for about 5% of users with broader availability planned in coming weeks. The source does not specify technical limitations or whether Anthropic partnered with third-party voice providers to build this capability.
Google is rolling out new features to Pixel 10 phones that allow Gemini, its AI assistant, to act as an agent (an AI that can take actions independently on your behalf) to complete tasks like ordering groceries or booking rides in selected apps such as Uber and Grubhub. Users can supervise or stop the agent's work at any time while it operates in the background.
During the Iran conflict in 2024, many fake images and videos spread online, including old footage, clips from unrelated conflicts, AI-generated content (synthetic media created by artificial intelligence), and footage from video games like War Thunder. News and verification outlets like The New York Times, Indicator, and Bellingcat use detailed verification procedures to check whether content is real before publishing it, helping audiences distinguish trustworthy reporting from misinformation.
Anthropic, an AI company, ended negotiations with the U.S. Department of Defense after refusing to allow its technology to be used for fully autonomous weapons (systems that make combat decisions without human control) or domestic mass surveillance. The U.S. government then blacklisted Anthropic, prohibiting it from working with federal agencies and Pentagon contractors, with government officials saying the company should 'correct course' to resolve the dispute.
Organizations are facing challenges managing workload identities (the digital credentials and permissions that allow different software systems and applications to authenticate and communicate with each other), and the problem is becoming harder to handle as systems grow more complex. The source indicates this is a widespread issue but does not provide specific technical details about the nature of the crisis or its consequences.
Anthropic's Claude AI faces two simultaneous pressures that create security risks for enterprises: illegal extraction campaigns by China-based AI companies (which ran millions of interactions through fake accounts to study Claude's capabilities in reasoning, tool use, and coding), and demands from the US government to remove safety guardrails (the built-in restrictions that prevent misuse) to enable military and surveillance applications. These geopolitical pressures mean frontier AI models (advanced, cutting-edge AI systems) are no longer neutral tools but intelligence surfaces that CISOs (chief information security officers, executives responsible for security) must weigh when deciding whether to deploy them.
CyberStrikeAI is an open source platform that automates cyberattacks using AI, making it easy for attackers of any skill level to launch sophisticated attacks by typing a few commands. The tool packages over 100 attack capabilities into a single system and is linked to a threat actor who breached hundreds of Fortinet FortiGate firewalls (network security devices). Security experts warn this represents a dangerous trend of AI-powered attack tools becoming more accessible to criminals.
Anthropic refused the U.S. Department of Defense's demand for unrestricted use of its AI technology for mass surveillance and fully autonomous weapons systems, leading the DoD to cancel a $200 million contract. The article argues that relying on individual company leaders to protect privacy through business decisions is unsustainable, and that Congress should pass binding legal restrictions instead of leaving privacy protection to private companies and their CEOs.
Tech workers at Google, OpenAI, and other companies are signing open letters calling for clearer limits on how their employers work with the military, after the U.S. Department of Defense blacklisted AI models from Anthropic (a company that refused to allow its technology to be used for mass surveillance or autonomous weapons) and the U.S. carried out strikes on Iran. The letters express concern that the government is pressuring tech companies to accept military contracts involving AI without proper safeguards, and workers are demanding greater transparency about their employers' government agreements.
This newsletter roundup covers two main AI stories: OpenAI has agreed to allow the US military to use its technologies in classified settings, with protections against autonomous weapons and mass surveillance, though concerns remain about whether safety measures can be maintained during rapid deployment; separately, a startup called Skyward Wildfire claims it can prevent wildfires by stopping lightning strikes using cloud seeding (releasing metallic particles into clouds), but researchers question its effectiveness under different conditions and potential environmental impacts.
Moltbook, a supposed AI-only social network, actually relies on humans at every step, including creating accounts, writing prompts (instructions for how the AI should behave), and publishing content. The platform illustrates what the author calls the "LOL WUT Theory": AI-generated content becomes so easy to create and so hard to distinguish from real posts that people may stop trusting anything online.
OpenAI announced changes to its agreement with the US military after facing backlash, including preventing its AI system from being used for domestic surveillance and requiring additional contract modifications before intelligence agencies like the NSA can use it. The company acknowledged the original deal announcement was "opportunistic and sloppy," while concerns remain about how AI systems (which can "hallucinate," or make up false information) are being deployed in military operations and whether adequate human oversight exists.