aisecwatch.com
Dashboard · Vulnerabilities · News · Research · Archive · Stats · Dataset

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

Vulnerabilities · News · Research · Digest Archive · Newsletter Archive · Subscribe · Data Sources · Statistics · Dataset · API · Integrations · Widget · RSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Industry News

New tools, products, platforms, funding rounds, and company developments in AI security.

1226 items

New RFP Template for AI Usage Control and AI Governance 

info · news
policy · security
Mar 4, 2026

Organizations are struggling to implement AI Governance (rules and controls for AI use) because they lack clear requirements for evaluating solutions. A new RFP (request for proposal, a document used to ask vendors what they can do) Guide has been released to help security leaders shift from trying to track every AI app to instead monitoring AI interactions (the moments when employees use AI tools), using eight key evaluation areas like discovery, policy enforcement, and real-time blocking of data leaks.

Fix: The source mentions a new RFP Guide for Evaluating AI Usage Control and AI Governance Solutions as the tool to address this problem, and recommends using its eight-pillar framework (AI Discovery & Coverage, Contextual Awareness, Policy Governance, Real-Time Enforcement, Auditability, Architecture Fit, Deployment & Management, and Vendor Futureproofing) to evaluate vendors rather than relying on legacy security tools that lack interaction-level visibility.

The Hacker News
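The eight-pillar framework above lends itself to a simple side-by-side vendor scoring sheet. The sketch below is illustrative only: the pillar names come from the article, but the 0–5 scale and the `score_vendor` helper are hypothetical conveniences, not part of the RFP Guide itself.

```python
# Illustrative scoring sheet for the article's eight-pillar RFP framework.
# Pillar names are from the source; the 0-5 scale and this helper are
# hypothetical, for demonstration only.
PILLARS = [
    "AI Discovery & Coverage",
    "Contextual Awareness",
    "Policy Governance",
    "Real-Time Enforcement",
    "Auditability",
    "Architecture Fit",
    "Deployment & Management",
    "Vendor Futureproofing",
]

def score_vendor(scores: dict[str, int]) -> float:
    """Average a vendor's 0-5 pillar scores, failing loudly on gaps."""
    missing = [p for p in PILLARS if p not in scores]
    if missing:
        # An unscored pillar should surface as an error, not be skipped,
        # so incomplete evaluations are caught early.
        raise ValueError(f"unscored pillars: {missing}")
    return sum(scores[p] for p in PILLARS) / len(PILLARS)

# Example: a vendor strong everywhere except futureproofing.
example = dict.fromkeys(PILLARS, 4)
example["Vendor Futureproofing"] = 1
print(score_vendor(example))  # 3.625
```

Scoring every vendor against the same fixed pillar list, rather than ad-hoc criteria, is what makes RFP responses comparable across submissions.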

China's Xiaomi tells CNBC it's planning a yearly smartphone chip release and its own AI assistant for overseas

info · news
industry
Mar 4, 2026

Xiaomi plans to release a new smartphone processor chip (a specialized circuit that powers devices) every year, starting with its XRing O1 chip, and is developing its own AI assistant for overseas markets to compete with companies like Apple and Samsung. The company aims to combine its custom chip, HyperOS operating system (software that manages the phone), and AI assistant into devices launching in China this year before expanding internationally, though it may partner with Google's Gemini models for the overseas AI assistant.

CNBC Technology

Anthropic AI ultimatums and IP theft: The unspoken risk

info · news
security · policy
Mar 4, 2026

Anthropic's Claude AI faces two simultaneous pressures that create security risks for enterprises: illegal extraction campaigns by China-based AI companies (who ran millions of interactions through fake accounts to study Claude's capabilities in reasoning, tool use, and coding), and demands from the US government to remove safety guardrails (the built-in restrictions that prevent misuse) to enable military and surveillance applications. These geopolitical pressures mean frontier AI models (advanced, cutting-edge AI systems) are no longer neutral tools but intelligence surfaces that CISOs (chief information security officers, executives responsible for security) must weigh when deciding whether to deploy them.

CSO Online

Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism | Rutger Bregman

info · news
policy
Mar 4, 2026

This article argues that people should cancel their ChatGPT subscriptions as part of a grassroots boycott called QuitGPT, which the author claims is one of the most significant consumer boycotts in recent history. OpenAI, the company behind ChatGPT, is losing billions of dollars, and its CEO has admitted to product failures, according to the article. The author encourages Europeans to join the more than one million people who have already cancelled their subscriptions to send a signal to Silicon Valley.

The Guardian Technology

How to know you’re a real-deal CSO — and whether that job opening truly seeks one

info · news
security
Mar 4, 2026

This article discusses how to identify qualified Chief Security Officers (CSOs, top-level security leaders in organizations) and avoid hiring inexperienced people for the role. A real CSO needs skills in technology, business strategy, and clear communication, and understands that the job is to manage risk intelligently rather than simply say 'no' to everything. Hiring the wrong CSO creates false confidence in security and can leave companies vulnerable despite large spending on security tools.

CSO Online

AI-powered attack kits go open source, and CyberStrikeAI may be just the beginning

medium · news
security · safety
Mar 3, 2026

CyberStrikeAI is an open source platform that automates cyberattacks using AI, making it easy for attackers of any skill level to launch sophisticated attacks by typing a few commands. The tool packages over 100 attack capabilities into a single system and is linked to a threat actor who breached hundreds of Fortinet FortiGate firewalls (network security devices). Security experts warn this represents a dangerous trend of AI-powered attack tools becoming more accessible to criminals.

CSO Online

Sam Altman tells OpenAI staffers that military's 'operational decisions' are up to the government

info · news
policy
Mar 3, 2026

OpenAI CEO Sam Altman told employees that the company cannot make decisions about how the Department of Defense uses its AI technology, saying those choices rest with military leadership. Altman acknowledged that the announcement of OpenAI's deal to deploy AI models on classified Pentagon networks looked "opportunistic and sloppy," but defended the partnership by noting that the Pentagon respects safety concerns and wants to work collaboratively with the company.

CNBC Technology

Gemini 3.1 Flash-Lite

info · news
industry
Mar 3, 2026

Google released Gemini 3.1 Flash-Lite, an updated version of its low-cost AI model, priced at one-eighth the price of Gemini 3.1 Pro: $0.25 per million input tokens and $1.50 per million output tokens. The model includes four thinking levels, which appear to control how deeply the model reasons through problems.

Simon Willison's Weblog

AI companies are spending millions to thwart this former tech exec’s congressional bid

info · regulatory
policy
Mar 3, 2026

AI companies and billionaires are funding a super PAC called Leading the Future that has spent at least $10 million on ads attacking New York politician Alex Bores, who is running for Congress and has sponsored AI regulation laws such as the RAISE Act (which requires large AI labs to publicly disclose safety plans). The PAC, backed by Palantir co-founder Joe Lonsdale, OpenAI President Greg Brockman, and others, is targeting Bores and other candidates who support state-level AI regulation, viewing them as threats to the industry's preferred light-touch approach.

TechCrunch

The Anthropic-DOD Conflict: Privacy Protections Shouldn’t Depend On the Decisions of a Few Powerful People

info · regulatory
policy · privacy
Mar 3, 2026

Anthropic refused the U.S. Department of Defense's demand for unrestricted use of its AI technology for mass surveillance and fully autonomous weapons systems, leading the DoD to cancel a $200 million contract. The article argues that relying on individual company leaders to protect privacy through business decisions is unsustainable, and that Congress should pass binding legal restrictions instead of leaving privacy protection to private companies and their CEOs.

EFF Deeplinks Blog

ChatGPT’s new GPT-5.3 Instant model will stop telling you to calm down

info · news
safety
Mar 3, 2026

ChatGPT users complained that the GPT-5.2 Instant model used overly reassuring and condescending language, such as telling them to 'calm down' even when they were just asking for factual information, which made them feel infantilized and led some to cancel subscriptions. OpenAI's new GPT-5.3 Instant model aims to fix this by reducing the 'cringe' and preachy disclaimers, instead acknowledging difficulties without making assumptions about the user's mental state. The update focuses on improving user experience through better tone, relevance, and conversational flow.

Fix: OpenAI released GPT-5.3 Instant, which according to the release notes reduces preachy disclaimers and focuses on improving tone, relevance, and conversational flow. In the example provided, GPT-5.3 Instant acknowledges the difficulty of a situation without directly reassuring the user, rather than the GPT-5.2 Instant approach of starting responses with phrases like 'First of all, you're not broken.'

TechCrunch

Claude Code rolls out a voice mode capability

info · news
industry
Mar 3, 2026

Anthropic is rolling out Voice Mode for Claude Code, its AI coding assistant, allowing developers to use spoken commands instead of typing. The feature, which lets users type /voice to toggle it on and then speak requests like 'refactor the authentication middleware,' is currently live for about 5% of users, with broader availability planned in the coming weeks. The source does not specify technical limitations or whether Anthropic partnered with third-party voice providers to build the capability.

TechCrunch

Google’s latest Pixel drop allows Gemini to order groceries for you and more

info · news
industry
Mar 3, 2026

Google is rolling out new features to Pixel 10 phones that allow Gemini, its AI assistant, to act as an agent (an AI that can take actions independently on your behalf) to complete tasks like ordering groceries or booking rides in selected apps such as Uber and Grubhub. Users can supervise or stop the agent's work at any time while it operates in the background.

The Verge (AI)

How the experts figure out what’s real in the age of deepfakes

info · news
safety
Mar 3, 2026

During the Iran conflict in 2024, many fake images and videos spread online, including old footage, unrelated conflicts, AI-generated content (synthetic media created by artificial intelligence), and clips from video games like War Thunder. Major news organizations like The New York Times, Indicator, and Bellingcat use detailed verification procedures to check whether content is real before publishing it, helping audiences distinguish trustworthy reporting from misinformation.

The Verge (AI)

Google employees call for military limits on AI amid Iran strikes, Anthropic fallout

info · regulatory
policy · safety
Mar 3, 2026

Tech workers at Google, OpenAI, and other companies are signing open letters calling for clearer limits on how their employers work with the military, after the U.S. Department of Defense blacklisted AI models from Anthropic (a company that refused to allow its technology to be used for mass surveillance or autonomous weapons) and the U.S. carried out strikes on Iran. The letters express concern that the government is pressuring tech companies to accept military contracts involving AI without proper safeguards, and workers are demanding greater transparency about their employers' government agreements.

CNBC Technology

Anthropic 'made a mistake' in Pentagon talks and should 'correct course,' FCC boss says

info · regulatory
policy
Mar 3, 2026

Anthropic, an AI company, ended negotiations with the U.S. Department of Defense after refusing to allow its technology to be used for fully autonomous weapons (systems that make combat decisions without human control) or domestic mass surveillance. The U.S. government then blacklisted Anthropic, prohibiting it from working with federal agencies and Pentagon contractors, with government officials saying the company should 'correct course' to resolve the dispute.

CNBC Technology

The Download: The startup that says it can stop lightning, and inside OpenAI’s Pentagon deal

info · news
policy · industry
Mar 3, 2026

This newsletter roundup covers two main AI stories: OpenAI has agreed to allow the US military to use its technologies in classified settings, with protections against autonomous weapons and mass surveillance, though concerns remain about whether safety measures can be maintained during rapid deployment; separately, a startup called Skyward Wildfire claims it can prevent wildfires by stopping lightning strikes using cloud seeding (releasing metallic particles into clouds), though researchers question its effectiveness under different conditions and its potential environmental impacts.

MIT Technology Review

AI Agent Overload: How to Solve the Workload Identity Crisis

info · news
security
Mar 3, 2026

Organizations are facing challenges managing workload identities (the digital credentials and permissions that allow different software systems and applications to authenticate and communicate with each other), and the problem is becoming harder to handle as systems grow more complex. The source indicates this is a widespread issue but does not provide specific technical details about the nature of the crisis or its consequences.

Dark Reading

On Moltbook

info · news
safety · industry
Mar 3, 2026

Moltbook, a supposed AI-only social network, actually relies on humans at every step, including creating accounts, writing prompts (instructions for how the AI should behave), and publishing content. The platform demonstrates a concerning trend called the "LOL WUT Theory," where AI-generated content becomes so easy to create and so difficult to distinguish from real posts that people may stop trusting anything online.

Schneier on Security

OpenAI changes deal with US military after backlash

info · news
policy · safety
Mar 3, 2026

OpenAI announced changes to its agreement with the US military after facing backlash, including preventing its AI system from being used for domestic surveillance and requiring additional contract modifications before intelligence agencies like the NSA can use it. The company acknowledged the original deal announcement was "opportunistic and sloppy," while concerns remain about how AI systems (which can "hallucinate," or make up false information) are being deployed in military operations and whether adequate human oversight exists.

BBC Technology