aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

Industry News

New tools, products, platforms, funding rounds, and company developments in AI security.

1,235 items

AI just leveled up and there are no guardrails anymore

info · news
policy · safety
Feb 28, 2026

AI systems have rapidly become more powerful in early 2026, advancing from chatbots to autonomous agents (AI systems that can reason, plan, and complete tasks independently) capable of doing real work. However, safety guardrails (protections designed to prevent harm) are being removed as companies compete: Anthropic abandoned its core safety commitments, researchers at major AI companies are resigning over safety concerns, and there is significant political and financial pressure against AI regulation.

CNBC Technology

Area Man Accidentally Hacks 6,700 Camera-Enabled Robot Vacuums

info · news
security
Feb 28, 2026

A person discovered a serious security vulnerability in DJI Romo robot vacuums that allowed unauthorized access to 6,700 devices across 24 countries using only a vacuum's 14-digit serial number, granting attackers full access to floor plans, video, and audio feeds from inside homes. The vulnerability exposed how internet-connected home devices with cameras and microphones can be hijacked remotely, raising broader concerns about the security of similar smart home gadgets.

Fix: DJI has fixed the vulnerability in response to the findings being reported.

Wired (Security)
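
A quick back-of-the-envelope check (ours, not the article's, with hypothetical request rates) puts the "serial number as credential" problem in perspective: purely random 14-digit serials would be impractical to brute-force, so access at this scale usually implies serials that are structured, sequential, or otherwise guessable. A minimal Python sketch:

KEYSPACE = 10 ** 14                 # all possible 14-digit serials
RATE = 1_000                        # hypothetical guesses per second
SECONDS_PER_YEAR = 365 * 24 * 3600

# Exhaustive search of uniformly random serials: ~3,171 years.
print(f"{KEYSPACE / RATE / SECONDS_PER_YEAR:,.0f} years")

# With a fixed model prefix and date code leaving ~6 free digits,
# the same search collapses to about 17 minutes at the same rate.
print(f"{10 ** 6 / RATE / 60:,.0f} minutes")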

Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.

info · news
safety
Feb 28, 2026

A man with no prior history of depression or suicidal thoughts spent 12 hours a day using ChatGPT (a conversational AI) and subsequently died by suicide. His wife questions whether the intensive chatbot use contributed to his death, as he was previously described as an optimistic person.

The Guardian Technology

Thousands of Public Google Cloud API Keys Exposed with Gemini Access After API Enablement

high · news
security · privacy
Feb 28, 2026

Google Cloud API keys (unique identifiers used for billing and accessing Google services) that were embedded in websites for basic functions like maps were automatically granted access to Gemini (Google's AI model) when users enabled the Gemini API on their projects, without any warning. This allowed attackers who found these exposed keys on the public internet to access private files and cached data, and to run expensive AI requests billed to the victims; security researchers discovered nearly 3,000 such keys.

Fix: Google has implemented proactive measures to detect and block leaked API keys that attempt to access the Gemini API. Users are also advised to (1) check their Google Cloud projects to verify whether AI-related APIs are enabled, and (2) if they are, and the keys are publicly accessible in client-side JavaScript or public repositories, rotate the keys, starting with the oldest first, as those are most likely to have been deployed publicly under the old guidance that API keys were safe to share.

The Hacker News
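
For readers auditing their own projects, here is a minimal sketch (ours, not from the article; it assumes the Python `requests` library and uses Google's public Generative Language API endpoint) of how to test whether a key you own has Gemini access. An HTTP 200 means the Gemini endpoint accepts the key; a 403 suggests the key is restricted or the API is disabled:

import sys
import requests

def key_has_gemini_access(api_key: str) -> bool:
    # List available Gemini models; this succeeds only if the key
    # is allowed to call the Generative Language API.
    resp = requests.get(
        "https://generativelanguage.googleapis.com/v1beta/models",
        params={"key": api_key},
        timeout=10,
    )
    return resp.status_code == 200

if __name__ == "__main__":
    print("Gemini access:", key_has_gemini_access(sys.argv[1]))

Only probe keys from projects you control; the same check run against scraped keys is exactly the attack described above.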

Pentagon Designates Anthropic Supply Chain Risk Over AI Military Dispute

info · news
policy · safety
Feb 27, 2026

The U.S. Pentagon designated Anthropic (an AI company) as a 'supply chain risk' after negotiations broke down over the company's refusal to allow its AI model Claude to be used for mass domestic surveillance or fully autonomous weapons systems. Anthropic argued these uses are unsafe and incompatible with democratic values, while the Pentagon insisted it needed unrestricted access to the technology for military operations.

The Hacker News

OpenAI strikes deal with Pentagon, hours after rival Anthropic was blacklisted by Trump

info · news
policy · industry
Feb 27, 2026

OpenAI reached an agreement with the U.S. Department of Defense to deploy its AI models on classified military networks, while the Trump administration simultaneously blacklisted rival Anthropic as a 'Supply-Chain Risk to National Security' and banned federal agencies from using Anthropic's technology. The key difference was that OpenAI agreed to the DoD's terms, including safety restrictions on domestic mass surveillance and autonomous weapons, whereas Anthropic had refused to accept unrestricted military use cases and was seeking guarantees that its models wouldn't be used for fully autonomous weapons or mass surveillance.

Fix: According to Altman, OpenAI committed to building 'technical safeguards to ensure its models behave as they should' and will deploy personnel to 'help with our models and to ensure their safety.' Additionally, OpenAI asked the DoD to offer these same safety terms to all AI companies.

CNBC Technology

Defense secretary Pete Hegseth designates Anthropic a supply chain risk

info · news
policy · industry
Feb 27, 2026

The US Secretary of Defense designated Anthropic, an AI company that makes Claude (an LLM, or large language model that generates text), as a supply-chain risk and banned its products from federal government use. This decision could affect major tech companies like Palantir and AWS that use Claude in their work with the Pentagon, though it's unclear how broadly the ban will apply to companies contracting with Claude for non-military purposes.

The Verge (AI)

How Amazon's massive stake in OpenAI could boost its AI and cloud businesses

info · news
industry
Feb 27, 2026

Amazon announced a strategic partnership with OpenAI involving up to $50 billion in investment, with OpenAI committing to spend $100 billion on Amazon Web Services (AWS, Amazon's cloud computing platform) over eight years. The deal includes OpenAI deploying Amazon's AI chips and the two companies jointly developing customized AI models, marking a significant expansion of Amazon's AI infrastructure investments alongside its existing partnerships with OpenAI's competitor Anthropic.

CNBC Technology

Pentagon moves to designate Anthropic as a supply-chain risk

info · regulatory
policy
Feb 27, 2026

President Trump directed federal agencies to stop using Anthropic's AI products and gave them six months to phase out usage, after the company's dispute with the Department of Defense. The Pentagon's Secretary of Defense designated Anthropic as a supply-chain risk to national security, meaning military contractors can no longer do business with the company, because Anthropic refused to let its AI models be used for mass domestic surveillance or fully autonomous weapons (systems that make decisions and take action without human control).

TechCrunch

Trump Orders All Federal Agencies to Phase Out Use of Anthropic Technology

info · news
policy · safety
Feb 27, 2026

Anthropic, maker of the AI chatbot Claude, refused the Pentagon's demand to allow unrestricted military use of its technology, citing concerns about safeguards against mass surveillance and autonomous weapons (systems that make decisions without human control). President Trump ordered all federal agencies to stop using Anthropic's technology in response, escalating a public dispute within the AI industry about balancing national security needs with AI safety protections.

SecurityWeek

Trump orders federal agencies to drop Anthropic’s AI

info · news
policy
Feb 27, 2026

President Trump ordered federal agencies to stop using Claude (an AI system made by Anthropic) after the company's CEO refused to sign a military agreement that would allow unlimited use of its technology. The disagreement centers on whether Anthropic's AI should be available for all military purposes, including domestic surveillance.

The Verge (AI)

An AI agent coding skeptic tries AI agent coding, in excessive detail

info · news
industry
Feb 27, 2026

A software developer who was skeptical about AI coding agents discovered they have become significantly more capable, using them to build increasingly complex projects, including a Rust implementation of machine learning algorithms. The developer notes that recent AI coding models (like Opus 4.6 and Codex 5.3) are dramatically better than earlier versions, but that this improvement is hard to communicate publicly without sounding like promotional hype.

Simon Willison's Weblog

‘Silent’ Google API key change exposed Gemini AI data

high · news
security
Feb 27, 2026

Google's API keys (simple identifiers that were designed only for billing purposes) unexpectedly gained the ability to authenticate access to private Gemini AI project data without any warning to developers. Researchers found 2,863 exposed keys that could let attackers steal files, datasets, and documents, or rack up expensive bills by running the AI model repeatedly.

Fix: Site administrators should check the GCP console for keys allowing the Generative Language API and look for unrestricted keys marked with a yellow warning icon. Exposed keys should be rotated or regenerated (replaced with new ones) with a grace period to avoid breaking apps using the old keys. Google's roadmap includes making API keys created through AI Studio default to Gemini-only access and blocking leaked keys while notifying customers when they detect them.

CSO Online
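
The console check in the Fix can also be scripted. A sketch (ours, not from the article) that shells out to the gcloud CLI, assuming it is installed and authenticated; the `restrictions.apiTargets` field name follows the API Keys v2 schema, so verify it against your gcloud version before relying on it:

import json
import subprocess

# List all API keys in the active project as JSON.
keys = json.loads(subprocess.check_output(
    ["gcloud", "services", "api-keys", "list", "--format=json"]
))

for key in keys:
    targets = key.get("restrictions", {}).get("apiTargets", [])
    if not targets:
        # No API restrictions: the key can call any enabled API,
        # including Gemini once the Generative Language API is on.
        print("UNRESTRICTED:", key.get("displayName"), key.get("uid"))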

Flaw-Finding AI Assistants Face Criticism for Speed, Accuracy

info · news
security · industry
Feb 27, 2026

AI assistants designed to find security vulnerabilities (weaknesses in software that attackers can exploit) are not yet reliable enough for professional use, despite their potential to help find bugs faster. Experts say current AI tools have problems with both accuracy and speed, making them unsuitable for businesses and developers who need dependable security scanning.

Dark Reading

Sam Altman backs rival Anthropic in fight with Pentagon

info · news
policy · industry
Feb 27, 2026

OpenAI CEO Sam Altman publicly supported rival company Anthropic in its dispute with the US Department of Defense over AI tool usage, stating that OpenAI shares Anthropic's refusal to allow certain uses like domestic surveillance and autonomous offensive weapons. The Pentagon has threatened Anthropic with retaliation, including invoking the Defense Production Act (a law that lets the government compel companies to accept and prioritize federal contracts) or labeling the company a supply chain risk, but Anthropic maintains its position on restricting potentially harmful applications.

BBC Technology

Sam Altman aims to 'help de-escalate' tensions with Pentagon as OpenAI employees voice support for Anthropic

info · news
policy · industry
Feb 27, 2026

OpenAI CEO Sam Altman sent an internal memo to staff expressing support for rival company Anthropic in a dispute with the Pentagon over AI model usage, stating that both companies oppose using AI for mass surveillance or fully autonomous weapons. About 70 OpenAI employees signed an open letter supporting Anthropic, which faces a deadline to decide whether to allow the Department of Defense unrestricted access to its AI models. Altman indicated OpenAI is negotiating with the Pentagon to deploy its own models in classified environments while maintaining ethical boundaries around domestic surveillance and autonomous offensive weapons.

Fix: Altman proposed that OpenAI would ask for a contract with the Pentagon that covers "any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons." He also stated the company would "build technical safeguards and deploy personnel to ensure things are working correctly" in classified environments.

CNBC Technology

Nvidia's stock wrapping up tough week as Wall Street focuses more on competition than growth

info · news
industry
Feb 27, 2026

Despite strong earnings and growth forecasts, Nvidia's stock fell 6% this week as investors worry that tech companies' spending on AI infrastructure will peak soon and that competition is increasing. Major AI companies like OpenAI and Meta are now diversifying away from Nvidia's GPUs (graphics processing units, specialized chips for AI computations) by adopting alternative chips from companies like Amazon, Google, and Advanced Micro Devices.

CNBC Technology

Musk bashes OpenAI in deposition, saying ‘nobody committed suicide because of Grok’

info · news
safety · policy
Feb 27, 2026

In a deposition for his lawsuit against OpenAI, Elon Musk claimed that his company xAI prioritizes AI safety better than OpenAI, and that ChatGPT has caused mental health harms including suicides while Grok has not. Musk's lawsuit challenges OpenAI's transition from a nonprofit to a for-profit company, arguing that commercial interests compromise safety priorities, though xAI itself has faced safety issues including the generation of non-consensual intimate images by Grok.

TechCrunch

Anthropic vs. the Pentagon: What’s actually at stake?

info · regulatory
policy · safety
Feb 27, 2026

Anthropic and the U.S. Department of Defense are in conflict over how the military can use Anthropic's AI models. Anthropic refuses to allow its AI to be used for mass surveillance of Americans or fully autonomous weapons (systems that select and fire at targets without human decision-makers), while the Pentagon argues it should be permitted to use the technology for any lawful purpose. The core dispute is whether the companies that build powerful AI systems or the government that deploys them should control how those systems are used.

TechCrunch

ChatGPT reaches 900M weekly active users

info · news
industry
Feb 27, 2026

ChatGPT has reached 900 million weekly active users and 50 million paying subscribers, with OpenAI reporting that subscriber growth accelerated significantly in early 2026. The company announced a $110 billion funding round, one of the largest private funding rounds ever, with major investments from Amazon, Nvidia, and SoftBank at a $730 billion valuation.

TechCrunch

Page 28 of 62
