aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

Vulnerabilities · News · Research · Digest Archive · Newsletter Archive · Subscribe · Data Sources · Statistics · Dataset · API · Integrations · Widget · RSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3137 items

CVE-2026-28414: Gradio is an open-source Python package designed for quick prototyping. Prior to version 6.7, Gradio apps running on Windows…

high · vulnerability
security
Feb 27, 2026
CVE-2026-28414

Gradio (an open-source Python package for building web interfaces quickly) has a vulnerability in versions before 6.7 on Windows with Python 3.13 and newer that allows attackers to read any file from the server by exploiting a flaw in how the software checks if file paths are absolute (starting from the root directory). The vulnerability exists because Python 3.13 changed how it defines absolute paths, breaking Gradio's protections against path traversal (accessing files outside intended directories).

Fix: Update Gradio to version 6.7 or later, which fixes the issue.

NVD/CVE Database
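The root cause here is general: Python 3.13 changed what counts as an absolute path on Windows, which silently weakened an `isabs`-style guard. A more robust pattern is to resolve the requested path and check containment, rather than testing absoluteness. This is a generic sketch of that defense under an invented served directory, not Gradio's actual patch:

```python
import os

ALLOWED_ROOT = "/srv/app/files"  # hypothetical directory the app serves files from

def is_safely_contained(root: str, requested: str) -> bool:
    """Return True only if `requested` resolves to a path inside `root`.

    realpath() normalizes '..' segments and symlinks, and os.path.join()
    discards `root` when `requested` is absolute, so the final commonpath
    comparison catches both traversal and absolute-path tricks without
    depending on platform-specific isabs() semantics.
    """
    root = os.path.realpath(root)
    target = os.path.realpath(os.path.join(root, requested))
    return os.path.commonpath([root, target]) == root

print(is_safely_contained(ALLOWED_ROOT, "reports/q1.csv"))    # inside root
print(is_safely_contained(ALLOWED_ROOT, "../../etc/passwd"))  # escapes root
print(is_safely_contained(ALLOWED_ROOT, "/etc/passwd"))       # absolute path, rejected
```

Note that on Windows, `os.path.commonpath` raises `ValueError` when the two paths are on different drives, so a production version would wrap the comparison in a try/except.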

CVE-2026-27167: Gradio is an open-source Python package designed for quick prototyping. Starting in version 4.16.0 and prior to version 6.6.0…

none · vulnerability
security
Feb 27, 2026
CVE-2026-27167

Gradio, a Python package for building web interfaces, has a security flaw in versions 4.16.0 through 6.5.x where it automatically enables fake OAuth routes (authentication shortcuts) that accidentally expose the server owner's Hugging Face access token (a credential used to authenticate with Hugging Face services) to anyone who visits the login page. An attacker can steal this token because the session cookie (a small file storing login information) is signed with a hardcoded secret, so anyone can read or forge it.
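The underlying weakness is a common one: an HMAC-signed cookie is integrity-protected, not encrypted, and a signing secret that ships in the source code protects nothing. A minimal illustration of the flaw class (not Gradio's actual session code; the secret, payload, and cookie format here are invented):

```python
import base64, hashlib, hmac, json

HARDCODED_SECRET = b"not-actually-secret"  # shipped in the source: everyone has it

def make_cookie(payload: dict) -> str:
    """Sign a JSON payload the way many session libraries do: body.signature."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(HARDCODED_SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def read_cookie(cookie: str) -> dict:
    """Anyone can read the payload: base64 is encoding, not encryption."""
    body, _sig = cookie.rsplit(".", 1)
    return json.loads(base64.urlsafe_b64decode(body))

# The server signs a session containing a sensitive token...
cookie = make_cookie({"hf_token": "hf_example123"})
# ...any visitor can decode the body without the secret,
print(read_cookie(cookie))
# ...and because the secret is public, an attacker can also forge valid cookies.
forged = make_cookie({"hf_token": "attacker-chosen"})
```

The signature only authenticates the sender to parties who do not hold the secret; once the secret is hardcoded and published, both confidentiality and integrity are gone.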

Fix: Update to Gradio version 6.6.0, which fixes the issue.

NVD/CVE Database

Pentagon moves to designate Anthropic as a supply-chain risk

info · regulatory
policy
Feb 27, 2026

President Trump directed federal agencies to stop using Anthropic's AI products and gave them six months to phase out usage, after the company's dispute with the Department of Defense. The Secretary of Defense designated Anthropic as a supply-chain risk to national security, meaning military contractors can no longer do business with the company, because Anthropic refused to let its AI models be used for mass domestic surveillance or fully autonomous weapons (systems that make decisions and take action without human control).

Trump Orders All Federal Agencies to Phase Out Use of Anthropic Technology

info · news
policy · safety
Feb 27, 2026

Anthropic, maker of the AI chatbot Claude, refused the Pentagon's demand to allow unrestricted military use of its technology, citing concerns about safeguards against mass surveillance and autonomous weapons (systems that make decisions without human control). President Trump ordered all federal agencies to stop using Anthropic's technology in response, escalating a public dispute within the AI industry about balancing national security needs with AI safety protections.

TechCrunch

Trump orders federal agencies to drop Anthropic’s AI

info · news
policy
Feb 27, 2026

President Trump ordered federal agencies to stop using Claude (an AI system made by Anthropic) after the company's CEO refused to sign a military agreement that would allow unlimited use of their technology. The disagreement centers on whether Anthropic's AI should be available for all military purposes, including domestic surveillance.

An AI agent coding skeptic tries AI agent coding, in excessive detail

info · news
industry
Feb 27, 2026

A software developer who was skeptical of AI coding agents found they have become significantly more capable, using them to build increasingly complex projects, including a Rust implementation of machine learning algorithms. The developer notes that recent AI coding models (like Opus 4.6 and Codex 5.3) are dramatically better than earlier versions, but that this improvement is hard to communicate publicly without sounding like promotional hype.

‘Silent’ Google API key change exposed Gemini AI data

high · news
security
Feb 27, 2026

Google's API keys (simple identifiers that were designed only for billing purposes) unexpectedly gained the ability to authenticate access to private Gemini AI project data, without any warning to developers. Researchers found 2,863 exposed keys that could let attackers steal files, datasets, and documents, or run up expensive bills by querying the AI model repeatedly.

Fix: Site administrators should check the GCP console for keys that allow the Generative Language API and look for unrestricted keys marked with a yellow warning icon. Exposed keys should be rotated or regenerated (replaced with new ones) with a grace period to avoid breaking apps that use the old keys. Google's roadmap includes making API keys created through AI Studio default to Gemini-only access, and blocking leaked keys while notifying customers when they are detected.

SecurityWeek
The Verge (AI)
Simon Willison's Weblog

Flaw-Finding AI Assistants Face Criticism for Speed, Accuracy

info · news
security · industry
Feb 27, 2026

AI assistants designed to find security vulnerabilities (weaknesses in software that attackers can exploit) are not yet reliable enough for professional use, despite their potential to help find bugs faster. Experts say current AI tools have problems with both accuracy and speed, making them unsuitable for businesses and developers who need dependable security scanning.

CSO Online

Sam Altman backs rival Anthropic in fight with Pentagon

info · news
policy · industry
Feb 27, 2026

OpenAI CEO Sam Altman publicly supported rival company Anthropic in its dispute with the US Department of Defense over AI tool usage, stating that OpenAI shares Anthropic's refusal to allow certain uses such as domestic surveillance and autonomous offensive weapons. The Pentagon has threatened Anthropic with retaliation, including invoking the Defense Production Act (a law allowing the government to force companies to produce needed goods) or labeling the company a supply-chain risk, but Anthropic maintains its position on restricting potentially harmful applications.

Dark Reading

Sam Altman aims to 'help de-escalate' tensions with Pentagon as OpenAI employees voice support for Anthropic

info · news
policy · industry
Feb 27, 2026

OpenAI CEO Sam Altman sent an internal memo to staff expressing support for rival company Anthropic in its dispute with the Pentagon over AI model usage, stating that both companies oppose using AI for mass surveillance or fully autonomous weapons. About 70 OpenAI employees signed an open letter supporting Anthropic, which faces a deadline to decide whether to allow the Department of Defense unrestricted access to its AI models. Altman indicated that OpenAI is negotiating with the Pentagon to deploy its own models in classified environments while maintaining ethical boundaries around domestic surveillance and autonomous offensive weapons.

Fix: Altman proposed that OpenAI would ask for a contract with the Pentagon that covers "any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons." He also stated the company would "build technical safeguards and deploy personnel to ensure things are working correctly" in classified environments.

BBC Technology

Nvidia's stock wrapping up tough week as Wall Street focuses more on competition than growth

info · news
industry
Feb 27, 2026

Despite strong earnings and growth forecasts, Nvidia's stock fell 6% this week as investors worry that tech companies' spending on AI infrastructure will peak soon and that competition is increasing. Major AI companies like OpenAI and Meta are diversifying away from Nvidia's GPUs (graphics processing units, specialized chips for AI computations) by adopting alternative chips from companies such as Amazon, Google, and Advanced Micro Devices.

Musk bashes OpenAI in deposition, saying ‘nobody committed suicide because of Grok’

info · news
safety · policy
Feb 27, 2026

In a deposition for his lawsuit against OpenAI, Elon Musk claimed that his company xAI prioritizes AI safety better than OpenAI does, and that ChatGPT has caused mental-health harms including suicides while Grok has not. Musk's lawsuit challenges OpenAI's transition from a nonprofit to a for-profit company, arguing that commercial interests compromise safety priorities, though xAI itself has faced safety issues, including Grok's generation of non-consensual intimate images.

CNBC Technology

Anthropic vs. the Pentagon: What’s actually at stake?

info · regulatory
policy · safety
Feb 27, 2026

Anthropic and the U.S. Department of Defense are in conflict over how the military can use Anthropic's AI models. Anthropic refuses to allow its AI to be used for mass surveillance of Americans or for fully autonomous weapons (systems that select and fire at targets without human decision-makers), while the Pentagon argues it should be permitted to use the technology for any lawful purpose. The core dispute is whether the companies that build powerful AI systems or the government that deploys them should control how those systems are used.

TechCrunch

ChatGPT reaches 900M weekly active users

info · news
industry
Feb 27, 2026

ChatGPT has reached 900 million weekly active users and 50 million paying subscribers, with OpenAI reporting that subscriber growth accelerated significantly in early 2026. The company announced a $110 billion funding round, one of the largest private funding rounds ever, with major investments from Amazon, Nvidia, and SoftBank at a $730 billion valuation.

Free Claude Max for (large project) open source maintainers

info · news
industry
Feb 27, 2026

Anthropic is offering six months of free access to Claude Max (its $200/month AI assistant plan) to open source maintainers who meet specific criteria: primary maintainers of public repositories with 5,000+ GitHub stars or 1 million+ monthly NPM downloads, with commits or reviews in the last three months. The program accepts up to 10,000 contributors, and maintainers who fall short of the stated criteria can still apply and explain their importance to the ecosystem.

AI vs. the Pentagon: killer robots, mass surveillance, and red lines

info · news
policy · safety
Feb 27, 2026

Anthropic is refusing to accept new Pentagon contract terms that would remove safety restrictions (guardrails, the built-in limits on what an AI model will do) from its AI models, which would allow uses such as mass surveillance of Americans and fully autonomous lethal weapons (weapons that can select and fire at targets without human control). Despite pressure from the Pentagon, including threats to label Anthropic a supply-chain risk (a designation suggesting it poses a national security threat), CEO Dario Amodei says the company will not compromise on these ethical boundaries, while competitors OpenAI and xAI have reportedly agreed to the terms.

TechCrunch
Simon Willison's Weblog

Perplexity’s new Computer is another bet that users need many AI models

info · news
industry
Feb 27, 2026

Perplexity has launched Computer, an agentic tool (software that can independently execute complex tasks) that combines 19 different AI models to handle workflows like data collection, analysis, and report creation. The tool runs in the cloud and is available only to subscribers of Perplexity Max (the $200/month tier), though a planned demo was canceled hours before a press event after flaws were discovered in the product.

Employees at Google and OpenAI support Anthropic’s Pentagon stand in open letter

info · regulatory
policy
Feb 27, 2026

Anthropic, an AI company, is refusing the Pentagon's demands for unrestricted access to its AI technology, specifically opposing its use for domestic mass surveillance (tracking citizens without limits) and fully autonomous weapons (weapons that make kill decisions without human control). More than 300 Google employees and 60 OpenAI employees signed an open letter supporting Anthropic's stance, and leaders at both companies have informally expressed sympathy for its position, though the Pentagon has threatened to declare Anthropic a security risk or invoke the Defense Production Act (a law allowing the government to force companies to produce needed goods) if it does not comply.

We don’t have to have unsupervised killer robots

info · news
policy · safety
Feb 27, 2026

The Pentagon is pressuring Anthropic (an AI company) to remove safety restrictions on its technology or face being labeled a 'supply chain risk,' a designation that could cost it billions in contracts. The pressure includes demands for military access to the AI for surveillance and autonomous weapons systems, raising concerns among tech workers about how their work might be used.

The Verge (AI)
TechCrunch

In Defense-Anthropic clash, AI is real-time testing the balance of power in future of warfare

info · news
policy · industry
Feb 27, 2026

The U.S. Department of Defense is in a standoff with Anthropic, an AI company, over whether the company will remove safeguards from its AI models to allow military uses such as mass domestic surveillance and fully autonomous weapons (systems that can make combat decisions without human control). The conflict highlights a major shift in power: private companies, rather than governments, now control cutting-edge AI technology, forcing the Pentagon to negotiate with industry over how AI will be deployed in national security and warfare.

The Verge (AI)
CNBC Technology

Previous · 32 / 157 · Next