aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

to
Export CSV
3122 items

What does the US military’s feud with Anthropic mean for AI used in war?

infonews
policysafety
Mar 7, 2026

Anthropic, an AI company, is in a dispute with the US military over safety restrictions on its Claude AI model. Anthropic refuses to allow the government to use Claude for domestic mass surveillance (monitoring citizens' communications without proper oversight) or autonomous weapons systems (weapons that can select and attack targets without human control), while the Pentagon has declared Anthropic a supply chain risk (a company whose products pose a national security threat) for not agreeing to the government's demands, and Anthropic plans to challenge this designation in court.

The Guardian Technology

The OpenClaw superfan meetup serves optimism and lobster

infonews
industry
Mar 7, 2026

OpenClaw is an open-source AI assistant platform created by Peter Steinberger that has gained popularity in the tech industry. The article describes a fan convention called ClawCon held in Manhattan to celebrate the platform and its community.

Pentagon’s Chief Tech Officer Says He Clashed With AI Company Anthropic Over Autonomous Warfare

infonews
policysafety

Anthropic Finds 22 Firefox Vulnerabilities Using Claude Opus 4.6 AI Model

infonews
securityresearch

Trump’s cyber strategy emphasizes offensive operations, deregulation, AI

infonews
policysecurity

GHSA-8w32-6mrw-q5wv: WeKnora Vulnerable to Remote Code Execution via SQL Injection Bypass in AI Database Query Tool

criticalvulnerability
security
Mar 6, 2026
CVE-2026-30860

WeKnora, an AI database query tool, has a critical Remote Code Execution (RCE, where an attacker can run commands on a system they don't own) vulnerability caused by incomplete validation in its SQL injection protection system. The validation framework fails to check PostgreSQL array expressions and row expressions, allowing attackers to hide dangerous functions inside these expressions and bypass all seven security phases, leading to arbitrary code execution on the database server.

GHSA-2f4c-vrjq-rcgv: WeKnora has Broken Access Control - Cross-Tenant Data Exposure

highvulnerability
security
Mar 6, 2026
CVE-2026-30859

WeKnora has a broken access control vulnerability (a security flaw where the application fails to properly check permissions) that lets any logged-in user from one tenant (a separate customer or organization) read sensitive data from other tenants' databases, including API keys (credentials for accessing external services), model configurations, and private messages. The problem happens because three database tables (messages, embeddings, models) are allowed to be queried but don't have automatic tenant filtering applied to them.

GHSA-67q9-58vj-32qx: WeKnora Vulnerable to Tool Execution Hijacking via Ambigous Naming Convention In MCP client and Indirect Prompt Injection

mediumvulnerability
security
Mar 6, 2026
CVE-2026-30856

WeKnora has a vulnerability where a malicious MCP server (a remote tool provider that integrates with AI clients) can hijack legitimate tools by exploiting how tool names are generated. An attacker registers a fake tool with the same name as a real one (like `tavily_extract`), which overwrites the legitimate version in the tool registry (the list of available tools). The attacker can then trick the LLM into executing their malicious tool and leak sensitive information like system prompts through prompt injection (hiding instructions in tool outputs that the AI treats as commands).

GHSA-ccj6-79j6-cq5q: WeKnora Vulnerable to Broken Access Control in Tenant Management

criticalvulnerability
security
Mar 6, 2026
CVE-2026-30855

WeKnora has a broken access control vulnerability (BOLA, or broken object-level authorization, where an attacker can access resources they shouldn't by manipulating object IDs) in its tenant management system that allows any authenticated user to read, modify, or delete any tenant without permission checks. Since anyone can register an account, attackers can exploit this to take over or destroy other organizations' accounts and access their sensitive data like API keys.

GHSA-m2w3-8f23-hxxf: Caddy's vars_regexp double-expands user input, leaking env vars and files

mediumvulnerability
security
Mar 6, 2026
CVE-2026-30852

Caddy's `vars_regexp` matcher has a double-expansion bug where user input in request headers gets processed twice through the replacer (the system that substitutes placeholders like {env.DATABASE_URL}), allowing attackers to leak environment variables and file contents by crafting malicious headers. Other matchers like `header_regexp` don't have this problem because they only process the header value once.

GHSA-7r4p-vjf4-gxv4: Caddy forward_auth copy_headers Does Not Strip Client-Supplied Headers, Allowing Identity Injection and Privilege Escalation

highvulnerability
security
Mar 6, 2026
CVE-2026-30851

# Analysis ## Summary Caddy's `forward_auth` directive with `copy_headers` fails to remove client-supplied headers when an upstream auth service (an external server that validates user identity) doesn't include those headers in its response, allowing an authenticated attacker to inject arbitrary values for trusted identity headers and escalate privileges. This regression was introduced in November 2024 and affects all stable versions from v2.10.0 onward. ## Solution The source text states: "

Palantir rallies 15% for the week as Iran war boosts prospects, muting Anthropic concern

infonews
policyindustry

GHSA-5f53-522j-j454: Flowise Missing Authentication on NVIDIA NIM Endpoints

highvulnerability
security
Mar 6, 2026
CVE-2026-30824

Flowise incorrectly whitelisted the NVIDIA NIM router (`/api/v1/nvidia-nim/*`) in its authentication middleware, allowing anyone to access sensitive endpoints without logging in. This lets attackers steal NVIDIA API tokens, manipulate Docker containers, and cause denial of service attacks without needing valid credentials.

GHSA-cwc3-p92j-g7qm: Flowise has IDOR leading to Account Takeover and Enterprise Feature Bypass via SSO Configuration

highvulnerability
security
Mar 6, 2026
CVE-2026-30823

Flowise has a critical IDOR (insecure direct object reference, a flaw where an app trusts user input to identify which data to access without checking permissions) vulnerability in its login configuration endpoint. An attacker with a free account can modify any organization's single sign-on settings by simply specifying a different organization ID, enabling account takeover by redirecting logins to attacker-controlled credentials and bypassing enterprise license restrictions.

GHSA-mq4r-h2gh-qv7x: Flowise Allows Mass Assignment in `/api/v1/leads` Endpoint

highvulnerability
security
Mar 6, 2026
CVE-2026-30822

A mass assignment vulnerability (a type of attack where an attacker controls internal fields by sending them in a request) exists in Flowise's `/api/v1/leads` endpoint, allowing unauthenticated users to override auto-generated fields like `id`, `createdDate`, and `chatId` by including them in the request body. The vulnerability occurs because the code uses `Object.assign()` to copy all properties from user input directly into the database entity without filtering, bypassing the intended auto-generation of these fields.

Mayor Sadiq Khan invites embattled AI firm Anthropic to expand in London

infonews
policy
Mar 6, 2026

London Mayor Sadiq Khan invited AI company Anthropic to expand in the city after the U.S. Pentagon designated it a supply chain risk (a label meaning the government views the company as not secure enough to work with) because Anthropic refused to give defense agencies unrestricted access to its AI tools and raised concerns about using its Claude model for mass surveillance or autonomous military targeting. The company plans to challenge the Pentagon's designation in court, and Microsoft announced it would continue using Anthropic's technology except for the U.S. Department of Defense.

CVE-2026-29791: Agentgateway is an open source data plane for agentic AI connectivity within or across any agent framework or environmen

mediumvulnerability
security
Mar 6, 2026
CVE-2026-29791

Agentgateway is an open source data plane (a software layer that handles data movement for AI agents working across different frameworks) that had a security flaw in versions before 0.12.0, where user input in paths, query parameters, and headers were not properly cleaned up when converting tool requests to OpenAPI format. This lack of input validation (CWE-20, checking that data matches expected rules) could potentially be exploited, but the vulnerability has been patched.

Amazon says Anthropic’s Claude still OK for AWS customers to use outside defense work

inforegulatory
policyindustry

Google joins Microsoft in telling users Anthropic is still available outside defense projects

inforegulatory
policyindustry

Microsoft, Google, Amazon say Anthropic Claude remains available to non-defense customers

inforegulatory
policy
Mar 6, 2026

The U.S. Department of Defense designated Anthropic (maker of Claude AI) as a supply-chain risk after the company refused to provide unrestricted access for military applications like mass surveillance and autonomous weapons. Microsoft, Google, and AWS confirmed that Claude will remain available to non-defense customers through their platforms, and the designation only restricts direct Department of Defense use, not broader commercial applications.

Previous20 / 157Next
The Verge (AI)
Mar 7, 2026

The Pentagon's chief technology officer reported disagreement with AI company Anthropic regarding autonomous warfare (military systems that can make decisions and take actions with minimal human control). The military is working on procedures to allow varying degrees of autonomy based on the level of risk involved in different situations.

SecurityWeek
Mar 7, 2026

Anthropic used Claude Opus 4.6 (a large language model, or LLM, which is an AI trained on vast amounts of text to understand and generate language) to find 22 security vulnerabilities in Firefox, including 14 classified as high-severity. The AI model discovered these bugs by scanning nearly 6,000 C++ files in just two weeks, demonstrating that AI can be effective at identifying security flaws in complex software.

Fix: Most issues have been fixed in Firefox 148, with the remainder to be fixed in upcoming releases. Additionally, Anthropic developed Claude Code Security, which uses an AI agent to automatically generate patches for vulnerabilities; the company uses task verifiers (tools that check if a proposed fix actually works) to gain confidence that patches fix the specific vulnerability while maintaining the program's normal functionality.

The Hacker News
Mar 6, 2026

The Trump administration released a cybersecurity strategy that emphasizes offensive cyber operations (proactive attacks on adversary networks rather than waiting to respond to attacks), deregulation of industry rules, and AI adoption. The strategy outlines six pillars including disrupting adversaries, reducing regulations, modernizing government networks with zero-trust architecture (a security model that doesn't automatically trust any user or device), and securing critical infrastructure like power grids and hospitals.

CSO Online
GitHub Advisory Database
GitHub Advisory Database
GitHub Advisory Database
GitHub Advisory Database
GitHub Advisory Database
GitHub Advisory Database
Mar 6, 2026

Palantir's stock rallied 15% this week after the U.S. attacked Iran, because the company relies on government spending for about 60% of its revenue and works heavily with military and intelligence agencies. Wall Street showed little concern about the U.S. government blacklisting Anthropic (an AI company that had partnered with Palantir on defense projects), as analysts noted there are alternative AI models available and that replacing Anthropic's systems will take time but is manageable.

CNBC Technology
GitHub Advisory Database
GitHub Advisory Database
GitHub Advisory Database
BBC Technology

Fix: This issue has been patched in version 0.12.0. Update Agentgateway to version 0.12.0 or later to resolve the vulnerability.

NVD/CVE Database
Mar 6, 2026

Amazon announced that AWS customers can continue using Anthropic's Claude AI models for all work except Department of Defense projects, after the federal government labeled Anthropic a "supply chain risk." Anthropic says it will challenge this designation in court, and major cloud providers (Amazon, Microsoft, and Google) are helping customers transition to alternative AI models for defense-related work.

CNBC Technology
Mar 6, 2026

Google and Microsoft announced they will continue offering Anthropic's Claude AI models to their cloud customers for non-defense work, after the U.S. Defense Department designated Anthropic as a supply chain risk (a company that poses potential security or operational threats to government operations). The announcements came after the Trump administration instructed federal agencies to stop using Anthropic's technology, but the companies determined that non-defense projects are still permitted under this designation.

CNBC Technology
TechCrunch