aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Industry News

New tools, products, platforms, funding rounds, and company developments in AI security.

1,235 items

Free Claude Max for (large project) open source maintainers

info · news · industry
Feb 27, 2026

Anthropic is offering free access to Claude Max (their $200/month AI assistant plan) for six months to open source maintainers who meet specific criteria: primary maintainers of public repositories with 5,000+ GitHub stars or 1 million+ monthly NPM downloads, with recent commits or reviews in the last three months. The program accepts up to 10,000 contributors, and maintainers who don't quite meet the stated criteria can still apply and explain their importance to the ecosystem.

Simon Willison's Weblog

AI vs. the Pentagon: killer robots, mass surveillance, and red lines

info · news · policy · safety

Perplexity’s new Computer is another bet that users need many AI models

info · news · industry
Feb 27, 2026

Perplexity has launched Computer, an agentic tool (software that can independently execute complex tasks) that combines 19 different AI models to handle workflows like data collection, analysis, and report creation. The tool runs in the cloud and is available only to subscribers of Perplexity Max (the $200/month tier), though a planned demo was canceled hours before a press event due to flaws discovered in the product.

Employees at Google and OpenAI support Anthropic’s Pentagon stand in open letter

info · regulatory · policy
Feb 27, 2026

Anthropic, an AI company, is refusing the Pentagon's demands for unrestricted access to its AI technology, specifically opposing its use for domestic mass surveillance (tracking citizens without limits) and fully autonomous weapons (weapons that make kill decisions without human control). Over 300 Google employees and 60 OpenAI employees signed an open letter supporting Anthropic's stance, and leaders at both companies have informally expressed sympathy for its position. The Pentagon, meanwhile, has threatened to declare Anthropic a security risk or invoke the Defense Production Act (a law allowing the government to compel companies to produce needed goods) if it doesn't comply.

We don’t have to have unsupervised killer robots

info · news · policy · safety

In Defense-Anthropic clash, AI is real-time testing the balance of power in future of warfare

info · news · policy · industry

OpenAI announces $110 billion funding round with backing from Amazon, Nvidia, SoftBank

info · news · industry
Feb 27, 2026

OpenAI announced a $110 billion funding round led by Amazon ($50 billion), Nvidia ($30 billion), and SoftBank ($30 billion), raising the company's valuation to $730 billion. Beyond the investment, Amazon committed to an expanded $100 billion partnership over eight years to use AWS (Amazon Web Services, Amazon's cloud computing platform) as OpenAI's exclusive cloud provider and to develop customized AI models for Amazon's applications.

In Other News: ATT&CK Advisory Council, Russian Cyberattacks Aid Missile Strikes, Predator Bypasses iOS Indicators

info · news · security · industry

The Galaxy S26 is a photography nightmare

info · news · security
Feb 27, 2026

Samsung's Galaxy S26 phones include useful new features like a Privacy Display on the Ultra model, but the new camera features are described as problematic and concerning rather than helpful upgrades. The article discusses these camera issues on The Vergecast podcast but does not provide specific technical details about what makes them problematic.

OpenAI snags $110 billion in investments from Amazon, Nvidia, and Softbank

info · news · industry
Feb 27, 2026

OpenAI has secured $110 billion in new funding from Amazon ($50 billion), Nvidia ($30 billion), and SoftBank ($30 billion), bringing the company's valuation to $730 billion. The investment includes plans for custom AI models and reflects confidence in OpenAI's ChatGPT platform, which has over 900 million weekly active users and 50 million consumer subscribers.

Anthropic faces lose-lose scenario in Pentagon conflict as deadline for policy change looms

info · regulatory · policy
Feb 27, 2026

Anthropic, an AI startup, faces a Friday deadline to allow the U.S. Department of Defense to use its AI models without restrictions, or face severe penalties such as being labeled a 'supply chain risk' (a designation that blocks government contractors from using the company's technology). The company has refused, saying it won't agree to uses it believes could undermine democracy, such as fully autonomous weapons or domestic mass surveillance, forcing it to choose between its reputation for responsible AI and significant military contracts and revenue.

OpenAI raises $110B in one of the largest private funding rounds in history

info · news · industry
Feb 27, 2026

OpenAI has secured $110 billion in private funding from major investors including Amazon ($50 billion), Nvidia ($30 billion), and SoftBank ($30 billion), making it one of the largest private funding rounds ever. The company plans to use this capital to scale its AI infrastructure globally, including building new runtime environments on Amazon's cloud services and committing to use significant computing power from both Amazon and Nvidia. This funding round reflects OpenAI's goal to move frontier AI (advanced AI systems at the cutting edge of research) from research phase into widespread daily use across the world.

Claude Code Security Shows Promise, Not Perfection

info · news · security · research

Netflix drops its WBD bid, Block layoffs, Anthropic's DOD deadline and more in Morning Squawk

info · news · industry · policy

Anthropic Refuses to Bend to Pentagon on AI Safeguards as Dispute Nears Deadline

info · news · policy · safety

Your personal OpenClaw agent may also be taking orders from malicious websites

high · news · security
Feb 27, 2026

Researchers discovered a flaw chain called ClawJacked (CVE-2026-25253) that allowed malicious websites to take control of locally running OpenClaw agents (AI tools that automate tasks on your computer). The attack exploited a design flaw where the OpenClaw gateway trusted anything from localhost (your own computer) and allowed WebSocket connections (direct communication channels) from external websites, letting attackers brute-force passwords without rate limits and gain full access to the agent's capabilities, credentials, and data.

How to make LLMs a defensive advantage without creating a new attack surface

info · news · security · safety

Ransomware groups switch to stealthy attacks and long-term access

info · news · security
Feb 27, 2026

Ransomware attackers are shifting from loud, disruptive attacks toward stealthy, long-term infiltration tactics where they quietly steal data for extortion rather than encrypting it. They're using defense evasion (techniques to avoid detection) and persistence mechanisms to stay hidden, routing their command-and-control traffic (communications between attackers and compromised systems) through legitimate business services like OpenAI and AWS to blend in with normal activity. Attackers are also chaining multiple vulnerabilities together in coordinated exploitation rather than treating each weakness as an isolated entry point.

Anthropic boss rejects Pentagon demand to drop AI safeguards

info · news · policy · safety

Burger King cooks up AI chatbot to spot if employees say ‘please’ and ‘thank you’

info · news · industry
Feb 26, 2026

Burger King is deploying an AI chatbot powered by OpenAI (the company behind ChatGPT) that listens to employee headsets at hundreds of US locations to monitor whether workers use polite words like 'please' and 'thank you.' The company says the system, called BK Assistant, will help understand service patterns, though the announcement has sparked criticism from workers.

Feb 27, 2026

Anthropic is refusing to accept new Pentagon contract terms that would remove safety restrictions (guardrails, the built-in limits on what an AI model will do) from its AI models, which would allow uses like mass surveillance of Americans and fully autonomous lethal weapons (weapons that can select and fire at targets without human control). Despite pressure from the Pentagon, including threats to label Anthropic a supply chain risk (a designation suggesting it poses a national security threat), CEO Dario Amodei says the company will not compromise on these ethical boundaries. Competitors OpenAI and xAI have reportedly agreed to the terms.

The Verge (AI)
TechCrunch
Feb 27, 2026

The Pentagon is pressuring Anthropic (an AI company) to remove safety restrictions on its technology or face being labeled a 'supply chain risk' that could cost it billions in contracts. The pressure includes demands for military access to the AI for surveillance and autonomous weapons systems, raising concerns among tech workers about how their work might be used.

The Verge (AI)
Feb 27, 2026

The U.S. Department of Defense is in a standoff with Anthropic, an AI company, over whether the company will remove safeguards from its AI models to allow military uses like mass domestic surveillance and fully autonomous weapons (systems that can make combat decisions without human control). This conflict highlights a major shift in power: private companies now control cutting-edge AI technology rather than governments, forcing the Pentagon to negotiate with industry over how AI will be deployed in national security and warfare.

CNBC Technology
Feb 27, 2026

This article briefly mentions several cyber security developments, including OpenAI taking action against malicious uses of AI, a hacker group claiming to have breached Odido (a telecommunications company), and a spyware tool called Predator that can bypass iOS security indicators (the visual signals that show when an app is accessing your device's features).

SecurityWeek
The Verge (AI)
CNBC Technology
TechCrunch
Feb 27, 2026

Claude Code, an AI tool for writing software, generated excitement when it was released, but researchers studying it have found that its actual performance and security capabilities are not as impressive as initial claims suggested. The article indicates that people were too optimistic about what the tool could do.

Dark Reading
Feb 27, 2026

Anthropic, an AI startup, is refusing to let the U.S. Defense Department use its AI models without restrictions on fully autonomous weapons (weapons that make decisions without human control) and mass domestic surveillance. The Pentagon wants unlimited use of Anthropic's models and set a deadline for the company to agree, threatening to label it a supply chain risk (a designation that blocks government contractors from using the company's technology) if it doesn't comply.

CNBC Technology
Feb 27, 2026

Anthropic, an AI company, is in a dispute with the Pentagon over safeguards for its Claude AI system. The company is asking for specific guarantees that Claude won't be used for mass surveillance (monitoring large populations without consent) of Americans or in fully autonomous weapons (military systems that make lethal decisions without human control).

SecurityWeek

Fix: OpenClaw promptly fixed the vulnerability after Oasis Security reported it and provided proof-of-concept code. No additional details about the specific fix are provided in the source text.

CSO Online
Feb 27, 2026

LLMs are being used in security in three ways: as productivity tools for analysts, as embedded components in security products, and as targets for attackers to manipulate or steal. The same capabilities that help security teams (like summarizing incidents or drafting detection logic) can also enable attackers to create convincing phishing emails or extract sensitive information if the LLM is poorly integrated. To use LLMs defensively without creating new vulnerabilities, security teams should treat LLM output as untrusted, start with narrow, easy-to-verify use cases, and design systems with three layers of constraints: limited model capabilities, restricted data access, and human approval for any actions that change system state.

Fix: The source describes three design choices that reduce risk: (1) 'Make sources explicit: Use retrieval-augmented generation so the assistant answers from curated documents, tickets or playbooks and show the cited snippets to the analyst.' (2) 'Keep the model out of the blast radius: The model should not hold secrets. Use short-lived credentials, scoped tokens and brokered access to tools.' (3) 'Gate actions: Anything that changes a system state (blocking, quarantining, deleting, emailing) should require human approval or a separate policy engine.' The source also recommends starting with a 'narrow set of workflows where the output is advisory and easy to verify' before expanding capabilities.
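The "gate actions" layer above can be sketched as a small policy wrapper. The names below (`ActionGate`, `STATE_CHANGING`) are illustrative assumptions, not from any specific product: LLM-proposed actions that change system state are queued for human approval, while advisory, read-only actions run immediately.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical action-gating layer for an LLM security assistant:
# state-changing verbs require a human sign-off before execution.
STATE_CHANGING = {"block", "quarantine", "delete", "email"}

@dataclass
class ActionGate:
    pending: list[dict] = field(default_factory=list)

    def submit(self, verb: str, target: str, run: Callable[[], str]) -> str:
        """Queue state-changing actions; execute read-only ones at once."""
        if verb in STATE_CHANGING:
            self.pending.append({"verb": verb, "target": target, "run": run})
            return f"queued for approval: {verb} {target}"
        return run()

    def approve(self, index: int) -> str:
        """Human analyst approves a queued action, which then executes."""
        action = self.pending.pop(index)
        return action["run"]()
```

In line with the source's advice, a deployment would start with only advisory verbs enabled and add gated state-changing verbs once the narrow workflows have proven easy to verify.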

CSO Online
Feb 26, 2026

Anthropic's CEO Dario Amodei is refusing the US Department of Defense's demand to remove safeguards from the company's AI tool Claude, saying the company would rather lose Pentagon contracts than allow its technology to be used for mass domestic surveillance or fully autonomous weapons (AI systems that make attack decisions without human control). The Pentagon has threatened to remove Anthropic from its supply chain and invoke the Defense Production Act if the company doesn't comply.

BBC Technology
The Guardian Technology