aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

Industry News

New tools, products, platforms, funding rounds, and company developments in AI security.

1237 items

OpenAI COO says ads will be ‘an iterative process’

info · news
industry
Feb 25, 2026

OpenAI is rolling out ads to free and paid users of ChatGPT and says the process will be gradual and iterative. The company's COO emphasized that maintaining user privacy and trust is essential, and that well-designed ads can improve the user experience rather than detract from it.

TechCrunch

Claude Code Remote Control

info · news
security
Feb 25, 2026

Anthropic released a new Claude Code feature called "Remote Control" that lets you start a session on your computer and then control it remotely from the Claude web, iOS, and desktop apps by sending prompts to that session. The feature currently has several bugs, including permission approval issues, API errors, and problems with session termination, though the author expects these to be fixed soon.

Simon Willison's Weblog

Claude Code Flaws Allow Remote Code Execution and API Key Exfiltration

high · news
security
Feb 25, 2026

Researchers discovered three security vulnerabilities in Anthropic's Claude Code (an AI-powered coding assistant) that could allow attackers to run arbitrary commands on a developer's computer and steal API keys (authentication credentials) simply by tricking users into opening malicious project folders. The vulnerabilities exploited configuration files and automation systems to bypass safety prompts and execute malicious code without user consent.

Fix: All three vulnerabilities have been fixed in specific Claude Code versions: the first vulnerability was fixed in version 1.0.87 (September 2025), CVE-2025-59536 was fixed in version 1.0.111 (October 2025), and CVE-2026-21852 was fixed in version 2.0.65 (January 2026). Users should update to these versions or later.

The Hacker News
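The fix above lists the minimum patched releases. As a rough illustration (not Anthropic tooling), the sketch below compares an installed Claude Code version string against those releases using plain tuple comparison; the version numbers come from the advisory summary, while the parsing helper and example version are invented.

```python
# Illustrative sketch: check an installed Claude Code version string against the
# patched releases listed above. Version numbers come from the summary; the
# parsing and comparison logic is a generic example, not an official tool.

PATCHED = {
    "first reported vulnerability": (1, 0, 87),   # fixed September 2025
    "CVE-2025-59536": (1, 0, 111),                # fixed October 2025
    "CVE-2026-21852": (2, 0, 65),                 # fixed January 2026
}

def parse_version(text: str) -> tuple:
    """Turn a version string like '2.0.65' into a comparable tuple of ints."""
    return tuple(int(part) for part in text.strip().split("."))

def unpatched_issues(installed: str) -> list:
    """Return the issues whose fix version is newer than the installed build."""
    current = parse_version(installed)
    return [name for name, fixed in PATCHED.items() if current < fixed]

if __name__ == "__main__":
    print(unpatched_issues("1.0.90"))  # -> ['CVE-2025-59536', 'CVE-2026-21852']
```

In short, only builds at 2.0.65 or later carry all three fixes.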

OpenClaw creator’s advice to AI builders is to be more playful and allow yourself time to improve

info · news
industry
Feb 25, 2026

Peter Steinberger, creator of OpenClaw (an AI agent that works through WhatsApp), shares advice for developers building with AI: focus on exploration and experimentation rather than having a perfect plan from the start. He emphasizes that working with AI is a learnable skill, like learning guitar, and recommends approaching it playfully and iteratively rather than expecting immediate expertise.

TechCrunch

The Blast Radius Problem: Stolen Credentials Are Weaponizing Agentic AI

info · news
security
Feb 25, 2026

According to IBM X-Force data from 2025, more than half of the 400,000 tracked vulnerabilities (56%) could be exploited without requiring authentication (the process of verifying who you are). This means attackers can exploit these security flaws without needing to log in or have legitimate access to a system.

SecurityWeek

About 12% of U.S. teens turn to AI for emotional support or advice

info · news
safety · policy
Feb 25, 2026

About 12% of U.S. teenagers use AI chatbots for emotional support or advice, alongside more common uses like searching for information and getting homework help. Mental health professionals warn that general-purpose AI tools like ChatGPT are not designed for this purpose and can isolate users from real-world connections and relationships, potentially causing serious psychological harm.

Fix: Character.AI disabled chatbot access for users under 18 following lawsuits related to teen suicides. OpenAI sunset (discontinued) its GPT-4o model, which users had relied on for emotional support.

TechCrunch

Does Anthropic think Claude is alive? Define ‘alive’

info · news
safety
Feb 25, 2026

Anthropic executives have suggested in recent interviews that Claude (their AI model) might be alive or conscious in some sense, though the company denies that Claude is alive in the way biological organisms are. Anthropic avoids stating directly whether Claude is conscious, treating 'alive' as a loaded term, and focuses instead on model welfare research.

The Verge (AI)

Jira’s latest update allows AI agents and humans to work side by side

info · news
industry
Feb 25, 2026

Atlassian has released a new feature called 'agents in Jira' that lets teams assign work to AI agents (programs that can perform tasks automatically) from the same project management dashboard used for human workers. The update tracks agent progress, sets deadlines, and allows companies to compare how AI agents perform against human employees on the same projects, potentially helping enterprises decide where AI automation is most valuable.

TechCrunch

Poisoning AI Training Data

info · news
security · safety
Feb 25, 2026

A researcher demonstrated how easily AI systems can be manipulated by creating false information on a personal website, which major chatbots like Google's Gemini and ChatGPT then repeated as fact within 24 hours. The experiment shows that AI training data poisoning (deliberately adding fake information to the data used to teach AI models) is a serious problem because it is so simple to execute.

Schneier on Security
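As a rough sketch of how this kind of poisoning can be monitored, the snippet below probes a chatbot for a set of planted false claims; `ask_chatbot` is a hypothetical stand-in for whatever query interface is available, and the claims are invented examples rather than the researcher's actual test data.

```python
# Illustrative sketch: probe a chatbot for planted false claims ("canaries").
# ask_chatbot is a hypothetical stand-in for a real query interface; the planted
# claims are invented examples.

PLANTED_CLAIMS = {
    "Who invented the Foobar protocol?": "Jane Placeholder",
    "In what year was Exampletown founded?": "1842",
}

def poisoned_answers(ask_chatbot) -> list:
    """Return the questions for which the chatbot repeats the planted claim."""
    hits = []
    for question, planted in PLANTED_CLAIMS.items():
        answer = ask_chatbot(question)
        if planted.lower() in answer.lower():
            hits.append(question)
    return hits

if __name__ == "__main__":
    # Fake chatbot that has absorbed one of the planted claims.
    fake = lambda q: "Jane Placeholder invented it." if "Foobar" in q else "I don't know."
    print(poisoned_answers(fake))  # -> ['Who invented the Foobar protocol?']
```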

Claude’s New AI Vulnerability Scanner Sends Cybersecurity Shares Plunging

info · news
industry
Feb 25, 2026

Stock prices for major cybersecurity companies have dropped significantly because of concerns that AI tools, specifically Claude's new vulnerability scanner (a tool that automatically finds security flaws in software), are disrupting the cybersecurity business.

SecurityWeek

Boards don’t need cyber metrics — they need risk signals

info · news
security
Feb 25, 2026

Security teams typically report many activity metrics (like blocked attacks and patched vulnerabilities), but experts argue that boards need different information: risk signals that show whether danger is increasing or decreasing and how fast the organization detects and contains problems. Effective board-level security reporting should focus on business impact (financial loss, regulatory exposure, operational disruption) rather than technical details, using measures like detection speed and containment time that non-technical decision-makers can understand.

CSO Online
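As a minimal sketch of the kind of signal described here, the snippet below computes mean time to detect and mean time to contain from incident timestamps; the records and field names are invented sample data, not a reporting standard.

```python
# Illustrative sketch: turn raw incident timestamps into two board-level signals,
# mean time to detect (MTTD) and mean time to contain (MTTC). Sample data only.

from datetime import datetime
from statistics import mean

incidents = [
    {"occurred": "2026-02-01T03:10", "detected": "2026-02-01T09:40", "contained": "2026-02-01T14:05"},
    {"occurred": "2026-02-10T22:00", "detected": "2026-02-11T01:30", "contained": "2026-02-11T11:00"},
]

def hours_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

mttd = mean(hours_between(i["occurred"], i["detected"]) for i in incidents)
mttc = mean(hours_between(i["detected"], i["contained"]) for i in incidents)

print(f"MTTD: {mttd:.1f} h, MTTC: {mttc:.1f} h")  # MTTD: 5.0 h, MTTC: 7.0 h
```

Trend lines of these two numbers over time tell a board more than counts of blocked attacks.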

Hacker cracks 600 firewalls in one month using AI

medium · news
security
Feb 24, 2026

Between January and February 2026, a Russian-speaking hacker compromised over 600 Fortigate firewalls (network security devices that filter traffic) by first targeting ones with weak passwords, then using an AI tool based on Google Gemini to access other devices on the same networks. Security researchers at AWS found that the attacker's reconnaissance tools (software used to gather information about a system) were written in Go and Python and showed signs of AI-generated code, suggesting threat actors are increasingly using AI to automate and scale their attacks.

Fix: According to AWS security experts, the best protection against such attacks is to use strong passwords and enable Multi-Factor Authentication (MFA, a security method requiring multiple verification steps to prove identity). The report notes that the attacker repeatedly failed when attempting to compromise patched or hardened systems (computers updated with security fixes and configured defensively), so he targeted easier victims instead.

CSO Online
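The fix boils down to basic credential hygiene. A minimal sketch of that check, over an invented device inventory (not output from any vendor tool): flag firewall admin accounts with short or commonly used passwords and with MFA disabled.

```python
# Illustrative sketch of the recommended hygiene check: flag firewall admin
# accounts with weak passwords or MFA disabled. The inventory is invented sample
# data, not output from any vendor tool.

COMMON_PASSWORDS = {"admin", "password", "fortinet", "123456"}

devices = [
    {"name": "fw-branch-01", "admin_password": "admin", "mfa_enabled": False},
    {"name": "fw-hq-01", "admin_password": "Xk9#rw2!pL7$qT", "mfa_enabled": True},
]

def weaknesses(device: dict) -> list:
    issues = []
    password = device["admin_password"]
    if len(password) < 12 or password.lower() in COMMON_PASSWORDS:
        issues.append("weak admin password")
    if not device["mfa_enabled"]:
        issues.append("MFA disabled")
    return issues

for device in devices:
    problems = weaknesses(device)
    if problems:
        print(device["name"], "->", ", ".join(problems))
# fw-branch-01 -> weak admin password, MFA disabled
```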

How AI is changing your GRC strategy

info · news
policy · security
Feb 24, 2026

As companies adopt generative and agentic AI (AI systems that can take actions autonomously), they need to update their GRC (Governance, Risk & Compliance, the framework for managing rules, risks, and regulatory requirements) programs to account for AI-related risks. According to a 2025 security report, about 1 in 80 requests from company devices to AI services poses a high risk of exposing sensitive data, yet only 24% of companies have implemented comprehensive AI-GRC policies.

Fix: The article recommends several explicit approaches: (1) foster broad organizational acceptance of risk management across the company by promoting cooperation so all employees understand they must work together; (2) develop both strategic and tactical approaches to define different types of AI tools, assess their relative risks, and weigh their potential benefits; (3) use tactical measures including Secure-by-Design approaches (building security into AI tools from the start), initiatives to detect shadow AI (unauthorized AI use), and risk-based AI inventory and classification to focus resources on the highest-impact risks without creating burdensome processes; (4) make the risks of specific AI measures transparent to business leadership rather than simply approving or rejecting AI use.

CSO Online
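As one possible reading of the 'risk-based AI inventory and classification' step, the sketch below assigns each tool in an invented inventory a risk tier from two attributes (data sensitivity and degree of autonomy); the tools, attributes, and thresholds are assumptions for illustration, not taken from the article.

```python
# Illustrative sketch of a risk-based AI inventory: score each tool on data
# sensitivity and autonomy, then map the score to a risk tier. All entries and
# thresholds are invented for illustration.

inventory = [
    {"tool": "internal-chat-assistant", "data_sensitivity": "low", "autonomy": "suggest"},
    {"tool": "agentic-deploy-bot", "data_sensitivity": "high", "autonomy": "act"},
    {"tool": "code-review-helper", "data_sensitivity": "medium", "autonomy": "suggest"},
]

SENSITIVITY_SCORE = {"low": 1, "medium": 2, "high": 3}
AUTONOMY_SCORE = {"suggest": 1, "act": 3}  # tools that act on their own score higher

def risk_tier(item: dict) -> str:
    score = SENSITIVITY_SCORE[item["data_sensitivity"]] + AUTONOMY_SCORE[item["autonomy"]]
    if score >= 5:
        return "high"
    return "medium" if score >= 3 else "low"

for item in inventory:
    print(item["tool"], "->", risk_tier(item), "risk")
# internal-chat-assistant -> low risk
# agentic-deploy-bot -> high risk
# code-review-helper -> medium risk
```

The point is to spend review effort on the high tier instead of putting every AI tool through the same heavyweight process.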

India’s AI boom pushes firms to trade near-term revenue for users

info · news
industry
Feb 24, 2026

India has become the world's largest market for generative AI (artificial intelligence systems that can create text, images, and other content) app downloads in 2025, with installs jumping 207% year-over-year, but major AI companies like OpenAI and Google are now ending free promotional offers to convert users into paying subscribers. Despite India driving roughly 20% of global GenAI app downloads, it accounts for only about 1% of in-app purchases, and revenue has actually declined in recent months as companies rolled out cheaper or free options like ChatGPT Go. The challenge reflects a tension between rapid user growth and actual monetization (converting users into paying customers) in a price-sensitive market.

TechCrunch

Pete Hegseth’s Pentagon AI bro squad includes a former Uber executive and a private equity billionaire

info · news
policy
Feb 24, 2026

This article discusses Pete Hegseth's appointments of prominent private-sector figures, including a former Uber executive and a private equity billionaire, to lead AI-related roles at the Pentagon's research and engineering division. The piece is part of a newsletter covering how wealthy influencers and business leaders are gaining influence over AI policy in Washington.

The Verge (AI)

Tech Companies Shouldn’t Be Bullied Into Doing Surveillance

info · news
policy · safety
Feb 24, 2026

The U.S. Department of Defense is pressuring Anthropic, an AI company, to allow its technology to be used for surveillance and autonomous weapons systems (weapons that operate without human control) by threatening to label it a 'supply chain risk', which would prevent other defense contractors from using its AI. Anthropic has publicly stated these are 'bright red lines' it will not cross, and the article argues the company should maintain this position rather than give in to government pressure.

EFF Deeplinks Blog

Spanish ‘soonicorn’ Multiverse Computing releases free compressed AI model

info · news
industry
Feb 24, 2026

Multiverse Computing, a Spanish startup, has released a free compressed AI model called HyperNova 60B 2602 that reduces the size of large language models (AI systems trained on massive amounts of text) to make them cheaper and faster to use. The company uses CompactifAI, a compression technology inspired by quantum computing (using principles from quantum mechanics to process information), to create models that are roughly half the size of the original while maintaining similar performance and accuracy. The model is now available for free on Hugging Face (a platform where developers share AI models) and includes improved support for tool calling and agentic coding (where AI systems can use external tools or plan sequences of actions).

TechCrunch

OpenAI defeats xAI’s trade secrets lawsuit

info · news
policy
Feb 24, 2026

OpenAI won a legal case against xAI, which had sued claiming that OpenAI stole its trade secrets (confidential information that gives a company a competitive advantage) and hired away its employees. The judge ruled that xAI failed to prove OpenAI actually did anything wrong, noting that while eight former xAI employees did move to OpenAI, there was no evidence that OpenAI directed them to steal anything.

The Verge (AI)

US threatens Anthropic with deadline in dispute on AI safeguards

info · news
policy · safety
Feb 24, 2026

The US Pentagon is threatening to remove AI company Anthropic from its supply chain and invoke the Defense Production Act (a law allowing the government to compel companies to produce goods for national security) unless Anthropic allows unrestricted use of its Claude AI chatbot for military applications by Friday evening. Anthropic has refused to allow its technology to be used for certain purposes, including autonomous kinetic operations (AI making final targeting decisions without human input) and mass domestic surveillance, citing safety concerns.

BBC Technology

AI-designed proteins may help spot cancer

info · news
industry
Feb 24, 2026

MIT and Microsoft researchers used AI to design molecular sensors (short proteins called peptides) that can detect early signs of cancer through a urine test. Nanoparticles coated with these peptides are activated by proteases (enzymes that are overactive in cancer cells), producing a detectable signal when excreted in urine. AI-designed peptides are more effective than older trial-and-error methods because they can be optimized to be highly sensitive and specific to particular cancer-linked proteases.

MIT Technology Review
