aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Industry News

New tools, products, platforms, funding rounds, and company developments in AI security.

1220 items

Federal judge sides with Anthropic in first round of standoff with Pentagon

infonews
policy
Mar 26, 2026

Anthropic won a temporary legal victory when a federal judge ordered a pause on the Department of Defense's punitive measures against the company, which had refused to let the military use its Claude AI model in autonomous weapons systems (systems that can make attack decisions without human control). Anthropic argued the government violated its free speech rights by declaring it a supply chain risk (a company whose products could be exploited to harm national security) and blocking agencies from using its technology.

The Guardian Technology

OpenAI ads pilot tops $100 million in annualized revenue in under 2 months

infonews
industry
Mar 26, 2026

OpenAI has launched an advertising pilot program in ChatGPT that reached an annualized revenue run rate of over $100 million within two months, working with more than 600 advertisers. The ads appear at the bottom of ChatGPT responses, are clearly labeled, and do not influence the AI's answers or appear near sensitive topics like politics or health. Users under 18 are excluded from seeing ads, and OpenAI reports no negative impact on user trust metrics.

CNBC Technology

Preparing for agentic AI: A financial services approach

infonews
security
policy
Mar 26, 2026

Financial institutions deploying agentic AI (autonomous AI systems that make decisions and take actions independently) must add AI-specific security controls beyond traditional frameworks like ISO 27001 and NIST, because these systems' autonomous nature and non-deterministic behavior introduce unique risks. The source recommends two critical capabilities: comprehensive observability (clear visibility into what AI agents do and why) and fine-grained access controls (limiting what tools and actions each agent can use), supported by seven design principles including human-AI security homology (applying human oversight rules to AI agents) and modular agent workflow architecture.

Fix: The source provides design principles and implementation guidance rather than explicit patches or updates. It recommends: (1) implementing agent identities with role and attribute-based permissions; (2) adding logging and behavioral monitoring; (3) requiring supervision for critical actions; (4) defining agent scope in workflows; (5) applying segregation of agent duties; (6) using maker-checker verification (where one agent proposes an action and another verifies it); and (7) implementing change and incident management. The source also advises to 'consult with your compliance and legal teams to determine specific requirements for your situation' and notes that 'regulatory requirements establish minimum baselines, but organizational risk considerations often require additional controls.'

AWS Security Blog
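
The maker-checker verification recommended for agentic AI deployments (one agent proposes an action, another independently verifies it) can be sketched in a few lines. This is a minimal illustration, not a production control: the agent IDs, the "payments.transfer" tool name, and the $10,000 amount cap are all hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    agent_id: str
    tool: str      # hypothetical tool name, e.g. "payments.transfer"
    amount: float

# Fine-grained access control: each agent may only invoke allowlisted tools.
PERMISSIONS = {
    "maker-1": {"payments.transfer"},
}

AMOUNT_CAP = 10_000.0  # hypothetical business rule enforced by the checker

def checker_approves(action: Action) -> bool:
    """Independent verification of a maker agent's proposed action."""
    allowed = PERMISSIONS.get(action.agent_id, set())
    if action.tool not in allowed:
        return False                 # agent acted outside its defined scope
    return action.amount <= AMOUNT_CAP

def execute(action: Action) -> str:
    # Critical actions that fail verification are escalated, not executed:
    # human supervision stays in the loop.
    if not checker_approves(action):
        return "escalated to human review"
    return f"executed {action.tool} ({action.amount})"
```

The same shape extends to the other principles: segregation of duties falls out of keeping PERMISSIONS disjoint across agents, and the escalation branch is where logging and incident management would hook in.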

Google is making it easier to import another AI’s memory into Gemini

infonews
industry
Mar 26, 2026

Google Gemini is adding new features that let users transfer their chat history and memory from other AI assistants into Gemini. The "Import Memory" tool works by copying a prompt from Gemini into your previous AI, then pasting the response back into Gemini, while "Import Chat History" lets you export all your past conversations from another AI and upload them to Gemini.

The Verge (AI)

Apple will reportedly allow other AI chatbots to plug into Siri

infonews
industry
Mar 26, 2026

Apple's upcoming iOS 27 update will let users choose which AI chatbot to connect with Siri (Apple's voice assistant), including options like Google's Gemini or Anthropic's Claude downloaded from the App Store. The new feature, called "Extensions," will allow users to enable or disable different chatbots across iPhones, iPads, and Macs, expanding beyond the current ChatGPT integration.

The Verge (AI)

CISA: New Langflow flaw actively exploited to hijack AI workflows

criticalnews
security
Mar 26, 2026

CISA warns that hackers are actively exploiting CVE-2026-33017, a critical vulnerability (rated 9.3 out of 10) in Langflow, an open-source framework for building AI workflows. This code injection flaw allows attackers to execute arbitrary Python code and gain remote code execution (the ability to run commands on a system they don't own) on unpatched systems running version 1.8.1 or earlier, with exploitation beginning just 20 hours after the vulnerability details were made public.

Fix: System administrators should upgrade to Langflow version 1.9.0 or later, which addresses the vulnerability. Alternatively, administrators can disable or restrict the vulnerable endpoint. Endor Labs additionally recommends not exposing Langflow directly to the internet, monitoring outbound traffic, and rotating API keys, database credentials, and cloud secrets if suspicious activity is detected.

BleepingComputer
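
Since the flaw affects Langflow 1.8.1 and earlier, one quick triage step is to compare the installed version against the first fixed release. A minimal sketch, assuming 1.9.0 is the first patched version and a plain dotted numeric version scheme:

```python
from importlib.metadata import PackageNotFoundError, version

FIXED = (1, 9, 0)  # first Langflow release addressing CVE-2026-33017

def parse(v: str) -> tuple:
    # Simplistic parser: handles plain dotted releases like "1.8.1".
    # Pre-release suffixes would need the 'packaging' library instead.
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed: str) -> bool:
    return parse(installed) < FIXED

def check_host() -> str:
    try:
        v = version("langflow")
    except PackageNotFoundError:
        return "langflow not installed"
    return f"{v}: {'VULNERABLE - upgrade to 1.9.0+' if is_vulnerable(v) else 'patched'}"
```

Note that tuple comparison handles multi-digit components correctly (1.10.0 sorts after 1.9.0), which naive string comparison would get wrong.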

The CISO’s guide to responding to shadow AI

infonews
policy
security
Mar 26, 2026

Shadow AI refers to AI tools that employees use without approval from their organization, whether these are standalone tools or AI features embedded in existing software that weren't clearly communicated. CISOs (chief information security officers, the executives responsible for an organization's security) need to assess the risks these tools pose, understand why employees are using them, and decide whether to block them or bring them into official company use.

Fix: The source describes a response approach rather than a technical fix: CISOs should (1) assess the specific risk by examining data sensitivity, how the AI provider handles data, and whether a breach occurred, (2) understand why employees are using shadow AI and educate them on risks, (3) check if the organization already has approved tools that meet the same needs, and (4) redirect employees to approved alternatives "with a serious reminder" of approval requirements. The source also notes that organizations with slow AI adoption tend to see more shadow AI use, suggesting faster official adoption may reduce instances.

CSO Online

Google’s ‘live’ AI search assistant can handle conversations in dozens more languages

infonews
industry
Mar 26, 2026

Google is expanding Search Live, an AI search assistant that lets users search the web using their voice and camera to ask questions about physical objects or tasks. The feature, which initially launched in the US, is now available in over 200 countries and territories in dozens of languages.

The Verge (AI)

Gemini 3.1 Flash Live: Making audio AI more natural and reliable

infonews
industry
Mar 26, 2026

Google has released Gemini 3.1 Flash Live, a new audio model that makes voice conversations with AI sound more natural and reliable by understanding tone better and responding faster. Developers can use it through the Gemini Live API to build voice agents for complex tasks, while regular users can access it through Search Live and Gemini Live across over 200 countries. The model includes audio watermarking (a hidden digital marker added to audio to verify its source) to help prevent misinformation.

DeepMind Safety Research

Wikipedia bans AI-generated articles

infonews
policy
Mar 26, 2026

Wikipedia has banned editors from using AI to write or rewrite articles, citing violations of the site's content policies. However, the ban allows limited AI use for specific tasks like suggesting minor edits (copyedits, which are small fixes to grammar and style) and translating articles between language versions.

The Verge (AI)

AI-Powered Dependency Decisions Introduce, Ignore Security Bugs

infonews
security
research
Mar 26, 2026

AI models frequently make errors or hallucinate (generate false or inaccurate information) when recommending which software versions to use, how to upgrade systems, or which security fixes to apply, which can create significant technical debt (accumulated costs from shortcuts and poor decisions that must eventually be addressed). These mistakes can lead developers to ignore real security bugs or choose problematic upgrade paths.

Dark Reading

Conntour raises $7M from General Catalyst, YC to build an AI search engine for security video systems

infonews
industry
Mar 26, 2026

Conntour is an AI-powered video search platform that uses vision-language models (AI systems trained to understand both images and text) to let security personnel search through surveillance footage using natural language queries, similar to how Google searches the web. The startup raised $7 million in funding and distinguishes itself by efficiently scaling to handle thousands of camera feeds while running on standard consumer hardware like Nvidia GPUs. The company's founders emphasize being selective about which clients they work with based on ethical and legal considerations.

TechCrunch (Security)

Using a VPN May Subject You to NSA Spying

infonews
policy
Mar 26, 2026

Democratic lawmakers are asking the U.S. intelligence chief to clarify whether Americans using commercial VPN services (tools that route internet traffic through servers to hide a user's location) might lose constitutional privacy protections. The concern is that intelligence agencies use a default rule assuming communications of unknown origin are foreign, so Americans routed through VPN servers could be treated as non-citizens and subjected to warrantless surveillance under Section 702 of the Foreign Intelligence Surveillance Act.

Wired (Security)

Claude Extension Flaw Enabled Zero-Click XSS Prompt Injection via Any Website

highnews
security
Mar 26, 2026

A vulnerability called ShadowPrompt in Anthropic's Claude Chrome extension allowed attackers to inject malicious prompts (hidden instructions) into the AI without user interaction by exploiting two flaws: an overly permissive allowlist that trusted any subdomain matching *.claude.ai, and an XSS vulnerability (a security flaw allowing attackers to run malicious code) in an Arkose Labs CAPTCHA component. This zero-click attack could let attackers steal sensitive data, read conversation history, or perform actions like sending emails on behalf of the victim.

Fix: Anthropic deployed a patch to the Chrome extension (version 1.0.41) that enforces a strict origin check requiring an exact match to the domain 'claude.ai' rather than accepting any subdomain. Additionally, Arkose Labs fixed the underlying XSS flaw as of February 19, 2026.

The Hacker News
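
The allowlist half of the bug is easy to reproduce in miniature. The sketch below (Python for illustration; the actual extension is JavaScript) contrasts the suffix-matching check, which any subdomain satisfies, with the exact-match check a patched origin validator enforces:

```python
from urllib.parse import urlsplit

TRUSTED_HOST = "claude.ai"

def lax_origin_check(origin: str) -> bool:
    # The flawed pattern: trusts *.claude.ai, so any subdomain --
    # including one serving a vulnerable third-party component -- passes.
    host = urlsplit(origin).hostname or ""
    return host == TRUSTED_HOST or host.endswith("." + TRUSTED_HOST)

def strict_origin_check(origin: str) -> bool:
    # The hardened pattern: exact match on the expected host only.
    return urlsplit(origin).hostname == TRUSTED_HOST
```

Parsing the origin with a URL library also avoids lookalike bypasses such as claude.ai.evil.com, which a naive substring check on the raw string could accept.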

EU backs nude app ban and delays to landmark AI rules 

infonews
policy
Mar 26, 2026

European lawmakers voted to delay compliance deadlines for the EU AI Act, pushing back requirements for developers of high-risk AI systems (those that could seriously harm health, safety, or people's rights) until December 2027, with even later deadlines for AI used in regulated sectors like medical devices. The Parliament also backed proposals to ban nudify apps, which use AI to create fake nude images of people without consent.

The Verge (AI)

Databricks pitches Lakewatch as a cheaper SIEM — but is it really?

infonews
industry
Mar 26, 2026

Databricks has introduced Lakewatch, a new open agentic SIEM (Security Information and Event Management, a tool that collects and analyzes security logs from across a system) that aims to be cheaper than traditional security tools by charging based on compute usage rather than data ingestion. While analysts agree that SIEM costs are a real problem, they caution that Lakewatch's savings may be less straightforward than promised, since costs could shift from data storage to computing power rather than disappear entirely.

CSO Online
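
The analysts' caution, that spend shifts from ingestion to compute rather than disappearing, is easy to see with toy numbers. All rates below are hypothetical, chosen purely to illustrate the crossover:

```python
# Hypothetical rates for illustration only; not vendor pricing.
INGEST_RATE_PER_GB = 0.50    # ingestion-billed model: pay per GB of logs
COMPUTE_RATE_PER_HR = 2.00   # compute-billed model: pay per query-hour

def ingestion_billed(gb_per_month: float) -> float:
    return gb_per_month * INGEST_RATE_PER_GB

def compute_billed(query_hours_per_month: float) -> float:
    return query_hours_per_month * COMPUTE_RATE_PER_HR

# A team storing 30 TB/month but querying lightly favors compute billing...
light = compute_billed(100)            # 100 query-hours
baseline = ingestion_billed(30_000)    # 30,000 GB ingested
# ...but heavy ad-hoc threat hunting can exceed what ingestion ever cost.
heavy = compute_billed(10_000)         # 10,000 query-hours
```

Under these toy rates, light usage costs $200 against a $15,000 ingestion baseline, while heavy hunting reaches $20,000: whether the model saves money depends entirely on the query profile.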

Creator of AI actor Tilly Norwood says she received death threats over project

infonews
safety
industry
Mar 26, 2026

Eline van der Velden created an AI actor called Tilly Norwood (a digital twin, or an AI-generated copy of a person) and received death threats following global backlash against the project. Van der Velden stated she developed it to spark discussion about AI's impact on entertainment, but the reaction from Hollywood actors and unions was more severe than expected.

The Guardian Technology

OpenAI shelves erotic chatbot ‘indefinitely’

infonews
policy
safety
Mar 26, 2026

OpenAI has indefinitely paused plans to release an 'adult mode' for ChatGPT, a sexualized chatbot feature that faced criticism from employees and investors over potential harms to society. This decision is part of a broader company refocus on core products, following similar discontinuations like the text-to-video platform Sora.

The Verge (AI)

As the US Midterms Approach, AI Is Going to Emerge as a Key Issue Concerning Voters

infonews
policy
Mar 26, 2026

The Trump administration issued an executive order that blocks states from regulating AI by threatening lawsuits and funding cuts, a move that supports tech industry interests but runs against voter preferences. Polls show over 70% of voters favor state and federal regulation of AI, yet the administration sided with industry lobbyists, creating a major political divide ahead of the midterm elections. Local communities across the country are already resisting AI datacenters over environmental and energy concerns, with both progressive and Trump-supporting voters organizing together against datacenter development.

Schneier on Security

Alleged RedLine Malware Administrator Extradited to US

infonews
security
Mar 26, 2026

A person named Hambardzum Minasyan from Armenia has been extradited to the US and accused of developing and managing RedLine, an infostealer malware (malicious software that steals sensitive information like passwords and personal data from infected computers).

SecurityWeek
