aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Industry News

New tools, products, platforms, funding rounds, and company developments in AI security.

1220 items

DLSS 5: Has Nvidia’s AI graphics technology gone too far?

info · news
industry
Mar 18, 2026

Nvidia has released DLSS 5, a new 3D guided neural rendering model (an AI system that generates realistic graphics in real-time) that can alter a game's lighting and materials during gameplay. Many gamers have criticized the technology for changing how games look in ways they didn't expect, with complaints that it distorts character appearances and doesn't respect the original artists' creative vision.

The Verge (AI)

Reco targets AI agent blind spots with new security capability

info · news
security · industry

Claude Code Security and Magecart: Getting the Threat Model Right

info · news
security
Mar 18, 2026

Magecart attacks (malicious code injected into e-commerce sites to steal payment data) often hide in third-party resources like images or scripts that never enter a company's code repository, making them invisible to static analysis tools like Claude Code Security. Because Claude Code Security is designed to scan code you own, it cannot detect malicious code injected at runtime through compromised external libraries, CDNs (content delivery networks that distribute files globally), or data hidden in binary files like favicons. Repository-based scanning therefore has a fundamental blind spot for this attack class.
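One standard browser-side defense against tampered CDN-hosted scripts is Subresource Integrity (SRI), which pins the expected hash of a third-party resource so the browser refuses to execute a modified copy. A minimal sketch of computing an SRI value (the `cdn.example` URL and script contents are hypothetical):

```python
import base64
import hashlib

def sri_hash(resource_bytes: bytes) -> str:
    """Compute a Subresource Integrity (SRI) value for a third-party
    resource, so the browser refuses to run it if the CDN copy changes."""
    digest = hashlib.sha384(resource_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# The value goes in the integrity attribute of the tag loading the CDN copy:
#   <script src="https://cdn.example/lib.js" integrity="sha384-..." crossorigin="anonymous">
original = b"console.log('checkout');"
tampered = b"console.log('checkout');/* skimmer appended at the CDN */"
assert sri_hash(original) != sri_hash(tampered)
```

SRI catches runtime modification of a pinned script, which is exactly the gap a repository-only scanner cannot see; it does not help with resources that legitimately change, which is why the article frames this as a threat-model question rather than a tooling bug.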

Micron rides memory price spike into earnings with stock up 62%, drubbing its tech peers

info · news
industry
Mar 18, 2026

Micron Technology's stock has surged 62% in 2026 due to a severe shortage of memory chips (computer components that store data temporarily) needed for AI graphics processing units (GPUs, specialized chips that power artificial intelligence). The shortage is driven by massive demand from cloud companies like Amazon and Google building AI data centers, and SK Hynix estimates the memory crunch will continue for another four to five years, pushing prices higher across the industry.

We asked experts about the most responsible ways to use AI tools – here’s what they said

info · news
safety
Mar 18, 2026

The article discusses expert advice on responsible AI tool use, emphasizing that people should use AI as a brainstorming partner and for organizing information, but should not let it replace their own decision-making. A 2025 survey shows that one-third of US adults use ChatGPT, with particularly high adoption among people under 30.

Can you prove the person on the other side is real?

info · news
security · safety

China’s ‘AI tigers’ see shares surge after Nvidia CEO touts OpenClaw as ‘next ChatGPT’

info · news
industry
Mar 18, 2026

Chinese AI companies saw significant stock gains after Nvidia CEO Jensen Huang praised OpenClaw, an open-source AI agent (a program that can perform tasks independently), as "the next ChatGPT." Companies like MiniMax and Zhipu, which are among China's leading AI developers building large language models (AI systems trained on huge amounts of text to understand and generate language), have integrated OpenClaw into their products and are launching their own versions based on it.

CISOs rethink their data protection strategies

info · news
security · policy

Meta's Manus launches desktop app to bring its AI agent onto personal devices amid OpenClaw craze

info · news
industry · safety

The Best Hacker Movies (Die besten Hacker-Filme)

info · news
security
Mar 18, 2026

This is a curated list of hacker-themed films arranged chronologically, from War Games (1983) to Live Free or Die Hard (2007), intended for security professionals who enjoy cinema. The article provides plot summaries, genres, and review scores from multiple sources for each film, with a note that the list may cause procrastination.

Nvidia CEO Jensen Huang says OpenClaw is 'definitely the next ChatGPT'

info · news
industry
Mar 17, 2026

Nvidia CEO Jensen Huang highlighted OpenClaw, an open-source autonomous AI agent platform (a system that can complete tasks and make decisions with minimal human input, unlike traditional chatbots), calling it "the next ChatGPT" and a major breakthrough in AI interaction. Nvidia launched NemoClaw, an enterprise version of OpenClaw that adds security, scalability, and oversight tools to make these autonomous agents safe for real-world business use, addressing concerns about security, privacy, and control as these systems gain the ability to act independently.

The Pentagon is planning for AI companies to train on classified data, defense official says

info · news
policy · security

OpenAI preps for IPO by end of year, tells employees ChatGPT must be 'productivity tool'

info · news
industry
Mar 17, 2026

OpenAI is preparing for an initial public offering (IPO, where a private company sells shares to the public) potentially by the end of 2026, with leadership telling employees that ChatGPT must focus on being a productivity tool for businesses. The company is shifting strategy to convert its 900 million weekly users into enterprise customers and has scaled back its infrastructure spending targets from $1.4 trillion to $600 billion by 2030 to present a more realistic financial picture to investors.

GPT-5.4 mini and GPT-5.4 nano, which can describe 76,000 photos for $52

info · news
industry
Mar 17, 2026

OpenAI released two new smaller AI models, GPT-5.4 mini and GPT-5.4 nano, that are cheaper and faster than previous versions. GPT-5.4 nano is particularly affordable at $0.20 per million input tokens, making it economical for tasks like image description, where describing 76,000 photos would cost around $52.
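The pricing claim is easy to sanity-check with back-of-envelope arithmetic, assuming the $52 covers input tokens only:

```python
# Back-of-envelope check of the "76,000 photos for $52" claim,
# assuming the cost is entirely input tokens at nano's quoted rate.
price_per_token = 0.20 / 1_000_000   # $0.20 per million input tokens
total_cost = 52.0
photos = 76_000

tokens_per_photo = total_cost / photos / price_per_token
print(round(tokens_per_photo))  # ~3421 input tokens per photo
```

Roughly 3,400 input tokens per photo is a plausible budget for an image plus a short prompt, so the numbers are internally consistent.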

Nvidia NemoClaw promises to run OpenClaw agents securely

info · news
security · industry

llm 0.29

info · news
industry
Mar 17, 2026

This is a monthly briefing about LLM (large language model) developments from March 2026, curated by Simon Willison. The content appears to be a sponsorship announcement for a paid email digest service rather than a discussion of a specific AI issue or vulnerability.

What the EU AI Act Means for Staffing Businesses

info · regulatory
policy
Mar 17, 2026

The EU AI Act, effective August 2, 2026, classifies AI systems used in hiring and employment decisions (such as candidate screening, ranking, and performance monitoring) as high-risk and requires businesses that deploy them to conduct risk assessments, perform bias testing, maintain human oversight, and provide transparency disclosures. Staffing companies, recruitment platforms, and workforce intermediaries are responsible for compliance even if they did not build the technology, and this obligation applies globally if the AI system affects anyone in the EU.

AI Flaws in Amazon Bedrock, LangSmith, and SGLang Enable Data Exfiltration and RCE

high · news
security
Mar 17, 2026

Researchers discovered that Amazon Bedrock AgentCore Code Interpreter allows outbound DNS queries (the system that translates website names to IP addresses) even when configured with no network access, letting attackers steal data and run commands by using DNS as a secret communication channel. Amazon says this is intended functionality and recommends users switch to VPC mode (a virtual private network configuration) instead of sandbox mode for better isolation. Separately, a flaw in LangSmith (a tool for managing AI language model workflows) allows attackers to steal user login tokens through URL parameter injection (inserting malicious data into web addresses).
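DNS works as a covert channel because the sandbox's resolver will forward any query it receives, and the stolen bytes ride out encoded in the subdomain labels of lookups against an attacker-controlled zone. A minimal illustration of the encoding side (no network I/O is performed; `attacker.example` is a placeholder domain):

```python
import base64

MAX_LABEL = 63  # DNS limits each label to 63 bytes (RFC 1035)

def encode_exfil_queries(data: bytes, zone: str = "attacker.example") -> list[str]:
    """Illustrate (offline, no network I/O) how data can leave a
    'no network access' sandbox that still resolves DNS: each chunk
    becomes a label in a query the resolver dutifully forwards to
    the attacker's authoritative nameserver."""
    # Base32 keeps the payload within the DNS hostname character set.
    payload = base64.b32encode(data).decode("ascii").rstrip("=").lower()
    chunks = [payload[i:i + MAX_LABEL] for i in range(0, len(payload), MAX_LABEL)]
    # Sequence numbers let the attacker reassemble out-of-order queries.
    return [f"{i}.{chunk}.{zone}" for i, chunk in enumerate(chunks)]

queries = encode_exfil_queries(b"stolen config bytes")
```

This is why Amazon's recommendation of VPC mode matters: the sandbox's "no network access" setting constrains sockets, but resolution still egresses unless DNS itself is filtered.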

Now everyone in the US is getting Google’s personalized Gemini AI

info · news
industry
Mar 17, 2026

Google has expanded access to its Personal Intelligence feature, which connects various Google apps (like YouTube, Gmail, and Google Photos) to give Gemini (Google's AI assistant) more context for better responses. Previously available only to paid subscribers, this feature is now accessible to free-tier users in the US through Search, Chrome, and the Gemini app, though it remains limited to personal accounts and not business or education accounts.

Tech Giants Invest $12.5 Million in Open Source Security

info · news
policy · industry
Mar 18, 2026

Reco, a SaaS security platform, launched "Reco AI Agent Security" on March 18 to address "agent sprawl," the problem of autonomous AI agents (like Copilot and ChatGPT integrations) accessing sensitive data and taking actions across multiple systems without human oversight. The new tool gives security teams visibility and control over these AI agents by using behavior-based detection (analyzing API call patterns and workflow signatures) instead of traditional connection-based methods, identifying risks like agents with excessive permissions or misconfigured access to customer data.

Fix: Reco AI Agent Security is explicitly designed as the mitigation. According to the source, the offering provides: (1) AI agent discovery through multi-layered detection that analyzes API call patterns and service account activity to identify autonomous behavior; (2) risk analysis by correlating activity across applications and recognizing workflow signatures of automation tools like n8n, Zapier, and Make; and (3) governance and control over all AI agents operating in the SaaS ecosystem. The platform tracks OAuth connections, analyzes decision-making patterns that indicate autonomous action, and monitors cross-application activity to identify agents that traditional SSPM tools miss.
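The behavior-based idea, spotting autonomy from API call patterns rather than from OAuth connections alone, can be sketched with a toy timing heuristic. This is illustrative only and not Reco's actual algorithm: schedulers and agents tend to fire many calls at machine-regular intervals, while humans are bursty.

```python
from statistics import mean, pstdev

def looks_autonomous(request_times: list[float], min_calls: int = 20) -> bool:
    """Toy behavior-based detector (illustrative, not Reco's algorithm):
    a service account firing many API calls at machine-regular intervals
    is more likely an agent/automation than a human."""
    if len(request_times) < min_calls:
        return False
    gaps = [b - a for a, b in zip(request_times, request_times[1:])]
    # Coefficient of variation of inter-call gaps: humans are bursty
    # (high CV), schedulers and agent loops are steady (low CV).
    return pstdev(gaps) / mean(gaps) < 0.1

bot = [i * 5.0 for i in range(40)]               # one call every 5s, exactly
human = [0, 2, 3, 9, 40, 41, 42, 90, 300, 310, 311, 400, 500, 505, 900,
         1000, 1100, 1105, 1110, 1500, 1600]     # bursty, irregular
```

A real product would correlate many more signals (workflow signatures, cross-application activity, permission scope), but the single-signal version shows why this catches agents that connection-based SSPM inventories miss.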

CSO Online
The Hacker News
CNBC Technology
The Guardian Technology
Mar 18, 2026

Synthetic identity fraud, where criminals create fake people using AI-generated documents and deepfakes (realistic fake videos or audio), is becoming a major threat in estate and identity verification work. Traditional security checks that look at device fingerprints or typing patterns are no longer reliable because AI can now imitate these signals. The text explains that the real challenge by 2026 will be distinguishing legitimate people from manufactured personas, especially in high-stakes situations involving inheritance and family claims.

Fix: The source suggests moving from asking "Who is this?" to a more forensic approach: "How did this identity—and its digital footprint—come to exist?" This shift means prioritizing provenance (where the identity originated), issuer verification (confirming documents are real), and cross-channel consistency (checking if the person's presence makes sense across multiple systems) over accepting surface-level plausibility. However, the text does not provide specific technical implementations or detailed steps for executing this approach.
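Since the source gives the questions but no implementation, here is one deliberately simple way the "provenance, issuer verification, cross-channel consistency" framing could be turned into checkable signals. Every field name and threshold below is hypothetical, not from the source:

```python
from dataclasses import dataclass

@dataclass
class IdentityEvidence:
    """Hypothetical signal set; fields and thresholds are illustrative."""
    document_issuer_verified: bool   # issuer confirmed the document is genuine
    earliest_record_age_days: int    # how long the digital footprint has existed
    independent_channels: int        # bank, employer, government records, etc.

def provenance_screen(e: IdentityEvidence) -> str:
    """Ask 'how did this identity come to exist?' rather than 'who is this?'."""
    if not e.document_issuer_verified:
        return "reject: unverifiable documents"
    if e.earliest_record_age_days < 365 or e.independent_channels < 2:
        return "escalate: thin or single-channel footprint (possible synthetic)"
    return "proceed"
```

The point of the sketch is the ordering: surface plausibility (a convincing document or face) never reaches "proceed" without provenance and cross-channel corroboration first.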

CSO Online
CNBC Technology
Mar 18, 2026

CISOs (Chief Information Security Officers, the top security leaders at companies) are updating their data protection strategies because employees are rapidly sharing company data with AI tools, including public models like ChatGPT, creating new security risks. A CISO at a law firm added a new protection layer that classifies data based on whether it can be safely used with AI and invested in new monitoring tools, while also regularly evaluating new technologies to ensure controls keep pace with AI innovations.

Fix: The source describes one organization's approach: add a protection layer that classifies and tags data based on whether it could be used with AI and in what circumstances, invest in new tools to support that layer, monitor the vendor landscape for emerging capabilities, and evaluate new technologies being deployed to determine whether new controls are needed for them. However, no specific technical solutions, patches, or vendor recommendations are explicitly named in the source text.
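The classification layer described above amounts to a policy table mapping data labels to the AI tools they may reach, with deny-by-default for anything unlabeled. A minimal sketch (the labels and tool categories are hypothetical, not from the source):

```python
# Hypothetical policy table for an AI-use classification layer:
# which data classifications may be sent to which category of AI tool.
AI_USE_POLICY: dict[str, set[str]] = {
    "public":       {"public_llm", "approved_enterprise_llm"},
    "internal":     {"approved_enterprise_llm"},
    "confidential": set(),   # no AI use without case-by-case review
}

def may_send_to_ai(classification: str, tool: str) -> bool:
    """Deny by default: unknown labels or tools are blocked."""
    return tool in AI_USE_POLICY.get(classification, set())
```

In practice the hard part is the tagging itself (the monitoring tools the CISO invested in), but once data carries labels, the enforcement check is this simple.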

CSO Online
Mar 18, 2026

Meta-owned Manus launched a desktop application with a feature called 'My Computer' that allows its AI agent (a program that can complete complex, multi-step tasks automatically) to access and control files, tools, and applications directly on a user's computer, rather than only working in the cloud. This move competes with OpenClaw, a free, open-source AI agent that similarly runs on local devices. Experts have raised security and privacy concerns about giving AI agents local device access, but Manus addressed this by requiring explicit user approval before the agent executes tasks.

Fix: Manus's mitigation for security and privacy risks includes a control mechanism requiring explicit user approval before task execution. According to Manus, users can choose "Allow Once" for individual review of each action or "Always Allow" for trusted, recurring actions, keeping users "firmly in control."
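An approval gate of this shape is straightforward to sketch. This is an illustration of the "Allow Once" / "Always Allow" pattern, not Manus's code:

```python
def make_approval_gate(ask_user):
    """Gate agent actions behind explicit user approval, mimicking an
    'Allow Once' / 'Always Allow' flow (illustrative, not Manus's code).
    ask_user(action) must return 'allow_once', 'always_allow', or 'deny'."""
    always_allowed: set[str] = set()

    def approve(action: str) -> bool:
        if action in always_allowed:
            return True            # previously marked trusted; no prompt
        decision = ask_user(action)
        if decision == "always_allow":
            always_allowed.add(action)
            return True
        return decision == "allow_once"

    return approve
```

The design trade-off is visible even in the sketch: "Always Allow" removes friction for recurring actions but also removes the human from the loop for that action class, which is exactly where experts' concerns about local device access concentrate.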

CNBC Technology
CSO Online

Fix: Nvidia addressed risks with NemoClaw by building "guardrails, including privacy protections, oversight tools, and enterprise-grade security to ensure these agents can be deployed safely at scale."

CNBC Technology
Mar 17, 2026

The Pentagon is planning to let AI companies train their models on classified military data in secure facilities, which would allow the AI to learn from and embed sensitive intelligence like surveillance reports. While this could make AI systems more accurate for military tasks, experts warn it creates risks: classified information that the AI learns could accidentally be shared with people or military departments that shouldn't have access to it, potentially endangering operatives or exposing secrets.

MIT Technology Review
CNBC Technology
Simon Willison's Weblog
Mar 17, 2026

OpenClaw, a framework for running AI agents (autonomous programs that can take actions) locally on devices rather than in the cloud, has faced security concerns since its rapid rise in early 2026. Nvidia announced NemoClaw, which addresses these vulnerabilities by using OpenShell, a security layer that includes kernel-level sandboxing (isolating programs from the core system) and a privacy router that monitors and blocks unauthorized data transfers by OpenClaw.

Fix: NemoClaw's OpenShell runtime isolates OpenClaw using kernel-level sandboxing and a 'privacy router' that monitors OpenClaw's behavior and communication with other systems, stepping in to block actions if it detects OpenClaw sending sensitive data somewhere it shouldn't. OpenShell is fully open source.
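Conceptually, a privacy router is an egress filter: inspect what the agent is about to send out and block payloads that look like sensitive data leaving. A toy pattern-based version (the patterns and policy are illustrative, not NemoClaw's implementation):

```python
import re

# Toy egress filter in the spirit of a 'privacy router': inspect outbound
# payloads and block ones that look like credential or PII leakage.
SENSITIVE = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key id shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),   # PEM private key header
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                # US SSN shape
]

def egress_allowed(payload: str) -> bool:
    """Allow the outbound request only if no sensitive pattern matches."""
    return not any(p.search(payload) for p in SENSITIVE)
```

Pattern matching alone is a weak floor (it misses encoded or paraphrased secrets), which is presumably why OpenShell pairs it with kernel-level sandboxing rather than relying on content inspection by itself.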

CSO Online
Simon Willison's Weblog
EU AI Act Updates

Fix: For Amazon Bedrock: migrate from Sandbox mode to VPC mode, implement a DNS firewall to filter outbound DNS traffic, audit IAM roles to follow the principle of least privilege (giving services only the minimum permissions they need), and use strict security groups and network ACLs. For LangSmith: update to version 0.12.71 or later (released December 2025), which addresses the token theft vulnerability.
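The recommended DNS firewall can be thought of as an allowlist on outbound queries plus a heuristic for tunneling-shaped names. A minimal sketch, with an invented allowlist (the zones below are placeholders, not a recommended policy):

```python
# Illustrative allowlist; real policy would come from your VPC's actual needs.
ALLOWED_ZONES = {"amazonaws.com", "internal.example"}

def dns_query_allowed(qname: str) -> bool:
    """Allowlist outbound DNS: permit only queries under approved zones,
    and flag long names/labels typical of DNS tunneling. Illustrative
    policy, not a product configuration."""
    qname = qname.rstrip(".").lower()
    in_zone = any(qname == z or qname.endswith("." + z) for z in ALLOWED_ZONES)
    labels = qname.split(".")
    looks_like_tunnel = len(qname) > 150 or any(len(lbl) > 40 for lbl in labels)
    return in_zone and not looks_like_tunnel
```

Note the two checks are complementary: the allowlist stops queries to attacker-controlled zones outright, while the length heuristic catches tunneling attempts smuggled under an otherwise-allowed zone.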

The Hacker News
The Verge (AI)
Mar 17, 2026

Five major technology companies (Anthropic, AWS, Google, Microsoft, and OpenAI) have collectively invested $12.5 million into the Linux Foundation (a nonprofit organization that maintains critical open source software) to support long-term security improvements in open source projects. This funding aims to strengthen the security of widely-used software that many other programs depend on.

SecurityWeek