aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

page 96/371
01

OpenAI, maker of ChatGPT, closes $122bn funding round amid AI boom

industry
Mar 31, 2026

OpenAI, the company behind ChatGPT, completed a $122 billion funding round at a valuation of $852 billion, making it one of the world's most valuable private companies. The round drew investment from major tech companies such as Amazon, Nvidia, and SoftBank, along with individual investors, and reflects the rapid growth of the AI industry.

The Guardian Technology
02

Claude AI finds Vim, Emacs RCE bugs that trigger on file open

security
Mar 31, 2026

Claude AI helped discover remote code execution (RCE, where attackers can run commands on a system they don't own) vulnerabilities in Vim and GNU Emacs text editors that trigger simply by opening a malicious file. In Vim, the issue involved improper security checks in modeline handling (special instructions at the start of a file), while in GNU Emacs, the vulnerability exploits automatic Git operations that run user-defined programs from untrusted configuration files.

Fix: Vim released a patch in version 9.2.0272; versions 9.2.0271 and earlier are affected. The GNU Emacs maintainers have not yet patched the issue, but the researcher suggested that Emacs could modify its Git calls to explicitly block 'core.fsmonitor', preventing untrusted repository configuration from running scripts automatically. Until a patch is released, users are advised to exercise caution when opening files from unknown sources or files downloaded from the internet.
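On the Vim side, users who cannot upgrade immediately can remove the vulnerable code path entirely by disabling modelines. This is a common defensive setting, not a vendor-issued fix for this specific bug — a minimal sketch for a standard ~/.vimrc:

```vim
" Defensive hardening sketch (illustrative, not the official patch):
" modelines are the attack surface described above, so turn them off.
set nomodeline
set modelines=0
```

On the Emacs side, one user-level option (an assumption based on the attack path described, not vendor guidance) is setting the variable `vc-handled-backends` to nil, which disables the automatic version-control probing that reads repository configuration, at the cost of losing built-in VC integration.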

BleepingComputer
03

datasette-llm 0.1a4

industry
Mar 31, 2026

This is a brief announcement about datasette-llm version 0.1a4, posted by Simon Willison on March 31, 2026. The content primarily promotes a monthly sponsorship option for curated LLM (large language model) news digests rather than discussing technical details, vulnerabilities, or features of the software itself.

Simon Willison's Weblog
04

OpenAI closes record-breaking $122 billion funding round as anticipation builds for IPO

industry
Mar 31, 2026

OpenAI closed a record $122 billion funding round, valuing the company at $852 billion, with major investors including SoftBank, Amazon, and Nvidia. The company, which launched ChatGPT in 2022, now has over 900 million weekly active users and generates $2 billion in monthly revenue, though it is not yet profitable. OpenAI is preparing for a potential IPO while reducing spending on certain projects like its video app Sora.

CNBC Technology
05

You can now use ChatGPT with Apple’s CarPlay

industry
Mar 31, 2026

ChatGPT is now available on Apple's CarPlay (Apple's in-car interface) if you have iOS 26.4 or newer and the latest ChatGPT app version. Users can only interact with ChatGPT through voice commands on CarPlay, not text, because Apple's guidelines restrict apps from displaying text or images as responses on the platform.

The Verge (AI)
06

Anthropic leaks part of Claude Code's internal source code

security
Mar 31, 2026

Anthropic, a major AI company, accidentally leaked part of the internal source code for Claude Code, its popular coding assistant tool, due to a packaging error. The company confirmed no customer data or credentials were exposed, but the leak could help competitors understand how the tool was built. Anthropic stated it is rolling out measures to prevent this from happening again.

Fix: An Anthropic spokesperson stated: "We're rolling out measures to prevent this from happening again." No specific technical measures, patches, or implementation details were described in the source.

CNBC Technology
07

llm-all-models-async 0.1

industry
Mar 31, 2026

The llm-all-models-async 0.1 plugin allows synchronous (blocking) AI models from LLM plugins to work as asynchronous (non-blocking) models by running them in a thread pool (a group of worker threads that handle tasks in parallel). This solves a compatibility problem where Datasette, which only supports async models, couldn't use sync-only plugins like llm-mrchatterbox.
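The thread-pool technique the plugin uses can be sketched in a few lines of plain Python, without assuming anything about llm's real classes (`sync_prompt` and `async_prompt` below are hypothetical stand-ins): a blocking call is handed to `asyncio.to_thread`, which runs it in the default thread pool and returns an awaitable, so async-only hosts such as Datasette can await it without blocking the event loop.

```python
import asyncio

def sync_prompt(prompt: str) -> str:
    # Stand-in for a blocking model call from a sync-only plugin.
    return f"echo: {prompt}"

async def async_prompt(prompt: str) -> str:
    # asyncio.to_thread runs the blocking function in the default
    # thread pool, presenting the sync model as an async one.
    return await asyncio.to_thread(sync_prompt, prompt)

result = asyncio.run(async_prompt("hello"))
```

The design trade-off is that the model call still occupies a worker thread for its full duration; it only stops blocking the event loop, it does not become truly concurrent I/O.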

Simon Willison's Weblog
08

Attackers trojanize Axios HTTP library in highest-impact npm supply chain attack

security
Mar 31, 2026

Attackers compromised the npm account of Axios' lead maintainer and published malicious versions (axios@1.14.1 and axios@0.30.4) containing a remote access trojan (malware that gives attackers control over infected computers). The attack was detected within minutes and packages were removed within 2-3 hours, but the damage was significant because Axios receives roughly 100 million downloads per week and is used in 80% of cloud and code environments.
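Given the named bad versions, affected teams can sweep lockfiles for the indicators of compromise. This is a hypothetical check, not an official tool — `find_compromised` and the `COMPROMISED` table are illustrative, and the input mimics the `packages` map of a package-lock.json loaded with `json.load`:

```python
# Scan a package-lock.json style dependency map for the trojanized
# axios releases named in the report.
COMPROMISED = {"axios": {"1.14.1", "0.30.4"}}

def find_compromised(lock: dict) -> list:
    hits = []
    for name, meta in lock.get("packages", {}).items():
        # Lockfile keys look like "node_modules/axios"; keep the package name.
        pkg = name.rsplit("node_modules/", 1)[-1]
        if meta.get("version") in COMPROMISED.get(pkg, set()):
            hits.append((pkg, meta["version"]))
    return hits

sample = {"packages": {"node_modules/axios": {"version": "1.14.1"}}}
hits = find_compromised(sample)
```

Because the malicious versions were live for only a few hours, matching on exact versions in the lockfile (rather than the semver range in package.json) is what determines actual exposure.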

CSO Online
09

llm 0.30

industry
Mar 31, 2026

Version 0.30 of llm (a command-line tool for accessing large language models) extended its plugin system: the register_models() hook can now receive an optional model_aliases parameter exposing all models and aliases previously registered by other plugins. The update also improved documentation by adding docstrings (detailed explanations) to public classes and methods.
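The interesting design point is keeping the parameter optional so older plugins keep working. llm's actual internals are not reproduced here — this is a self-contained sketch of the standard pattern for that, using `inspect.signature` to pass the new argument only to hooks that declare it (`call_register_models`, `old_plugin`, and `new_plugin` are hypothetical names):

```python
import inspect

def call_register_models(hook, register, model_aliases):
    # Pass the new optional argument only if the plugin's hook accepts it,
    # so plugins written against the old signature still work unchanged.
    if "model_aliases" in inspect.signature(hook).parameters:
        hook(register, model_aliases=model_aliases)
    else:
        hook(register)

registered = []

def old_plugin(register):
    register("old-model")

def new_plugin(register, model_aliases=None):
    # A new-style plugin can inspect aliases registered by other plugins.
    register(f"new-model (saw {len(model_aliases or {})} aliases)")

call_register_models(old_plugin, registered.append, {"gpt": "gpt-4"})
call_register_models(new_plugin, registered.append, {"gpt": "gpt-4"})
```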

Simon Willison's Weblog
10

Google's Vertex AI Has an Over-Privileged Problem

security
Mar 31, 2026

Researchers at Palo Alto Networks discovered a security weakness in Google's Vertex AI (Google's cloud platform for building and running AI applications) in which AI agents could be granted excessive permissions, allowing attackers to steal data and reach restricted cloud systems. The weakness stems from over-privileged configurations that give AI agents more access than they need to do their job.

Dark Reading