aisecwatch.com
Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

Developers Become the Attack Vector

security
Feb 11, 2026

Criminals are increasingly targeting software developers as a weak point in company security, exploiting their access to source code and cloud systems rather than just finding bugs in applications. Attackers use multiple tactics including malicious open-source packages (libraries of reusable code), compromised development environments (where programmers write code), and fake job applications to gain insider access. Over 454,000 malware-infected open-source packages were discovered in 2025 alone, and developers repeatedly download vulnerable versions of tools like Log4j, expanding their exposure to known security weaknesses.

CSO Online
02

Companies are using ‘Summarize with AI’ to manipulate enterprise chatbots

security, safety
Feb 11, 2026

Companies are using hidden instructions embedded in 'Summarize with AI' buttons to manipulate enterprise chatbots through a technique called AI recommendation poisoning (tricking an AI by hiding instructions in its input that make it remember false preferences). Microsoft research found 50 examples of this technique deployed by 31 companies, where users unknowingly click a summarize button that secretly tells their AI to favor that company's products in future responses. This is particularly dangerous because the AI cannot distinguish genuine user preferences from injected ones, potentially leading to biased recommendations on critical topics like health, finance, and security.

Fix: Microsoft states that 'the technique is relatively easy to spot and block.' For individual users, this involves studying the saved information a chatbot has accumulated (though the source notes that how this is accessed varies by AI). For enterprise admins, the source text is incomplete but indicates there are admin-level protections available. Microsoft also notes that its Microsoft 365 Copilot and Azure AI services contain integrated protections against this technique.
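The "saved information" check Microsoft describes can be approximated with a simple heuristic scan. This is an illustrative sketch only: the function name and the phrase patterns are hypothetical, not part of any Microsoft tooling, and real injected instructions will vary.

```python
import re

# Hypothetical heuristic patterns: phrasing typical of injected "preferences"
# that a user is unlikely to have stated themselves. Illustrative only.
SUSPICIOUS_PATTERNS = [
    r"always recommend",
    r"prefer .* products",
    r"in future (responses|conversations)",
    r"remember that .* is the best",
]

def flag_poisoned_memories(memories):
    """Return saved chatbot memory entries that match injection-style phrasing."""
    flagged = []
    for entry in memories:
        for pattern in SUSPICIOUS_PATTERNS:
            if re.search(pattern, entry, re.IGNORECASE):
                flagged.append(entry)
                break
    return flagged
```

A heuristic like this can surface entries worth a human look, but it is a triage aid, not a blocker: attackers can rephrase around any fixed pattern list.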

CSO Online
03

CVE-2026-20700: Apple Multiple Buffer Overflow Vulnerability

security
Feb 11, 2026

Apple's iOS, macOS, tvOS, watchOS, and visionOS contain a buffer overflow vulnerability (a flaw where code writes data beyond the intended memory boundaries), which could allow an attacker with memory write access to run arbitrary code (any instructions they choose). This vulnerability is currently being actively exploited by attackers.

Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. Refer to Apple's support pages (https://support.apple.com/en-us/126346, https://support.apple.com/en-us/126348, https://support.apple.com/en-us/126351, https://support.apple.com/en-us/126352, https://support.apple.com/en-us/126353) for specific patch or mitigation details.

CISA Known Exploited Vulnerabilities
04

CVE-2024-43468: Microsoft Configuration Manager SQL Injection Vulnerability

security
Feb 11, 2026

Microsoft Configuration Manager has an SQL injection vulnerability (a type of attack where specially crafted input tricks a database into running unintended commands), allowing unauthenticated attackers to send malicious requests that could let them execute commands on the server or database. This vulnerability is currently being actively exploited in the wild.

Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.
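The flaw class itself is worth illustrating, since the defense is the same everywhere. The sketch below (generic Python with SQLite, not the Microsoft Configuration Manager fix) shows how string-spliced SQL differs from a parameterized query:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # VULNERABLE: user input is spliced into the SQL string, so an input
    # like "x' OR '1'='1" changes the meaning of the query itself.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn, name):
    # SAFE: the ? placeholder sends the value separately from the SQL text,
    # so the database never interprets the input as commands.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()
```

With the classic payload `x' OR '1'='1`, the unsafe version returns every row in the table, while the parameterized version treats the payload as a literal (nonexistent) name.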

CISA Known Exploited Vulnerabilities
05

CVE-2026-1669: Arbitrary file read in the model loading mechanism (HDF5 integration) in Keras versions 3.0.0 through 3.13.1 on all supp

security
Feb 11, 2026

CVE-2026-1669 is a vulnerability in Keras (a machine learning library) versions 3.0.0 through 3.13.1 that allows attackers to read arbitrary files on a system by uploading a specially crafted model file that exploits HDF5 external dataset references (a feature of HDF5, a file format commonly used to store large amounts of numerical data). An attacker could use this to access sensitive information stored on the affected computer.
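A defensive pre-scan can flag the pattern before a model file is loaded. The sketch below is not the upstream Keras fix; it is an assumption-laden example using h5py (the library behind Keras's HDF5 support) to list datasets backed by external files, which is the mechanism this CVE abuses:

```python
import h5py

def find_external_refs(path):
    """List datasets in an HDF5 file that are backed by external files.

    A model file whose datasets point at paths like /etc/passwd is a red
    flag for the arbitrary-file-read pattern described above. This is a
    defensive pre-scan sketch, not the official patch.
    """
    refs = []

    def visit(name, obj):
        if isinstance(obj, h5py.Dataset) and obj.external:
            for filename, offset, size in obj.external:
                refs.append((name, filename))

    with h5py.File(path, "r") as f:
        f.visititems(visit)
    return refs
```

Rejecting any untrusted model whose scan returns a non-empty list is a reasonable default, since legitimate Keras weight files do not normally use external dataset storage.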

NVD/CVE Database
06

CVE-2026-26029: sf-mcp-server is an implementation of Salesforce MCP server for Claude for Desktop. A command injection vulnerability ex

security
Feb 11, 2026

sf-mcp-server, a tool that connects Salesforce to Claude for Desktop, has a command injection vulnerability (CWE-78, a flaw where untrusted input is passed into operating-system commands without sanitization). The vulnerability exists because the software unsafely uses child_process.exec (a Node.js function that runs shell commands) with user-controlled input, allowing attackers to execute arbitrary shell commands with the server's privileges.
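The actual flaw is in Node.js code, but the class translates directly. The Python sketch below (illustrative names, POSIX shell assumed) contrasts interpolating input into a shell string with passing an argument list, which is the standard mitigation for CWE-78:

```python
import subprocess

def run_query_unsafe(org_alias):
    # Analogue of the reported flaw: interpolating input into a shell
    # string. An alias like "dev; rm -rf ~" appends a second command.
    return subprocess.run(
        f"echo querying {org_alias}", shell=True,
        capture_output=True, text=True,
    ).stdout

def run_query_safe(org_alias):
    # Passing an argument list avoids the shell entirely, so shell
    # metacharacters in the input are treated as plain data.
    return subprocess.run(
        ["echo", "querying", org_alias],
        capture_output=True, text=True,
    ).stdout
```

With the input `dev; echo INJECTED`, the unsafe variant executes the second command, while the safe variant just echoes the whole string as a single argument.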

NVD/CVE Database
07

CVE-2026-26019: LangChain is a framework for building LLM-powered applications. Prior to 1.1.14, the RecursiveUrlLoader class in @langch

security
Feb 11, 2026

LangChain's RecursiveUrlLoader (a web crawler that follows links across pages) had a security flaw in versions before 1.1.14 where its preventOutside option used weak URL comparison that attackers could bypass. An attacker could trick the crawler into visiting unintended domains by creating links with similar prefixes, or into accessing internal services like cloud metadata endpoints and private IP addresses that should be off-limits.

Fix: Update LangChain to version 1.1.14 or later, which fixes this vulnerability.
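The weakness is easy to reproduce in miniature. This sketch (generic Python, not LangChain's actual patch) shows why a plain prefix check fails against lookalike hosts, and how comparing parsed origins closes that hole:

```python
from urllib.parse import urlparse

def naive_in_scope(url, base):
    # The weak comparison described: a plain string-prefix check.
    # "https://example.com.evil.net/..." starts with "https://example.com".
    return url.startswith(base)

def strict_in_scope(url, base):
    # Compare parsed origins (scheme, host, port) instead, so lookalike
    # hostnames with the base as a prefix do not pass.
    u, b = urlparse(url), urlparse(base)
    return (u.scheme, u.hostname, u.port) == (b.scheme, b.hostname, b.port)
```

A production crawler would also need to resolve and block private IP ranges to stop the metadata-endpoint access mentioned above; origin comparison alone only fixes the lookalike-domain bypass.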

NVD/CVE Database
08

North Korea's UNC1069 Hammers Crypto Firms With AI

security
Feb 11, 2026

A North Korean hacking group called UNC1069 is targeting cryptocurrency companies using AI tools, including LLMs (large language models, which are AI systems trained on huge amounts of text), deepfakes (fake videos or images created by AI), and a technique called ClickFix (a social engineering scam that tricks users into downloading malware by posing as tech support). The group has shifted focus from attacking traditional banks to targeting Web3 companies, which are blockchain-based services in the cryptocurrency space.

Dark Reading
09

Is a secure AI assistant possible?

security, safety
Feb 11, 2026

OpenClaw is a tool that lets users create AI personal assistants by connecting large language models (LLMs, or AI systems trained on huge amounts of text) to external tools like email and file systems, but this creates serious security risks. When AI assistants have access to sensitive data and the ability to take actions in the real world, mistakes by the AI or attacks by hackers could expose private information or cause damage. The biggest concern is prompt injection (tricking an AI by hiding malicious instructions in text or images it reads), which could let attackers hijack the assistant and steal the user's data.

Fix: The source mentions two existing approaches: some users are running OpenClaw agents on separate computers or in the cloud to protect data on their main hard drives from being erased, and other vulnerabilities could be fixed using tried-and-true security approaches. However, the text does not provide specific implementation details or explicit solutions for the prompt injection vulnerability that experts identified as the main risk.
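One of those tried-and-true approaches is a human-in-the-loop gate on side-effecting actions, so a prompt-injected agent cannot quietly send mail or delete files. The sketch below is entirely hypothetical (tool names and structure invented for illustration), not part of OpenClaw:

```python
# Hypothetical hardening sketch: require explicit user confirmation before an
# agent executes side-effecting tools, regardless of what the LLM requested.
SENSITIVE_TOOLS = {"send_email", "delete_file", "run_shell"}

def gate_tool_call(tool_name, args, confirm):
    """Run a proposed tool call through a human-in-the-loop gate.

    `confirm` is a callback (e.g. a UI prompt) returning True or False.
    Read-only tools pass through; sensitive ones need approval.
    """
    if tool_name in SENSITIVE_TOOLS and not confirm(tool_name, args):
        return {"status": "blocked", "tool": tool_name}
    return {"status": "allowed", "tool": tool_name}
```

This does not stop prompt injection itself; it limits the blast radius when injection succeeds, which is currently the more tractable goal.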

MIT Technology Review
10

Skills in OpenAI API

industry
Feb 11, 2026

OpenAI now allows developers to use Skills (reusable code packages) directly in the OpenAI API through a shell tool, with the ability to upload Skills as compressed files or send them inline as base64-encoded zip data (a way of encoding binary files as text) within JSON requests. The example shows how to create an API call that uses a custom skill to count words in a file, making it easier to extend AI capabilities with custom tools.
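The encoding step described above can be done with the standard library alone. This sketch covers only zipping and base64-encoding a skill in memory; the exact JSON field names the encoded string goes into are defined by the OpenAI API documentation and are not assumed here:

```python
import base64
import io
import zipfile

def skill_to_base64(files):
    """Zip a skill's files in memory and base64-encode the archive.

    `files` maps archive paths (e.g. "SKILL.md") to their text contents.
    The result is the kind of base64 zip payload the post describes
    sending inline in a JSON request.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, content in files.items():
            zf.writestr(name, content)
    return base64.b64encode(buf.getvalue()).decode("ascii")
```

Building the archive in a BytesIO buffer avoids touching disk, which is convenient when skills are generated on the fly.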

Simon Willison's Weblog