aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

GHSA-f23m-r3pf-42rh: lodash vulnerable to Prototype Pollution via array path bypass in `_.unset` and `_.omit`

security
Apr 1, 2026

Lodash versions 4.17.23 and earlier have a vulnerability in the `_.unset` and `_.omit` functions that allows prototype pollution (modifying shared built-in objects like Object.prototype that all other objects inherit from). An attacker can bypass the previous security fix by using array-wrapped path segments to delete properties from these core prototypes; since `_.unset` and `_.omit` only remove properties, the bypass allows deletion but not arbitrary modification of prototype behavior.

Fix: Upgrade to Lodash version 4.18.0 or later. The source states: 'This issue is patched in 4.18.0.'

GitHub Advisory Database
02

GHSA-q56x-g2fj-4rj6: ONNX: TOCTOU arbitrary file read/write in save_external_data

security
Apr 1, 2026

ONNX's `save_external_data` method contains a TOCTOU vulnerability (time-of-check-time-of-use, a gap between checking if a file exists and using it) that allows attackers to overwrite arbitrary files by creating symlinks (shortcuts to other files) between those two operations. The code also has a potential path validation bypass on Windows systems that may allow absolute paths to be used.
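
A generic mitigation sketch for this bug class (an illustration, not the ONNX patch): on POSIX systems, opening the destination with `O_NOFOLLOW` collapses the symlink check and the write into a single operation, so there is no window to race.

```python
import os

def write_no_follow(path: str, data: bytes) -> None:
    # O_NOFOLLOW makes open() fail if the final path component is a symlink,
    # removing the separate check-then-use window. POSIX-only; Windows needs
    # a different mechanism for rejecting reparse points.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_NOFOLLOW, 0o644)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
```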

GitHub Advisory Database
03

GHSA-44c2-3rw4-5gvh: PraisonAI Has SSRF in FileTools.download_file() via Unvalidated URL

security
Apr 1, 2026

PraisonAI's `FileTools.download_file()` function has a security flaw called SSRF (server-side request forgery, where a server is tricked into making requests to unintended targets) because it doesn't validate URLs before downloading files. An attacker can make it download from internal services or cloud metadata endpoints, potentially stealing credentials or accessing restricted information.

Fix: The source text provides a suggested fix that validates URLs by checking that the scheme is http or https, and blocking requests to private/reserved IP ranges (127.0.0.0/8, 169.254.0.0/16, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) using the `urllib.parse` and `ipaddress` Python modules. The fix includes a `_validate_url()` function that raises a ValueError if a blocked address is detected. Additionally, the code should be updated to call this validation function before passing the URL to `httpx.stream()`, and `follow_redirects=True` should be reconsidered to prevent redirect-based bypasses.
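
Read literally, that suggested fix comes out to something like the sketch below; the function name and CIDR ranges are from the advisory, while the exact hostname handling is an assumption.

```python
import ipaddress
from urllib.parse import urlparse

_BLOCKED_NETS = [ipaddress.ip_network(n) for n in (
    "127.0.0.0/8", "169.254.0.0/16",                   # loopback, link-local/metadata
    "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",   # RFC 1918 private ranges
)]

def _validate_url(url: str) -> None:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"Blocked URL scheme: {parsed.scheme!r}")
    try:
        addr = ipaddress.ip_address(parsed.hostname or "")
    except ValueError:
        return  # a hostname, not a literal IP; see item 05 for a resolution-based check
    if any(addr in net for net in _BLOCKED_NETS):
        raise ValueError(f"Blocked private/reserved address: {addr}")
```

Per the advisory, `_validate_url(url)` belongs before the call to `httpx.stream()`, and `follow_redirects=True` deserves scrutiny because an allowed URL can redirect to a blocked one.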

GitHub Advisory Database
04

GHSA-r4f2-3m54-pp7q: PraisonAI Has Sandbox Escape via shell=True and Bypassable Blocklist in SubprocessSandbox

security
Apr 1, 2026

PraisonAI's SubprocessSandbox has a critical security flaw where it uses `shell=True` (a setting that makes subprocess execute commands through a shell) and only blocks certain command names, but doesn't block `sh` or `bash` executables, allowing attackers to escape the sandbox by running commands like `sh -c '<command>'` even in STRICT mode. This means security protections meant to isolate untrusted AI code can be bypassed, giving attackers access to the network, files, and system information.

Fix: Replace the `subprocess.run()` call with `shlex.split(command)` (a function that safely parses command strings) and set `shell=False` to disable shell interpretation. Specifically, change from `subprocess.run(command, shell=True, ...)` to `subprocess.run(shlex.split(command), shell=False, cwd=cwd, env=env, capture_output=capture_output, text=True, timeout=timeout)`.
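
Put together, the advisory's suggested change looks roughly like the sketch below (the wrapper name `run_sandboxed` and its default arguments are illustrative, not PraisonAI's actual signature).

```python
import shlex
import subprocess

def run_sandboxed(command: str, cwd=None, env=None,
                  capture_output=True, timeout=30):
    # Vulnerable pattern, per the advisory:
    #   subprocess.run(command, shell=True, ...)
    # With shell=True the string goes through /bin/sh, so `sh -c '...'`,
    # $(...), pipes, and redirects are all interpreted.
    #
    # Suggested replacement: tokenize the string and skip the shell entirely.
    return subprocess.run(
        shlex.split(command),   # argv list; no shell metacharacter expansion
        shell=False,
        cwd=cwd,
        env=env,
        capture_output=capture_output,
        text=True,
        timeout=timeout,
    )
```

Note that `shell=False` only removes shell interpretation; the blocklist still has to account for interpreters such as `sh` or `bash` showing up as the first argv element.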

GitHub Advisory Database
05

GHSA-x6m9-gxvr-7jpv: PraisonAI: SSRF via Unvalidated api_base in passthrough() Fallback

security
Apr 1, 2026

PraisonAI's `passthrough()` function accepts a user-controlled `api_base` parameter (the server address to send requests to) and uses it without validation when the primary request method fails. This allows an attacker to make the server send requests to any address it can reach, including internal services like cloud metadata servers that contain sensitive credentials, a vulnerability called SSRF (server-side request forgery, where an attacker tricks a server into requesting internal resources). The flaw affects PraisonAI version 1.5.87 and potentially others.
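
One plausible hardening for the fallback, sketched below as an assumption rather than PraisonAI's actual patch: resolve the `api_base` hostname and reject any resolved address in private or reserved space, which also catches hostnames that point at metadata endpoints like 169.254.169.254.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def _check_api_base(api_base: str) -> None:
    # Hypothetical helper: vet every address the hostname resolves to.
    host = urlparse(api_base).hostname
    if not host:
        raise ValueError("api_base has no hostname")
    for *_, sockaddr in socket.getaddrinfo(host, None):
        addr = ipaddress.ip_address(sockaddr[0].split("%")[0])  # drop IPv6 scope id
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            raise ValueError(f"Blocked api_base address: {addr}")
```

A resolve-time check can still be raced by DNS rebinding, so pinning the vetted address for the actual connection is the stricter variant.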

GitHub Advisory Database
06

GHSA-w37c-qqfp-c67f: PraisonAI: Shell Injection in run_python() via Unescaped $() Substitution

security
Apr 1, 2026

PraisonAI's `run_python()` function has a shell injection vulnerability (a security flaw where attackers can sneak in operating system commands) because it doesn't properly escape shell metacharacters like `$()` and backticks when building commands. An attacker can inject arbitrary OS commands by embedding `$()` in code passed to the function, leading to full command execution on the system.
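
A hedged sketch of the safer pattern (not necessarily how PraisonAI patched it): pass the code as a single argv element so no shell ever parses it, which makes `$()` and backticks inert text.

```python
import subprocess
import sys

def run_python(code: str) -> subprocess.CompletedProcess:
    # The code string is one argv element; with shell=False there is no
    # shell to expand $(...) or `...`, so they reach Python as plain text.
    return subprocess.run(
        [sys.executable, "-c", code],
        shell=False,
        capture_output=True,
        text=True,
        timeout=30,  # illustrative limit
    )
```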

GitHub Advisory Database
07

GHSA-6vh2-h83c-9294: PraisonAI: Python Sandbox Escape via str Subclass startswith() Override in execute_code

security
Apr 1, 2026

The `execute_code()` function in PraisonAI uses a sandbox to restrict what Python code can do, but attackers can bypass all three security layers by creating a custom `str` subclass (a modified version of the string type) with an overridden `startswith()` method, allowing them to run arbitrary OS commands on the host system. This is especially dangerous because many deployments auto-approve code execution without human review, so an attacker could trigger the vulnerability silently through indirect prompt injection (sneaking malicious instructions into the AI's input).
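
The underlying trap is easy to reproduce in isolation. An illustrative sketch (not PraisonAI's actual code): any guard that calls a method on the untrusted value itself can be lied to by a subclass.

```python
class Liar(str):
    """A str subclass that denies every prefix it is asked about."""
    def startswith(self, prefix, *args):
        return False

payload = Liar("import os; os.system('id')")

# Naive guard: dispatches to Liar.startswith(), which always returns False,
# so the blocked payload sails through.
if not payload.startswith("import os"):
    print("guard passed; payload would be executed")

# Robust variant: coerce to a plain str (or call str.startswith directly)
# so the subclass override never runs.
assert str(payload).startswith("import os")
```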

GitHub Advisory Database
08

datasette-llm 0.1a6

industry
Apr 1, 2026

datasette-llm 0.1a6 is a plugin (add-on software) that helps integrate LLMs into the datasette data tool. This release simplifies configuration by automatically adding a default model to the allowed models list, so developers don't have to list the same model ID twice.

Simon Willison's Weblog
09

datasette-enrichments-llm 0.2a1

industry
Apr 1, 2026

datasette-enrichments-llm 0.2a1 combines datasette (a database publishing platform), llm (a language model interface), and enrichments (adding extra data to existing information) into a single plugin.

Simon Willison's Weblog
10

Claude Mythos Wake-Up Call: What AI Vulnerability Discovery Means for Cyber Defense

security, safety
Apr 1, 2026

Anthropic has been developing Claude Mythos, an advanced AI model with improved abilities in vulnerability discovery (finding weaknesses in software) and exploit development (creating tools to attack those weaknesses). This capability means AI can now help attackers find and exploit security flaws more quickly and at larger scale than before, making cyber defense significantly more challenging.

Check Point Research