Llama 4 Series Vulnerability Assessment: Scout vs. Maverick
Summary
Meta's new Llama 4 models (Scout and Maverick) were tested for security vulnerabilities using Protect AI's Recon tool, which runs 450+ attack prompts across six categories including jailbreaks (attempts to make AI ignore safety rules), prompt injection (tricking an AI by hiding instructions in its input), and evasion (using obfuscation to hide malicious requests). Both models received medium-risk scores (Scout: 58/100, Maverick: 52/100), with Scout showing particular vulnerability to jailbreak attacks at 67.3% success rate, though Maverick demonstrated better overall resilience.
Classification
Affected Vendors
Related Issues
CVE-2024-27444: langchain_experimental (aka LangChain Experimental) in LangChain before 0.1.8 allows an attacker to bypass the CVE-2023-
CVE-2026-24747: PyTorch is a Python package that provides tensor computation. Prior to version 2.10.0, a vulnerability in PyTorch's `wei
Original source: https://protectai.com/blog/vulnerability-assessment-llama-4
First tracked: March 13, 2026 at 12:56 PM
Classified by LLM (prompt v3) · confidence: 85%