Your daily watch on AI and LLM security — vulnerabilities, privacy incidents, safety research, and industry developments.
Critical LLM Security Vulnerabilities Discovered: Claude Opus 4.6 identified 500+ previously unknown high-severity vulnerabilities in major open-source libraries, while researchers revealed fundamental safety weaknesses in Mixture-of-Experts models, where manipulating specific routers can bypass safety mechanisms (up to 86.3% jailbreak success) and the concentration of safety behavior in a few experts enables "lobotomy" attacks that silence safety-critical components.
Multiple SSRF and Authentication Bypass Flaws in Production Systems: LangChain versions prior to 1.2.11 contain SSRF vulnerabilities in RecursiveUrlLoader and in ChatOpenAI token counting (CVE-2026-26013) that allow attackers to access internal infrastructure and cloud metadata services, while OpenMetadata leaks highly privileged JWT tokens through API calls, enabling read-only users to escalate privileges and make destructive changes.
Emerging Threats in AI Agent Security and RAG Systems: New research exposes "retrieval pivot attacks" in hybrid RAG systems where vector-retrieved content can pivot through knowledge graphs to leak cross-tenant data (RPR up to 0.95), while the AARM specification proposes runtime security controls for autonomous agents to prevent prompt injection, confused deputy attacks, and intent drift as AI systems evolve from assistants to autonomous actors.
AI Detection and Fingerprinting Systems Face Evasion Challenges: StealthRL achieves a 99.9% attack success rate against AI-text detectors using reinforcement learning paraphrasing attacks, while researchers developed "refusal vector" fingerprinting that identifies LLM provenance with 100% accuracy across model modifications, and compositional reasoning attacks scattered across long contexts (64k tokens) successfully evade safety alignment in stronger reasoning models.
India's Deepfake Deadline and Rising Industry Investment in AI Security: India mandates that social media platforms remove illegal AI-generated content and label all synthetic content by February 20th, affecting 1 billion users, while AI security startups Outtake ($40M Series B) and Zast.AI ($6M) raise significant funding to address AI-enabled fraud, impersonation attacks, and automated vulnerability detection as threats scale beyond what manual human intervention can handle.
India has mandated that social media platforms remove illegal AI-generated content within much shorter timelines and clearly label all synthetic content, with the rules taking effect on February 20th. That gives tech companies only days to implement detection and labeling systems for deepfakes, putting immediate pressure on platforms like Instagram and X to comply in a critical market of 1 billion internet users.
The RecursiveUrlLoader class in @langchain/community had an SSRF vulnerability due to insufficient URL validation. It used String.startsWith() for URL comparison, allowing attackers to bypass the preventOutside option with domain prefix tricks (e.g., example.com.attacker.com), and had no validation against private/reserved IP addresses, enabling access to cloud metadata services and internal infrastructure.
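To make the bypass concrete, here is a minimal sketch, illustrative only and not @langchain/community source code, of why a raw prefix comparison fails as an origin check (the attacker hostname is hypothetical):

```typescript
// Illustrative sketch only, not the library's actual code.
// A prefix check on the raw URL string accepts any host that merely
// *begins* with the allowed base URL.
const baseUrl = "https://example.com";

// Vulnerable pattern: String.startsWith() on the unparsed link.
const isAllowed = (link: string): boolean => link.startsWith(baseUrl);

console.log(isAllowed("https://example.com/docs/page"));          // true (legitimate)
console.log(isAllowed("https://example.com.attacker.com/pivot")); // true (preventOutside bypassed)
```

Because the check never parses the URL, it also says nothing about where the target host actually resolves, which is why cloud metadata and internal addresses were reachable once an attacker could influence the crawled links.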
Fix: Two changes were made. First, the startsWith check was replaced with a strict origin comparison using the URL API (new URL(link).origin === new URL(baseUrl).origin), preventing subdomain-based bypasses. Second, a new URL validation module (@langchain/core/utils/ssrf) was introduced that blocks requests to cloud metadata endpoints (169.254.169.254, metadata.google.internal, etc.), private IP ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 127.0.0.0/8, etc.), IPv6 equivalents (::1, fc00::/7, fe80::/10), and non-HTTP/HTTPS schemes. Users who cannot upgrade immediately can work around the issue by avoiding RecursiveUrlLoader on untrusted or user-influenced content, or by running the crawler in a network environment with no access to cloud metadata services or internal infrastructure.
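The sketch below shows, under stated assumptions, the shape of both mitigations: a strict origin comparison plus a deny-list of metadata endpoints, private ranges, and non-HTTP(S) schemes. The function and constant names (assertSafeCrawlUrl, BLOCKED_HOSTS, etc.) are hypothetical; this is not the actual @langchain/core/utils/ssrf module, and upgrading to a patched release is the supported fix.

```typescript
// Hypothetical sketch of the two mitigations the advisory describes; names
// are illustrative and this is NOT the real @langchain/core/utils/ssrf API.

const BLOCKED_HOSTS = new Set([
  "169.254.169.254",           // AWS/GCP/Azure metadata IP
  "metadata.google.internal",  // GCP metadata hostname
  "localhost",
]);

// Private/reserved IPv4 prefixes called out in the advisory (simplified).
const PRIVATE_V4_PREFIXES = ["10.", "127.", "192.168.", "169.254."];

// 172.16.0.0/12 spans 172.16.x.x through 172.31.x.x.
function inPrivate172Range(host: string): boolean {
  const m = host.match(/^172\.(\d{1,3})\./);
  return m !== null && Number(m[1]) >= 16 && Number(m[1]) <= 31;
}

function isBlockedHost(hostname: string): boolean {
  // WHATWG URL keeps brackets around IPv6 literals; strip them first.
  const host = hostname.toLowerCase().replace(/^\[|\]$/g, "");
  if (BLOCKED_HOSTS.has(host)) return true;
  if (host.includes(":")) {
    // IPv6 loopback (::1), unique-local (fc00::/7), link-local (fe80::/10).
    return host === "::1" || host.startsWith("fc") || host.startsWith("fd") ||
           host.startsWith("fe8") || host.startsWith("fe9") ||
           host.startsWith("fea") || host.startsWith("feb");
  }
  return PRIVATE_V4_PREFIXES.some((p) => host.startsWith(p)) || inPrivate172Range(host);
}

// Validate a crawl target: http(s) only, same origin as the base URL,
// and not a metadata or private address.
export function assertSafeCrawlUrl(link: string, baseUrl: string): URL {
  const url = new URL(link);
  if (url.protocol !== "http:" && url.protocol !== "https:") {
    throw new Error(`Blocked scheme: ${url.protocol}`);
  }
  if (url.origin !== new URL(baseUrl).origin) {
    throw new Error(`Blocked cross-origin link: ${url.origin}`);
  }
  if (isBlockedHost(url.hostname)) {
    throw new Error(`Blocked private/metadata host: ${url.hostname}`);
  }
  return url;
}
```

A production-grade validator also has to resolve hostnames before connecting and re-check after redirects, since an attacker-controlled domain can simply point a DNS record at 169.254.169.254 or another internal address.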
GitHub Advisory Database