37th Chaos Communication Congress: New Important Instructions (Video + Slides)
Summary
A security researcher presented at the 37th Chaos Communication Congress about Large Language Models Application Security and prompt injection (tricking an AI by hiding instructions in its input). The talk covered security research findings and was made available in video and slide formats for public access.
Classification
Related Issues
CVE-2026-30308: In its design for automatic terminal command execution, HAI Build Code Generator offers two options: Execute safe comman
CVE-2026-40087: LangChain is a framework for building agents and LLM-powered applications. Prior to 0.3.84 and 1.2.28, LangChain's f-str
Original source: https://embracethered.com/blog/posts/2023/37c3-new-important-instructions/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 85%