The security intelligence platform for AI teams
AI security threats move fast and get buried under hype and noise. This platform was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.
Independent research. No sponsors, no paywalls, no conflicts of interest.
No new AI/LLM security issues were identified today.
Anthropic accidentally leaked nearly 2,000 internal files and 500,000 lines of code for its Claude Code AI tool. The cause was human error: an internal file mistakenly included in a software update pointed to an archive, which was quickly copied to GitHub. The leaked source code spread widely on social media and became GitHub's fastest-ever downloaded repository.
Fix: Anthropic issued copyright takedown requests to try to contain the code's spread.
The Guardian Technology