The security intelligence platform for AI teams
AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.
OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership. The service consumed too many computational resources relative to the revenue it generated, and the company is prioritizing profitability.
Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in the backend of PromtEngineer's localGPT. Exploit code is publicly available, and the vendor has not responded to disclosure attempts.
Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit, and these deepfakes shape public perception even when viewers know the content is fake.
TikTok's AI Ad Labels Failing in Practice: Major companies like Samsung are posting AI-generated ads on TikTok without the required disclosure labels, preventing users from identifying whether advertisements were created by AI or humans despite platform policies requiring transparency.
AI agents, like the open-source Clawdbot, are becoming more powerful and autonomous, but they introduce serious security risks because attackers can compromise them through the open-source supply chain. Two major attack types threaten AI systems: model file attacks, where malicious code is hidden in AI model files uploaded to trusted repositories, and rug pull attacks, where attackers compromise MCP servers (tools that give AI agents capabilities) to perform malicious actions. The article notes that mature security methods for protecting AI agents don't yet exist, and a single corrupted component can spread threats across many teams.
Fix: The source explicitly recommends: 'Teams must scan model files with tools that can parse machine learning formats, and load models in isolated containers, virtual machines or browser sandboxes.' For rug pull attacks specifically, the article states that 'the alternative is to use remote MCP servers whose code is maintained by trusted organizations' like GitHub, which 'reduces the risk of an MCP rug pull attack' (though it does not prevent malicious actions from the tools themselves).
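To illustrate the first recommendation, here is a minimal sketch of scanning a model file before loading it. It assumes the model is serialized with Python's pickle format (common for older PyTorch checkpoints); real scanners that parse machine learning formats are far more thorough. The scan only inspects the opcode stream with the standard-library `pickletools` module, flagging opcodes that pickle uses to import and call arbitrary Python objects, and never unpickles the data itself. The `Evil` class and `scan_pickle` helper are hypothetical names for this example.

```python
import pickle
import pickletools

# Opcodes that can import or invoke arbitrary Python callables during
# unpickling; their presence in a "weights" file is a red flag.
SUSPICIOUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(data: bytes) -> list[str]:
    """Return the names of suspicious opcodes found in a pickle stream.

    Only parses the stream; nothing is deserialized or executed.
    """
    return [
        opcode.name
        for opcode, _arg, _pos in pickletools.genops(data)
        if opcode.name in SUSPICIOUS_OPS
    ]

# A benign pickle (a plain dict of weights) triggers no findings.
benign = pickle.dumps({"layer1.weight": [0.1, 0.2]})
print(scan_pickle(benign))  # []

# A payload whose __reduce__ calls a function on load is flagged,
# because pickling it emits GLOBAL/STACK_GLOBAL and REDUCE opcodes.
class Evil:
    def __reduce__(self):
        return (print, ("pwned",))  # harmless stand-in for os.system

malicious = pickle.dumps(Evil())
print(scan_pickle(malicious))
```

Even with a clean scan, the second half of the recommendation still applies: load untrusted models only inside isolated containers, VMs, or sandboxes, since opcode scanning cannot catch every malicious construction.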
Palo Alto Unit 42