All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Overseas 'content farms' based in Vietnam are using AI to create fake videos and images of UK politicians, spreading them on Facebook to go viral and potentially earn money through the platform's monetization program. The fake content, called deepfakes (digitally altered videos, pictures, or audio made to look real), depicts politicians in false situations like hospital stays or compromising scenarios, and Meta has removed some pages after investigation, though new ones continue appearing daily.
Fix: The Electoral Commission is developing software to spot and combat deepfakes ahead of the Welsh and Scottish parliaments' elections in May. Additionally, Facebook has marked some false stories with warnings from third-party fact-checkers like Full Fact, and Meta removed several Vietnam-based pages after being contacted by the BBC.
BBC Technology

AI chip technology is advancing faster than data centers can be built, creating a financial risk for companies like Oracle that are investing heavily in infrastructure. OpenAI has decided not to expand its partnership with Oracle's Texas data center because it wants access to newer Nvidia chips rather than the older-generation Blackwell processors that will be ready in a year, highlighting how quickly AI hardware becomes outdated. This mismatch is particularly risky for Oracle, which is funding its $100 billion expansion primarily through debt rather than with cash from existing profitable businesses, as its competitors do.
Anthropic, an AI company, filed a lawsuit against the Department of Defense after being labeled a supply chain risk (a government designation suggesting a company could threaten critical systems). Nearly 40 employees from competing AI companies OpenAI and Google, including prominent figures, filed a legal support document expressing concerns about this decision and its implications for AI technology.
Attackers are running a campaign called 'InstallFix' that uses malvertising (ads serving malware) combined with ClickFix tactics (fake warning popups that trick users into taking action) to direct people to fake websites pretending to be Claude, an AI coding assistant. The attack exploits how developers use AI tools and command-line interfaces (text-based programs that run on computers) to execute code.
vLLM has a bypass in its SSRF (server-side request forgery, where an attacker tricks a server into making requests to unintended targets) protection because the validation layer and the HTTP client parse URLs differently. The validation uses urllib3, which treats backslashes as literal characters, but the actual requests use aiohttp with yarl, which interprets backslashes as part of the userinfo section. An attacker can craft a URL like `https://httpbin.org\@evil.com/` that passes validation for httpbin.org but actually connects to evil.com.
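The parser mismatch can be reproduced with a short, stdlib-only sketch. The regex below is a hypothetical, simplified stand-in for an allowlist validator that stops the host at a backslash (it is not vLLM's actual validation code), while Python's own urllib.parse is used to illustrate the client-side behavior the item attributes to aiohttp/yarl, since both treat everything before the last '@' in the authority as userinfo:

```python
import re
from urllib.parse import urlparse

url = "https://httpbin.org\\@evil.com/"  # the literal URL https://httpbin.org\@evil.com/

# Hypothetical validator-style parse: treat '\' (along with '/', '?', '#', '@')
# as ending the host, so the URL appears to point at httpbin.org.
validator_host = re.match(r"^https?://([^/\\?#@]*)", url).group(1)
print(validator_host)  # httpbin.org

# Client-style parse: urllib.parse scans the authority past the backslash and
# treats everything before the last '@' as userinfo, so the real host is evil.com.
client_host = urlparse(url).hostname
print(client_host)  # evil.com

# The two parsers disagree about the same string -- the differential the bypass exploits.
assert validator_host != client_host
```

The fix pattern for such bugs is to validate the URL with the same parser the HTTP client will use, rather than re-parsing it with a different library.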
Anthropic, an AI company, sued the US government after being labeled a 'supply chain risk' (a designation meaning a company's tools are considered unsafe for government use), a label the company says was retaliation for refusing to remove safety restrictions on military use of its AI tools like Claude. The company argues the government's actions violate its free speech rights and are unlawful, claiming it had been negotiating compromises with the Defense Department before the administration publicly criticized the company and directed all agencies to stop using its tools.
Anthropic launched Code Review, an AI tool that automatically checks pull requests (code change submissions for review) to catch bugs and security issues before they enter the codebase. The tool integrates with GitHub, uses multiple AI agents working in parallel to analyze code from different angles, and provides step-by-step explanations of potential problems with color-coded severity levels to help developers prioritize fixes.
Anthropic, an AI company, is suing the US Department of Defense after being labeled a 'supply chain risk' (a designation meaning the government considers the company a potential threat to national security in government contracts). The lawsuit claims this blacklisting is unlawful and violates free speech rights, stemming from a dispute over Anthropic's safety measures designed to prevent the military from using its AI models for mass surveillance or fully autonomous weapons.
Anthropic, an AI company, sued the Trump administration after being blacklisted and designated a supply chain risk (a classification usually reserved for foreign threats), which prevents the Pentagon and its contractors from using the company's AI models. The lawsuit claims the blacklist is unlawful and is causing irreparable harm by canceling government contracts and jeopardizing hundreds of millions of dollars in business. The conflict arose from disagreement over how Anthropic's AI should be used, with the Department of Defense wanting unrestricted access while Anthropic wanted safeguards against fully autonomous weapons and domestic mass surveillance.
Anthropic, a company that makes Claude (an AI assistant), is suing the Department of Defense after the agency labeled it a "supply chain risk," which prevents other companies and government agencies from using Anthropic's AI models. The conflict started because Anthropic refused to give the Pentagon unrestricted access to its technology, citing concerns about mass surveillance of Americans and fully autonomous weapons that make targeting decisions without human input. Anthropic argues the DOD's actions violate free speech protections in the Constitution.
X has added a toggle in its iOS app that claims to block Grok (an AI chatbot) from editing your photos, but the feature has a major limitation: according to the fine print, it only prevents users from tagging @Grok in replies to your images on X; it does not actually stop Grok from editing your photos.
Microsoft is launching a new premium Office subscription tier called Microsoft 365 E7 at $99 per user per month (65% more expensive than the current E5 tier) that includes Copilot (an AI assistant), identity management tools, and Agent 365 (software for managing AI agents that can perform multi-step tasks). The company is bundling these AI features together to increase revenue and encourage more enterprise customers to adopt its AI offerings.
As generative AI (systems that create new content based on patterns in training data) becomes widespread across industries, organizations need specialized security tools to protect their AI infrastructure and data from cyber threats. AI Security Posture Management (AI-SPM) is a new category of security software designed to monitor, assess, and secure AI systems, complementing existing tools like CSPM (Cloud Security Posture Management, which protects cloud environments) and DSPM (Data Security Posture Management, which prevents data breaches).
More than 30 employees from OpenAI and Google DeepMind filed a court statement supporting Anthropic in a lawsuit against the U.S. Defense Department, which labeled the AI company a supply-chain risk after Anthropic refused to let the Pentagon use its technology for mass surveillance or autonomous weapons. The employees argue that the Pentagon could have simply canceled its contract with Anthropic and purchased from another AI company instead of designating it as a supply-chain risk, a label typically reserved for foreign adversaries. They contend that if the government is allowed to punish Anthropic this way, it will harm U.S. competitiveness in AI and discourage open discussion about the risks of AI systems.
The U.S. Defense Department banned Anthropic's AI models after a review by Pentagon technology leadership, designating the company a supply chain risk (a classification historically reserved for foreign adversaries) and requiring defense contractors to certify they don't use its technology. The decision surprised many officials who considered Anthropic's models superior and had deployed them in classified military networks, and defense experts worry it sets a troubling precedent while removing a trusted AI vendor that military personnel relied on.
Fix: Anthropic's Code Review tool is the solution presented in the source. It integrates with GitHub and automatically analyzes pull requests, leaving comments on code explaining potential issues and suggested fixes. Engineering leads can enable it to run by default for all team members. The tool focuses on logical errors (not style issues), uses color-coded severity labels (red for highest severity, yellow for potential problems, purple for issues tied to preexisting code), and provides a light security analysis. Additional customized checks can be configured based on internal best practices, with deeper security analysis available through Claude Code Security.
TechCrunch

OpenAI is acquiring Promptfoo, a cybersecurity startup that provides tools to test and secure AI systems, particularly as AI agents (autonomous programs that can take actions) become more connected to real data and systems. Promptfoo's security tools will be integrated into OpenAI's Frontier platform, and OpenAI will continue supporting Promptfoo's open-source project that helps developers test different AI prompts and compare large language models (AI systems trained on massive amounts of text data).
OpenAI acquired Promptfoo, an AI security startup, to integrate its technology into OpenAI's enterprise platform for protecting AI agents from attacks. Promptfoo develops tools that help companies test security vulnerabilities in LLMs (large language models, the AI systems behind chatbots), addressing growing concerns that autonomous AI agents could be exploited to steal data or manipulate systems.
Fix: According to the source, Promptfoo's technology will be integrated into OpenAI Frontier to perform automated red-teaming (simulated attacks to find weaknesses), evaluate AI workflows for security concerns, and monitor activities for risks and compliance needs. OpenAI also stated it expects to continue building out Promptfoo's open source offering.
TechCrunch (Security)

Anthropic, a major AI company, is suing the US Department of Defense after being labeled a supply-chain risk (a company whose products or services might pose security threats if compromised). The lawsuit claims the Trump administration retaliated against Anthropic for refusing to remove safety restrictions on its AI systems, particularly regarding mass surveillance and fully autonomous weapons (systems that make lethal decisions without human involvement).
Current US laws have not kept pace with AI capabilities, creating legal ambiguity around whether the government can conduct mass surveillance on Americans using AI systems. A dispute between the Department of Defense and AI company Anthropic has exposed this gap, with the White House responding by issuing new guidelines requiring AI companies to allow 'any lawful' use of their models, though questions about what is actually lawful remain unanswered.
Microsoft Agent 365 is a unified control plane (a centralized management system) designed to help organizations track, monitor, and secure agentic AI (AI systems that can independently take actions to accomplish goals). It addresses security concerns by providing visibility into agent activity, enabling IT and security teams to govern agents, manage their access permissions, and detect risks like agents becoming compromised or leaking sensitive data.
Fix: Microsoft Agent 365 provides several built-in security measures: Agent Registry creates an inventory of all agents in an organization accessible through the Microsoft 365 admin center and Microsoft Defender workflows; Agent behavior and performance observability provides detailed reports and activity tracking; Agent risk signals across Microsoft Defender, Entra (Microsoft's identity management service), and Purview help security teams evaluate and block risky agent actions based on compromise detection and anomalies; Security policy templates automate policy enforcement across the organization; and Microsoft Entra capabilities enable secure management of agent access permissions to prevent unmanaged agents from accumulating excessive privileges.
Microsoft Security Blog