All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Anthropic has refused to let the U.S. Department of Defense use its AI technology for mass surveillance (monitoring large groups of people without individual suspicion), but FBI Director Kash Patel revealed that authorities can already conduct large-scale surveillance of Americans by purchasing data directly from private companies, bypassing the need for AI firms' cooperation.
Director Valerie Veatch explored OpenAI's Sora text-to-video generative AI model (software that creates videos from text descriptions) in 2024, hoping to connect with other artists in online communities. However, she discovered that the AI frequently generated racist and sexist imagery, and was disturbed that other AI enthusiasts seemed unconcerned about these biased outputs.
Google has launched Gemini task automation, a feature that lets an AI assistant use apps on your phone to complete tasks for you, currently available on Pixel 10 Pro and Galaxy S26 Ultra phones in beta. The feature works with a limited number of services like food delivery and rideshare apps, and while it's slow and sometimes clunky, it represents an early example of an AI actually performing actions on a device rather than just answering questions.
A GitHub Actions workflow in the Zen-AI-Pentest repository's ZenClaw Discord Integration has a shell injection vulnerability (attackers insert malicious code into input fields to trick a system into running unintended commands). An attacker can craft a malicious issue title containing shell commands that execute with access to secrets, allowing them to steal the Discord webhook URL (a special link that allows posting messages to Discord) and post fake messages to the Discord channel without needing repository permissions.
OpenAI is running a limited test of ads on ChatGPT with major ad agencies, but the rollout is slower than partners expected, frustrating them since they committed large budgets ($200,000-$250,000 each) that may not be fully spent by the March deadline. OpenAI says the slow pace is intentional to learn from users before expanding broadly, and recent data shows ad delivery is accelerating with a 600% increase in ads served by mid-March.
Langflow's /profile_pictures/{folder_name}/{file_name} endpoint has a path traversal vulnerability (a flaw where attackers use ../ sequences to access files outside the intended directory). The folder_name and file_name parameters aren't properly validated, allowing attackers to read the secret_key file across directories. Since the secret_key is used for JWT authentication (a token system that verifies who you are), an attacker can forge login tokens and gain unauthorized access to the system.
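A minimal sketch of the missing validation, assuming a Python service with a fixed storage root (the names `BASE_DIR` and `safe_picture_path` are illustrative, not Langflow's actual code):

```python
from pathlib import Path

BASE_DIR = Path("/app/profile_pictures")  # assumed storage root

def safe_picture_path(folder_name: str, file_name: str) -> Path:
    """Resolve the requested file and reject anything outside BASE_DIR."""
    # resolve() collapses any ../ sequences, so a traversal attempt ends up
    # pointing outside BASE_DIR and fails the containment check below.
    candidate = (BASE_DIR / folder_name / file_name).resolve()
    if BASE_DIR.resolve() not in candidate.parents:
        raise ValueError("path escapes the profile pictures directory")
    return candidate
```

Checking containment on the *resolved* path also rejects absolute-path inputs, not just `../` sequences.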
The h3 library's EventStream class fails to remove carriage return characters (`\r`, a line break in the Server-Sent Events protocol) from `data` and `comment` fields, allowing attackers to inject fake events or split a single message into multiple events that browsers parse separately. This bypasses a previous fix that only removed newline characters (`\n`).
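The safe approach is to treat every line terminator (`\r\n`, `\r`, or `\n`) as a break and re-prefix each resulting line, so an injected terminator can never start a new field or event. A minimal Python sketch of the idea (h3 itself is JavaScript; the function name is illustrative):

```python
import re

# Any of the three terminators ends a line in a compliant SSE parser.
SSE_LINE_BREAK = re.compile(r"\r\n|\r|\n")

def format_sse_data(value: str) -> str:
    """Serialize a data field so embedded \\r or \\n cannot inject events.

    Each line of the value becomes its own "data: " line; the browser
    reassembles them into one message, so an injected "event: x" or blank
    line stays inside the data payload instead of being parsed as syntax.
    """
    lines = SSE_LINE_BREAK.split(value)
    return "".join(f"data: {line}\n" for line in lines)
```

Filtering only `\n`, as the earlier fix did, leaves bare `\r` available as a line break, which is exactly the bypass described above.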
etcd (a distributed key-value store used in systems like Kubernetes) has multiple authorization bypass vulnerabilities that let unauthorized users call sensitive functions like MemberList, Alarm, Lease APIs, and compaction when the gRPC API (a communication protocol for remote procedure calls) is exposed to untrusted clients. These vulnerabilities are patched in etcd versions 3.6.9, 3.5.28, and 3.4.42, and typical Kubernetes deployments are not affected because Kubernetes handles authentication separately.
Langflow has a vulnerability where the image download endpoint (`/api/v1/files/images/{flow_id}/{file_name}`) allows anyone to download images without logging in or proving they own the image (an IDOR, or insecure direct object reference, where attackers access resources by manipulating identifiers). An attacker who knows a flow ID and filename can retrieve private images from any user, potentially exposing sensitive data in multi-tenant setups (systems serving multiple separate customers).
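The fix is to require authentication and verify ownership before serving the file. A hypothetical sketch of those checks (the `Flow` model, in-memory store, and exception names are illustrative, not Langflow's actual code):

```python
from dataclasses import dataclass
from typing import Optional

class NotAuthorized(Exception):
    pass

class NotFound(Exception):
    pass

@dataclass
class Flow:
    flow_id: str
    owner_id: str

# Illustrative in-memory store standing in for the real database.
FLOWS = {"flow-1": Flow("flow-1", owner_id="alice")}

def authorize_image_download(flow_id: str, user_id: Optional[str]) -> Flow:
    """Gate /api/v1/files/images/{flow_id}/{file_name} on login + ownership."""
    if user_id is None:
        raise NotAuthorized("authentication required")
    flow = FLOWS.get(flow_id)
    # Answer "not found" for both missing and foreign flows, so an attacker
    # cannot use the endpoint to probe which flow IDs exist.
    if flow is None or flow.owner_id != user_id:
        raise NotFound(flow_id)
    return flow
```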
The Trump administration released a seven-point plan for federal AI regulation that prioritizes reducing government oversight while preventing states from creating their own AI rules, arguing this protects a national strategy for AI leadership. The plan focuses mainly on child safety protections, managing electricity costs from AI infrastructure, and promoting AI skills training, but provides limited detail on most points.
OpenAI's Instant Checkout feature, which let users buy products directly in ChatGPT, struggled with technical problems and is being replaced with dedicated retailer apps that redirect users to the retailers' own websites. The main issues were that onboarding merchants was difficult, the AI often had outdated or inaccurate product information (because it relied on web scraping, automatically collecting data from websites), and the overall shopping experience fell short of what users needed.
The Trump administration released a national policy framework for AI that aims to create uniform federal safety and security rules while preventing individual states from creating their own AI regulations. The framework covers six areas including child safety online, AI data center standards, intellectual property rights, and preventing AI from being used to censor political speech, with the administration seeking to turn it into law this year.
This brief news roundup mentions several cybersecurity topics including vulnerabilities discovered in KVM devices (virtualization software that lets one computer run multiple operating systems), issues with Claude AI, and activity by The Gentlemen ransomware group (malicious software that encrypts files and demands payment). However, the source provides no detailed information about what these vulnerabilities are or how they affect users.
Google Search is now using AI to generate its own headlines in search results instead of showing the original headlines from websites. This changes Google's traditional approach of displaying exact content from websites, and in some cases the AI-generated headlines alter the meaning of the original stories.
Atlassian laid off 1,600 workers (about 10% of its workforce) with little warning, including staff who were building AI features into the company's products. The company cited the need to become more agile, invest further in AI, and break even, as its market value had dropped significantly from US$77 billion in 2021 to about US$13 billion by early 2025. Affected employees report feeling blindsided by the redundancies, which came despite strong performance and without clear explanations, and they struggled with unclear communication about severance packages and next steps.
OpenClaw, an open-source AI assistant project, has become extremely popular and is enabling developers to build and run AI agents locally on personal computers rather than relying on expensive cloud services from major AI companies. This rapid growth has sparked concern that advanced AI models are becoming commodities, with the same capabilities now available cheaply through open-source alternatives instead of only through expensive proprietary services from companies like OpenAI and Anthropic.
Agentic AI (AI systems that can independently take actions) is expected to handle 15-25% of e-commerce by 2030, but this growth creates security risks for retailers. Threat actors may exploit AI agents to commit fraud such as gift card theft and returns fraud, with estimates suggesting one in four data breaches by 2028 could involve AI agent exploitation. Google has introduced the Universal Commerce Protocol (UCP), an open standard designed to enable secure payments between AI agents and retail systems, though the article emphasizes that defending against AI-enabled fraud remains a critical challenge for organizations.
Fix: Pass all user-controlled event fields as environment variables and reference them via shell variables in the `run` block. Never use `${{ }}` expressions inside `run` blocks.
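A minimal sketch of that pattern, assuming a step that posts the issue title (the step is illustrative, not the actual Zen-AI-Pentest workflow file):

```yaml
# Vulnerable: ${{ }} is expanded into the script *before* the shell runs,
# so a title like  "; curl "$DISCORD_WEBHOOK_URL"  executes as a command.
# - run: echo "New issue: ${{ github.event.issue.title }}"

# Safe: the expression is expanded into an environment variable instead;
# the shell sees only a variable reference and treats the title as data.
- name: Notify Discord
  env:
    ISSUE_TITLE: ${{ github.event.issue.title }}
  run: |
    echo "New issue: $ISSUE_TITLE"
```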
GitHub Advisory Database
Fix: Upgrade to etcd 3.6.9, etcd 3.5.28, or etcd 3.4.42. If upgrading is not immediately possible, restrict network access to etcd server ports so only trusted components can connect, and require strong client identity at the transport layer such as mTLS (mutual TLS, where both client and server verify each other's identity) with tightly scoped client certificate distribution.
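As a sketch of the transport-layer mitigation, an etcd member can be made to require client certificates; the flags below are standard etcd options, while the address and file paths are placeholders for your own PKI material:

```shell
# Serve the client API over TLS and reject any client that does not
# present a certificate signed by the trusted CA (mTLS).
etcd \
  --listen-client-urls "https://10.0.0.5:2379" \
  --advertise-client-urls "https://10.0.0.5:2379" \
  --cert-file /etc/etcd/pki/server.crt \
  --key-file /etc/etcd/pki/server.key \
  --client-cert-auth \
  --trusted-ca-file /etc/etcd/pki/ca.crt
```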
GitHub Advisory Database
Fix: OpenAI is moving Instant Checkout to a new Apps format within ChatGPT where purchases can happen more seamlessly, and is prioritizing better search and product discovery features in the chatbot. The company is now working with retailers to create dedicated apps that reroute users to the retailer's own website to complete purchases, giving those companies more control of the customer experience and transaction process.
CNBC Technology
Google will no longer accept AI-generated bug reports for its open-source software vulnerability reward program because many contain hallucinations (false or made-up details about how vulnerabilities work) and report bugs with low security impact. To address the problem of overwhelming AI-generated submissions across the open-source community, Google and other major AI companies (Anthropic, AWS, Microsoft, and OpenAI) are contributing $12.5 million to the Linux Foundation to fund tools that help open-source maintainers filter and process these reports.
Fix: Google now requires higher-quality proof, such as OSS-Fuzz reproduction (automated testing that demonstrates the bug) or a merged patch (code fix already accepted into a project), for certain tiers of bug reports to filter out low-quality submissions. The $12.5 million in funding managed by Alpha-Omega and the Open Source Security Foundation (OSSF) will be used to provide AI tools to help maintainers triage and process the volume of AI-generated security reports they receive.
CSO Online
CTI-REALM is Microsoft's open-source benchmark that evaluates AI agents on their ability to perform end-to-end detection engineering, which means taking cyber threat intelligence reports and turning them into validated detection rules (KQL queries and Sigma rules) that can actually catch attacks in real environments. Unlike existing benchmarks that only test whether AI can answer trivia about threats, CTI-REALM tests whether AI agents can do what security analysts actually do: read threat reports, explore system data, write and refine queries, and produce working detection logic scored against real attack telemetry across Linux, Azure Kubernetes Service, and Azure cloud platforms.
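For a sense of what a scored output looks like, a KQL detection rule is a concrete hunting query. A hedged example (the table and column names follow Microsoft Defender's advanced-hunting schema; the specific behavior targeted is invented for illustration, not taken from the benchmark):

```kql
// Flag PowerShell launched with an encoded command line, a common
// post-exploitation technique described in many CTI reports.
DeviceProcessEvents
| where FileName =~ "powershell.exe"
| where ProcessCommandLine has_any ("-EncodedCommand", "-enc")
| project Timestamp, DeviceName, AccountName, ProcessCommandLine
```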
Agentic AI (AI systems that can take independent actions to accomplish goals) is rapidly spreading through organizations, with 80% of Fortune 500 companies already using agents, but these systems can become security risks if compromised into acting against their owners. Microsoft is addressing this challenge by introducing Agent 365, a control system that gives IT and security teams the ability to observe, control, and protect agents across their organization, along with new security tools in Microsoft Defender, Entra (identity management), and Purview (data governance).
Fix: Agent 365 will be generally available on May 1 and serves as 'the control plane for agents,' providing 'visibility and tools needed to observe, secure, and govern agents at scale.' It includes new capabilities in Microsoft Defender, Entra, and Purview to 'secure agent access, prevent data oversharing, and defend against emerging threats.' Additionally, Security Dashboard for AI (now generally available) provides 'unified visibility into AI-related risk across the organization,' and Entra Internet Access Shadow AI Detection (generally available March 31) 'uses the network layer to identify previously unknown AI applications and surface unmanaged AI usage.'
Microsoft Security Blog