All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Anthropic has refused to let the U.S. Department of Defense use its AI technology for mass surveillance (monitoring large groups of people without individual suspicion), but FBI Director Kash Patel revealed that authorities can already conduct large-scale surveillance of Americans by purchasing data directly from private companies, bypassing the need for AI firms' cooperation.
Director Valerie Veatch explored OpenAI's Sora text-to-video generative AI model (software that creates videos from text descriptions) in 2024, hoping to connect with other artists in online communities. However, she discovered that the AI frequently generated images containing racism and sexism, and was disturbed that other AI enthusiasts seemed unconcerned about these biased outputs.
Google has launched Gemini task automation, a feature that lets an AI assistant use apps on your phone to complete tasks for you, currently available on Pixel 10 Pro and Galaxy S26 Ultra phones in beta. The feature works with a limited number of services like food delivery and rideshare apps, and while it's slow and sometimes clunky, it represents an early example of an AI actually performing actions on a device rather than just answering questions.
OpenAI is running a limited test of ads on ChatGPT with major ad agencies, but the rollout is slower than partners expected, frustrating them since they committed large budgets ($200,000-$250,000 each) that may not be fully spent by the March deadline. OpenAI says the slow pace is intentional to learn from users before expanding broadly, and recent data shows ad delivery is accelerating with a 600% increase in ads served by mid-March.
The Trump administration released a seven-point plan for federal AI regulation that prioritizes reducing government oversight while preventing states from creating their own AI rules, arguing this protects a national strategy for AI leadership. The plan focuses mainly on child safety protections, managing electricity costs from AI infrastructure, and promoting AI skills training, but provides limited detail on most points.
OpenAI's Instant Checkout feature, which let users buy products directly in ChatGPT, struggled with technical problems and is being replaced with dedicated retailer apps that redirect users to the retailers' own websites. The main issues were that onboarding merchants was difficult, the AI often had outdated or inaccurate product information (because it relied on web scraping, automatically collecting data from websites), and the overall shopping experience fell short of what users needed.
The Trump administration released a national policy framework for AI that aims to create uniform federal safety and security rules while preventing individual states from creating their own AI regulations. The framework covers six areas including child safety online, AI data center standards, intellectual property rights, and preventing AI from being used to censor political speech, with the administration seeking to turn it into law this year.
This brief news roundup mentions several cybersecurity topics including vulnerabilities discovered in KVM devices (virtualization software that lets one computer run multiple operating systems), issues with Claude AI, and activity by The Gentlemen ransomware group (malicious software that encrypts files and demands payment). However, the source provides no detailed information about what these vulnerabilities are or how they affect users.
Google Search is now using AI to generate its own headlines in search results instead of showing the original headlines from websites. This changes Google's traditional approach of displaying exact content from websites, and in some cases the AI-generated headlines alter the meaning of the original stories.
Atlassian laid off 1,600 workers (about 10% of its workforce) with little warning, including staff who were building AI features into the company's products. The company cited the need to become more agile, invest further in AI, and break even, as its market value had dropped significantly from US$77 billion in 2021 to about US$13 billion by early 2025. Affected employees report feeling blindsided by the redundancies, which came despite strong performance and without clear explanations, and they struggled with unclear communication about severance packages and next steps.
Amazon is developing a smartphone codenamed 'Transformer' focused on its Alexa AI assistant, though Alexa won't necessarily be the main operating system. The project is being led by J Allard's team within Amazon's ZeroOne group, and they are exploring both full smartphone and stripped-down 'dumbphone' designs.
This technology news roundup covers OpenAI's plan to build an autonomous AI researcher (a fully automated agent-based system that can solve complex problems independently), with an AI research intern prototype expected by September 2026 and a full multi-agent system by 2028. The article also covers various AI-related developments including regulatory actions, security concerns, energy challenges, and corporate investments in AI technology across multiple sectors.
Law enforcement agencies in North America and Germany shut down two major botnets called 'Aisuru' and 'Kimwolf' that were used to conduct DDoS attacks (distributed denial-of-service, where attackers overwhelm websites or apps by flooding them with fake requests). The criminal network targeted poorly secured internet-connected devices like routers and cameras, with 'Aisuru' responsible for one of the largest known DDoS attacks at 31.4 terabits per second.
Resident Evil is a horror video game franchise created by Capcom that debuted in 1996 and has become one of the most successful game series ever, selling over 180 million copies worldwide across 11 main games plus numerous spinoffs, remakes, and adaptations in other media. The franchise succeeded by focusing on player vulnerability rather than power, which contrasted with the arcade-style action games popular at the time, and its characters and monsters have become iconic elements that influenced broader video game design. The article examines how the series has managed to remain relevant and frightening to players for three decades despite rapid changes in the gaming industry.
OpenClaw, an open-source AI assistant project, has become extremely popular and is enabling developers to build and run AI agents locally on personal computers rather than relying on expensive cloud services from major AI companies. This rapid growth has sparked concern that advanced AI models are becoming commodities, with the same capabilities now available cheaply through open-source alternatives instead of only through expensive proprietary services from companies like OpenAI and Anthropic.
Fix: OpenAI is moving Instant Checkout to a new Apps format within ChatGPT where purchases can happen more seamlessly, and is prioritizing better search and product discovery features in the chatbot. The company is now working with retailers to create dedicated apps that reroute users to the retailers' own websites to complete purchases, giving those companies more control over the customer experience and transaction process. (CNBC Technology)
Google will no longer accept AI-generated bug reports for its open-source software vulnerability reward program because many contain hallucinations (false or made-up details about how vulnerabilities work) or report bugs with low security impact. To address the flood of AI-generated submissions across the open-source community, Google and other major AI companies (Anthropic, AWS, Microsoft, and OpenAI) are contributing $12.5 million to the Linux Foundation to fund tools that help open-source maintainers filter and process these reports.
Fix: Google now requires higher-quality proof, such as an OSS-Fuzz reproduction (automated testing that demonstrates the bug) or a merged patch (a code fix already accepted into the project), for certain tiers of bug reports to filter out low-quality submissions. The $12.5 million in funding, managed by Alpha-Omega and the Open Source Security Foundation (OpenSSF), will be used to provide AI tools that help maintainers triage and process the volume of AI-generated security reports they receive. (CSO Online)
CTI-REALM is Microsoft's open-source benchmark that evaluates AI agents on their ability to perform end-to-end detection engineering: taking cyber threat intelligence reports and turning them into validated detection rules (KQL queries and Sigma rules) that can actually catch attacks in real environments. Unlike existing benchmarks that only test whether AI can answer trivia about threats, CTI-REALM tests whether AI agents can do what security analysts actually do: read threat reports, explore system data, write and refine queries, and produce working detection logic scored against real attack telemetry across Linux, Azure Kubernetes Service, and Azure cloud platforms.
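To make "turning a threat report into a detection rule" concrete, here is a minimal Python sketch of how a Sigma-style selection (field/value conditions, where a list means "any of these values") can be evaluated against process telemetry. The rule, field names, and events are illustrative assumptions for this sketch, not taken from the CTI-REALM benchmark itself.

```python
# Illustrative sketch of Sigma-style rule matching; field names and the
# example rule below are hypothetical, not drawn from CTI-REALM.

def matches(selection: dict, event: dict) -> bool:
    """True if every field in the selection matches the event.
    A list of expected values means 'any of these' (Sigma semantics)."""
    for field, expected in selection.items():
        value = event.get(field)
        if isinstance(expected, list):
            if value not in expected:
                return False
        elif value != expected:
            return False
    return True

def run_detection(rule: dict, events: list[dict]) -> list[dict]:
    """Return the telemetry events flagged by the rule's selection."""
    return [e for e in events if matches(rule["detection"]["selection"], e)]

# Hypothetical rule: flag download tools spawned by a web-server process,
# a common post-exploitation pattern described in threat reports.
rule = {
    "title": "Suspicious download tool spawned by web server",
    "detection": {
        "selection": {
            "ParentImage": "/usr/sbin/nginx",
            "Image": ["/usr/bin/curl", "/usr/bin/wget"],
        }
    },
}

telemetry = [
    {"ParentImage": "/usr/sbin/nginx", "Image": "/usr/bin/curl"},    # should match
    {"ParentImage": "/bin/bash", "Image": "/usr/bin/curl"},          # benign
    {"ParentImage": "/usr/sbin/nginx", "Image": "/usr/bin/python3"}, # benign
]

hits = run_detection(rule, telemetry)
print(len(hits))  # prints 1
```

The benchmark's harder step, which this sketch omits, is the refinement loop: an agent runs its draft query against real attack telemetry, inspects false positives and misses, and rewrites the rule until it scores well.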
Agentic AI (AI systems that can take independent actions to accomplish goals) is rapidly spreading through organizations, with 80% of Fortune 500 companies already using agents, but these systems can become security risks if compromised into acting against their owners. Microsoft is addressing this challenge by introducing Agent 365, a control system that gives IT and security teams the ability to observe, control, and protect agents across their organization, along with new security tools in Microsoft Defender, Entra (identity management), and Purview (data governance).
Fix: Agent 365 will be generally available on May 1 and serves as 'the control plane for agents,' providing 'visibility and tools needed to observe, secure, and govern agents at scale.' It includes new capabilities in Microsoft Defender, Entra, and Purview to 'secure agent access, prevent data oversharing, and defend against emerging threats.' Additionally, Security Dashboard for AI (now generally available) provides 'unified visibility into AI-related risk across the organization,' and Entra Internet Access Shadow AI Detection (generally available March 31) 'uses the network layer to identify previously unknown AI applications and surface unmanaged AI usage.' (Microsoft Security Blog)
OpenAI is shifting its research focus toward building an AI researcher, a fully automated agent-based system (software that can act independently to complete tasks) capable of tackling complex problems in math, physics, biology, and other fields without human intervention. The company plans to release an autonomous AI research intern by September 2026, with a more advanced multi-agent system (multiple AI agents working together) by 2028. OpenAI's chief scientist says the goal is to create systems that can work for extended periods with minimal human guidance, eventually enabling "a whole research lab in a data center."
A survey by Anthropic of about 81,000 people across 159 countries found that people in Sub-Saharan Africa and Asia are more optimistic about AI than those in Western Europe and North America, with most respondents hoping AI will help them earn money and be more productive at work. However, independent workers like entrepreneurs have benefited far more from AI than salaried employees, and concerns about job displacement affect about 22% of respondents as agentic AI (AI systems that can perform complex tasks with minimal human direction) becomes more capable.