New tools, products, platforms, funding rounds, and company developments in AI security.
Companies are quickly adopting AI tools to improve productivity and gain business advantages, but this creates new security risks. AI tools often access sensitive company data like customer records and emails, and employees may use LLMs (large language models, AI systems trained on huge amounts of text) without approval, risking accidental leaks of confidential information.
AWS Bedrock is Amazon's platform for building AI applications that connect foundation models (pre-trained AI systems) to enterprise data and systems like Salesforce and SharePoint. Researchers discovered eight attack vectors that allow attackers to exploit this connectivity, including log manipulation (hiding their tracks in audit logs), knowledge base compromise (stealing enterprise data), agent hijacking (taking control of autonomous AI agents), and prompt poisoning (corrupting AI instructions).
AI influencers are becoming a serious commercial industry, with new awards like an 'AI Personality of the Year' contest emerging alongside AI beauty pageants and music competitions. The contest, backed by companies like OpenArt, Fanvue, and ElevenLabs, aims to recognize the creative work and growing cultural influence of AI influencers.
Starlette 1.0 was released in March 2026 with breaking changes from previous versions, notably replacing the old on_startup and on_shutdown parameters with a new lifespan mechanism (an async context manager for managing app startup and shutdown). Since LLMs were trained on older Starlette code, the author created a Skill (a custom knowledge document that Claude can reference) by having Claude clone the Starlette repository, build documentation with code examples, and add it to their Claude chat so the AI could generate working Starlette 1.0 code.
Spotify is investing heavily in AI-powered music discovery tools, including a new ChatGPT integration and a Prompted Playlist feature that let users describe what they want to hear through conversation rather than traditional buttons. Spotify executives say these AI features are key to keeping subscribers engaged as music catalogs become similar across streaming apps, with their interactive AI DJ feature already used by 90 million subscribers.
Elon Musk announced plans to build a Terafab chip manufacturing plant in Austin, Texas, jointly operated by Tesla and SpaceX to produce chips for robotics, AI, and space data centers. Musk and other industry leaders are concerned that chip makers cannot produce enough chips fast enough to meet growing demand from the AI industry, though building a chip fabrication plant requires billions of dollars, many years, and specialized equipment.
OpenAI is shifting away from its aggressive spending plans to build massive data centers, instead focusing on purchasing cloud computing capacity from other providers. CEO Sam Altman acknowledged that running data centers at this scale is difficult, citing severe weather events and supply chain challenges at their Texas facility (part of the $500 billion Stargate project with Oracle and SoftBank), and the company is facing pressure from investors to demonstrate more responsible spending before its planned IPO (initial public offering, when a private company becomes publicly traded).
At the Game Developers Conference, AI tools were heavily promoted for creating game content, NPCs (non-player characters, the computer-controlled characters in games), and automating quality assurance tasks, but these AI systems were largely absent from actual commercial games being released. The gap between AI hype in the gaming industry and its real-world implementation in finished games remains significant.
Director Valerie Veatch explored OpenAI's Sora text-to-video generative AI model (software that creates videos from text descriptions) in 2024, hoping to connect with other artists in online communities. However, she discovered that the AI frequently generated images containing racism and sexism, and was disturbed that other AI enthusiasts seemed unconcerned about these biased outputs.
Google has launched Gemini task automation, a feature that lets an AI assistant use apps on your phone to complete tasks for you, currently available on Pixel 10 Pro and Galaxy S26 Ultra phones in beta. The feature works with a limited number of services like food delivery and rideshare apps, and while it's slow and sometimes clunky, it represents an early example of an AI actually performing actions on a device rather than just answering questions.
OpenAI is running a limited test of ads on ChatGPT with major ad agencies, but the rollout is slower than partners expected, frustrating them since they committed large budgets ($200,000-$250,000 each) that may not be fully spent by the March deadline. OpenAI says the slow pace is intentional to learn from users before expanding broadly, and recent data shows ad delivery is accelerating with a 600% increase in ads served by mid-March.
The Trump administration released a seven-point plan for federal AI regulation that prioritizes reducing government oversight while preventing states from creating their own AI rules, arguing this protects a national strategy for AI leadership. The plan focuses mainly on child safety protections, managing electricity costs from AI infrastructure, and promoting AI skills training, but provides limited detail on most points.
This newsletter covers multiple AI-related developments, including animal welfare advocates exploring how artificial general intelligence (AGI, a theoretical AI system that can learn and perform any intellectual task) might reduce animal suffering, the White House unveiling a light-touch AI regulation framework, and various corporate moves like OpenAI adding ads to free ChatGPT and the Pentagon adopting Palantir's AI for military targeting. The article also discusses Elon Musk being found liable for misleading Twitter investors and a case where an Australian woman's experimental brain implant was removed against her wishes despite significantly improving her quality of life.
Senator Elizabeth Warren is questioning the Department of Defense's decision to blacklist AI company Anthropic as a "supply chain risk," calling it retaliation after the company refused to let the DOD use its AI models for fully autonomous weapons or domestic mass surveillance. Anthropic has filed a lawsuit against the Trump administration, while OpenAI has secured a DOD contract despite similar concerns from lawmakers about whether safeguards exist to prevent the technology from being used for mass surveillance or autonomous weapons.
Wiz has introduced AI agents and workflows designed to help security teams respond to threats faster by automating investigation and remediation tasks. The system uses three specialized agents—Red (finds vulnerabilities), Blue (investigates threats), and Green (fixes issues)—that work together in a continuous loop to detect, analyze, and resolve security risks at machine speed rather than relying on manual human work.
Insider threats (security risks from people inside an organization) are becoming more common and damaging, with 42% of organizations reporting increased malicious insider incidents and an average cost of $13.1 million per incident. These threats come from both intentional bad actors and careless mistakes, and are worsened by new technologies like AI agents (software that can act independently with system access), remote work, and economic pressure on employees.
Organizations deploying AI tools and agents are creating new security vulnerabilities, particularly through attacks like indirect prompt injection (tricking an AI by hiding malicious instructions in its input) and agentic tool chain attacks (compromising the sequence of tools an AI agent uses). CrowdStrike is addressing this gap by expanding its Falcon platform with new AI detection and response capabilities that monitor desktop AI applications, discover shadow AI (unauthorized AI tools), and detect threats across endpoints, cloud, and SaaS environments.
Fix: CrowdStrike Falcon AIDR is extending runtime threat detection to desktop AI applications (ChatGPT, Gemini, Claude, DeepSeek, Microsoft Copilot, O365 Copilot, GitHub Copilot, and Cursor) with visibility into prompt content and the ability to detect prompt attacks and data leaks. The capability is currently in pre-beta and will be generally available in Q2. Additionally, AI Discovery in CrowdStrike Falcon Exposure Management, now generally available, automatically discovers AI-related components running on endpoints in real time, including AI apps, agents, LLM (large language model) runtimes, MCP (Model Context Protocol) servers, and IDE extensions.
CrowdStrike BlogFix: The source explicitly mentions the solution implemented: creating a Skill document. The author states "I decided to see if I could get this working with a Skill" and describes the process: "Clone Starlette from GitHub...Build a skill markdown document for this release which includes code examples of every feature." They then used the "Copy to your skills" button to add this skill to their Claude chat, enabling Claude to generate correct Starlette 1.0 code in subsequent conversations.
Simon Willison's WeblogAnthropic has refused to let the U.S. Department of Defense use its AI technology for mass surveillance (monitoring large groups of people without individual suspicion), but FBI Director Kash Patel revealed that authorities can already conduct large-scale surveillance of Americans by purchasing data directly from private companies, bypassing the need for AI firms' cooperation.
OpenClaw, an open-source AI assistant project, has become extremely popular and is enabling developers to build and run AI agents locally on personal computers rather than relying on expensive cloud services from major AI companies. This rapid growth has sparked concern that advanced AI models are becoming commodities, with the same capabilities now available cheaply through open-source alternatives instead of only through expensive proprietary services from companies like OpenAI and Anthropic.
Agentic AI (AI systems that can independently take actions) is expected to handle 15-25% of e-commerce by 2030, but this growth creates security risks for retailers. Threat actors may exploit AI agents to commit fraud such as gift card theft and returns fraud, with estimates suggesting one in four data breaches by 2028 could involve AI agent exploitation. Google has introduced the Universal Commerce Protocol (UCP), an open standard designed to enable secure payments between AI agents and retail systems, though the article emphasizes that defending against AI-enabled fraud remains a critical challenge for organizations.