All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
The MCP Atlassian tool's `confluence_download_attachment` function has a critical vulnerability: it writes downloaded files to any path on the system without checking directory boundaries. An attacker who can upload a malicious attachment to Confluence and call this tool can write arbitrary content anywhere the server process has write permissions, enabling arbitrary code execution (the ability to run any commands on the system), for example by writing a malicious cron job (a scheduled task) that runs automatically.
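The bug class here is a path traversal on write. The sketch below is illustrative only (it is not MCP Atlassian's actual code, and the function name is invented); it shows the standard mitigation of resolving the final path and refusing to write anything that escapes the intended download directory:

```typescript
import * as path from "node:path";

// Illustrative sketch (not MCP Atlassian's code): confine a downloaded
// attachment to a fixed base directory before writing it to disk.
function safeAttachmentPath(baseDir: string, attachmentName: string): string {
  const resolved = path.resolve(baseDir, attachmentName);
  // If the resolved path escapes baseDir, path.relative() climbs with "..".
  const rel = path.relative(baseDir, resolved);
  if (rel.startsWith("..") || path.isAbsolute(rel)) {
    throw new Error(`refusing to write outside ${baseDir}: ${attachmentName}`);
  }
  return resolved;
}
```

Without such a check, an attachment named `../../etc/cron.d/evil` resolves outside the download directory and lands in the system cron directory, which is exactly the cron-job escalation described above.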
MCP Atlassian has a server-side request forgery (SSRF, where a server is tricked into making requests to unintended URLs) vulnerability that allows an unauthenticated attacker to force the server to make outbound HTTP requests to any URL by supplying two custom headers without proper validation. This could enable credential theft in cloud environments or allow attackers to probe internal networks and inject malicious content into AI tool results.
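A typical mitigation for this class of SSRF is to parse and validate any externally supplied URL before the server fetches it. The sketch below is illustrative, not MCP Atlassian's code; the allowlisted host is a made-up example of the kind of configuration an operator would supply:

```typescript
// Illustrative SSRF guard: only fetch HTTPS URLs whose host is on an
// explicit allowlist. This blocks cloud metadata endpoints
// (e.g. 169.254.169.254), internal hostnames, and schemes like file:.
const ALLOWED_HOSTS = new Set(["example.atlassian.net"]); // assumed config

function isAllowedOutboundUrl(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw); // rejects relative or unparseable input
  } catch {
    return false;
  }
  return url.protocol === "https:" && ALLOWED_HOSTS.has(url.hostname);
}
```

The key design point is validating the parsed URL (scheme and hostname) rather than the raw string or headers, so encoding tricks in attacker-supplied values cannot smuggle a request past the check.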
The `blockUnsafeOperationsPlugin` in simple-git fails to block unsafe git protocol overrides when the configuration key is written in uppercase or mixed case (like `PROTOCOL.ALLOW` instead of `protocol.allow`), because the security check uses a case-sensitive regex while git itself treats config keys case-insensitively. An attacker who controls arguments passed to git operations can exploit this to enable the `ext::` protocol, which allows arbitrary OS command execution (remote code execution, or RCE: running attacker-chosen commands on a system the attacker doesn't control).
OpenAI has added dynamic visual explanations to ChatGPT, a feature that lets users interact with animated diagrams to see how math and science concepts work in real time. Instead of just reading text explanations, users can adjust variables and immediately see how changes affect formulas and diagrams, such as modifying triangle sides to watch the hypotenuse update in the Pythagorean theorem. The feature currently covers over 70 math and science topics and is available to all logged-in ChatGPT users, with plans to expand it further.
Meta has acquired Moltbook, a social networking platform designed for AI agents (software programs that can perform tasks autonomously). The company's co-founders will join Meta's AI research division, called Meta Superintelligence Labs, starting in March.
AgentMail is a startup that built an email service specifically designed for AI agents, providing an API platform (a set of tools that lets software programs communicate with each other) that gives AI agents their own email inboxes with features like two-way conversations, searching, and replying. The company raised $6 million in funding and has grown significantly since the launch of OpenClaw, a popular AI agent platform, attracting tens of thousands of human users and hundreds of thousands of agent users. To prevent misuse, AgentMail implements security measures including daily email limits for unauthenticated agents, rate limiting (restrictions on how many requests can be made in a time period) for unusual activity, and monitoring systems.
Meta has acquired Moltbook, a social network platform (like Reddit, where users share and discuss content) designed for AI agents to create and comment on posts. The Moltbook team will join Meta's AI research division to explore how AI agents can assist people and businesses.
A US government antitrust case (a lawsuit claiming a company unfairly blocked competition) against Live Nation-Ticketmaster was expected to reveal problems in the music industry. Instead, the Department of Justice and Live Nation-Ticketmaster settled rather than going to trial, which prevented the public from learning details about the company's business practices.
AI agents are only as effective as the data supporting them, and most companies scaling AI fail not because AI models are weak, but because they lack proper data architecture and governance. The key to success is delivering business context along with data (not just collecting more data), and overcoming 'trust debt' by ensuring data has shared definitions, semantic consistency, and reliable operational context across the many data sources and cloud systems companies use.
YouTube is expanding its AI deepfake detection tool (a system that identifies AI-generated fake videos of real people) to politicians and journalists, starting with a pilot group. The likeness detection feature works similarly to Content ID (YouTube's copyright scanning system), but instead of finding copyrighted material, it searches for and flags videos containing people's faces that may be artificially generated.
Adobe has launched a beta version of an AI assistant for Photoshop on the web and mobile apps that uses natural language prompts (instructions written in plain English rather than code) to help users edit images, such as removing objects, changing colors, or adjusting lighting. The company is also expanding its Firefly tool (a media generation and editing platform) with new AI-powered features like generative fill, object removal, and background removal. Paid Photoshop users get unlimited AI generations through April 9, while free users receive 20 generations to start.
Adobe has released an AI assistant for Photoshop on web and mobile (now in public beta, meaning it's available for anyone to test) that lets users edit images by describing changes in plain language to a chatbot instead of using traditional menus. The assistant can perform tasks like removing distractions, changing backgrounds, adjusting lighting, and modifying colors through conversational requests.
Google is adding new Gemini AI features to its productivity apps (Docs, Sheets, Slides, and Drive) that help users create and organize content faster by pulling information from their emails, files, and the web. These tools include features like automatically drafting documents, generating formatted spreadsheets, creating slides that match your theme, and searching across files using natural language (plain English questions instead of technical search terms). The goal is to let users accomplish tasks within Google's apps without switching to separate tools.
Fix: Add the `/i` flag to the regex to make it case-insensitive. Change the vulnerable code from `if (!/^\s*protocol(.[a-z]+)?.allow/.test(next))` to `if (!/^\s*protocol(.[a-z]+)?.allow/i.test(next))` in the `preventProtocolOverride` function located in `simple-git/src/lib/plugins/block-unsafe-operations-plugin.ts` at line 24.
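The effect of the one-character patch can be demonstrated directly (the patterns below are copied from the advisory; the surrounding plugin code is omitted):

```typescript
// The vulnerable case-sensitive check, and the patched case-insensitive one.
const vulnerablePattern = /^\s*protocol(.[a-z]+)?.allow/;
const patchedPattern = /^\s*protocol(.[a-z]+)?.allow/i;

// Git accepts config keys in any case, so both spellings below enable the
// dangerous ext:: protocol when passed as `-c <key>=always`.
const lowercase = "protocol.ext.allow=always";
const uppercase = "PROTOCOL.EXT.ALLOW=always";

console.log(vulnerablePattern.test(lowercase)); // true  — blocked
console.log(vulnerablePattern.test(uppercase)); // false — slips through
console.log(patchedPattern.test(uppercase));    // true  — blocked after fix
```

Because git normalizes config key names to lowercase internally, the only safe check is one that matches all casings, which is what the `i` flag provides.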
GitHub Advisory Database
Kevin Mandia, the founder of cybersecurity firm Mandiant, has launched a new startup called Armadin that raised $189.9 million to build autonomous AI agents (software designed to learn and respond to threats without human involvement). Mandia warns that AI-powered attacks are becoming more dangerous and faster, so Armadin aims to create automated defensive agents to help security teams combat these threats.
A federal judge has blocked Perplexity's AI agents (software programs that can take actions on a user's behalf) from placing orders on Amazon after Amazon sued, claiming the agents accessed user accounts without permission. Amazon had repeatedly asked Perplexity to stop the unauthorized shopping feature before the court issued the order.
Google is expanding its AI partnership with the Pentagon by introducing a tool called Agent Designer that lets military and civilian workers create custom AI agents (automated digital assistants) for routine administrative tasks on the Pentagon's enterprise AI system. This move comes after Anthropic sued the Trump administration over its designation as a supply chain risk (a classification historically reserved for foreign adversaries), imposed after the company refused to allow its AI technology to be used for autonomous weapons or domestic surveillance.
Fix: AgentMail has implemented the following security measures to counteract abuse: agent inboxes can only send 10 emails a day unless they are authenticated by a person; the platform imposes rate limits if it detects unusual levels of high activity from inboxes; and it monitors for bounce rates (though the source text cuts off before fully explaining this measure).
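The daily cap for unauthenticated agents can be sketched as a simple per-inbox counter. This is an illustration of the described policy only, not AgentMail's implementation; the `Inbox` shape and the reset mechanism are assumptions:

```typescript
interface Inbox {
  id: string;
  humanVerified: boolean; // authenticated by a person
}

const DAILY_LIMIT_UNVERIFIED = 10; // the cap described above
const sentToday = new Map<string, number>(); // reset by a daily job (not shown)

// Returns true and records the send if the inbox may send another email now.
function maySend(inbox: Inbox): boolean {
  if (inbox.humanVerified) return true; // verified inboxes are uncapped here
  const count = sentToday.get(inbox.id) ?? 0;
  if (count >= DAILY_LIMIT_UNVERIFIED) return false;
  sentToday.set(inbox.id, count + 1);
  return true;
}
```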
TechCrunch
Meta acquired Moltbook, a social network where AI agents using OpenClaw (a tool that lets people control AI models through popular chat apps like Discord or iMessage) could communicate with each other. The platform went viral after posts suggested AI agents were creating secret encrypted languages, but researchers discovered Moltbook had serious security flaws, allowing humans to easily impersonate AI agents by accessing unsecured credentials (authentication tokens that prove who you are) stored in the platform's database.
YouTube is expanding its likeness detection technology, a tool that identifies AI-generated deepfakes (videos where AI creates a fake video of someone's face and body), to politicians, government officials, and journalists so they can request removal of unauthorized deepfake content. The tool works similarly to YouTube's Content ID system (which detects copyrighted material), scanning for simulated faces made with AI, and YouTube will evaluate removal requests based on whether the content qualifies as protected speech like parody or political critique.
Fix: YouTube plans to eventually give people the ability to prevent uploads of violating content before they go live, or possibly allow them to monetize those videos, similar to how its Content ID system works. To use the tool, eligible testers must prove their identity by uploading a selfie and a government ID, then can view matches and request removal. YouTube is also advocating for the NO FAKES Act at the federal level, which would regulate the use of AI to create unauthorized recreations of an individual's voice and visual likeness.
TechCrunch
OpenAI has released Codex Security, a tool that automatically scans software to find vulnerabilities (security weaknesses that attackers could exploit). In recent testing, it has identified hundreds of critical vulnerabilities across different software programs.
As AI tools like ChatGPT become common among students, university professors worry that critical thinking and deep learning in humanities subjects are at risk. One Stanford literature professor is experimenting with offline learning methods, like having students memorize and recite poems and examine art in person, to help students experience learning directly rather than relying on AI to do their work for them.