All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Google's Gemini AI can now automate tasks like booking Ubers or ordering food through DoorDash on certain Pixel 10 and Samsung Galaxy S26 phones. When you give Gemini a command like 'Get me an Uber to the Palace of Fine Arts,' it launches the app in a virtual window, completes the steps automatically, and lets you watch, pause, or take control if needed, though you must submit the final order yourself.
Google announced new Gemini features for Android phones that can automate multi-step tasks like ordering food or rides, along with improvements to scam detection and search capabilities. The automation feature is currently in beta and limited to certain apps and devices in the U.S. and Korea. To prevent problems, Google added protections so automations require explicit user commands, can be monitored and stopped in real time, and run in a secure virtual environment (an isolated space on your phone) that can only access limited apps.
OpenAI is rolling out ads to free and paid users of ChatGPT and says the process will be gradual and iterative. The company's COO emphasized that maintaining user privacy and trust is essential, and that well-designed ads can improve the user experience rather than detract from it.
Anthropic released a new Claude Code feature called "Remote Control" that lets you start a session on your computer and then control it remotely using Claude on web, iOS, and desktop apps by sending prompts to that session. The feature currently has several bugs, including permission approval issues, API errors, and problems with session termination, though the author expects these to be fixed soon.
Researchers discovered three security vulnerabilities in Anthropic's Claude Code (an AI-powered coding assistant) that could allow attackers to run arbitrary commands on a developer's computer and steal API keys (authentication credentials) simply by tricking users into opening malicious project folders. The vulnerabilities exploited configuration files and automation systems to bypass safety prompts and execute malicious code without user consent.
Peter Steinberger, creator of OpenClaw (an AI agent that works through WhatsApp), shares advice for developers building with AI: focus on exploration and experimentation rather than having a perfect plan from the start. He emphasizes that working with AI is a learnable skill, like learning guitar, and recommends approaching it playfully and iteratively rather than expecting immediate expertise.
According to IBM X-Force data from 2025, more than half of the 400,000 tracked vulnerabilities (56%) could be exploited without requiring authentication (the process of verifying who you are). This means attackers can exploit these security flaws without needing to log in or have legitimate access to a system.
Fickling (a Python library for analyzing pickle files, a Python serialization format) has a safety bypass where dangerous operations like network connections and file access are falsely marked as safe when certain opcodes (REDUCE and BUILD, which are pickle instructions) appear in sequence. Attackers can add a simple BUILD opcode to any malicious pickle to evade all five of fickling's safety detection methods.
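The opcode-level view that a scanner like fickling relies on can be illustrated with the standard library alone. The helper below is a hypothetical sketch, not fickling's real API: it lists a pickle's opcodes, the level at which the REDUCE/BUILD sequencing named in the advisory becomes visible.

```python
import pickle
import pickletools

# Hypothetical helper (not fickling's API): list the opcode names in a
# pickle stream, where REDUCE/BUILD sequencing can be inspected.
def opcode_names(data: bytes) -> list[str]:
    return [op.name for op, _arg, _pos in pickletools.genops(data)]

# A plain dict pickles without REDUCE: nothing to flag.
benign = pickle.dumps({"a": 1})

# A custom __reduce__ makes unpickling call an arbitrary callable,
# which appears as a REDUCE opcode a scanner can flag.
class Reconstructed:
    def __reduce__(self):
        return (dict, ())  # harmless here, but the same shape attackers abuse

suspicious = pickle.dumps(Reconstructed())

print("REDUCE" in opcode_names(benign))      # False
print("REDUCE" in opcode_names(suspicious))  # True
```

The advisory's point is that detection at this level is brittle: appending an innocuous-looking opcode such as BUILD changed how fickling classified otherwise identical payloads.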
Anthropic executives have suggested in recent interviews that Claude (their AI model) might be alive or conscious in some way, though the company denies Claude is alive in the way biological organisms are. Anthropic avoids stating directly whether Claude is conscious, treating "alive" as a deliberately loaded word while focusing on model welfare research.
Atlassian has released a new feature called 'agents in Jira' that lets teams assign work to AI agents (programs that can perform tasks automatically) from the same project management dashboard used for human workers. The update tracks agent progress, sets deadlines, and allows companies to compare how AI agents perform against human employees on the same projects, potentially helping enterprises decide where AI automation is most valuable.
Stock prices for major cybersecurity companies have dropped significantly because of concerns that AI tools, specifically Claude's new vulnerability scanner (a tool that automatically finds security flaws in software), are disrupting the cybersecurity business.
Security teams typically report many activity metrics (like blocked attacks and patched vulnerabilities), but experts argue that boards need different information: risk signals that show whether danger is increasing or decreasing and how fast the organization detects and contains problems. Effective board-level security reporting should focus on business impact (financial loss, regulatory exposure, operational disruption) rather than technical details, using measures like detection speed and containment time that non-technical decision-makers can understand.
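The detection- and containment-speed measures described above can be computed directly from incident timestamps. A minimal sketch with made-up incident data (the field names are assumptions, not a standard schema):

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when each incident started, was
# detected, and was contained (timestamps are illustrative).
incidents = [
    {"start": datetime(2026, 1, 3, 9, 0),
     "detected": datetime(2026, 1, 3, 11, 0),
     "contained": datetime(2026, 1, 3, 15, 0)},
    {"start": datetime(2026, 1, 10, 2, 0),
     "detected": datetime(2026, 1, 10, 8, 0),
     "contained": datetime(2026, 1, 10, 20, 0)},
]

def hours(delta):
    return delta.total_seconds() / 3600

# Mean time to detect (MTTD) and mean time to contain (MTTC):
# trend lines a board can read, unlike raw counts of blocked attacks.
mttd = mean(hours(i["detected"] - i["start"]) for i in incidents)
mttc = mean(hours(i["contained"] - i["detected"]) for i in incidents)
print(f"MTTD: {mttd:.1f}h, MTTC: {mttc:.1f}h")  # MTTD: 4.0h, MTTC: 8.0h
```

Tracked quarter over quarter, these two numbers answer the board-level question directly: is risk shrinking, and are we getting faster.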
Enclave is a secure JavaScript sandbox designed to safely run code from AI agents, but versions before 2.11.1 had a vulnerability that allowed attackers to escape the security boundaries and achieve RCE (remote code execution, where an attacker can run commands on a system they don't own). This weakness is related to code injection (CWE-94, a type of bug where untrusted input is used to generate code that gets executed).
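CWE-94 in general terms: untrusted input reaches a code-evaluation path. The sketch below is illustrative only (Enclave itself is a JavaScript sandbox, and its bug was a sandbox escape, not this exact pattern); it contrasts the dangerous pattern with a literal-only parser.

```python
import ast

# CWE-94 in miniature: attacker-controlled text fed to eval() becomes
# executable code, e.g. '__import__("os").system("id")'.
def risky_parse(user_input: str):
    return eval(user_input)  # DON'T: code injection

# Safer alternative: ast.literal_eval accepts only literals
# (numbers, strings, lists, dicts, ...) and raises on anything else.
def safe_parse(user_input: str):
    return ast.literal_eval(user_input)

print(safe_parse("[1, 2, 3]"))  # [1, 2, 3]
try:
    safe_parse('__import__("os").system("id")')
except ValueError:
    print("rejected")  # injection attempt refused
```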
Between January and February 2026, a Russian-speaking hacker compromised over 600 FortiGate firewalls (network security devices that filter traffic) by first targeting ones with weak passwords, then using an AI tool based on Google Gemini to access other devices on the same networks. Security researchers at AWS found that the attacker's reconnaissance tools (software used to gather information about a system) were written in Go and Python and showed signs of AI-generated code, suggesting threat actors are increasingly using AI to automate and scale their attacks.
Parse Dashboard versions 7.3.0-alpha.42 through 9.0.0-alpha.7 have a CSRF vulnerability (cross-site request forgery, where an attacker tricks a logged-in user into unknowingly sending requests to a website). An attacker can create a malicious webpage that, when visited by someone authenticated to Parse Dashboard, forces their browser to send unwanted requests to the AI Agent API endpoint without their knowledge. This vulnerability is fixed in version 9.0.0-alpha.8 and later.
Parse Dashboard versions 7.3.0-alpha.42 through 9.0.0-alpha.7 have a security flaw in the AI Agent API endpoint (a feature for managing Parse Server apps) where authorization checks are missing, allowing authenticated users to access other apps' data and read-only users to perform write and delete operations they shouldn't be allowed to do. Only dashboards with the agent feature enabled are vulnerable to this issue.
Parse Dashboard versions 7.3.0-alpha.42 through 9.0.0-alpha.7 have security vulnerabilities in the AI Agent API endpoint that allow unauthenticated attackers to read and write data from any connected database using the master key (a special admin credential that grants full access). The agent feature must be enabled to be vulnerable, so dashboards without it are safe.
Fix: All three Claude Code vulnerabilities have been patched: the first in version 1.0.87 (September 2025), CVE-2025-59536 in version 1.0.111 (October 2025), and CVE-2026-21852 in version 2.0.65 (January 2026). Users should update to these versions or later.
(Source: The Hacker News)
About 12% of U.S. teenagers use AI chatbots for emotional support or advice, alongside more common uses like searching for information and getting homework help. Mental health professionals warn that general-purpose AI tools like ChatGPT are not designed for this purpose and can isolate users from real-world connections and relationships, potentially causing serious psychological harm.
Fix: Character.AI disabled chatbot access for users under 18 following lawsuits related to teen suicides. OpenAI sunset (discontinued) its GPT-4o model, which users had relied on for emotional support.
(Source: TechCrunch)
Fix: Potentially unsafe modules have been added to a blocklist in https://github.com/trailofbits/fickling/commit/0c4558d950daf70e134090573450ddcedaf10400.
(Source: GitHub Advisory Database)
A researcher demonstrated how easily AI systems can be manipulated by creating false information on a personal website, which major chatbots like Google's Gemini and ChatGPT then repeated as fact within 24 hours, showing that AI training data poisoning (deliberately adding fake information to the data used to teach AI models) is a serious problem because it's so simple to execute.
Fix: The issue is fixed in version 2.11.1; update to that version or later.
(Source: NVD/CVE Database)
As companies adopt generative and agentic AI (AI systems that can take actions autonomously), they need to update their GRC (Governance, Risk & Compliance, the framework for managing rules, risks, and regulatory requirements) programs to account for AI-related risks. According to a 2025 security report, about 1 in 80 requests from company devices to AI services poses a high risk of exposing sensitive data, yet only 24% of companies have implemented comprehensive AI-GRC policies.
Fix: The article recommends several explicit approaches: (1) foster broad organizational acceptance of risk management by promoting cooperation, so all employees understand they must work together; (2) develop both strategic and tactical approaches to define the different types of AI tools, assess their relative risks, and weigh their potential benefits; (3) use tactical measures, including Secure-by-Design approaches (building security into AI tools from the start), initiatives to detect shadow AI (unauthorized AI use), and risk-based AI inventory and classification, to focus resources on the highest-impact risks without creating burdensome processes; (4) make the risks of specific AI measures transparent to business leadership rather than simply approving or rejecting AI use.
(Source: CSO Online)
Fix: According to AWS security experts, the best protection against such attacks is to use strong passwords and enable Multi-Factor Authentication (MFA, a security method requiring multiple verification steps to prove identity). The report notes that the attacker repeatedly failed when attempting to compromise patched or hardened systems (computers updated with security fixes and configured defensively), so he targeted easier victims instead.
(Source: CSO Online)
Fix: Update to version 9.0.0-alpha.8 or later, which adds CSRF middleware (code that checks requests are legitimate) to the agent endpoint and embeds a CSRF token (a secret code) in the dashboard page. Alternatively, remove the `agent` configuration block from your dashboard configuration file as a temporary workaround.
(Source: NVD/CVE Database)
Fix: Update to version 9.0.0-alpha.8 or later, which adds authorization checks and restricts read-only users to a limited key with write permissions removed server-side (the server prevents writes even if requested). As a temporary workaround, remove the `agent` configuration block from your dashboard configuration file.
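The server-side restriction described here (a read-only user cannot write even if the request asks to) can be sketched as a simple authorization gate; the role names and app identifiers below are assumptions for illustration, not Parse Dashboard's code.

```python
# Hypothetical authorization gate: the server decides per app and per
# role, never trusting what the client claims it may do.
PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "readonly": {"read"},
}

def authorize(user_apps: set[str], role: str, app: str, operation: str) -> bool:
    if app not in user_apps:  # per-app check: no access to other apps' data
        return False
    return operation in PERMISSIONS.get(role, set())

print(authorize({"app1"}, "readonly", "app1", "read"))   # True
print(authorize({"app1"}, "readonly", "app1", "write"))  # False: blocked server-side
print(authorize({"app1"}, "admin", "app2", "delete"))    # False: other app's data
```

Both of the missing checks from the advisory appear here: the per-app membership test and the role-based operation test.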
(Source: NVD/CVE Database)
Fix: Upgrade to version 9.0.0-alpha.8 or later, which adds authentication, CSRF validation (protection against forged requests), and per-app authorization middleware to the agent endpoint. Alternatively, remove or comment out the agent configuration block from your Parse Dashboard configuration file as a temporary workaround.
(Source: NVD/CVE Database)