All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Milvus, a vector database (a specialized storage system for AI data) used in generative AI applications, had a security flaw in versions before 2.5.27 and 2.6.10: it exposed port 9091 by default, letting attackers bypass authentication (security checks that verify who you are) in two ways. They could use a predictable default token on a debug endpoint, or access the full REST API (the interface applications use to communicate with the database) without any password or login, potentially stealing or modifying data.
Fix: Update to Milvus version 2.5.27 or 2.6.10, where this vulnerability is fixed.
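A quick way to check whether a deployment is affected is to probe port 9091 without credentials. A minimal sketch, assuming the health endpoint lives at /healthz on that port (verify the path against your Milvus version's documentation before relying on it):

```python
import urllib.request
import urllib.error

# Assumed endpoint path for illustration; confirm against your deployment.
HEALTH_PATH = "/healthz"

def probe_result(status_code: int) -> str:
    """Interpret the HTTP status returned to an unauthenticated probe."""
    if status_code == 200:
        return "exposed"          # server answered without any credentials
    if status_code in (401, 403):
        return "auth required"
    return "inconclusive"

def check_port_9091(host: str, timeout: float = 5.0) -> str:
    """Probe the assumed health endpoint on port 9091 with no credentials."""
    url = f"http://{host}:9091{HEALTH_PATH}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return probe_result(resp.status)
    except urllib.error.HTTPError as err:
        return probe_result(err.code)
    except OSError:
        return "unreachable"
```

A 200 response to an unauthenticated request suggests the port is reachable from your vantage point and should be firewalled until the upgrade is applied.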
NVD/CVE Database
Researchers discovered a heap buffer overflow (a type of memory corruption flaw where data overflows a temporary memory area) in libpng, a widely-used library for reading and editing PNG image files, that existed for 30 years. The vulnerability in the png_set_quantize function could cause crashes or potentially allow attackers to extract data or execute remote code (run commands on a victim's system), but exploitation requires careful preparation and the flaw is rarely triggered in practice. The flaw affects all libpng versions before 1.6.55.
Anthropic, a startup known for developing Claude (an AI assistant), appointed Chris Liddell, a former Microsoft CFO and Trump administration official, to its board of directors. This move may help improve Anthropic's relationship with the Trump administration, which previously criticized the company for its stance on AI regulation.
Cursor, a code editor designed for programming with AI, had a sandbox escape vulnerability in versions before 2.5 where a malicious agent (an attacker using prompt injection, which is tricking an AI by hiding instructions in its input) could write to unprotected .git configuration files, including git hooks (scripts that run automatically when Git performs certain actions). This could lead to RCE (remote code execution, where an attacker runs commands on a victim's system) when those hooks were triggered, with no user action needed.
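Because any executable file in .git/hooks runs automatically on the matching Git action, one defensive habit is to scan repositories for hooks you did not install. A minimal sketch (the scanner and its heuristic are illustrative, not part of Cursor's patch):

```python
import os
from pathlib import Path

def suspicious_hooks(repo: str) -> list[str]:
    """Return names of active (non-sample) hook scripts in a repo's .git/hooks.

    Any executable file here runs automatically on the matching Git action,
    which is why write access to .git/ can become code execution.
    """
    hooks_dir = Path(repo) / ".git" / "hooks"
    if not hooks_dir.is_dir():
        return []
    found = []
    for entry in sorted(hooks_dir.iterdir()):
        if entry.suffix == ".sample":   # inert templates shipped by Git
            continue
        if entry.is_file() and os.access(entry, os.X_OK):
            found.append(entry.name)
    return found
```

Any name this returns deserves a manual review, since legitimate projects rarely ship active hooks silently.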
xAI, an AI company founded by Elon Musk, is experiencing significant staff departures, with multiple cofounders (including Yuhuai Wu and Jimmy Ba) announcing they are leaving the company. The departures have reduced the company's original 12 cofounders to only 6 remaining, and several other employees have also announced their exits, with some starting their own AI companies.
New AI tools are becoming more powerful, causing investors to worry that AI might eliminate many white-collar jobs (office-based positions requiring advanced skills) or reduce company profits across industries like law, finance, and logistics. However, the article notes that expert opinions are divided about how serious this threat actually is, with some evidence suggesting investor fears may be overstated.
As organizations deploy multiple AI agents (independent AI programs) that work together autonomously, the security risks increase because there are more entry points for attackers to exploit. The complexity of securing these interconnected systems grows along with the number of agents involved.
Ring's Super Bowl advertisement showcases a heartwarming story about dogs reuniting with families, but critics worry it represents a concerning vision of pervasive surveillance (constant monitoring through connected devices) that could eliminate privacy. The ad illustrates how Ring's expanding network of cameras and connected devices could eventually create a society where surveillance is everywhere and inescapable.
OpenAI is shutting down a version of its chatbot called GPT-4o (a large language model, which is AI software trained on massive amounts of text data to generate human-like responses) that became popular for its realistic and personable conversational style. Users who formed emotional attachments to the chatbot, treating it as a companion, are upset about losing access to it.
Google detected and blocked over 100,000 coordinated prompts attempting model extraction (an attack technique where adversaries create a smaller AI model by copying the essential traits of a larger one) against its Gemini AI model to steal its reasoning capabilities. The attackers specifically targeted Gemini's multilingual reasoning processes across diverse tasks, representing what Google calls intellectual property theft, though the company acknowledged that some researchers may have legitimate reasons for obtaining such samples.
Anthropic, the company behind Claude (an AI chatbot similar to ChatGPT), raised $30 billion in funding, doubling its value to $380 billion. The massive funding reflects investor confidence in AI but also highlights concerns about these companies' extremely high costs for computing power and talent, with both Anthropic and rival OpenAI spending cash at rates that currently outpace their revenue.
SIEM (security information and event management, a system that collects and analyzes security logs to detect threats) platforms are evolving to include AI, machine learning, and integrated tools like XDR (extended detection and response, which finds threats across endpoints and cloud systems) and SOAR (security orchestration, automation, and response, which automates how security teams respond to incidents). This convergence allows organizations to automatically detect and stop threats in real-time without manual intervention, with vendors selling these combined solutions together at rapidly increasing rates.
A reflected XSS vulnerability (a type of attack where malicious code is injected into a website and executed in a user's browser) was found in the AI Playground's OAuth callback handler (the code that processes login responses). The vulnerability allowed attackers to craft malicious links that, when clicked, could steal a user's chat history and access connected MCP servers (external services integrated with the AI system) on the victim's behalf.
Ransomware attacks now frequently target identity systems like Active Directory (the software that manages user accounts and permissions in organizations), compromising them to lock legitimate users out of their systems and block recovery efforts. Identity recovery, the process of restoring secure access to these systems after an attack, has become essential to cyber resilience (an organization's ability to recover quickly from security incidents). Security leaders and boards now treat identity recovery as a core part of enterprise risk management, with cyber insurance companies and regulators requiring evidence of tested recovery plans.
FastGPT, an AI agent building platform (software for creating AI systems that perform tasks), had a security vulnerability in components such as its web page acquisition nodes and HTTP nodes (parts that fetch data from servers). When these nodes made server-side data requests, they could be pointed at internal network addresses, exposing internal systems. The issue has been addressed by adding stricter internal network address detection (checks that block requests aimed at internal systems).
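The class of check FastGPT added can be sketched as follows: before a server-side fetch, resolve the target hostname and refuse private, loopback, link-local, or reserved addresses. This is a generic illustration of the technique, not FastGPT's actual code:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_internal_url(url: str) -> bool:
    """Refuse URLs that resolve to internal addresses before a server-side
    fetch, a common defence against server-side request forgery."""
    host = urlparse(url).hostname
    if host is None:
        return True  # unparseable target: refuse by default
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # unresolvable target: refuse by default
    for info in infos:
        # info[4][0] is the resolved address; strip any IPv6 scope suffix.
        addr = ipaddress.ip_address(info[4][0].split("%")[0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return True
    return False
```

Checking the resolved address rather than the URL string matters, because a public-looking hostname can still resolve to an internal IP.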
Fix: The vulnerability is fixed in libpng version 1.6.55.
CSO Online
Wiz created a benchmark suite of 257 real-world cybersecurity challenges across five areas (zero-day discovery, CVE detection, API security, web security, and cloud security) to test which AI agents perform best at cybersecurity tasks. The benchmark runs tests in isolated Docker containers (sandboxed environments that prevent interference with the main system) and scores agents based on their ability to detect vulnerabilities and security issues, with Claude Code performing best overall.
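Container isolation of the kind described can be sketched with plain docker run flags. The flags below are illustrative choices for a sandboxed task runner, not Wiz's actual harness configuration:

```python
import subprocess

def sandbox_cmd(image: str, challenge_cmd: list[str]) -> list[str]:
    """Build a `docker run` invocation that isolates a benchmark task:
    no network, read-only filesystem, capped memory, auto-cleanup."""
    return [
        "docker", "run", "--rm",   # remove the container when it exits
        "--network", "none",       # no network access from inside the task
        "--read-only",             # immutable root filesystem
        "--memory", "512m",        # bound resource use
        image,
    ] + challenge_cmd

def run_challenge(image: str, challenge_cmd: list[str]) -> int:
    """Execute the sandboxed task and return its exit code."""
    return subprocess.run(sandbox_cmd(image, challenge_cmd)).returncode
```

Disabling the network is the key property for security benchmarks: an agent under test cannot exfiltrate data or reach systems outside its challenge.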
Fix: Update Cursor to version 2.5 or later, where this vulnerability is fixed.
NVD/CVE Database
Meta planned to add facial recognition (technology that identifies people by analyzing their faces) to its smart glasses through a feature called "Name Tag," according to an internal document. The company deliberately timed this launch for a period when privacy advocacy groups would be distracted by other issues, reducing expected criticism of the privacy-sensitive feature.
Fix: Google said organizations providing AI models as services should monitor API access patterns for signs of systematic extraction. According to CISO Ross Filipek quoted in the report, organizations should implement response filtering and output controls, which can prevent attackers from determining model behavior in the event of a breach, and should enforce strict governance over AI systems with close monitoring of data flows.
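The access-pattern monitoring recommended above can be approximated with a per-client sliding-window query counter. The window and threshold below are illustrative placeholders, not Google's values:

```python
import time
from collections import defaultdict, deque

class ExtractionMonitor:
    """Flag clients whose query rate over a sliding window exceeds a
    threshold, one crude signal of systematic model extraction."""

    def __init__(self, window_s: float = 3600.0, max_queries: int = 1000):
        self.window_s = window_s        # illustrative window length
        self.max_queries = max_queries  # illustrative rate threshold
        self._hits = defaultdict(deque)

    def record(self, client_id: str, now=None) -> bool:
        """Log one query; return True if the client now looks suspicious."""
        t = time.time() if now is None else now
        q = self._hits[client_id]
        q.append(t)
        # Drop timestamps that have aged out of the window.
        while q and t - q[0] > self.window_s:
            q.popleft()
        return len(q) > self.max_queries
```

Real deployments would combine this with content-level signals (e.g., systematic coverage of tasks or languages), since extraction traffic can be spread across many client identities.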
CSO Online
Data poisoning (corrupting training data to make AI systems behave incorrectly) has become much easier and more accessible than previously thought, requiring only about 250 poisoned documents or images instead of thousands to distort a large language model (an AI trained on massive amounts of text). Adversaries ranging from activists to criminals can now inject harmful data into public sources that feed AI training pipelines, and the resulting damage persists even after clean data is added later, making this a major security threat for any organization using public data to train or update AI systems.
Fix: One of the most reliable protections is establishing a clean, validated version of the model before deployment, which acts as a 'gold' version that teams can use as a baseline for anomaly checks and quickly restore to if the model starts producing unexpected outputs or shows signs of drift.
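The 'gold' baseline check described above can be sketched by replaying a fixed probe set and measuring how often the deployed model's outputs diverge from the baseline's recorded outputs. The probe set, comparison, and threshold here are illustrative:

```python
def disagreement_rate(gold_outputs: list[str],
                      current_outputs: list[str]) -> float:
    """Fraction of probe prompts on which the deployed model's output
    diverges from the validated 'gold' baseline's recorded output."""
    if len(gold_outputs) != len(current_outputs):
        raise ValueError("probe sets must align one-to-one")
    diffs = sum(g != c for g, c in zip(gold_outputs, current_outputs))
    return diffs / len(gold_outputs)

def drift_alarm(gold: list[str], current: list[str],
                threshold: float = 0.05) -> bool:
    """True when divergence exceeds the tolerated threshold (illustrative)."""
    return disagreement_rate(gold, current) > threshold
```

Exact string comparison only works for deterministic outputs; for sampled generations, a semantic-similarity score against the baseline answers would replace the equality check.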
CSO Online
Key management (the process of creating, storing, rotating, and retiring cryptographic keys throughout their lifetime) is often overlooked in organizations despite being critical to security, and this gap becomes even more dangerous as post-quantum cryptography (encryption designed to resist quantum computers) and AI systems become more widespread. The real challenge of post-quantum readiness is not choosing the right algorithm, but building operational ability to safely rotate and manage keys across systems without downtime. AI systems introduce additional risks because keys protect not just data access but also AI behavior and decisions, requiring tighter key controls and more frequent rotation than traditional applications need.
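Zero-downtime rotation of the kind described usually relies on versioned keys: new writes use the current key, while older versions stay available for decryption until re-encryption finishes. A minimal sketch (the KeyRing class is a hypothetical illustration, not any specific product's API):

```python
import os

class KeyRing:
    """Versioned key store: rotation mints a new active key without
    invalidating data protected by earlier versions."""

    def __init__(self):
        self._keys: dict[int, bytes] = {}
        self.current_version = 0
        self.rotate()  # start with an initial active key

    def rotate(self) -> int:
        """Mint a fresh 256-bit key and make it the active version."""
        self.current_version += 1
        self._keys[self.current_version] = os.urandom(32)
        return self.current_version

    def key_for(self, version: int) -> bytes:
        """Fetch a historical key so old ciphertext stays readable."""
        return self._keys[version]

    def retire(self, version: int) -> None:
        """Drop a key once everything it protected has been re-encrypted."""
        if version == self.current_version:
            raise ValueError("cannot retire the active key")
        del self._keys[version]
```

The retire step is the operationally hard part the article points to: it can only run after every object encrypted under the old version has been found and rewrapped.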
Fix: Agents-sdk users should upgrade to agents@0.3.10. Developers using configureOAuthCallback with custom error handling should ensure all user-controlled input is escaped (converted to safe text that won't be interpreted as code) before interpolation (inserting it into the HTML). A patch is available at PR https://github.com/cloudflare/agents/pull/841.
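The escaping step described works the same way in any language. A minimal Python sketch using the standard library's html.escape (render_error_page is a hypothetical handler for illustration, not the agents-sdk API):

```python
import html

def render_error_page(error_message: str) -> str:
    """Interpolate user-controlled text into HTML only after escaping it,
    so a crafted callback parameter cannot inject script (reflected XSS)."""
    safe = html.escape(error_message)  # <, >, &, quotes become entities
    return f"<html><body><p>Login failed: {safe}</p></body></html>"
```

The general rule: treat every OAuth callback parameter as attacker-controlled, and escape at the point of interpolation rather than trusting upstream validation.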
NVD/CVE Database
BeyondTrust Remote Support and Privileged Remote Access products contain an OS command injection vulnerability (a flaw that lets attackers run unauthorized system commands), which allows unauthenticated attackers to execute commands without needing login credentials or user action, potentially leading to system compromise and data theft. This vulnerability is currently being exploited by attackers in the wild. The vulnerability affects both on-premises and cloud versions of these products.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. Check BeyondTrust's security advisories at https://www.beyondtrust.com/trust-center/security-advisories/bt26-02 for specific patching or mitigation steps. The vendor's guidelines should be used to assess exposure and check for signs of compromise on all internet-accessible BeyondTrust products.
CISA Known Exploited Vulnerabilities
Fix: The source recommends implementing these specific capabilities: (1) immutable backups and automated recovery for identity systems such as Active Directory; (2) zero-trust architecture (applying least-privilege access and continuous authentication to limit attack spread); (3) automated orchestration to reduce manual steps in recovery workflows; (4) regulatory readiness with audit-ready reporting and compliance validation; (5) AI-ready protection by securing data environments and enabling fast rollback of damaging actions; and (6) backup platform isolation by treating the backup environment as a separate security domain that can serve as a minimum viable recovery environment when needed.
CSO Online
Fix: Update FastGPT to version 4.14.7 or later, where this vulnerability is fixed.
NVD/CVE Database