All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
AI models like Claude Mythos can now discover software vulnerabilities in minutes instead of weeks, shrinking the time organizations have to patch (the exploit window) to nearly zero. Because traditional patching is no longer fast enough, security teams need to adopt an "assume-breach" model that focuses on detecting and containing attacks in real time using Network Detection and Response (NDR, automated tools that monitor network traffic for suspicious behavior) rather than relying on patching alone.
Fix: The source recommends implementing an assume-breach operational model with three requirements: (1) detect post-breach behavior before threats spread, (2) reconstruct the complete attack chain quickly, and (3) contain threats rapidly. Specifically, organizations should prioritize reducing mean-time-to-contain (MTTC, the time from detecting a breach to stopping it) by establishing real-time, comprehensive network visibility. The source states that "Network Detection and Response (NDR) platforms play a crucial role in identifying these subtle indicators of compromise" by continuously monitoring network traffic for unusual behavior such as unexpected admin shares, authentication protocol mismatches, and lateral movement attempts.
The Hacker News
CVE-2026-40979 is a security flaw in Spring AI (a framework for building AI applications) where someone with access to a shared computing environment can find and view the ONNX model (a type of machine learning model file) that the application uses. This vulnerability affects Spring AI versions 1.0.0 through 1.0.5 and 1.1.0 through 1.1.4.
A path traversal vulnerability (a bug where an attacker manipulates file paths to access files they shouldn't) was found in the ErlichLiu claude-agent-sdk, affecting the file app/api/agent-output/route.ts. An attacker can exploit this remotely by manipulating the outputFile parameter, and the vulnerability has already been publicly disclosed. The project ships continuous (rolling) updates, and the maintainers have not yet responded to the security report.
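Path traversal bugs of this kind usually come from joining a user-supplied filename onto a base directory without normalizing the result. A minimal Python sketch of the defensive check (the parameter name outputFile comes from the report; the helper function and directory names below are illustrative, not the project's actual code):

```python
from pathlib import Path

def resolve_output_file(base_dir: str, output_file: str) -> Path:
    """Resolve a user-supplied filename inside base_dir, rejecting any
    path that escapes it (e.g. via "../" segments or an absolute path)."""
    base = Path(base_dir).resolve()
    candidate = (base / output_file).resolve()
    # After resolution, a traversal payload no longer sits under base.
    if not candidate.is_relative_to(base):
        raise ValueError(f"path traversal attempt: {output_file!r}")
    return candidate
```

The key point is validating the *resolved* path, not the raw string: naive checks like rejecting a leading `../` miss payloads such as `reports/../../etc/passwd`.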
Microsoft fixed a security flaw in Entra ID (Microsoft's identity management system) where the Agent ID Administrator role, meant for AI agents, could be abused to take over service principals (accounts that applications use to authenticate). An attacker with this role could become the owner of any service principal and add their own credentials, potentially gaining broad control over a tenant (organization's cloud environment) if the targeted service principal had elevated permissions.
Top researchers from major AI companies like Google DeepMind, Meta, and OpenAI are leaving to start their own AI startups, which are raising hundreds of millions of dollars in funding. These new companies can focus on research areas that large tech firms deprioritize, such as new AI architectures and interpretability (understanding how AI systems make decisions), giving them a competitive advantage in the rapidly growing AI market.
This is not an AI/LLM-related item. The content describes jury selection in a legal case between Elon Musk and Sam Altman over OpenAI disputes, focusing on prospective jurors' negative personal opinions about Musk. It does not discuss any AI technology, security vulnerabilities, or technical issues related to large language models or AI systems.
Researchers have created talkie, a 13 billion-parameter language model (a neural network with 13 billion adjustable values) trained entirely on English text from before 1931 to study how AI performs on historical knowledge and invention tasks. The base model uses only out-of-copyright data, but the chat version required fine-tuning (additional training to adjust behavior) with help from modern AI systems like Claude, which introduced some knowledge from after 1931 that the researchers are working to eliminate.
OpenAI and AWS have expanded their partnership to make OpenAI's models, including GPT-5.5, available through Amazon Bedrock (AWS's managed service for using AI models). This integration lets enterprises use OpenAI's capabilities within their existing AWS security systems, workflows, and infrastructure, with three new offerings: OpenAI models on AWS, Codex (a coding assistant used by over 4 million people weekly) on AWS, and Amazon Bedrock Managed Agents for building AI agents that can execute multi-step workflows.
Elon Musk is suing OpenAI CEO Sam Altman and president Greg Brockman, alleging they deceived him into funding the company by promising to keep it as a nonprofit focused on beneficial AI, then secretly restructured it into a for-profit operation. The trial could determine whether OpenAI can operate as a for-profit company and may result in removing current leadership or forcing the company back to nonprofit status. The case highlights a fundamental conflict over OpenAI's mission: whether it should prioritize open-source AI for public benefit or operate for financial gain to fund more advanced development.
A vulnerability (CVE-2026-7178) was found in ChatGPTNextWeb NextChat up to version 2.16.1 that allows server-side request forgery (SSRF, where an attacker tricks a server into making unwanted requests to other systems) through the storeUrl function in the Artifacts Endpoint. The flaw can be exploited remotely, and the attack code has been made public, though the project developers have not yet responded to the early notification.
A security flaw has been found in ChatGPTNextWeb NextChat up to version 2.16.1 that allows server-side request forgery (SSRF, where an attacker tricks a server into making unwanted requests to other systems). The vulnerability exists in the proxyHandler function and can be exploited remotely, with public exploits already available. The developers have been notified but have not yet responded.
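Both NextChat reports describe the classic SSRF pattern: the server fetches a URL supplied by the client. The standard mitigation is to resolve and validate the target host before making the request. A hedged Python sketch of such a check (the policy shown, blocking private, loopback, link-local, and reserved addresses, is illustrative and not NextChat's actual code):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs whose host resolves to a private, loopback,
    link-local, or reserved address -- the usual SSRF targets
    (internal services, cloud metadata endpoints, etc.)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True
```

Note that the check must run on the resolved IP, not the hostname string, since an attacker-controlled DNS name can point at 127.0.0.1 or an internal range.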
Canonical, the company behind Ubuntu Linux (a popular operating system), plans to add AI features to its system over the next year. These features will work in two ways: some will improve existing system functions quietly in the background, while others will be designed specifically for users who want AI-powered tools and workflows. The features will include accessibility improvements like better speech-to-text conversion and other AI-powered capabilities.
QnABot on AWS (a conversational AI tool built with Amazon Lex and other AWS services) has a vulnerability where administrators can run arbitrary code (unintended commands) by exploiting improper use of the static-eval npm package through the Content Designer interface, potentially giving them access to sensitive backend resources like databases and environment variables that should be protected.
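The QnABot issue illustrates a general anti-pattern: evaluating expressions taken from a data channel (here, Content Designer input via static-eval) as if they were trusted code. A Python analogue of the safer approach, using `ast.literal_eval`, which parses literals only and cannot invoke functions (the helper name is illustrative, not QnABot's code):

```python
import ast

def parse_setting(raw: str):
    """Parse a configuration value without executing code.
    ast.literal_eval accepts only Python literals (numbers, strings,
    lists, dicts, ...), so payloads that try to call functions or
    import modules are rejected rather than executed."""
    try:
        return ast.literal_eval(raw)
    except (ValueError, SyntaxError):
        raise ValueError(f"not a plain literal: {raw!r}")
```

The same principle applies in Node.js: if an "expression evaluator" can reach function calls or property lookups on arbitrary objects, it can usually be escalated to arbitrary code execution.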
Microsoft and OpenAI had a contract clause stating that if AGI (artificial general intelligence, meaning AI systems that outperform humans at most economically valuable work) was achieved, Microsoft would lose its commercial rights to OpenAI's technology. On April 27, 2026, this clause effectively ended: Microsoft's license became non-exclusive, Microsoft stopped paying revenue shares to OpenAI, and its remaining commercial rights now continue regardless of technological progress.
RAG (retrieval-augmented generation, where an AI pulls in external documents to answer questions) pipelines in enterprise software allow AI agents to access company data like internal wikis and CRM records, but this creates serious security risks including data leaks, unauthorized access to personal information, and prompt injection attacks (tricking an AI by hiding instructions in its input). Recent real-world attacks have exploited RAG systems through unclicked emails, exposed database access keys, hidden malicious text in code repositories, and poisoned knowledge bases to steal data or spread false information.
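One common first-line defense against the prompt injection risk described above is to treat every retrieved document as untrusted data: wrap it in explicit delimiters and flag instruction-like phrases before it reaches the model. A minimal Python sketch of that idea (the delimiter format and the pattern list are illustrative assumptions, not a complete or bypass-proof filter):

```python
import re

# A tiny, deliberately incomplete pattern list; real deployments use
# classifiers and model-side guardrails, not just regexes.
INJECTION_PATTERNS = re.compile(
    r"(ignore (all|previous) instructions|system prompt|you are now)",
    re.IGNORECASE,
)

def build_context(docs: list[str]) -> str:
    """Wrap retrieved documents in explicit data delimiters and flag
    instruction-like phrases, so the surrounding prompt can tell the
    model to treat everything inside <doc> tags as quoted, untrusted
    text rather than instructions."""
    safe_docs = []
    for i, doc in enumerate(docs):
        flag = " [FLAGGED: possible injection]" if INJECTION_PATTERNS.search(doc) else ""
        safe_docs.append(f"<doc id={i}{flag}>\n{doc}\n</doc>")
    return "\n".join(safe_docs)
```

Pattern matching alone cannot stop a determined attacker; the delimiting and flagging mainly make injections auditable and give the model an unambiguous data/instruction boundary.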
Fix: Fixed in Spring AI version 1.0.6 and version 1.1.5.
NVD/CVE Database
As AI agents become more common, security leaders (CISOs, Chief Information Security Officers) face new challenges because these non-human identities are harder to track and verify than human users, and traditional security signals no longer work. The source recommends treating identity as the foundation of security architecture, with advice including maintaining clean directories, creating complete inventories of non-human identities (AI agents and service accounts), enforcing least privilege access (giving users only the permissions they need), using phishing-resistant authentication methods beyond SMS, and assuming that credentials may be compromised.
Fix: The source recommends several specific steps: (1) 'Build a strong foundation before layering on complexity' by getting 'clean directories, enforced least privilege, and reliable offboarding processes' in place; (2) 'Design for the new class of identities' by starting 'from least privilege rather than from legacy'; (3) 'Get your non-human identity inventory in order' by building 'a full inventory of non-human identities and include who is responsible for each identity, and what each one is authorized to do'; (4) 'Treat MFA as a starting point, not a destination' by including 'phishing-resistant alternatives to SMS or push-based MFA' along with 'least privilege, micro-segmentation, and continuous monitoring'; and (5) 'Assume credentials may be compromised and architect accordingly.'
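Step (3), the non-human identity inventory, maps naturally onto a small data model: each entry records the identity, who is responsible for it, and what it is authorized to do. A hedged Python sketch of such an inventory check (the field names and the two gap rules are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass, field

@dataclass
class NonHumanIdentity:
    """One inventory entry: the AI agent or service account, the
    responsible human or team, and its authorized actions."""
    name: str
    owner: str                              # responsible human or team
    allowed_actions: set[str] = field(default_factory=set)

def find_gaps(inventory: list[NonHumanIdentity]) -> list[str]:
    """Flag the two gaps the recommendations call out: identities with
    no responsible owner, and grants broader than least privilege."""
    issues = []
    for ident in inventory:
        if not ident.owner:
            issues.append(f"{ident.name}: no responsible owner")
        if "*" in ident.allowed_actions:
            issues.append(f"{ident.name}: wildcard grant violates least privilege")
    return issues
```

Even this toy model makes the point of step (3) concrete: an inventory is only useful if every entry answers "who is responsible" and "what is it authorized to do".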
CSO Online
CrowdStrike has expanded its ChatGPT Enterprise integration to provide deeper monitoring of how organizations use AI, including tracking user authentication, administrative changes, tool usage, and conversations. As AI becomes embedded in business operations across departments, security teams need visibility into not just who has access to ChatGPT Enterprise, but how the platform is actually being used and what data might be accessed. The expanded integration uses OpenAI's logging capabilities to detect suspicious activity like unusual login patterns and behavioral anomalies, shifting from just knowing the configuration of AI systems to actively monitoring their real-time usage.
Fix: Organizations can use CrowdStrike Falcon Shield's expanded ChatGPT Enterprise integration, which ingests and analyzes events from OpenAI's Compliance Logs Platform to provide continuous monitoring and detection. According to the source, this enables detection of suspicious authentication activity (malicious IP access, anonymized connections, unusual VPN sign-ins), behavioral anomalies (simultaneous logins from untrusted networks, unexpected browser or OS changes), and monitoring of administrative updates and GPT configuration changes. The integration correlates ChatGPT Enterprise activity with identity, device, and SaaS telemetry across the CrowdStrike Falcon platform to detect and respond to suspicious AI activity.
CrowdStrike Blog
Fix: Microsoft rolled out a patch on April 9, 2026 across all cloud environments. Following the fix, any attempt to assign ownership over non-agent service principals using the Agent ID Administrator role is now blocked and displays a "Forbidden" error message. Organizations are also advised to monitor sensitive role usage related to service principal ownership or credential changes, track service principal ownership changes, secure privileged service principals, and audit credential creation on service principals.
The Hacker News
Fix: The talkie team states they 'aspire to eventually move beyond this limitation' by using 'vintage base models themselves as judges to enable a fully bootstrapped era-appropriate post-training pipeline,' meaning they plan to use talkie's own historical knowledge rather than modern AI systems for future training adjustments. However, this is described as a future goal, not a solution currently implemented.
Simon Willison's Weblog
OpenAI describes its safety approach for ChatGPT to prevent misuse for violence, threats, or harm. The system is trained to distinguish between harmful requests and legitimate questions about violence for educational or historical reasons, while using detection systems and expert guidance to identify concerning patterns across conversations and take action like revoking access when needed.
ConnectWise ScreenConnect has a path traversal vulnerability (a flaw that lets attackers access files outside their intended directory) that could allow attackers to run remote code or steal sensitive data from critical systems. This vulnerability is actively being exploited by real attackers in the wild.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.
CISA Known Exploited Vulnerabilities
Microsoft Windows Shell has a protection mechanism failure vulnerability that lets attackers perform spoofing (impersonating someone or something else) over a network without authorization. This vulnerability is actively being exploited by real attackers, making it a serious security concern.
Fix: Apply mitigations per Microsoft vendor instructions, follow applicable BOD 22-01 guidance for cloud services (government cybersecurity directives), or discontinue use of the product if mitigations are unavailable. The due date for remediation is 2026-05-12.
CISA Known Exploited Vulnerabilities