aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDatasetFor devs
Subscribe
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

to
Export CSV
4464 items

Musk says basis of charitable giving at stake in OpenAI lawsuit

infonews
policy
Apr 28, 2026

Elon Musk is suing OpenAI and CEO Sam Altman, claiming they misused a charitable organization by converting it into a for-profit company without permission. Musk argues this violates the trust placed in OpenAI as a non-profit and undermines charitable giving overall, while OpenAI's lawyers contend Musk is motivated by jealousy after failing to control the company and is now trying to damage a competitor.

BBC Technology

Elon Musk takes the stand in high-profile trial against OpenAI

infonews
policy
Apr 28, 2026

Elon Musk is testifying in a lawsuit against OpenAI CEO Sam Altman and president Greg Brockman over disagreements about the company's structure and mission that occurred after all three co-founded OpenAI together. Musk, who had invested up to $38 million in OpenAI early on, later left the company and founded his own AI competitor called xAI, which is owned by his company SpaceX.

OpenAI brings its models to Amazon's cloud after ending exclusivity with Microsoft

infonews
industry
Apr 28, 2026

OpenAI has made its AI models available through Amazon Web Services (AWS, Amazon's cloud computing platform), ending its exclusive arrangement with Microsoft. This means AWS customers can now use OpenAI's models and Codex (a tool for writing code) through Amazon Bedrock, a service that provides access to various AI models, with general availability coming in the next few weeks.

Claude can now plug directly into Photoshop, Blender, and Ableton

infonews
industry
Apr 28, 2026

Anthropic has released connectors that let Claude (an AI chatbot) directly access and control popular creative software like Photoshop, Blender, and Ableton. These connectors allow Claude to retrieve data and perform actions within these applications, such as debugging scenes in Blender or batch-applying changes to objects, making it easier to use Claude for creative work.

The Mythos Moment: Enterprises Must Fight Agents with Agents

infonews
safetysecurity

Webinar Today: A Step-by-Step Approach to AI Governance

infonews
policysecurity

FinBot CTF Is Live: A Hands-On Companion to the OWASP GenAI Security Project

inforesearchIndustry
security

Musk and Altman go to court

infonews
policy
Apr 28, 2026

Elon Musk and OpenAI are involved in a legal trial over disputes about the early development of AI, including questions about who deserves credit and financial compensation for the technology's creation. The case is expected to make private communications from important figures in the AI industry public during the coming weeks.

OpenAI's revenue, growth estimates fall short as company races toward IPO: Report

infonews
industry
Apr 28, 2026

OpenAI has failed to meet its own revenue and user growth targets, raising concerns about whether the company can afford its massive spending on data centers (facilities that house computing equipment). Finance Chief Sarah Friar worried the company might not be able to fund future computing agreements if the revenue slowdown continues, prompting executives to look for ways to cut costs.

Critical Cursor bug could turn routine Git into RCE

criticalnews
security
Apr 28, 2026

A critical vulnerability in Cursor IDE (a code editor with AI capabilities) allowed attackers to execute malicious code on a developer's machine by embedding harmful Git hooks (automated scripts that run during repository operations) in a fake repository. When Cursor's AI agent autonomously performed routine Git operations like checking out code, it would unknowingly trigger and run the attacker's malicious scripts, giving the attacker control over the developer's computer.

The Race Is on to Keep AI Agents From Running Wild With Your Credit Cards

infonews
policysecurity

Meta's new AI model shows early promise, but investors want to see Zuckerberg's strategy

infonews
industry
Apr 28, 2026

Meta launched Muse Spark, a new closed-source AI model (a large language model that processes and generates text), marking a shift from its previous open-source Llama models toward a paid subscription approach similar to competitors like OpenAI and Google. While Muse Spark shows competitive performance in text and vision tasks, investors are waiting to see Meta's strategy for driving consumer adoption and generating revenue beyond just improving its advertising business.

The Download: Musk and Altman’s legal showdown, and AI’s profit problem

infonews
industrypolicy

Privacy-preserving for user-uploaded images and text in Vision-Language Models

inforesearchPeer-Reviewed
privacy

A Survey of Algorithm Debt in Machine and Deep Learning Systems: Definition, Smells, and Future Work

inforesearchPeer-Reviewed
research

Sevii Launches Cyber Swarm Defense to Make Agentic AI Security Costs Predictable

infonews
industrysecurity

Critical Unpatched Flaw Leaves Hugging Face LeRobot Open to Unauthenticated RCE

criticalnews
security
Apr 28, 2026

LeRobot, Hugging Face's open-source robotics platform, has a critical unpatched vulnerability (CVE-2026-25874, CVSS score 9.3) that allows unauthenticated attackers to execute arbitrary code by sending malicious data through unencrypted network connections. The flaw stems from unsafe deserialization (a process of converting data back into code without properly checking if it's trustworthy) using pickle, an unsafe data format, which enables attackers to compromise the server, steal sensitive data, or impact connected robots.

Google and Pentagon reportedly agree on deal for ‘any lawful’ use of AI

infonews
policy
Apr 28, 2026

Google has reportedly signed a classified agreement allowing the US Department of Defense to use its AI models for 'any lawful government purpose,' despite employee concerns about potential harmful uses. This deal places Google alongside other AI companies like OpenAI and xAI that have made similar classified agreements with the government.

What Anthropic’s Mythos Means for the Future of Cybersecurity

infonews
securitysafety

Attack of the killer script kiddies

infonews
securityresearch
Previous12 / 224Next
The Verge (AI)
CNBC Technology
The Verge (AI)
Apr 28, 2026

Advanced AI systems called agents (autonomous systems that can plan and execute tasks without human help) are becoming a serious cybersecurity threat, as shown by Anthropic's decision not to publicly release Claude Mythos Preview, a model that can identify and exploit software vulnerabilities automatically. Traditional security tools and fragmented defenses are inadequate against these fast, evolving AI-driven attacks. A new security approach built on three pillars is needed: unified network visibility (ability to see all traffic across the entire system), platform context (understanding what's happening by connecting security data in one place instead of using separate tools), and agentic control (using autonomous AI systems to detect and respond to threats at machine speed).

Fix: The source proposes a new security framework with three critical pillars: (1) Network Visibility: create a unified network that provides complete visibility into attack lifecycles by capturing and inspecting traffic across all domains over time; (2) Platform Context: use a converged platform that correlates security and networking data in a single pane of glass (one unified view) rather than piecing together signals from discrete tools post-incident, enabling real-time context preservation; (3) Agentic Control: deploy autonomous defense systems that can continuously analyze activity and identify emerging patterns at machine speed to match the speed of AI-driven attacks.

SecurityWeek
Apr 28, 2026

This webinar discusses Shadow AI, the unsanctioned adoption of generative AI and agentic tools (AI systems that can take independent actions) by employees outside of IT oversight, which creates security and compliance risks for organizations. The session proposes a "Governance-as-Enabler" framework that balances innovation with control through transparent approval workflows, sandboxes (isolated testing environments), cross-functional oversight councils, and lifecycle management tailored to different AI types.

SecurityWeek
research
Apr 28, 2026

FinBot is an interactive training platform (CTF, or capture-the-flag competition) created by OWASP to help builders and defenders understand how agentic AI systems (AI agents that plan, act, and make decisions in complex workflows) can fail and be attacked. It simulates a financial services application where users encounter real security risks like prompt injection (tricking an AI by hiding instructions in its input), tool misuse, data theft, and privilege escalation (gaining unauthorized higher-level access), with connections to industry security frameworks like the OWASP Top 10 for Agentic Applications.

OWASP GenAI Security
The Verge (AI)
CNBC Technology

Fix: The flaw is patched in Cursor version 2.5. According to the source, 'Sandbox escape via writing .git configuration was possible in versions prior to 2.5,' meaning the vulnerability has been fixed in version 2.5 and later.

CSO Online
Apr 28, 2026

Agentic AI (AI systems that perform actions on behalf of humans) is growing in use, but it creates new security risks like agents being hijacked or tricked into unauthorized transactions. The FIDO Alliance (an industry group focused on authentication standards), along with Google and Mastercard, is launching working groups to develop security standards that will protect AI agent transactions using cryptographic tools (mathematical techniques that verify identity and prevent tampering) and authentication mechanisms that prevent phishing attacks.

Fix: Google is contributing the Agent Payments Protocol (AP2), which cryptographically verifies that a user intended for an agent-initiated transaction to happen. Mastercard is contributing the Verifiable Intent framework (codeveloped with Google), which is a secure mechanism for users to authorize and control agent actions. Together, these tools aim to provide cryptographic proof that transactions were authorized by the user while maintaining privacy through selective disclosure, so different parties in the payment ecosystem only see relevant information.

Wired (Security)
CNBC Technology
Apr 28, 2026

This newsletter covers multiple AI developments including a legal battle between Elon Musk and OpenAI's leadership over the company's for-profit status, the gap between AI hype and actual profitability, and the rise of weaponized deepfakes (AI-generated fake videos or images used maliciously) that are spreading misinformation and harming vulnerable groups. The content also reports on business moves like OpenAI ending its exclusive partnership with Microsoft and various regulatory actions worldwide.

MIT Technology Review
research
Apr 28, 2026

Vision-language models (AI systems that process both images and text together) can leak private information from user-uploaded content, such as identifying people in photos or extracting sensitive text. This research examines privacy risks when users submit images and text to these models. The paper proposes privacy-preserving methods to protect user data while still allowing these AI systems to function effectively.

Elsevier Security Journals
Apr 28, 2026

This survey paper examines algorithm debt in machine learning and deep learning systems, which refers to the long-term costs and problems that accumulate when developers use suboptimal algorithms or methods in AI projects. The paper defines what algorithm debt is, identifies warning signs called 'smells' that indicate its presence, and discusses future research directions. Understanding algorithm debt helps developers recognize when quick, temporary solutions in AI projects create technical problems that become harder and more expensive to fix later.

ACM Digital Library (TOPS, DTRAP, CSUR)
Apr 28, 2026

CISOs (chief information security officers) struggle with unpredictable costs when using agentic AI (autonomous AI agents that can make decisions and take actions) for cybersecurity defense, since they are charged per AI token (a unit of text similar to a word) used, and attack volumes can spike unexpectedly. Sevii launched Cyber Swarm Defense, a new mode that charges by protected asset (like laptops or cloud servers) at a fixed yearly rate instead of per token, making defense costs predictable regardless of how many attacks occur. The system also includes governance controls that let security teams automatically remediate low-risk assets while keeping critical ones for human review.

Fix: Sevii's Cyber Swarm Defense (CSD) mode charges by asset protected at a firm fixed price (for example, $50 per year per laptop, identity, or cloud asset) rather than by AI token usage. The platform automatically scales up defensive agentic AI agents as needed during multiple simultaneous attacks without increasing costs. Customers can also use Sevii's Myrmidon Defense Technology to set remediation service level objectives, allowing automatic remediation of lower-value assets while keeping critical assets for manual remediation by in-house security experts.

SecurityWeek

Fix: A fix is planned in version 0.6.0. The LeRobot team acknowledged the issue in January 2026 and noted that the vulnerable part of the codebase will need to be almost entirely refactored.

The Hacker News
The Verge (AI)
Apr 28, 2026

Anthropic announced Claude Mythos Preview, an AI model that can autonomously find and weaponize software vulnerabilities (weaknesses in code that attackers can exploit) without human expert help, though the company is limiting its release to avoid security risks. The announcement highlights how AI capabilities have advanced rapidly over recent years, raising concerns about how cybersecurity defenses can adapt to AI-powered vulnerability discovery.

Fix: The source recommends protecting systems in different ways based on their characteristics: unpatchable or hard-to-verify systems (like IoT appliances and industrial equipment) should be protected by wrapping them in restrictive, tightly controlled firewall layers rather than allowing them to freely connect to the internet. Distributed systems that are interconnected should be traceable and should follow the principle of least privilege, where each component has only the access it needs.

Schneier on Security
Apr 28, 2026

At DARPA's Artificial Intelligence Cyber Challenge, AI-powered bug-finding systems (automated tools that scan code to detect flaws) successfully identified most artificially inserted vulnerabilities in 54 million lines of code, and notably discovered over a dozen real bugs that weren't part of the test. This demonstrates that AI security tools are becoming increasingly capable at finding both known and unknown vulnerabilities in software.

The Verge (AI)