aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

4452 items

Pentagon strikes classified AI deals with OpenAI, Google, and Nvidia — but not Anthropic

info · news
policy, industry
May 1, 2026

The Pentagon has signed agreements with OpenAI, Google, Microsoft, Amazon, Nvidia, xAI, and Reflection to use their AI tools in classified military settings, but excluded Anthropic after labeling it a supply-chain risk (a potential weak point in security). This expands earlier deals that allowed some companies like OpenAI and xAI to provide AI systems for authorized military use.

The Verge (AI)

Elon Musk had a bad week in court

info · news
policy
May 1, 2026

This article covers the trial in Elon Musk's lawsuit against OpenAI, in which he claims the company improperly took control of a nonprofit organization and that he was the main force behind its success. Musk had a difficult time on the stand, sparring with opposing lawyers and changing his statements, with indications that he is unlikely to win the case.

Christian content creators are outsourcing AI slop to gig workers on Fiverr

info · news
industry
May 1, 2026

Gig workers on platforms like Fiverr are increasingly using generative AI (artificial intelligence systems that create text, images, or video) to quickly produce cheap content for clients, particularly AI-generated Bible story animations shared on social media. This represents a shift from the platform's original purpose of connecting clients with skilled freelancers who developed their expertise over years.

Pentagon tech chief says Anthropic is still blacklisted, but Mythos is a separate issue

info · news
policy, security

The Download: a new Christian phone network, and debugging LLMs

info · news
industry, safety

Careful Adoption of Agentic AI Services

info · vulnerability
policy, safety

Microsoft wants lawyers to trust its new AI agent in Word documents

info · news
industry
May 1, 2026

Microsoft is launching a new AI agent within Word that is designed specifically for legal teams to help with tasks like reviewing contracts and managing document edits. Unlike general AI models, the Legal Agent follows structured workflows (predetermined sets of steps) based on actual legal practices, handling specific repeatable tasks like reviewing contract clauses against a predefined playbook (a set of rules or guidelines).

Cisco Releases Open Source Tool for AI Model Provenance 

info · news
security, industry

Human-centric failures: Why BEC continues to work despite MFA

info · news
security
May 1, 2026

Business email compromise (BEC, a scam where attackers trick employees into sending money by impersonating trusted contacts) continues to succeed even when organizations use MFA (multi-factor authentication, a security method requiring multiple forms of ID to access accounts) because attackers exploit human behavior and business processes rather than stealing credentials. Real attacks like the Toyota case (where an employee transferred $30 million based on a fake urgent email) and the Arup case (where deepfake technology impersonated a manager) show that the weakest point is often the human decision-maker approving payments, not the technical security controls.

Managing OT risk at scale: Why OT cyber decisions are leadership decisions

info · news
security
May 1, 2026

OT (operational technology, the systems that control physical industrial processes like power plants or factories) cyber risk requires a different management approach than IT security because OT systems have long lifecycles, limited patching windows, and third-party dependencies that create unique vulnerabilities. The article argues that managing OT risk at scale is fundamentally a leadership and governance challenge rather than a purely technical problem, requiring consistent decision-making across all sites and clear accountability structures.

Enterprise Spotlight: Transforming software development with AI

info · news
industry
May 1, 2026

AI is changing how software is developed by affecting coding practices, tools, developer roles, and the overall development process across all stages, from initial planning through maintenance. The article discusses how AI agents are being integrated throughout the software development life cycle (the complete process of creating and maintaining software, from concept to deployment).

Hugging Face, ClawHub Abused for Malware Distribution

high · news
security
May 1, 2026

Threat actors are abusing AI distribution platforms like Hugging Face and ClawHub to spread malware by uploading trojanized files (files containing hidden malicious code) that trick users into downloading them through social engineering. The attackers use indirect prompt injection (embedding hidden instructions in data that AI systems read and execute without the user knowing) to make AI agents automatically download and run malware on users' computers, with hundreds of malicious files identified across both platforms.
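A generic defense against trojanized downloads of this kind (an illustration, not a mitigation named in the article): record a cryptographic digest of each artifact when it is first vetted, and refuse to load any copy that no longer matches. A minimal Python sketch with hypothetical function names:

```python
import hashlib

def sha256_file(path: str) -> str:
    """Stream a file through SHA-256 so large model files never sit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, pinned_digest: str) -> bool:
    """Accept the file only if it matches the digest recorded at vetting time."""
    return sha256_file(path) == pinned_digest
```

Note that pinning only detects tampering after a known-good copy has been vetted; it does not help if the file was malicious to begin with.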

‘Trivial’ exploit can give attackers root access to Linux kernel

info · news
security
Apr 30, 2026

A serious vulnerability called Copy Fail (CVE-2026-31431) in the Linux kernel allows unprivileged users to gain root access (the highest permission level) through a simple exploit, affecting virtually all Linux systems since 2017. With root access, attackers can steal or delete data. Until Linux distributions release patches, the main defense is monitoring for unauthorized privilege escalation attempts.

CVE-2026-31431: Linux Kernel Incorrect Resource Transfer Between Spheres Vulnerability

info · vulnerability
security
Apr 30, 2026
CVE-2026-31431 · 🔥 Actively Exploited

Bank regulator sounds warning over cybersecurity threat posed by AI models

info · news
security, policy

Our evaluation of OpenAI's GPT-5.5 cyber capabilities

info · news
research
Apr 30, 2026

No summary available: the retrieved page contained only a metadata header and navigation elements (title, date, author attribution, topic tags, and sponsorship information), with no substantive content about GPT-5.5 or its cyber capabilities to summarize.

CVE-2026-6543: IBM Langflow Desktop 1.0.0 through 1.8.4 allows an attacker to execute arbitrary commands with the privileges of the Langflow application

high · vulnerability
security
Apr 30, 2026
CVE-2026-6543

IBM Langflow Desktop versions 1.0.0 through 1.8.4 contains a code injection vulnerability (CWE-94, a flaw where attackers can insert and execute their own code) that allows attackers to run arbitrary commands (any commands an attacker chooses) with the same permissions as the Langflow application. This could let attackers steal sensitive information like API keys and database passwords, modify files, or attack other systems on the network.
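As a general illustration of this flaw class (CWE-94), and not Langflow's actual code path: evaluating untrusted text with `eval()` executes whatever the attacker supplies, whereas a literal-only parser accepts data but rejects code. A hypothetical sketch:

```python
import ast

def parse_untrusted(text: str):
    # ast.literal_eval accepts only Python literals (numbers, strings,
    # lists, dicts, ...). A payload such as __import__('os').system('...')
    # raises ValueError instead of executing, unlike eval().
    return ast.literal_eval(text)
```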

CVE-2026-6542: IBM Langflow OSS 1.0.0 through 1.8.4 could allow any user to supply a flow_id to read transaction logs and vertex build data

medium · vulnerability
security
Apr 30, 2026
CVE-2026-6542

IBM Langflow OSS (open-source software) versions 1.0.0 through 1.8.4 has a vulnerability where any user can view and delete other users' data by supplying a flow_id (a reference number for a workflow). This happens because the system doesn't properly check who should be allowed to access certain information, allowing unauthorized access to transaction logs and build data.
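The missing control in this pattern (often called IDOR, insecure direct object reference) is an ownership check before returning data keyed by a client-supplied ID. A hypothetical sketch of the check, not Langflow's actual code:

```python
def get_flow_logs(flows: dict, flow_id: str, current_user: str):
    """Return a flow's logs only to its owner."""
    flow = flows.get(flow_id)
    if flow is None:
        raise KeyError(f"unknown flow: {flow_id}")
    # The authorization step an IDOR-vulnerable endpoint omits:
    # knowing a valid flow_id must not be enough to read its data.
    if flow["owner"] != current_user:
        raise PermissionError("not authorized for this flow")
    return flow["logs"]
```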

CVE-2026-40687: In Exim before 4.99.2, when the SPA authentication driver is used with an adversarial SPA resource, there can be an out-of-bounds write

medium · vulnerability
security
Apr 30, 2026
CVE-2026-40687

CVE-2026-40687 is a vulnerability in Exim email software (before version 4.99.2) where the SPA authentication driver (a method for verifying user identity) can be exploited with a malicious SPA resource to cause an out-of-bounds write (writing data to memory locations outside the intended area), which crashes the email connection or exposes uninitialized heap memory data (unused memory that may contain sensitive information).

CVE-2026-3345: IBM Langflow Desktop <=1.8.4 could allow a remote attacker to traverse directories on the system by sending a specially crafted URL request containing "dot dot" sequences to view arbitrary files

medium · vulnerability
security
Apr 30, 2026
CVE-2026-3345

IBM Langflow Desktop version 1.8.4 and earlier has a path traversal vulnerability (CWE-22, a flaw that lets attackers access files outside intended directories) that allows remote attackers to view arbitrary files on a system by sending specially crafted URLs containing "dot dot" sequences (/../), which trick the system into navigating to restricted folders.
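A common server-side guard against this class of bug (a sketch of the general technique, not IBM's fix): resolve the requested path and confirm it still lies inside the permitted base directory before serving it.

```python
import os
from typing import Optional

def resolve_safely(base_dir: str, requested: str) -> Optional[str]:
    """Resolve 'requested' relative to base_dir; return None if it escapes."""
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, requested))
    # realpath collapses any /../ hops, so an escape attempt resolves to a
    # location outside base and fails the containment check below.
    if os.path.commonpath([base, target]) != base:
        return None
    return target
```

Checking the resolved path, rather than scanning the raw string for "..", also handles encoded or symlinked variants of the same trick.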

Page 6 of 223

Article details and sources

Elon Musk had a bad week in court
Source: The Verge (AI)

Christian content creators are outsourcing AI slop to gig workers on Fiverr
Source: The Verge (AI)

Pentagon tech chief says Anthropic is still blacklisted, but Mythos is a separate issue
May 1, 2026
Source: CNBC Technology

The Pentagon's chief technology officer stated that Anthropic remains classified as a supply chain risk (a designation meaning the company's technology is considered a threat to U.S. national security), but Anthropic's Mythos AI model, which has advanced capabilities for finding and fixing cyber vulnerabilities, is being treated as a separate and urgent national security issue that requires the Department of Defense to strengthen its networks. The DOD has blacklisted Anthropic from working with defense contractors, though the agency is reportedly using Mythos internally and is open to negotiating safeguards (called guardrails, or restrictions on how the AI can be used) if Anthropic agrees to terms similar to those negotiated with other AI companies.

The Download: a new Christian phone network, and debugging LLMs
May 1, 2026
Source: MIT Technology Review

Goodfire, a San Francisco startup, released Silico, a tool that uses mechanistic interpretability (a technique for understanding how AI models work by mapping their internal neurons and connections) to let researchers see inside AI models and adjust their parameters during training. The tool aims to give developers more control over AI behavior by exposing internal 'knobs and dials' so they can reduce unwanted outputs, making AI development more like traditional software engineering and less like trial and error.

Fix: The source describes Silico as the solution itself: it uses mechanistic interpretability to map neurons and pathways inside a model and lets developers tweak them to reduce unwanted behaviors or steer outputs. No mitigation steps beyond using this tool are mentioned in the text.

Careful Adoption of Agentic AI Services
May 1, 2026
Source: CISA Cybersecurity Advisories

CISA and international cybersecurity partners released guidance for organizations adopting agentic AI (AI systems that can take actions autonomously on behalf of users). The guidance identifies security challenges with these systems and provides steps for safely designing, deploying, and operating them while connecting AI risk management to existing cybersecurity practices.

Microsoft wants lawyers to trust its new AI agent in Word documents
Source: The Verge (AI)

Cisco Releases Open Source Tool for AI Model Provenance
May 1, 2026
Source: SecurityWeek

Organizations often use AI models from online repositories like Hugging Face without tracking their changes, verifications, or vulnerabilities, which can lead to security risks if models are poisoned (containing hidden malicious code) or contain training biases. Cisco released the Model Provenance Kit, an open source Python-based tool that creates a unique 'fingerprint' for each model using metadata and other signals, allowing organizations to compare models and trace their origins to address these tracking and accountability problems.

Fix: The Model Provenance Kit from Cisco is available on GitHub. The tool has two modes: 'compare' mode enables users to compare two models to identify shared lineage, and 'scan' mode attempts to find the closest lineage for a given model by comparing its fingerprint against Cisco's database of fingerprints. Cisco's dataset of base model fingerprints is also available on Hugging Face.

Human-centric failures: Why BEC continues to work despite MFA
Source: CSO Online

Fix: The source explicitly recommends: (1) redesigning approval workflows so high-value transactions require multi-step verification including out-of-band calls (verification methods using a separate communication channel, like a phone call to confirm an email request); (2) simulating BEC scenarios in realistic exercises to identify gaps in response and decision-making; (3) embedding security awareness into daily routines using micro-learning and real incident reviews; (4) empowering teams to challenge unusual requests without fear of reprisal; (5) sharing instances of successful attacks with employees who distribute invoices and oversee financial decisions; and (6) explicitly defining what constitutes high-risk requests, such as first-time payments, changes to vendor banking details, sudden payment requests from executives, or requests that bypass standard procedures.

Managing OT risk at scale: Why OT cyber decisions are leadership decisions
Source: CSO Online

Enterprise Spotlight: Transforming software development with AI
Source: CSO Online

Hugging Face, ClawHub Abused for Malware Distribution
Source: SecurityWeek

'Trivial' exploit can give attackers root access to Linux kernel
Source: CSO Online

Fix: Apply kernel patches from your Linux distribution as soon as they are released, and reboot systems after patching. According to the source, 'As soon as patches are available for what's been dubbed the Copy Fail logic bug... As of midday Thursday, only Arch Linux had released a patch,' but other distributions are expected to release patches within days. For Debian, Ubuntu, and Debian-based systems, the exploitable code can be disabled via kernel commands before patches are available, though this option is not feasible in large environments according to the source.

CVE-2026-31431: Linux Kernel Incorrect Resource Transfer Between Spheres Vulnerability
Source: CISA Known Exploited Vulnerabilities

The Linux kernel has a vulnerability where system resources are incorrectly transferred between different security zones, potentially allowing an attacker to gain elevated privileges (privilege escalation, meaning they can perform actions normally restricted to administrators). This vulnerability is currently being exploited in the wild.

Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.

Bank regulator sounds warning over cybersecurity threat posed by AI models
Apr 30, 2026
Source: CSO Online

Australia's financial regulator (APRA) warns that advanced AI models like Claude Mythos could give attackers powerful tools to find security flaws faster than banks can fix them, threatening the banking sector. The regulator found that banks treat AI as just another technology and lack proper processes to identify and patch vulnerabilities quickly enough to keep up with AI-assisted attacks. APRA calls for urgent overhauls to governance, vulnerability testing, and security assessment of AI platforms.

Fix: APRA identifies the following areas for improvement: (1) urgent need to more rapidly identify and remediate vulnerabilities through major process overhaul, (2) robust security testing across AI-generated code, software components, and libraries, and (3) deeper assessment of major AI platforms and services. The source also notes that regulators are requesting access to Claude Mythos itself so financial institutions can use it to defend against the cyberattacks it could enable.

Our evaluation of OpenAI's GPT-5.5 cyber capabilities
Source: Simon Willison's Weblog

CVE-2026-6543, CVE-2026-6542, CVE-2026-40687, CVE-2026-3345
Source: NVD/CVE Database