New tools, products, platforms, funding rounds, and company developments in AI security.
AI tools are making cybercrime easier by helping attackers write malicious code and automate attacks, while criminals also use deepfake technology (synthetic media that realistically mimics people) to impersonate others and commit scams. AI assistants that interact with external tools like email and web browsers pose serious security risks because their mistakes can have real-world consequences, especially when users hand over sensitive personal data to systems like OpenClaw.
This week's threat bulletin highlights attackers increasingly relying on trusted tools and overlooked vulnerabilities rather than novel exploits, with a shift toward quieter, longer-term access over disruptive attacks. Key incidents include a command injection flaw (CVE-2026-20841, a severity rating of 8.8 out of 10) in Windows Notepad that allows remote code execution through malicious Markdown links, over 510 advanced persistent threat operations (coordinated cyberattacks by nation-states or organized groups) targeting 67 countries with 173 focused on Taiwan, and two new information stealers (LTX Stealer and Marco Stealer) harvesting credentials and sensitive data from Windows systems.
Chinese AI companies have recently released open-weight models (AI models whose internal numerical parameters are publicly available for anyone to download and modify) that match Western AI performance at much lower costs, with DeepSeek's R1 and Alibaba's Qwen models becoming among the most downloaded globally. Unlike proprietary Western models like ChatGPT that users access through paid APIs (application programming interfaces, standardized ways for software to communicate), these Chinese open-source models allow developers to inspect, study, and modify the code themselves. If this trend continues, it could shift where AI innovation happens and who establishes industry standards worldwide.
Modern software systems create short-lived infrastructure (ephemeral workloads that exist briefly) much faster than we can manage the identities (digital credentials and access permissions) that control them, creating a dangerous security gap. The text highlights that non-human identities like service accounts and API keys now vastly outnumber human users, yet many organizations still use outdated manual processes to track and remove them, leaving "zombie identities" (old credentials that remain active after their purpose ends) with dangerous access levels. Test environments are particularly risky because they often have weak security controls and direct connections to production systems, making them attractive targets for attackers seeking backdoor access.
Microsoft discovered 9 security vulnerabilities in Windows Administrator Protection, with 5 traced to problems in UI Access implementation, a feature designed to let accessibility tools (like screen readers) interact with administrator-level windows while maintaining security boundaries. The vulnerability stems from how UI Access, which was created to bypass User Interface Privacy Isolation (UIPI, a security mechanism that prevents lower-privilege processes from controlling higher-privilege windows) for accessibility needs, could be abused to escalate privileges.
State-backed hackers from China, Iran, North Korea, and Russia are using Google's Gemini AI model to help carry out cyberattacks at every stage, from gathering target information to creating phishing emails and writing malware code. Criminal groups are also exploiting AI tools for social engineering attacks and building malware that uses AI to generate code automatically. Additionally, attackers are attempting model extraction and knowledge distillation (copying an AI model's decision-making by querying it repeatedly) to replicate Gemini's functionality for their own purposes.
Criminals are increasingly targeting software developers as a weak point in company security, exploiting their access to source code and cloud systems rather than just finding bugs in applications. Attackers use multiple tactics including malicious open-source packages (libraries of reusable code), compromised development environments (where programmers write code), and fake job applications to gain insider access. Over 454,000 malware-infected open-source packages were discovered in 2025 alone, and developers repeatedly download vulnerable versions of tools like Log4j, expanding their exposure to known security weaknesses.
SSHStalker is a botnet that compromises Linux servers by brute-forcing weak SSH passwords (a method of repeatedly guessing login credentials), affecting at least 7,000 machines by January. The botnet combines old IRC (Internet Relay Chat, a text communication protocol) tactics with modern automation to deploy malware, rootkits (software that gives attackers deep system access), and exploits, though it hasn't yet been used for financial gain. Security experts emphasize that the attack succeeds because organizations neglect basic security practices like strong authentication and patching old vulnerabilities.
A North Korean hacking group called UNC1069 is targeting cryptocurrency companies using AI tools, including LLMs (large language models, which are AI systems trained on huge amounts of text), deepfakes (fake videos or images created by AI), and a technique called ClickFix (a social engineering scam that tricks users into downloading malware by posing as tech support). The group has shifted focus from attacking traditional banks to targeting Web3 companies, which are blockchain-based services in the cryptocurrency space.
OpenAI now allows developers to use Skills (reusable code packages) directly in the OpenAI API through a shell tool, with the ability to upload Skills as compressed files or send them inline as base64-encoded zip data (a way of encoding binary files as text) within JSON requests. The example shows how to create an API call that uses a custom skill to count words in a file, making it easier to extend AI capabilities with custom tools.
GLM-5 is a new, very large open-source AI model (754 billion parameters, which are the adjustable values that make up a neural network) released under the MIT license, making it twice the size of its predecessor GLM-4. The source discusses how developers are increasingly using the term 'agentic engineering' (building software systems where AI acts autonomously to complete multi-step tasks) to describe professional software development with large language models.
Local law enforcement agencies receive "free" surveillance tools like automated license plate readers (ALPRs, cameras that automatically read vehicle plates), facial recognition, and drones from vendors and federal agencies, but this comes at the cost of eroding civil liberties and creating data pipelines to agencies like ICE that can expose people to harm. The article explains that "free" surveillance technology often operates without public oversight through pilot programs and continued vendor support, allowing data collection on people's movements to happen without their knowledge or consent. Cities are urged to reject these programs or, if they proceed, implement oversight mechanisms like public hearings, transparency requirements, and clear use policies before deploying any surveillance tools.
This article discusses how organizations should choose modern SIEM (security information and event management, a system that collects and analyzes security data from across an organization) platforms designed for the 'agentic era' where AI agents automate security tasks. Rather than maintaining fragmented legacy tools, companies should adopt unified, cloud-native platforms that combine data collection, analytics, and response capabilities, enabling both human analysts and AI to detect threats faster and respond more effectively.
The QuitGPT movement is a growing campaign where users are canceling their ChatGPT subscriptions due to frustration with the chatbot's capabilities and communication style, with complaints flooding social media platforms in recent weeks. The article also covers several other tech stories, including potential cost competitiveness of electric vehicles in Africa by 2040, social media companies agreeing to independent safety assessments for teen mental health protection, and regulatory decisions affecting vaccine development.
Mrinank Sharma, a researcher who led AI safety efforts at Anthropic (a company focused on making AI systems safer and aligned with human values), resigned with a warning that "the world is in peril" due to interconnected crises including AI risks and bioweapons. Sharma said he observed that even safety-focused companies like Anthropic struggle to let their core values guide their actions when facing business pressures, and he plans to pursue poetry and writing in the UK instead.
Palo Alto Networks acquired CyberArk for $25 billion to strengthen its ability to manage privileged access (controlling who can access sensitive systems and accounts) across human, machine, and AI identities through a unified platform. This addresses a critical security gap because identity has become the primary target in enterprise attacks, especially with the rise of AI agents (autonomous software that performs tasks independently) that operate 24/7 with broad permissions. The integration aims to help organizations prevent credential-based attacks and reduce breach response time by up to 80%.
Fix: Microsoft patched the Notepad command injection flaw as part of its monthly Patch Tuesday update this week.
The Hacker NewsOpenClaw is a popular open-source AI agent orchestration tool (software that coordinates multiple AI agents to complete tasks) that runs locally and can connect to apps like WhatsApp, Gmail, and smart home devices, but security researchers have found it to be critically insecure by default. Over 42,000 exposed instances have been discovered with authentication bypass vulnerabilities (weaknesses that let attackers skip login requirements) and potential remote code execution (RCE, where attackers can run commands on affected systems), exposing organizations to data breaches, credential theft, and regulatory violations.
Fix: Rich Mogull, chief analyst at Cloud Security Alliance, recommends that "CISOs prohibit its use altogether." He states: "The answer has to be 'no.' There is no security model."
CSO OnlineFix: According to Flare researcher Assaf Morag, SSHStalker can be stopped by: (1) disabling SSH password authentication and replacing it with SSH-key based authentication, or hiding password logins behind a VPN; (2) implementing SSH brute-force rate limiting (slowing down repeated login attempts); (3) monitoring who is trying to access internet-connected Linux servers; and (4) limiting remote access to servers to specific IP ranges. Security experts also recommend: killing password-based SSH access entirely and moving to key-based authentication or solutions with short-lived credentials or identity-aware proxies; aggressively inventorying IT assets; prioritizing patching of known vulnerabilities; ensuring no compilers on production servers; alerting on IRC-like traffic; implementing cron/systemd integrity monitoring on Linux servers; and creating a legacy Linux eradication plan.
CSO OnlineCompanies are using hidden instructions embedded in 'Summarize with AI' buttons to manipulate enterprise chatbots through a technique called AI recommendation poisoning (tricking an AI by hiding instructions in its input that make it remember false preferences). Microsoft research found 50 examples of this technique deployed by 31 companies, where users unknowingly click a summarize button that secretly tells their AI to favor that company's products in future responses. This is particularly dangerous because the AI cannot distinguish genuine user preferences from injected ones, potentially leading to biased recommendations on critical topics like health, finance, and security.
Fix: Microsoft states that 'the technique is relatively easy to spot and block.' For individual users, this involves studying the saved information a chatbot has accumulated (though the source notes that how this is accessed varies by AI). For enterprise admins, the source text is incomplete but indicates there are admin-level protections available. Microsoft also notes that its Microsoft 365 Copilot and Azure AI services contain integrated protections against this technique.
CSO OnlineOpenClaw is a tool that lets users create AI personal assistants by connecting large language models (LLMs, or AI systems trained on huge amounts of text) to external tools like email and file systems, but this creates serious security risks. When AI assistants have access to sensitive data and the ability to take actions in the real world, mistakes by the AI or attacks by hackers could expose private information or cause damage. The biggest concern is prompt injection (tricking an AI by hiding malicious instructions in text or images it reads), which could let attackers hijack the assistant and steal the user's data.
Fix: The source mentions two existing approaches: some users are running OpenClaw agents on separate computers or in the cloud to protect data on their main hard drives from being erased, and other vulnerabilities could be fixed using tried-and-true security approaches. However, the text does not provide specific implementation details or explicit solutions for the prompt injection vulnerability that experts identified as the main risk.
MIT Technology ReviewFix: The source explicitly recommends that cities implement oversight mechanisms before using surveillance tools: "public hearings, competitive bidding, public records transparency, and city council supervision" along with "basic safeguards like use policies, audits, and consequences for misuse." The source also states that "cities can and should use their power to reject federal grants, vendor trials, donations from wealthy individuals, or participation in partnerships that facilitate surveillance" as a primary approach.
EFF Deeplinks BlogSkills (tools that extend AI capabilities) can be secretly backdoored using invisible Unicode characters (special hidden text markers that certain AI models like Gemini and Claude interpret as instructions), which can survive human review because the malicious code is not visible to readers. The post demonstrates this supply chain attack (where malicious code enters a system through a trusted source) and presents a basic scanner tool that can detect such hidden prompt injection (tricking an AI by hiding instructions in its input) attacks.
Fix: The source mentions that the author 'had my agent propose updates to OpenClaw to catch such attacks,' but does not explicitly describe what those updates are or provide specific implementation details for the mitigation strategy.
Embrace The Red