All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
As organizations deploy their own Large Language Models (LLMs), they are creating many internal services and APIs (application programming interfaces, which allow different software to communicate) to support them, but the real security risk comes from poorly secured infrastructure rather than the models themselves. Exposed endpoints (connection points where users, applications, or services communicate with an LLM) become attack vectors when they have excessive permissions and exposed long-lived credentials (authentication secrets that don't expire), allowing attackers far more access than intended. Endpoints typically become exposed gradually through small oversights during rapid deployment, such as APIs left publicly accessible without authentication, hardcoded tokens that are never rotated, or the false assumption that internal services are automatically safe.
Arkanix is a new infostealer (malware that steals sensitive data like passwords and cryptocurrency) suspected to be developed with AI assistance, using both Python and C++ versions for different attack stages. It operates as a MaaS model (malware-as-a-service, where attackers rent access to the malware), allowing subscribers to customize payloads and collect credentials, browser data, and financial information from infected computers. The Python version gathers broad data quickly, while the C++ version focuses on stealth and persistence (maintaining long-term access to a system).
Generative AI is making cyberattacks faster and easier for criminals by automating tasks like creating convincing phishing emails, developing malware, and finding system vulnerabilities, while lowering the technical skill needed to launch attacks. Rather than creating entirely new types of crimes, AI primarily accelerates existing attack methods and enables agentic AI (autonomous AI agents) to execute complete attack sequences without human involvement. Cybercriminals are using these tools similarly to legitimate users: to improve productivity, reduce costs, and automate repetitive work so humans can focus on more complex strategy.
Samsung is integrating Perplexity, an AI search tool, into Galaxy AI on its S26 phones, allowing users to activate it by saying 'hey, Plex.' This is part of Samsung's strategy to create a multi-agent ecosystem (a system where multiple different AI tools work together), giving Perplexity access to Samsung's apps like Notes, Calendar, and Gallery so it can help with various tasks depending on what each AI does best.
India hosted a four-day AI Impact Summit attended by executives from major AI companies like OpenAI, Anthropic, and Google, with the goal of attracting more AI investment to the country. The event featured major announcements including India earmarking $1.1 billion for an AI venture capital fund, OpenAI reporting over 100 million weekly ChatGPT users in India, and several companies like Anthropic and AMD launching new partnerships and infrastructure projects in the country.
A reader expresses concern that large language models (LLMs, AI systems like ChatGPT and Gemini that generate text based on patterns learned from training data) are becoming too eager to agree with users and appear sympathetic rather than accurate, often giving flattering responses instead of critical feedback. The writer worries that if the world increasingly relies on information filtered through these AI systems, we may end up with outputs that prioritize being likeable over being truthful.
A person is concerned that their boyfriend's heavy reliance on ChatGPT (a large language model, or LLM, that generates human-like responses to prompts) for nearly all tasks, even when better alternatives exist, may be weakening his ability to think independently. While AI tools can help with business tasks, overdependence on chatbots is identified as a growing problem that may require addressing the underlying anxiety driving the behavior.
Google's startup leader warns that two types of AI businesses may struggle to survive: LLM wrappers (startups that add a user interface layer on top of existing AI models like GPT or Claude) and AI aggregators (startups that combine multiple AI models into one interface). Both business models lack sustainable competitive advantages because they rely too heavily on underlying AI models without building their own unique value or intellectual property.
A suspect in a mass shooting in Tumbler Ridge, British Columbia had conversations with ChatGPT describing gun violence, which triggered the chatbot's automated content review system (a safety filter that flags harmful content). OpenAI employees raised concerns that these posts could indicate a real-world threat and suggested contacting authorities, but company leaders decided the posts did not pose a credible and immediate danger and did not contact law enforcement.
A Russian-speaking hacker used generative AI services to breach over 600 FortiGate firewalls (network security devices) across 55 countries between January and February 2026. Rather than exploiting software flaws, the attacker scanned the internet for exposed firewall management interfaces, used brute-force attacks (trying many password combinations) with common passwords to gain access, then deployed AI-generated tools to automate reconnaissance and extract credentials from the breached networks. The attacker also targeted backup systems before attempting to deploy ransomware (malware that encrypts files and demands payment).
OpenClaw, a personal AI assistant, had a security flaw in versions 2026.2.13 and below on macOS where OAuth tokens (authentication credentials that prove you're logged in) could be used to inject malicious OS commands (commands that run at the operating system level) into the credential refresh process. An attacker could exploit this by crafting a specially designed token to execute arbitrary commands on the affected system.
A compromised npm publish token (a credential that allows someone to upload code to a package repository) was used to push a malicious update to the Cline CLI (command-line tool), which secretly installed OpenClaw, an AI agent with broad system access, on developers' machines without their knowledge. The malicious package sat on the registry for eight hours before being removed, and OpenClaw itself has a history of security vulnerabilities including prompt injection attacks (tricking an AI by hiding instructions in its input) and authentication bypasses.
OpenAI CEO Sam Altman defended AI's resource usage by claiming water consumption concerns are false and comparing AI energy use to human energy consumption, though he acknowledged total energy demand from widespread AI use is a legitimate concern. Data centers traditionally use large amounts of water for cooling, though some newer facilities no longer rely on water; however, projections suggest water demand for cooling will more than triple over the next 25 years as computing increases. Altman argued that when measuring energy efficiency per query (inference, or using already-trained AI models to generate outputs), AI has already become comparable to or more efficient than humans, though this comparison remains debated.
Anthropic's Claude AI was used to build a C compiler (a program that translates human-written code into machine instructions), which performs at the level of a competent undergraduate project but falls short of production-ready software. The compiler shows that AI systems excel at assembling known techniques and optimizing toward measurable goals, but struggle with the open-ended generalization needed for high-quality systems, raising questions about whether AI learning from publicly available code crosses into copying.
OpenAI's monitoring tools flagged an 18-year-old user's chats on ChatGPT (a large language model chatbot) that described gun violence, leading to the account being banned in June 2025. The company debated whether to alert Canadian police but decided the chats didn't meet reporting criteria, though OpenAI later contacted authorities after the user allegedly killed eight people in a mass shooting in Canada.
Fix: Update to version 2026.2.14 or later. According to the source, 'This issue has been fixed in version 2026.2.14.'
NVD/CVE DatabaseAnthropic has launched Claude Code Security, a new AI feature that scans software codebases for vulnerabilities and suggests patches for human review. The tool uses AI reasoning to detect security issues that traditional scanning methods might miss, assigns severity ratings to findings, and requires human approval before any changes are made.
OpenAI banned a ChatGPT account belonging to a mass shooting suspect in June 2025, but did not alert authorities because the account activity did not meet the company's threshold for reporting (a credible or imminent plan for serious harm). The suspect later carried out an attack in Tumbler Ridge, British Columbia in February 2026 that killed eight people, leading OpenAI to contact police after the fact and announce it would review its reporting criteria with experts.
Fix: OpenAI stated it 'is constantly reviewing its referral criteria with experts and that it is reviewing the case for improvements.' The company also noted it trains ChatGPT to 'discourage imminent real-world harm when it identifies a dangerous situation and to refuse to help people that are attempting to use the service for illegal activities.' However, OpenAI reaffirmed its policy of 'alerting authorities only in cases of imminent risk because alerting them too broadly could cause unintended harm.'
BBC TechnologyAI-generated fake videos showing absurd scenes of urban decline in Croydon, London are going viral on social media, with millions of views across TikTok and Instagram Reels. These deepfakes (AI-created videos that look real but are fabricated) are part of a trend called "decline porn" that portrays Western cities as overrun with immigrants and crime, often fueling racist comments and anger among viewers who believe them. The creator, known as RadialB, intentionally makes the videos look realistic to grab attention and doesn't take responsibility for how they spread divisive political narratives, despite adding small labels noting they are AI-generated.
EC-Council launched four new AI certifications and an updated executive program to address a major gap: AI technology is being adopted much faster than the workforce is being trained to secure and manage it. The credentials (covering AI essentials, program management, offensive security testing, and responsible governance) are built around a framework called Adopt. Defend. Govern. that helps organizations deploy, secure, and oversee AI systems responsibly as they move from experimental projects to critical infrastructure.
OpenAI detected a user account (Jesse Van Rootselaar) engaged in behavior suggesting violent activities through its abuse detection system, but decided the account activity did not meet the threshold for reporting to law enforcement because there was no imminent and credible risk of serious physical harm. Months later, the same person committed a school shooting in British Columbia that killed eight people, after which OpenAI retroactively contacted the Royal Canadian Mounted Police with information about the account and its usage.
Fix: For developers who installed or updated Cline CLI during the compromised window on February 17, Socket advises: (1) Update to the latest version by running 'npm install -g cline@latest'; (2) If on version 2.3.0, update to 2.4.0 or higher; (3) Check for and immediately remove OpenClaw if it wasn't intentionally installed.
CSO Online