All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
This research examines why individuals do not widely adopt personal cyber insurance, which covers remaining risks that preventive security measures cannot stop. Using survey data from 301 U.S. residents and analyzing cognitive factors through fsQCA (fuzzy-set qualitative comparative analysis, a method that identifies different combinations of conditions leading to the same outcome), the study finds that different psychological and behavioral factors lead people to either adopt or reject cyber insurance in ways that differ from previous research.
The QuitGPT movement is a growing campaign where users are canceling their ChatGPT subscriptions due to frustration with the chatbot's capabilities and communication style, with complaints flooding social media platforms in recent weeks. The article also covers several other tech stories, including potential cost competitiveness of electric vehicles in Africa by 2040, social media companies agreeing to independent safety assessments for teen mental health protection, and regulatory decisions affecting vaccine development.
North Korean threat actor UNC1609 is using ClickFix (a social engineering technique where attackers trick users into running malicious commands) combined with AI-generated videos to target cryptocurrency companies. The attackers impersonate industry contacts via compromised Telegram accounts, conduct fake video meetings, and convince victims to paste commands into their macOS Terminal, which downloads and executes malware including multiple undocumented backdoors (WAVESHAPER, HYPERCALL, HIDDENCALL, and others) that steal sensitive data and establish remote access.
Children in England are being exposed to ads for weight loss drugs, diet products, and cosmetic procedures online despite such advertising being banned, according to a report by the children's commissioner. The ads are harmful to young people's self-esteem and body image, prompting calls for stronger regulation of social media platforms and better enforcement of existing rules.
The cybersecurity industry is projected to identify over 59,000 vulnerabilities (CVEs, which are publicly disclosed software security flaws) in 2026, potentially reaching 118,000 under worst-case scenarios. However, experts warn that the sheer number of vulnerabilities does not directly reflect actual risk, since historically only a small fraction are ever exploited in real attacks, and most don't meaningfully impact most organizations. The surge reflects better discovery and reporting processes rather than worse software quality, creating a signal-to-noise problem that challenges security teams to prioritize which vulnerabilities actually matter.
Breach & Attack Simulation (BAS) tools are software that automatically tests how well a company's security controls work by simulating different types of attacks, such as phishing, malware, and network infiltration. Unlike penetration testing (where security experts try to break in), BAS continuously checks that security systems are functioning as designed. The BAS market is growing, especially in regulated industries like banking, and is increasingly incorporating generative AI (machine learning models that create new content) to improve user interfaces and help organizations prioritize security problems.
LangChain (a framework for building AI agents and applications powered by large language models) versions before 1.2.11 have a vulnerability where the ChatOpenAI.get_num_tokens_from_messages() method doesn't validate image URLs, allowing attackers to perform SSRF attacks (server-side request forgery, where an attacker tricks a server into making unwanted requests to other systems). This vulnerability was fixed in version 1.2.11.
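The SSRF risk here comes from fetching attacker-supplied image URLs without validation. As a general mitigation sketch (not LangChain's actual patch; the function name and policy are assumptions), a caller could resolve each URL's host and refuse private, loopback, or link-local addresses before any request is made:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_image_url(url: str) -> bool:
    """Reject URLs usable for SSRF: non-HTTP schemes, and hosts that
    resolve to internal address ranges. Illustrative policy only."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False  # blocks file://, gopher://, etc.
    host = parsed.hostname
    if host is None:
        return False
    try:
        # Resolve the name: an attacker can point a public DNS name
        # at an internal IP, so checking the hostname alone is not enough.
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

Note that resolving and then fetching separately still leaves a window for DNS rebinding; a stricter design pins the resolved IP and uses it for the actual request.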
Microsoft released 60 security fixes in February 2026 Patch Tuesday, including six actively exploited vulnerabilities. Three of these are security feature bypasses (CVE-2026-21510, CVE-2026-21513, CVE-2026-21514) that let attackers trick users into opening malicious files to execute code and bypass protections like Windows SmartScreen, while two allow privilege escalation (CVE-2026-21519, CVE-2026-21533). The good news is that all six are resolved by the regular Microsoft patches for Windows and Office, with no additional configuration steps required after patching.
LlamaIndex version 0.14.14 is a maintenance release that fixes multiple bugs across core components and integrations, including issues with error handling in vector store queries, compatibility with deprecated Python functions, and empty responses from language models. The release also adds new features like a TokenBudgetHandler for cost governance and improves security defaults in core components. Several integrations with external services (OpenAI, Google Gemini, Anthropic, Bedrock) were updated to support new models and fix compatibility issues.
This item appears to be a navigation menu or promotional content from GitHub showing various AI development tools and features, including GitHub Copilot (an AI coding assistant), GitHub Spark (for building AI apps), and other GitHub services. The reference to 'langchain-core==1.2.11' suggests a specific version of LangChain (a framework for building applications with language models), but no technical issue, vulnerability, or problem is described in the provided content.
FastGPT (an AI platform for building AI agents) versions 4.14.0 to 4.14.5 have a vulnerability where attackers can access the plugin system without authentication by directly calling certain API endpoints, potentially crashing the plugin system and causing users to lose their plugin installation data, though not exposing sensitive keys. This vulnerability has a CVSS score (a 0-10 rating of how severe a vulnerability is) of 6.9, which is considered medium severity.
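The flaw class here is an endpoint that does work before any authentication check. A minimal, framework-agnostic sketch of the missing guard (all names hypothetical; this is not FastGPT's actual code) is a decorator that verifies a credential before the handler runs:

```python
import hmac

API_TOKEN = "example-secret"  # in practice, loaded from configuration

def require_auth(handler):
    """Reject requests lacking a valid bearer token before the plugin
    endpoint does any work. Hypothetical illustration of the fix class."""
    def wrapped(request: dict):
        supplied = request.get("headers", {}).get("Authorization", "")
        # compare_digest avoids leaking the token length via timing.
        if not hmac.compare_digest(supplied, f"Bearer {API_TOKEN}"):
            return {"status": 401, "body": "unauthorized"}
        return handler(request)
    return wrapped

@require_auth
def list_plugins(request: dict):
    # Only reachable once the guard above has passed.
    return {"status": 200, "body": ["plugin-a", "plugin-b"]}
```

The point is placement: authentication happens in one shared wrapper, so adding a new endpoint cannot silently skip the check the way a directly callable API route can.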
CVE-2026-21523 is a time-of-check time-of-use (TOCTOU) race condition (a vulnerability where an attacker exploits the gap between when a system checks permissions and when it uses a resource) in GitHub Copilot and Visual Studio that allows an authorized attacker to execute code over a network. The vulnerability has not yet received a CVSS severity rating from NIST.
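The TOCTOU class is easiest to see in a classic file-system example (a generic POSIX sketch, unrelated to Copilot's internals): a separate permission check followed by open() leaves a window in which the checked path can be swapped for a symlink, while a single atomic open closes it:

```python
import os

def read_config_vulnerable(path: str) -> str:
    # TOCTOU pattern: between os.access() (the check) and open() (the use),
    # another process can replace `path` with a symlink to a sensitive file.
    if os.access(path, os.R_OK):
        with open(path) as f:
            return f.read()
    raise PermissionError(path)

def read_config_safer(path: str) -> str:
    # One atomic operation: open directly and let the OS enforce access,
    # refusing to follow symlinks so a swapped-in link is rejected (POSIX).
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    with os.fdopen(fd) as f:
        return f.read()
```

The general rule: act on the resource in one step and handle failure, rather than checking first and acting second.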
CVE-2026-21518 is a command injection vulnerability (a flaw where attackers can insert malicious commands into user input) in GitHub Copilot and Visual Studio Code that allows an unauthorized attacker to bypass security features over a network. The vulnerability stems from improper handling of special characters in commands. No CVSS severity score (a 0-10 rating of how serious a vulnerability is) has been assigned yet by NIST.
GitHub Copilot contains a command injection vulnerability (CVE-2026-21516), which is a flaw where special characters in user input are not properly filtered, allowing an attacker to execute code remotely on a system. The vulnerability was reported by Microsoft Corporation, and its CVSS score is pending assessment.
CVE-2026-21257 is a command injection vulnerability (a flaw where attackers can insert malicious commands into an application) found in GitHub Copilot and Visual Studio that allows an authorized attacker to gain elevated privileges over a network. The vulnerability stems from improper handling of special characters in commands. As of the source date, a CVSS severity score (a 0-10 rating of how severe a vulnerability is) had not yet been assigned by NIST.
CVE-2026-21256 is a command injection vulnerability (a flaw where attackers can sneak malicious commands into input that a program then executes) found in GitHub Copilot and Visual Studio that allows unauthorized attackers to run code over a network. The vulnerability stems from improper handling of special characters in commands, which means the software doesn't properly filter or neutralize dangerous input before using it.
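The command injection CVEs above share one root cause: attacker-influenced text reaching a shell unneutralized. A generic Python illustration of the flaw class and its usual fix (hypothetical helper names, not Microsoft's code):

```python
import subprocess

def grep_logs_vulnerable(pattern: str) -> str:
    # Special characters in `pattern` (e.g. "; rm -rf ~") are interpreted
    # by the shell, so attacker-controlled input becomes attacker-run code.
    return subprocess.run(f"grep {pattern} app.log", shell=True,
                          capture_output=True, text=True).stdout

def grep_logs_safe(pattern: str) -> str:
    # An argument list bypasses the shell entirely: the pattern reaches
    # grep as a single literal argument and is never parsed as commands.
    return subprocess.run(["grep", "--", pattern, "app.log"],
                          capture_output=True, text=True).stdout
```

When a shell truly is required, each interpolated argument should be escaped with `shlex.quote`, but passing an argument list is the more robust default.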
QuitGPT is a campaign urging people to cancel their ChatGPT Plus subscriptions, citing concerns about OpenAI president Greg Brockman's donation to a political super PAC and the use of ChatGPT-4 by US Immigration and Customs Enforcement for résumé screening. The campaign, which began in late January and has garnered over 36 million Instagram views, asks supporters to either cancel their subscriptions, commit to stop using ChatGPT, or share the campaign on social media, with organizers hoping that enough canceled subscriptions will pressure OpenAI to change its practices.
Skills (tools that extend AI capabilities) can be secretly backdoored using invisible Unicode characters (special hidden text markers that certain AI models like Gemini and Claude interpret as instructions), which can survive human review because the malicious code is not visible to readers. The post demonstrates this supply chain attack (where malicious code enters a system through a trusted source) and presents a basic scanner tool that can detect such hidden prompt injection (tricking an AI by hiding instructions in its input) attacks.
Fix: The source mentions that the author 'had my agent propose updates to OpenClaw to catch such attacks,' but does not explicitly describe what those updates are or provide specific implementation details for the mitigation strategy.
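The scanner idea described above can be sketched in a few lines (a minimal version of the concept, not the author's tool): flag any Unicode format character (category "Cf", which covers zero-width characters, bidi controls, and the BOM) plus the tag block U+E0000-U+E007F, whose codepoints mirror ASCII but render as nothing in most viewers:

```python
import unicodedata

def find_hidden_chars(text: str) -> list[tuple[int, str]]:
    """Return (index, codepoint) pairs for characters that are invisible
    to a human reviewer but may be read as instructions by a model."""
    hits = []
    for i, ch in enumerate(text):
        cp = ord(ch)
        if unicodedata.category(ch) == "Cf" or 0xE0000 <= cp <= 0xE007F:
            hits.append((i, f"U+{cp:04X}"))
    return hits
```

Running this over a skill's files before installation surfaces smuggled payloads that a visual diff review would miss.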
Embrace The Red

Researchers discovered a new attack called CHAI (Command Hijacking against embodied AI) that tricks AI systems controlling robots and autonomous vehicles by embedding fake instructions in images, such as misleading road signs. The attack exploits Large Visual-Language Models (LVLMs, which are AI systems that understand both images and text together) to make these embodied AI systems (robots that perceive and interact with the physical world) ignore their real commands and follow the attacker's hidden instructions instead. The researchers tested CHAI on drones, self-driving cars, and real robots, showing it works better than previous attack methods.
Fix: Dame Rachel's report suggested several explicit solutions: amending the Online Safety Act (OSA, a set of laws requiring online platforms to keep users safe) to include a "clear duty of care" for social media platforms to stop showing adverts to children; adding changes to Ofcom's Children's Code of Practice to "explicitly protect children from body stigma content"; and strengthening regulation and enforcement of online sales of age-restricted products. The government is also considering "bold measures to protect children online", including potentially banning social media for under 16s, according to a government spokesperson quoted in the article.
BBC Technology

Fix: Update LangChain to version 1.2.11 or later. The vulnerability is fixed in 1.2.11.
NVD/CVE Database

Fix: Apply the regular Microsoft patches for Windows and Office released in the February 2026 Patch Tuesday update. According to the source, these patches resolve all six actively exploited vulnerabilities and require no post-patch configuration steps.
CSO Online

Fix: Users should update to version 0.14.14. The release notes explicitly mention: "Fix potential crashes and improve security defaults in core components (#20610)" and include specific bug fixes such as "fix(agent): handle empty LLM responses with retry logic" (#20596) and "Fix DeprecationWarning: 'asyncio.iscoroutinefunction' is deprecated" (#20517).
LlamaIndex Security Releases

Fix: This vulnerability is fixed in version 4.14.5-fix. Users should upgrade to this patched version.
NVD/CVE Database

Most Fortune 500 companies now use AI agents (software that can act and make decisions with minimal human input), but many lack visibility into how many agents are running and what data they access, creating security risks. The report recommends applying Zero Trust security principles (requiring strong identity verification and giving users/agents only the minimum access they need) to AI agents the same way organizations do for human employees.