All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Threat actors are spreading GlassWorm malware through Open VSX extensions (add-ons for the VS Code editor) by abusing dependency relationships, a feature that automatically installs other extensions when one is installed. Instead of hiding malware in every extension, attackers create legitimate-looking extensions that gain user trust, then update them to depend on separate extensions containing the malware loader, making the attack harder to detect.
Fix: As of March 13, Open VSX has removed the majority of the transitively malicious extensions. Socket researchers recommend treating extension dependencies with the same scrutiny typically applied to software packages, monitoring extension updates, auditing dependency relationships, and restricting installation to trusted publishers where possible.
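Socket's advice to audit extension dependency relationships lends itself to a simple automated check. A minimal sketch (the manifest layout mirrors the `extensionDependencies` field used by VS Code / Open VSX manifests, but the allowlist and extension names here are hypothetical):

```python
import json

# Hypothetical publishers the organization has vetted.
TRUSTED_PUBLISHERS = {"ms-python", "esbenp"}

def flag_untrusted_dependencies(manifest_json: str) -> list[str]:
    """Return dependency IDs whose publisher is not on the allowlist.

    Extension IDs follow the "publisher.name" convention used by
    VS Code / Open VSX manifests.
    """
    manifest = json.loads(manifest_json)
    flagged = []
    for dep in manifest.get("extensionDependencies", []):
        publisher = dep.split(".", 1)[0]
        if publisher not in TRUSTED_PUBLISHERS:
            flagged.append(dep)
    return flagged

# A legitimate-looking extension that quietly pulls in a loader
# via its dependency list (the attack pattern described above).
manifest = json.dumps({
    "name": "pretty-formatter",
    "publisher": "ms-python",
    "extensionDependencies": ["ms-python.python", "shady-pub.loader"],
})
print(flag_untrusted_dependencies(manifest))  # ['shady-pub.loader']
```

Running a check like this against installed extensions on each update would surface the update-then-add-dependency pattern the attackers rely on.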
CSO Online
OpenAI is developing an "adult mode" for ChatGPT that will allow users to generate text conversations with adult themes, described as "smut" rather than pornography. The feature will initially support only text and will not generate images, voice, or video content. OpenAI claims to have reduced "serious mental health issues" in its AI model enough to safely relax safety restrictions (the guardrails that prevent the AI from producing certain types of content) for this feature.
This article discusses how the Chief Security Officer (CSO) and Chief Information Security Officer (CISO) roles have evolved from technical positions focused on perimeter defense (protecting network boundaries) into strategic leadership roles reporting to CEOs, where leaders must now govern emerging risks like shadow AI (unauthorized AI tools used without approval) and generative AI while also acting as business enablers rather than blockers. Modern CSOs are expected to balance security with business continuity, address regulatory compliance strategically, and help organizations achieve their goals rather than simply prevent risks.
Wing FTP Server has a vulnerability where error messages reveal sensitive information when users send an overly long value in the UID cookie (a small piece of data the browser stores to identify the user's session). This flaw is being actively exploited by attackers in real-world attacks.
OpenAI confirmed that ChatGPT ads are currently only available in the United States, despite privacy policy updates that mentioned ads leading some users to speculate about a global rollout. The company is taking a deliberate, phased approach to expand ads gradually and learn from real-world use before rolling out more widely. ChatGPT ads are personalized based on user queries, appear only to logged-in Free and Go plan users in the US, and are not shown to users under 18 or those who request to opt out.
Agentic engineering is the practice of developing software with the help of coding agents, which are AI tools that can write and execute code in a loop to achieve a goal. Rather than replacing human engineers, these agents handle code generation while humans focus on the higher-level work: defining problems clearly, choosing among different solutions, and verifying that the results are correct and robust. To get good results from coding agents, engineers need to provide them with proper tools, specify problems in sufficient detail, and deliberately update instructions based on what they learn from each iteration.
This talk covers how software developers are adopting AI coding agents, from simple question-asking with ChatGPT to agents writing entire programs. The speaker emphasizes that trusting AI output (like Claude Opus) requires pairing it with test-driven development (TDD, a practice where you write tests before the actual code) and manual testing, since automated tests alone don't guarantee the software will actually run correctly.
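The test-first discipline the speaker pairs with AI output is easy to illustrate in miniature. A toy sketch (the function and its tests are invented for illustration, not taken from the talk):

```python
import re

# Step 1: write the test before the implementation exists, so the
# AI-generated code has a concrete target to satisfy.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"

# Step 2: write (or have the agent write) just enough code to pass.
def slugify(text: str) -> str:
    """Lowercase, strip punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

test_slugify()  # an AssertionError here would mean the code is wrong
print("tests passed")
```

The tests encode the intended behavior independently of the generated code, which is what makes the output trustworthy; manual testing then covers what the assertions cannot.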
Major AI infrastructure projects like OpenAI's Stargate datacentre (a massive computing facility where AI systems run) are facing financial and timeline challenges, with OpenAI backing away from parts of a planned $500 billion expansion in Texas. The article suggests that massive investments in datacentres and AI chips represent a significant economic gamble, with the UK potentially at particular risk if this 'AI bubble' deflates.
Microsoft is planning to release Gaming Copilot, an AI assistant that helps players when they get stuck in games, on current-generation Xbox consoles later this year. The assistant, which responds to voice commands, has already been tested in beta versions on Xbox's mobile app, Windows 11, and Xbox Ally handhelds, and Microsoft plans to expand it to additional gaming services.
LibreChat, a ChatGPT alternative with extra features, has a vulnerability in versions before 0.8.3-rc1 where an authenticated attacker can crash the server by sending malformed requests to a specific endpoint. The bug occurs because the code tries to extract data from a request without first checking that it exists, causing an unhandled TypeError (a runtime error raised when code operates on a value of the wrong type) that shuts down the entire Node.js server process.
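The failure mode, an unguarded extraction from a possibly missing payload, is easy to reproduce in miniature. The sketch below is in Python rather than LibreChat's Node.js, and the handler names are invented, but the crash pattern is the same: extracting from a value that turns out to be None raises a TypeError that, if unhandled, takes down the whole process.

```python
def handle_request_unsafe(body: dict):
    # Mirrors the bug class: assumes the field is always present.
    # body.get("files") is None for a malformed request, and
    # unpacking None raises an unhandled TypeError.
    first, *rest = body.get("files")
    return first

def handle_request_safe(body: dict):
    # The fix pattern: validate before extracting, and answer with
    # a client error instead of crashing the server process.
    files = body.get("files")
    if not isinstance(files, list) or not files:
        return ("400 Bad Request", None)
    first, *rest = files
    return ("200 OK", first)

try:
    handle_request_unsafe({})  # malformed request: no "files" key
except TypeError as exc:
    print(f"unhandled in a real server -> process exit: {exc}")

print(handle_request_safe({}))                    # ('400 Bad Request', None)
print(handle_request_safe({"files": ["a.txt"]}))  # ('200 OK', 'a.txt')
```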
LibreChat versions 0.8.2 to 0.8.2-rc3 have a security flaw in the MCP (Model Context Protocol, a system for connecting AI models to external services) OAuth callback endpoint that fails to verify the user's identity. An attacker can trick a victim into completing an authorization flow, which stores the victim's OAuth tokens (credentials that grant access to services) on the attacker's account, allowing the attacker to take over the victim's connected services like Atlassian or Outlook.
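The missing check is the standard OAuth `state` binding: the callback must confirm that the flow being completed was started by the same logged-in user. A minimal sketch of the pattern (the in-memory session and token stores are hypothetical stand-ins, not LibreChat's actual code):

```python
import secrets

# Hypothetical in-memory stores.
pending_states: dict[str, str] = {}  # state value -> user who started the flow
stored_tokens: dict[str, str] = {}   # user -> OAuth token

def start_oauth_flow(user: str) -> str:
    """Issue an unguessable state value bound to the initiating user."""
    state = secrets.token_urlsafe(32)
    pending_states[state] = user
    return state

def oauth_callback(current_user: str, state: str, token: str) -> bool:
    """Store the token only if this user started the flow with this state."""
    owner = pending_states.pop(state, None)
    if owner != current_user:
        return False  # flow was started by someone else: reject
    stored_tokens[current_user] = token
    return True

# The attack described above: the attacker starts a flow, then tricks
# the victim into completing it. With the state check, it is rejected.
state = start_oauth_flow("attacker")
print(oauth_callback("victim", state, "victim-token"))  # False

# A flow started and completed by the same user succeeds.
state = start_oauth_flow("victim")
print(oauth_callback("victim", state, "victim-token"))  # True
```

Without the ownership check, the token would land on whichever account initiated the flow, which is exactly how the attacker ends up holding the victim's Atlassian or Outlook credentials.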
Nvidia is shifting focus toward CPUs (central processing units, the main general-purpose chips in computers) alongside its famous GPUs (graphics processing units) because agentic AI (AI systems that autonomously complete tasks by orchestrating multiple agents working together) requires significant general computing power to move data and coordinate workflows. The company is unveiling new CPU details at its GTC conference, with demand from major partners like Meta driving a predicted doubling of the CPU market from $27 billion in 2025 to $60 billion by 2030.
CairoSVG (an SVG image processing library) has a denial-of-service vulnerability where recursive `<use>` elements (SVG tags that reference other graphics elements) can be nested without limits, causing exponential CPU exhaustion. A tiny 1,411-byte SVG file with just 5 levels of nesting and 10 references each triggers 100,000 render calls, pinning CPU at 100% indefinitely.
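The amplification arithmetic is worth making explicit: with 10 references at each of 5 nesting levels, the number of leaf render calls is 10^5 = 100,000, while the file itself grows only linearly with depth. A self-contained sketch of the math (pure arithmetic, no SVG library involved):

```python
def render_calls(fan_out: int, depth: int) -> int:
    """Leaf render calls produced by `depth` nested <use> layers,
    each layer referencing the previous one `fan_out` times."""
    return fan_out ** depth

# The proof-of-concept described above: 5 levels x 10 references each.
print(render_calls(10, 5))  # 100000

# One more level multiplies the work by another factor of 10
# while adding only a handful of bytes to the SVG file.
print(render_calls(10, 6))  # 1000000
```

This exponential growth in work per byte of input is what lets a 1,411-byte file pin a CPU indefinitely.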
Anthropic has made 1M context (the ability to process 1 million tokens, which are small units of text that AI models break language into) generally available for its Opus 4.6 and Sonnet 4.6 models at standard pricing, with no additional charge for using the full window. This differs from competitors like OpenAI and Gemini, which charge premium rates when token usage exceeds certain thresholds (200,000 tokens for Gemini 3.1 Pro and 272,000 for GPT-5.4).
This content consists of letters to an editor about family quizzes and avoiding AI chatbots. One letter mentions that submitting gibberish to chatbots can circumvent them and quickly connect users to human support staff.
OWASP, a nonprofit cybersecurity organization, has published a checklist to help companies secure their use of generative AI and LLMs (large language models, AI systems trained on massive amounts of text to understand and generate human language). The checklist's key areas include understanding competitive and adversarial risks, threat modeling (identifying how attackers might exploit AI systems), maintaining an inventory of AI tools and assets, and ensuring proper governance and security controls are in place.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.
CISA Known Exploited Vulnerabilities
AI companies are hiring improv actors through data-labeling companies like Handshake to create training data that teaches AI models to recognize and generate human emotions and character voices. This represents a strategy by major AI labs to gather specialized training data (the information used to teach AI systems) from skilled performers rather than relying solely on existing text or video sources.
OpenClaw, an open-source AI agent, has critical security flaws that could let attackers trick it into leaking sensitive data through prompt injection (embedding malicious instructions in web content to manipulate the AI). The platform's weak default security settings and high system privileges create additional risks, including accidental data deletion, malicious code installation through skill repositories, and exploitation of known vulnerabilities that could compromise entire business systems.
Fix: To counter these risks, users and organizations are advised to: strengthen network controls, prevent exposure of OpenClaw's default management port to the internet, isolate the service in a container, avoid storing credentials in plaintext, download skills only from trusted channels, disable automatic updates for skills, and keep the agent up-to-date.
The Hacker News
Fix: Update LibreChat to version 0.8.3-rc1 or later, where this vulnerability is fixed.
NVD/CVE Database
Fix: Update to LibreChat version 0.8.3-rc1, where this vulnerability is fixed.
NVD/CVE Database
Fix: Add a recursion depth counter to the `use()` function in `cairosvg/defs.py` (line ~335) and cap it at approximately 10 levels. Additionally, implement a total element budget to prevent amplification attacks.
GitHub Advisory Database
ServiceNow's CEO warns that AI agents (software programs that can perform tasks independently) automating work could push college graduate unemployment into the mid-30s within a few years, making it harder for entry-level workers to stand out. Multiple major tech companies are already using AI to cut jobs and reduce hiring costs, affecting both technical roles like coding and white-collar positions across industries.
The US Department of War designated Anthropic as a 'supply chain risk' (a classification that prevents a company from being used in government contracts) after the company refused to remove safety restrictions on its AI model Claude, specifically rejecting military demands to enable fully autonomous weapons and domestic mass surveillance. Anthropic is challenging this designation in court, and legal experts question whether the Department of War has the authority to impose such restrictions outside of actual contract disputes.