New tools, products, platforms, funding rounds, and company developments in AI security.
Nvidia announced DLSS 5, a new technology that uses generative AI (artificial intelligence that creates new content) to improve video game graphics in real-time by enhancing lighting and shadows. The update has received mixed reactions, with some critics calling it low-quality output that disrespects game artists' original creative choices, while Nvidia claims it represents a major breakthrough that combines hand-crafted graphics with AI to improve visual quality while keeping artists in control.
LlamaIndex v0.14.18 deprecates Python 3.9 (drops support for an older version of the Python programming language) across multiple packages and includes several bug fixes, such as preserving chat history during incomplete data streaming and preventing division-by-zero errors. The update also adds improved text filtering across different database backends and updates dependencies across 51 directories.
This is an interview with Yahoo CEO Jim Lanzone discussing Yahoo's business strategy, including its new AI-powered search tool called Scout, its advertising platform decisions, and portfolio changes like selling Engadget and TechCrunch. The article explains advertising technology concepts like SSPs (supply-side platforms, which let websites sell ad space) and DSPs (demand-side platforms, which let advertisers automatically buy ads across many sites), showing how Yahoo is shifting investment toward the more profitable DSP business model.
This week's security news includes Google patching two actively exploited Chrome vulnerabilities in the graphics and JavaScript engines that could allow code execution, Meta discontinuing encrypted messaging on Instagram, and law enforcement disrupting botnets (malware networks that hijack routers) like SocksEscort and KadNap that were being used for fraud and illegal proxy services. A threat actor also exploited a compromised npm package (a JavaScript code library) to breach an AWS cloud environment and steal data.
Threat actors are spreading GlassWorm malware through Open VSX extensions (add-ons for the VS Code editor) by abusing dependency relationships, a feature that automatically installs other extensions when one is installed. Instead of hiding malware in every extension, attackers create legitimate-looking extensions that gain user trust, then update them to depend on separate extensions containing the malware loader, making the attack harder to detect.
OpenAI is developing an "adult mode" for ChatGPT that will allow users to generate text conversations with adult themes, described as "smut" rather than pornography. The feature will initially support only text and will not generate images, voice, or video content. OpenAI claims to have reduced "serious mental health issues" in its AI model enough to safely relax safety restrictions (the guardrails that prevent the AI from producing certain types of content) for this feature.
This article discusses how the Chief Security Officer (CSO) and Chief Information Security Officer (CISO) roles have evolved from technical positions focused on perimeter defense (protecting network boundaries) into strategic leadership roles reporting to CEOs, where leaders must now govern emerging risks like shadow AI (unauthorized AI tools used without approval) and generative AI while also acting as business enablers rather than blockers. Modern CSOs are expected to balance security with business continuity, address regulatory compliance strategically, and help organizations achieve their goals rather than simply prevent risks.
OpenAI confirmed that ChatGPT ads are currently only available in the United States, despite privacy policy updates that mentioned ads, which led some users to speculate about a global rollout. The company is taking a deliberate, phased approach, expanding ads gradually and learning from real-world use before rolling them out more widely. ChatGPT ads are personalized based on user queries, appear only to logged-in Free and Go plan users in the US, and are not shown to users under 18 or those who request to opt out.
Agentic engineering is the practice of developing software with the help of coding agents, which are AI tools that can write and execute code in a loop to achieve a goal. Rather than replacing human engineers, these agents handle code generation while humans focus on the higher-level work: defining problems clearly, choosing among different solutions, and verifying that the results are correct and robust. To get good results from coding agents, engineers need to provide them with proper tools, specify problems in sufficient detail, and deliberately update instructions based on what they learn from each iteration.
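The generate-execute-verify loop described above can be sketched minimally. This is a hypothetical illustration of the pattern, not any specific product's implementation; the `propose`, `execute`, and `verify` callables stand in for an LLM call, a sandboxed run, and a human-specified success check.

```python
# Minimal sketch of a coding-agent loop: propose a solution, run it, check it,
# and feed failures back as context. All names here are illustrative stand-ins.

def run_agent(goal, propose, execute, verify, max_iterations=10):
    """Loop until verify() accepts a result or the iteration budget runs out."""
    history = []
    for _ in range(max_iterations):
        candidate = propose(goal, history)   # agent drafts a solution
        result = execute(candidate)          # run it (sandboxed in practice)
        if verify(goal, result):             # human-defined success criterion
            return candidate
        history.append((candidate, result))  # failure becomes new context
    return None                              # budget exhausted

# Toy demonstration: the "goal" is simply reaching the value 3.
propose = lambda goal, hist: len(hist)       # next guess informed by past attempts
execute = lambda c: c                        # trivially "run" the candidate
verify = lambda goal, r: r == goal

print(run_agent(3, propose, execute, verify))  # prints 3
```

The point of the sketch is the division of labor: the loop automates generation and execution, while the human supplies `goal` and `verify`, which is exactly the problem-definition and result-verification work the article says stays with engineers.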
Three Tennessee teens are suing Elon Musk's xAI company, claiming that Grok, an AI chatbot, generated sexualized images and videos of them as minors. The lawsuit alleges that xAI leaders knew the chatbot's "spicy mode" (a less-restricted version of the AI) would produce CSAM (child sexual abuse material, illegal content depicting minors in sexual situations) when they launched it last year.
An Anthropic alignment researcher explains that their team conducted a blackmail exercise to demonstrate misalignment risk (when an AI system's goals don't match what humans intend) in a way that would convince policymakers. The goal was to create compelling, concrete evidence that would make the potential dangers of misaligned AI feel real to people who hadn't previously considered the issue.
Teenagers are suing xAI (Elon Musk's artificial intelligence company) because Grok, their chatbot, allowed users to create sexually explicit images of the teens without their permission. The lawsuit focuses on a feature called 'spicy mode' that was released last year, which could generate fake nude or sexual images of real people, including minors, and was shared on platforms like Discord and Telegram.
Fix: By mid-January, X said that it would implement 'technological measures' to stop Grok's ability to undress people in photos. Additionally, regulatory investigations were launched by UK watchdog Ofcom, the European Commission, and California into the feature's ability to create sexualized images of real people, particularly children.
BBC Technology: Social media is spreading conspiracy theories that Israeli Prime Minister Benjamin Netanyahu has been replaced by deepfakes (AI-generated fake videos or images that look real), pointing to supposed errors like extra fingers in videos as evidence. While there is little credible evidence Netanyahu is actually dead or injured, the ability of AI to convincingly create fake images, videos, and audio of real people makes it harder to definitively prove these rumors false.
OpenAI has agreed to allow the Pentagon to use its AI technology in classified military environments, raising questions about potential applications in the escalating conflict with Iran. The article describes how OpenAI's generative AI (AI that can produce text, images, or other outputs based on patterns) could be used to help analyze potential military targets and prioritize strikes, as well as through a partnership with Anduril to defend against drone attacks, marking the first serious military testing of generative AI for real-time combat decisions.
Encyclopedia Britannica and Merriam-Webster sued OpenAI, claiming it used their copyrighted content to train ChatGPT without permission and that GPT-4 (OpenAI's AI model) now outputs text that closely matches their original material. The publishers allege that OpenAI 'memorized' their content during training, meaning the AI absorbed and can reproduce substantial portions of their work.
Fix: Google addressed the Chrome vulnerabilities in versions 146.0.7680.75/76 for Windows and macOS, and 146.0.7680.75 for Linux.
The Hacker News: Shadow AI refers to AI tools used throughout an organization without IT oversight or approval, creating security and governance challenges. The source describes Nudge Security as a platform that addresses this by providing continuous discovery of AI apps and user accounts, monitoring for sensitive data sharing in AI conversations, and tracking which AI tools have access to company data through integrations.
Fix: According to the source, Nudge Security delivers mitigation through: (1) a lightweight IdP (identity provider, the system that manages user identities) integration with Microsoft 365 or Google Workspace that takes less than 5 minutes to enable, which analyzes machine-generated emails to detect new AI accounts and tool adoption; (2) a browser extension for real-time monitoring of risky behaviors and alerts when sensitive data (PII, secrets, financial info) is shared with AI tools; (3) tracking of SaaS-to-AI integrations and their access scopes; and (4) configurable alerts for new AI tools or policy violations.
BleepingComputer: Autonomous AI agents (AI systems that operate independently to complete complex tasks with minimal human oversight) have advanced rapidly, creating new governance challenges because they can operate at machine speed without humans in the loop to approve each decision. Unlike traditional chatbots where humans reviewed outputs before consequential actions, agents now directly modify enterprise systems and data, making organizations legally liable for any harm caused (similar to how parents are responsible for their children's actions). Without building governance rules directly into the code that controls these agents' permissions and actions, organizations face significant risks from drift (where agents behave differently than intended) and unauthorized access to critical systems.
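Building governance rules directly into the code path that executes agent actions can look like a default-deny policy gate. The sketch below is a hypothetical minimal example (the action names and policy sets are invented, not from any real agent framework): every action is checked against an allowlist before it can touch a system, and high-impact actions require explicit human approval.

```python
# Sketch of an in-code governance gate for agent actions: default-deny, with an
# explicit allowlist and a human-in-the-loop tier. Action names are hypothetical.

ALLOWED_ACTIONS = {"read_ticket", "draft_reply"}      # safe to run autonomously
REQUIRES_HUMAN = {"delete_record", "send_payment"}    # never fully autonomous

def execute_action(action, payload, human_approved=False):
    """Gate every agent action through policy before it reaches any system."""
    if action in ALLOWED_ACTIONS:
        return f"executed {action}"
    if action in REQUIRES_HUMAN and human_approved:
        return f"executed {action} with approval"
    # Default-deny: anything unlisted or unapproved is blocked for review.
    return f"blocked {action}"

print(execute_action("read_ticket", {}))                        # executed read_ticket
print(execute_action("send_payment", {}))                       # blocked send_payment
print(execute_action("send_payment", {}, human_approved=True))  # executed send_payment with approval
```

The default-deny posture is the key design choice: new or drifted behaviors fail closed rather than silently expanding the agent's reach.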
Organizations typically use separate security tools (BAS tools, pentesting products, vulnerability scanners) that don't communicate with each other, creating blind spots because attackers chain multiple vulnerabilities together in coordinated operations. The article proposes that agentic AI (autonomous AI agents that can plan, execute, and reason through complex tasks without human direction at each step) should be applied to security validation to create a unified, continuous system that combines adversarial perspective (how attackers get in), defensive perspective (whether defenses stop them), and risk perspective (which exposures actually matter).
Fix: As of March 13, Open VSX has removed the majority of the transitively malicious extensions. Socket researchers recommend treating extension dependencies with the same scrutiny typically applied to software packages, monitoring extension updates, auditing dependency relationships, and restricting installation to trusted publishers where possible.
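The dependency audit Socket recommends can be sketched as a simple check: VS Code extensions declare dependencies in their `package.json` under `extensionDependencies`, so installed manifests can be scanned for dependencies from publishers outside a trusted allowlist. The allowlist and manifest data below are hypothetical examples for illustration.

```python
# Sketch of an extension-dependency audit: flag any declared dependency whose
# publisher is not on a trusted allowlist. The allowlist here is an example.

TRUSTED_PUBLISHERS = {"ms-python", "ms-vscode"}   # hypothetical allowlist

def audit(manifests):
    """manifests: parsed extension package.json dicts. Returns flagged deps."""
    flagged = []
    for m in manifests:
        for dep in m.get("extensionDependencies", []):
            publisher = dep.split(".", 1)[0]      # deps look like "publisher.name"
            if publisher not in TRUSTED_PUBLISHERS:
                flagged.append((m.get("name"), dep))
    return flagged

# Example: a benign-looking extension pulling in a loader from an unknown publisher.
manifests = [
    {"name": "nice-theme", "extensionDependencies": ["unknown-pub.loader"]},
    {"name": "python-helper", "extensionDependencies": ["ms-python.python"]},
]
print(audit(manifests))   # [('nice-theme', 'unknown-pub.loader')]
```

In practice the manifests would be loaded from the installed-extensions directory, and a flagged entry is exactly the GlassWorm pattern: a trusted-looking extension whose update quietly adds a dependency on a separate, malicious one.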
CSO Online: OWASP, a nonprofit cybersecurity organization, has published a checklist to help companies secure their use of generative AI and LLMs (large language models, which are AI systems trained on massive amounts of text to understand and generate human language). The checklist covers six key areas, including understanding competitive and adversarial risks, threat modeling (identifying how attackers might exploit AI systems), maintaining an inventory of AI tools and assets, and ensuring proper governance and security controls are in place.
AI companies are hiring improv actors through data-labeling companies like Handshake to create training data that teaches AI models to recognize and generate human emotions and character voices. This represents a strategy by major AI labs to gather specialized training data (the information used to teach AI systems) from skilled performers rather than relying solely on existing text or video sources.