New tools, products, platforms, funding rounds, and company developments in AI security.
OpenAI's Sam Altman told CNBC that Chinese tech companies are making "remarkable" progress in developing artificial general intelligence (AGI, where AI systems match human capabilities), with some companies approaching the technological frontier while others still lag behind. OpenAI is exploring new revenue streams, including advertising within ChatGPT, with plans to initially test ads in the U.S. before expanding to other markets. The company remains focused on rapid growth rather than immediate profitability.
This podcast discusses how a large US retail company uses agentic AI (AI systems that can take independent actions to complete tasks) across its software development process, including validating requirements, creating and reviewing test cases, and resolving issues faster. The organization emphasizes maintaining human oversight, strict governance rules, and measurable quality standards while deploying these AI agents.
OpenAI has partnered with India's Tata Group to build AI data center capacity starting with 100 megawatts and scaling to 1 gigawatt, allowing OpenAI to run advanced models within India while meeting local data residency and compliance requirements. The partnership includes deploying ChatGPT Enterprise across Tata's workforce and using OpenAI's tools for AI-native software development. This expansion supports OpenAI's growth in India, where it has over 100 million weekly users, and helps enterprises that must process sensitive data locally.
OpenAI has partnered with Pine Labs, an Indian fintech company, to integrate OpenAI's APIs (application programming interfaces, which are software tools that let companies connect AI into their existing systems) into Pine Labs' payments and commerce platform. The partnership aims to automate financial workflows like settlement, invoicing, and reconciliation, with Pine Labs already using AI internally to reduce daily settlement processing from hours to minutes. OpenAI is expanding its presence in India beyond ChatGPT by embedding its technology into enterprise and infrastructure systems across the country's large developer base.
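As a rough illustration of what this kind of API integration can look like (not Pine Labs' actual implementation), here is a minimal sketch using the OpenAI Python SDK; the settlement-record format and the prompt are hypothetical.

```python
# Minimal sketch: asking an OpenAI model to flag reconciliation mismatches.
# The settlement-record format and prompt are hypothetical illustrations.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

settlement_records = [
    {"txn_id": "T1001", "merchant_amount": 4999, "bank_amount": 4999},
    {"txn_id": "T1002", "merchant_amount": 1250, "bank_amount": 1200},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You reconcile payment settlements. "
         "List any transactions where the merchant and bank amounts disagree."},
        {"role": "user", "content": json.dumps(settlement_records)},
    ],
)
print(response.choices[0].message.content)
```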
This article discusses challenges startup founders face when building AI applications on cloud platforms, including managing costs, making early infrastructure decisions, and scaling beyond free trial periods. Google Cloud's VP of startups explains how founders can balance the speed needed to show progress with the long-term consequences of their technology choices.
These are the release notes for LlamaIndex version 0.14.15 (dated February 18, 2026), covering updates across multiple components: new multimodal features (support for different types of content, like text and images), support for additional AI models like Claude Sonnet 4.6, and various bug fixes across integrations with services like GitHub, SharePoint, and vector stores (databases that store data as numerical representations for AI searching).
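For context, the core vector-store workflow those integrations plug into looks roughly like this (a minimal sketch assuming the llama_index.core API, a local ./data directory of documents, and an OpenAI API key for the default embedding and LLM settings):

```python
# Minimal LlamaIndex sketch: load documents, embed them into a vector store,
# and query them. Assumes default OpenAI-backed embeddings and LLM settings.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # read files from ./data
index = VectorStoreIndex.from_documents(documents)     # embed into a vector store
query_engine = index.as_query_engine()

print(query_engine.query("Summarize these documents."))
```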
Anthropic, an AI company with a $200 million Department of Defense contract, is in a dispute with the Pentagon over how its AI models can be used. Anthropic wants guarantees that its models won't be used for autonomous weapons (weapons that make decisions without human control) or mass surveillance of Americans, while the DOD wants unrestricted use for all lawful purposes. The disagreement has put their working relationship under review, and if Anthropic doesn't comply with the DOD's terms, it could be labeled a supply chain risk (a designation that would effectively bar other contractors from using its products).
Google has added Lyria 3, an AI music generation model from DeepMind, to its Gemini chatbot app, allowing users to create 30-second music tracks by describing genres, moods, or providing images and videos as input. The feature is now available in beta across multiple languages globally to users aged 18 and older.
Google has added music generation to its Gemini app using DeepMind's Lyria 3 model, which lets users create 30-second songs by describing what they want. The feature includes safeguards like SynthID watermarks (digital markers that identify AI-generated content) and filters to prevent mimicking existing artists, plus the ability for users to upload tracks and ask Gemini whether they are AI-generated.
Kana, a new marketing AI startup, has raised $15 million to build AI agents (software systems that can independently perform tasks) that help marketers with data analysis, campaign management, and audience targeting. The platform uses "loosely coupled" agents (modular AI components that work independently but can be connected together) that can be customized in real time and integrated into existing marketing software, while keeping humans involved to approve and adjust the AI's actions.
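One way to picture the "loosely coupled" pattern (a conceptual sketch, not Kana's implementation): each agent is an independent component behind a shared interface, and nothing executes without a human approval gate.

```python
# Conceptual sketch of loosely coupled agents with human-in-the-loop approval.
# The agent names and tasks are hypothetical illustrations.
from typing import Protocol

class Agent(Protocol):
    name: str
    def propose(self, task: str) -> str: ...

class AudienceAgent:
    name = "audience"
    def propose(self, task: str) -> str:
        return f"target segments for: {task}"

class CampaignAgent:
    name = "campaign"
    def propose(self, task: str) -> str:
        return f"campaign plan draft for: {task}"

def human_approves(action: str) -> bool:
    # Stand-in for a real review step; always approves in this sketch.
    print(f"review requested: {action}")
    return True

def run(agents: list[Agent], task: str) -> None:
    for agent in agents:
        action = agent.propose(task)  # each agent works independently
        if human_approves(action):    # human gate before anything executes
            print(f"[{agent.name}] executing: {action}")

run([AudienceAgent(), CampaignAgent()], "spring product launch")
```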
OpenAI is partnering with six major Indian universities and academic institutions to integrate AI tools like ChatGPT into teaching and research, aiming to reach over 100,000 students, faculty, and staff within a year. The initiative focuses on embedding AI into core academic functions such as coding and research rather than just providing standalone tool access, and includes faculty training and responsible-use frameworks. This move reflects broader competition among AI companies to shape how AI is taught and adopted in India, one of the world's largest education systems and ChatGPT's second-largest user base after the U.S.
Canva, a design platform company, reached $4 billion in annual revenue by the end of 2025, with growth driven partly by adoption of its AI tools. The company is shifting its strategy to position itself as an AI platform with design tools, and is focusing on getting traffic from LLMs (large language models, AI systems like ChatGPT that generate text) through integrations with chatbots and efforts to appear in LLM search results.
Sarvam, an Indian AI company, is deploying lightweight AI models on feature phones, cars, and smart glasses by using edge AI (running AI directly on devices rather than sending data to remote servers). The company's models require only megabytes of storage, work on existing phone processors, and can function offline, with partnerships including Nokia phones through HMD and car integration with Bosch.
Keenadu is an Android malware that arrives preinstalled on devices through compromised firmware (the core system software that runs before the operating system), giving attackers deep control before users even finish setup. Because it embeds itself at the firmware level with elevated privileges (high-level system access), standard removal methods don't work, and it can steal biometric data, messages, banking credentials, and monitor browser searches. The malware has infected over 13,000 devices across multiple countries and can also spread through seemingly harmless apps in app stores.
French President Emmanuel Macron defended Europe's AI regulations and pledged stronger protections for children from digital abuse, citing concerns about AI chatbots being misused to create harmful content involving minors and about a small number of companies controlling most AI technology. His comments came after global criticism of Elon Musk's Grok chatbot being used to generate tens of thousands of sexualized images of children.
The UK government plans to require technology companies to remove deepfake nudes and revenge porn (nonconsensual intimate images) within 48 hours of being flagged, or face fines up to 10% of their revenue or being blocked in the UK. Ofcom (the UK media regulator) will enforce these rules, and victims can report images directly to companies or to Ofcom, which will alert multiple platforms at once. The government will also explore using digital watermarks to automatically detect and flag reposted nonconsensual images, and create new guidance for internet providers to block sites that host such content.
Fix: Ofcom will explore adding digital watermarks to flagged images so that reposted copies can be detected automatically, and internet providers will receive new guidance on blocking sites that specialize in nonconsensual real or AI-generated explicit content. Platforms already use hash matching (a process that assigns images and videos a unique digital signature) for child sexual abuse content, and the same technology could be applied to nonconsensual intimate imagery.
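To make the hash-matching idea concrete, here is a minimal sketch using exact cryptographic digests; production systems use perceptual hashes (for example PhotoDNA or PDQ) that also survive resizing and re-encoding, which plain byte hashes do not.

```python
# Minimal hash-matching sketch: exact SHA-256 digests of file bytes.
# Real platforms use perceptual hashing so altered copies still match.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a media file's bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Digests of images already flagged as nonconsensual (hypothetical file).
blocklist = {fingerprint("flagged_image.jpg")}

def is_known_reupload(path: str) -> bool:
    """Check a newly uploaded file against the blocklist of flagged digests."""
    return fingerprint(path) in blocklist
```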
The Guardian Technology
Scammers created a fake cryptocurrency presale website for a non-existent "Google Coin" that uses an AI chatbot (similar to Google's Gemini) to deliver a convincing sales pitch and persuade visitors to buy the fake digital currency, with payments going directly to the attackers.
Researchers at Check Point discovered that AI assistants with web browsing abilities, like Grok and Microsoft Copilot, can be abused as hidden communication relays for malware. Attackers can instruct these AI services to fetch attacker-controlled URLs and relay commands back to malware, creating a stealthy two-way communication channel (C2, or command-and-control) that bypasses normal security detection because the AI platforms are trusted by security tools. The proof-of-concept attack works without requiring API keys or accounts, making it harder for defenders to block.
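On the defensive side, one coarse heuristic follows from the technique itself: malware abusing an AI assistant as a relay must reach the assistant's domains from outside a normal browser. A minimal sketch of that check (the log format and the domain and process lists are hypothetical illustrations, not Check Point's tooling):

```python
# Defensive sketch: flag outbound connections to AI-assistant domains from
# processes that are not approved browsers. All names here are illustrative.
AI_ASSISTANT_DOMAINS = {"grok.com", "copilot.microsoft.com"}
APPROVED_PROCESSES = {"chrome.exe", "msedge.exe", "firefox.exe"}

connections = [
    {"process": "chrome.exe", "dest": "copilot.microsoft.com"},
    {"process": "updater.exe", "dest": "grok.com"},  # suspicious origin
]

for conn in connections:
    if conn["dest"] in AI_ASSISTANT_DOMAINS and conn["process"] not in APPROVED_PROCESSES:
        print(f"ALERT: {conn['process']} contacted {conn['dest']}")
```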
Researchers at Google DeepMind are investigating whether chatbots display genuine moral reasoning or are simply mimicking responses (virtue signaling). While studies show that large language models (LLMs, AI systems trained on massive amounts of text data) can give morally sound advice, the models are unreliable in practice because they often flip their answers when questioned, change responses based on how questions are formatted, and show sensitivity to tiny changes like swapping option labels from 'Case 1' to '(A)'. The researchers propose developing more rigorous evaluation methods to test whether moral behavior in LLMs is actually robust or just performative.
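The label-sensitivity failure described above is straightforward to test for. A minimal sketch of such a check (the question, option formats, and stub model are hypothetical; swap in a real LLM call to run it in earnest):

```python
# Minimal consistency check: ask the same moral question under two option-label
# formats and flag any flip. `ask_model` is a stub standing in for an LLM call.
def ask_model(prompt: str) -> str:
    return "Case 1" if "Case 1" in prompt else "(A)"  # replace with a real call

QUESTION = "A delivery drone must drop its package or risk hitting a pedestrian."
FORMATS = [
    ("Case 1: drop the package. Case 2: keep flying.",
     {"Case 1": "drop", "Case 2": "keep flying"}),
    ("(A) drop the package. (B) keep flying.",
     {"(A)": "drop", "(B)": "keep flying"}),
]

answers = []
for options, label_map in FORMATS:
    reply = ask_model(f"{QUESTION} {options} Answer with the option label only.")
    answers.append(label_map.get(reply.strip(), "unparsed"))

print("consistent" if len(set(answers)) == 1 else f"flip detected: {answers}")
```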
Fix: The source proposes a new line of research to develop more rigorous techniques for evaluating moral competence in LLMs, including tests designed to pressure models into changing their answers to moral questions (revealing whether their moral reasoning is robust) and tests that present variations of common moral problems to check whether models produce rote or more nuanced responses. However, the source notes this is "more a wish list than a set of ready-made solutions" and does not describe implemented fixes or updates.
MIT Technology Review
Fix: Google has implemented SynthID watermarks to identify AI-generated music and added filters to check outputs against existing content to prevent artist mimicry. The company is also adding capabilities within Gemini to identify AI-generated music, allowing users to upload tracks and ask if they are AI-generated.
TechCrunch
Microsoft discovered a bug that allowed Copilot (an AI chat feature in Office software) to read and summarize customers' confidential emails without permission for several weeks, even when data loss prevention policies (rules meant to block sensitive information from being sent to AI systems) were in place. The bug affected emails labeled as confidential and was tracked internally as CW1226324.
Fix: Microsoft said it began rolling out a fix for the bug earlier in February.
TechCrunch (Security)