3 Sources
[1]
Microsoft's plan to fix the web with AI has already hit an embarrassing security flaw
Researchers have already found a critical vulnerability in the new NLWeb protocol Microsoft made a big deal about just a few months ago at Build. It's a protocol that's supposed to be "HTML for the Agentic Web," offering ChatGPT-like search to any website or app. Discovery of the embarrassing security flaw comes in the early stages of Microsoft deploying NLWeb with customers like Shopify, Snowflake, and TripAdvisor.

The flaw allows any remote user to read sensitive files, including system configuration files and even OpenAI or Gemini API keys. What's worse is that it's a classic path traversal flaw, meaning it's as easy to exploit as visiting a malformed URL. Microsoft has patched the flaw, but it raises questions about how something this basic wasn't picked up amid Microsoft's big new focus on security.

"This case study serves as a critical reminder that as we build new AI-powered systems, we must re-evaluate the impact of classic vulnerabilities, which now have the potential to compromise not just servers, but the 'brains' of AI agents themselves," says Aonan Guan, one of the security researchers (alongside Lei Wang) who reported the flaw to Microsoft. Guan is a senior cloud security engineer at Wyze (yes, that Wyze), but this research was conducted independently.

Guan and Wang reported the flaw to Microsoft on May 28th, just weeks after NLWeb was unveiled. Microsoft issued a fix on July 1st, but has not issued a CVE for the issue -- an industry standard for classifying vulnerabilities. The security researchers have been pushing Microsoft to issue a CVE, but the company has been reluctant to do so. A CVE would alert more people to the fix and allow them to track it more closely, even if NLWeb isn't widely used yet.

"This issue was responsibly reported and we have updated the open-source repository," says Microsoft spokesperson Ben Hope in a statement to The Verge. "Microsoft does not use the impacted code in any of our products. 
Customers using the repository are automatically protected." Guan says NLWeb users "must pull and vend a new build version to eliminate the flaw," otherwise any public-facing NLWeb deployment "remains vulnerable to unauthenticated reading of .env files containing API keys."

While leaking an .env file in a web application is serious enough, Guan argues it's "catastrophic" for an AI agent. "These files contain API keys for LLMs like GPT-4, which are the agent's cognitive engine," says Guan. "An attacker doesn't just steal a credential; they steal the agent's ability to think, reason, and act, potentially leading to massive financial loss from API abuse or the creation of a malicious clone."

Microsoft is also pushing ahead with native support for the Model Context Protocol (MCP) in Windows, even as security researchers have warned of MCP's risks in recent months. If the NLWeb flaw is anything to go by, Microsoft will need to be extra careful in balancing the speed of rolling out new AI features against its stated commitment to security as the number one priority.
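The articles don't reproduce NLWeb's actual code, but the vulnerability class they describe is easy to illustrate. The sketch below is a hypothetical file handler, not NLWeb's implementation: the vulnerable version joins user input straight into a filesystem path, while the patched version resolves the path and rejects anything that escapes the base directory.

```python
import os

# Hypothetical base directory a web app intends to serve from.
BASE_DIR = os.path.realpath("/srv/app/static")

def read_file_vulnerable(user_path: str) -> str:
    # Classic path traversal: "../" sequences in user_path climb
    # out of BASE_DIR, exposing files like /etc/passwd or .env.
    full_path = os.path.join(BASE_DIR, user_path)
    with open(full_path) as f:
        return f.read()

def read_file_safe(user_path: str) -> str:
    # Resolve the final path (normalizing ".." and symlinks), then
    # refuse to serve anything outside the intended base directory.
    full_path = os.path.realpath(os.path.join(BASE_DIR, user_path))
    if os.path.commonpath([full_path, BASE_DIR]) != BASE_DIR:
        raise PermissionError("path traversal attempt blocked")
    with open(full_path) as f:
        return f.read()

# A request for "../../../../etc/passwd" resolves to /etc/passwd in
# the vulnerable handler, but is rejected by the safe one.
```

This is why the flaw is "as easy to exploit as visiting a malformed URL": the attacker only has to put the traversal sequence in a request parameter.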
[2]
Microsoft's agentic HTML can leak passwords and AI keys, researcher finds
Microsoft has issued a patch, and there's nothing you need to do.

With new AI systems come new AI vulnerabilities, and a big one was just discovered. It's a flaw in Microsoft's method of allowing agents to interact with websites on your behalf. Microsoft calls this technique NLWeb, a kind of HTML for AI agents. The company unveiled it at its Build conference this spring and has since leaned into that vision with an experimental Copilot Mode for its Edge browser. (Microsoft hasn't confirmed whether it uses NLWeb for this.)

Researcher Aonan Guan, however, has discovered a vulnerability in NLWeb: a path traversal bug that lets any remote user read sensitive files like system configurations and cloud credentials via a malformed URL. In a Medium post, Guan showed how he was able to download a list of system passwords along with Google Gemini and OpenAI keys. This would let an attacker run additional server-dependent AI applications "for free," without being charged by OpenAI.

According to Guan, Microsoft's Security Response Center pushed a patch to the GitHub repository in June, confirming the problem was fixed. Microsoft hasn't issued an official patch report. Users, however, don't need to take any action.

It's fair to say that AI development has proceeded at breakneck speed. But, as Guan points out, the line between chatting with an AI and issuing it commands can blur. "The very nature of NLWeb is to interpret natural language," Guan said. "This blurs the line between user input and system commands. Future attack vectors could involve crafting sentences that, when parsed by an agent, translate into malicious file paths or actions." We've already seen ChatGPT interactions leak out into Google's search results. (OpenAI has now reportedly turned off the flag that makes ChatGPT chats discoverable.) As Guan (and The Verge, which reported the story) note, leaks of this magnitude in an AI agent can be catastrophic for all involved.
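One general point worth adding about "malformed URL" exploits of this kind: traversal sequences are often percent-encoded, so a naive filter that rejects the literal string "../" in the raw request can be bypassed. A small illustration (the /ask?file= endpoint here is made up for the example, not an NLWeb route):

```python
from urllib.parse import unquote

# "%2f" decodes to "/", so "..%2f..%2f.env" becomes "../../.env".
# A check must run on the decoded path, not the raw request string.
raw_probe = "/ask?file=..%2f..%2f.env"
decoded = unquote(raw_probe)

print(decoded)  # -> /ask?file=../../.env
assert "../" not in raw_probe  # a naive pre-decode filter sees nothing
assert "../" in decoded        # the server, after decoding, does
```

This is one reason the robust fix is to resolve the final filesystem path and compare it against the allowed directory, rather than to pattern-match the URL.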
[3]
Microsoft's agentic AI roadmap had a flaw that let hackers take over browsers -- here's what to know and how to stay safe
Microsoft is quickly heading towards agentic AI browsing -- that much is obvious from Edge's AI makeover and an open project called NLWeb that can be used to give any website AI power. But while this all sounds good on paper, it opens the door to a whole lot of security risks, and the company's agentic aspirations have already been hit by a flaw that is concerningly simple. Fortunately, it has been patched, but it does start a bigger conversation we need to have about staying safe while browsing agentically. Let's get into it.

NLWeb is envisioned as "HTML for the Agentic Web." Announced back at Build 2025, this is the framework for AI browsing on your behalf, but researchers Aonan Guan and Lei Wang found what is called a "path traversal vulnerability." This is a pretty standard security oversight that hackers can exploit by having an agentic AI visit a specially crafted URL that grants the attacker access to sensitive files like system configuration files and API keys.

What can be done with this information amounts to stealing your agent's brain. At that point, attackers can get to the core functions of your AI agent and do a wide range of things, like reading and interacting with emails on your behalf, or even getting into your finances.

The flaw was found and reported to Microsoft on May 28, 2025, and the company patched it on July 1, 2025 by updating the open-source repository. It was a simple exposure with huge problematic potential. "This issue was responsibly reported and we have updated the open-source repository," Microsoft spokesperson Ben Hope told The Verge. "Microsoft does not use the impacted code in any of our products. Customers using the repository are automatically protected."

We've seen a significant shift towards agentic browsing over the last 12 months -- spearheaded by the likes of OpenAI's Operator, Opera launching the world's first on-device agentic AI browser, and Rabbit R1's LAM Playground. 
This serious flaw may already have been patched by Microsoft, but it's clear this won't be the last security issue we come across. For example, there's the Model Context Protocol (MCP), an open standard launched by Anthropic to allow AI assistants to interact with tools and services on your behalf. It sounds good on paper, but researchers have already identified the risks of account takeover and token theft: a hacker who gains access to personal authentication tokens essentially gets the keys to your kingdom. So it's clear you need to be extra careful in the agentic era.
A critical vulnerability discovered in Microsoft's NLWeb protocol, designed for AI-powered web interactions, has exposed potential security risks in the emerging field of agentic web browsing.
Microsoft's ambitious plan to revolutionize the web with AI-powered interactions has encountered a significant setback. Researchers have uncovered a critical security flaw in the recently unveiled NLWeb protocol, which Microsoft touted as "HTML for the Agentic Web" [1]. This protocol, designed to enable ChatGPT-like search capabilities for websites and applications, was introduced just months ago at Microsoft's Build conference.
Source: The Verge
The vulnerability, discovered by security researchers Aonan Guan and Lei Wang, allows remote users to access sensitive files, including system configuration files and API keys for services like OpenAI and Google Gemini [2]. What makes this flaw particularly concerning is its simplicity – it's a classic path traversal vulnerability that can be exploited by visiting a malformed URL [1].
The discovery of this security flaw is especially embarrassing for Microsoft, given the company's recent emphasis on security. It raises questions about the thoroughness of Microsoft's security practices in developing new AI-powered systems [1]. The vulnerability could have far-reaching consequences, from API abuse and financial loss to the creation of malicious clones of AI agents.
Microsoft has addressed the vulnerability by issuing a patch on July 1st, 2025, updating the open-source repository [3]. However, the company has not issued a CVE (Common Vulnerabilities and Exposures) for the issue, which is an industry standard for classifying vulnerabilities [1]. This decision has been met with some criticism from security researchers, who argue that a CVE would help alert more people to the fix and allow for better tracking.
Source: Tom's Guide
This incident highlights the risks associated with the rapid development and deployment of AI-powered web technologies.
The incident serves as a wake-up call for the tech industry as it races to integrate AI into web technologies. Microsoft's push for native support of the Model Context Protocol (MCP) in Windows, despite warnings from security researchers, further underscores the tension between rapid innovation and security concerns [1].
Source: PCWorld
As the landscape of AI-powered web interactions evolves, it's clear that companies will need to strike a careful balance between rolling out new features and maintaining stringent security protocols. The NLWeb vulnerability serves as a critical reminder of the potential risks associated with these emerging technologies and the importance of thorough security testing in the development of AI-powered systems.
Summarized by Navi