3 Sources
3 Sources
[1]
Security Experts Warn Companies to 'Block All AI Browsers Now'
The cybersecurity pushback against AI-powered browsers is escalating, with research giant Gartner and a UK government agency flagging them as potential threats. "Cybersecurity must block AI browsers now," Gartner writes in a new report published last week. "AI browsers are nascent and innovative, yet too risky for general adoption by most organizations." Gartner is sounding the alarms as more tech companies are starting to offer AI browsers that promise to streamline the web-surfing experience. The programs let you outsource and automate tasks, such as online searches or writing an email, to an AI agent. The only problem is that these same AI capabilities can be tricked into executing malicious commands hidden in websites or emails, effectively turning the browser against the user. The security community refers to this threat as "prompt injection attacks," which exploit the fact that AI chatbots can't always distinguish between a user's request and a malicious one. Gartner calls out Perplexity's Comet and OpenAI's Atlas as browsers that companies should be careful with since they can automate various functions. Gartner goes as far as to advise that all chief information security officers (CISOs) "block all AI browsers in the foreseeable future to minimize risk exposure," adding that the default settings "are optimized for end-user experience rather than cybersecurity best practices or privacy." On Monday, the UK's National Cyber Security Centre also warned, "There's a good chance prompt injection will never be properly mitigated" in the same way as SQL injection flaws, which can result from improper computer coding. "The best we can hope for is reducing the likelihood or impact of attacks," the agency wrote on Monday. However, the public can expect pushback from the tech industry, including Microsoft, OpenAI, and Perplexity, which have all been developing safeguards to counter prompt injection attacks. This includes assuming all content read from the web is untrusted and requires user permission before executing more sensitive commands. On Monday, Google announced its own effort to prevent such attacks in Chrome, which taps the Gemini chatbot for its AI capabilities. In our own testing, we've found AI browsers to be underwhelming, with poor performance. So, users should be well aware of the limitations, including the privacy and security compromises.
[2]
Block all AI browsers for the foreseeable future: Gartner
An agentic transaction capability that allows the browser to autonomously navigate, interact with, and complete tasks on websites, especially within authenticated web sessions. Gartner's document warns that AI sidebars mean "Sensitive user data - such as active web content, browsing history, and open tabs - is often sent to the cloud-based AI back end, increasing the risk of data exposure unless security and privacy settings are deliberately hardened and centrally managed." The document suggests it's possible to mitigate those risks by assessing the back-end AI services that power an AI browser to understand if their security measures present an acceptable risk to your organization. If that process leads to approval for use of a browser's back-end AI, Gartner advises organizations should still "Educate users that anything they are viewing could potentially be sent to the AI service back end to ensure they do not have highly sensitive data active on the browser tab while using the AI browser's sidebar to summarize or perform other autonomous actions." Gartner's fears about the agentic capabilities of AI browser relate to their susceptibility to "indirect prompt-injection-induced rogue agent actions, inaccurate reasoning-driven erroneous agent actions, and further loss and abuse of credentials if the AI browser is deceived into autonomously navigating to a phishing website." The authors also suggest that employees "might be tempted to use AI browsers and automate certain tasks that are mandatory, repetitive, and less interesting" and imagine some instructing an AI browser to complete their mandatory cybersecurity training sessions. Another scenario they consider is exposing agentic browsers to internal procurement tools, then watching LLMs make mistakes that cause organizations to buy things they don't want or need. "A form could be filled out with incorrect information, a wrong office supply item might be ordered... or a wrong flight might be booked," they imagine. Again, the analysts recommend some mitigations, such as ensuring agents can't use email, as that will limit their ability to perform some actions. They also suggest using settings that ensure AI browsers can't retain data. But overall, the trio of analysts think AI browsers are just too dangerous to use without first conducting risk assessments and suggest that even after that exercise you'll likely end up with a long list of prohibited use cases - and the job of monitoring an AI browser fleet to enforce the resulting policies. ®
[3]
Gartner advisory states AI browsers are NOT your friend and are putting your business at risk
Employees could use AI browsers to complete cybersecurity training Analyst firm Gartner has advised organizations to block AI browsers from use, warning of the potential for data-exposure or autonomous actions performed by agentic browsers on malicious websites. The main observation given by researcher VP Dennis Xu, senior director analyst Evgeny Mirolyubov, and VP analyst John Watts was that, "Default AI browser settings prioritize user experience over security." AI browsers, such as OpenAI's ChatGPT Atlas, are often employed to boost efficiency by using autonomous navigation, workflows, and data collection - but they can be tricked by malicious webpages into collecting and transferring sensitive information such as bank account details, credentials, and emails. The Gartner analysts summarized an agentic browser to include two key features: Many agentic browsers do not allow for the use of AI functions within a local LLM, which means that user data, ranging from web content to browsing history and open tabs, "is often sent to the cloud-based AI back end, increasing the risk of data exposure unless security and privacy settings are deliberately hardened and centrally managed." Ultimately, it is down to each individual organization to perform an assessment of the AI browser's back end services to see if they are compliant with an organization's cybersecurity and data protection policies. But even if they pass, they can still be used in a way that presents further risks to the organization. In this case, the user themselves may provide the browser with an unnecessary amount of sensitive information just by having sensitive data open in the same web browser window while using the browser's AI assistant. Furthermore, as agentic browsers can complete actions autonomously, Gartner warns that employees could be "tempted to use AI browsers and automate certain tasks that are mandatory, repetitive, and less interesting," such as cybersecurity training. Gartner suggests that organizations who continue to use agentic browsers should "Educate users that anything they are viewing could potentially be sent to the AI service back-end to ensure they do not have highly sensitive data active on the browser tab while using the AI browser's sidebar to summarize or perform other autonomous actions." Commenting on the Gartner advisory, Javvad Malik, Lead Security Awareness Advocate at KnowBe4 said: "AI features have introduced tension in cybersecurity, requiring people to assess the trade-off between productivity and security risks. While agentic browsers promise many features to enhance user experience, we are still in early stages where the risks are not well understood and default configurations prioritize convenience over security, something we see in many technologies." "However, blanket bans are rarely sustainable long-term strategies. Instead, the focus should be on risk assessments that evaluate the specific AI services powering these browsers. This can allow for measured adoption while maintaining necessary oversight. As we find more and more AI agents making their way into every aspect of technology, organizations need to have playbooks in place to assess, and protect AI agents, and enable them to work within the organization according to their own needs and risk appetite".
Share
Share
Copy Link
Research giant Gartner and the UK's National Cyber Security Centre are warning organizations to block AI browsers immediately, citing serious cybersecurity risks. The browsers, which automate web tasks through AI agents, are vulnerable to prompt injection attacks that can trick them into executing malicious commands. Default settings prioritize user experience over security, putting sensitive data at risk.
Research giant Gartner has issued a stark warning to organizations: block all AI browsers immediately. In a report published last week, the firm declared that "AI browsers are nascent and innovative, yet too risky for general adoption by most organizations."
1
The advisory comes as more tech companies roll out AI-powered browsers designed to streamline web surfing by automating tasks like online searches and email writing through AI agents.
Source: TechRadar
The cybersecurity risks stem from fundamental vulnerabilities in how these browsers operate. Gartner specifically calls out browsers like Perplexity's Comet and OpenAI's Atlas, which can automate various functions but remain susceptible to exploitation. The firm recommends that all chief information security officers (CISOs) "block all AI browsers in the foreseeable future to minimize risk exposure."
1
The primary concern centers on prompt injection attacks, which exploit AI chatbots' inability to distinguish between legitimate user requests and malicious commands. These attacks can turn browsers against users by embedding harmful instructions in websites or emails. The UK's National Cyber Security Centre reinforced these concerns on Monday, warning that "there's a good chance prompt injection will never be properly mitigated" in the same way as SQL injection flaws. The agency stated that "the best we can hope for is reducing the likelihood or impact of attacks."
1

Source: PC Magazine
Gartner's document, authored by VP Dennis Xu, senior director analyst Evgeny Mirolyubov, and VP analyst John Watts, highlights how AI browsers are vulnerable to "indirect prompt-injection-induced rogue agent actions, inaccurate reasoning-driven erroneous agent actions, and further loss and abuse of credentials if the AI browser is deceived into autonomously navigating to a phishing website."
2
A critical observation from the Gartner researchers is that default AI browser settings prioritize user experience over security.
3
AI sidebars mean that "sensitive user data - such as active web content, browsing history, and open tabs - is often sent to the cloud-based AI back end, increasing the risk of data exposure unless security and privacy settings are deliberately hardened and centrally managed."2
Many agentic browsers lack the ability to use AI functions within a local LLM, forcing all data processing through cloud services. This architecture creates sensitive data exposure risks, particularly when users have confidential information open in browser tabs while using AI assistance features.
3
Related Stories
The agentic transaction capabilities of these browsers allow them to autonomously navigate, interact with, and complete tasks on websites, especially within authenticated web sessions. While this functionality aims to boost efficiency, it opens doors to exploitation. Gartner warns that employees "might be tempted to use AI browsers and automate mandatory tasks that are mandatory, repetitive, and less interesting," such as cybersecurity training sessions.
2
The analysts envision scenarios where LLMs make costly mistakes when exposed to internal procurement tools. "A form could be filled out with incorrect information, a wrong office supply item might be ordered... or a wrong flight might be booked," they warn.
2
Despite these warnings, pushback from the tech industry is expected. Microsoft, OpenAI, and Perplexity have all been developing safeguards to counter prompt injection, including treating all web content as untrusted and requiring user permission before executing sensitive commands. On Monday, Google announced its own effort to prevent such attacks in Chrome, which taps the Gemini chatbot for its AI capabilities.
1
For organizations that choose to proceed with AI browsers despite the risks, Gartner recommends they conduct extensive risk assessments of the back-end AI services powering these browsers. Even after approval, companies should educate users that "anything they are viewing could potentially be sent to the AI service back end to ensure they do not have highly sensitive data active on the browser tab while using the AI browser's sidebar."
2
Javvad Malik, Lead Security Awareness Advocate at KnowBe4, noted that "blanket bans are rarely sustainable long-term strategies," suggesting instead that organizations focus on risk assessments that evaluate specific AI services. "As we find more and more AI agents making their way into every aspect of technology, organizations need to have playbooks in place to assess, and protect AI agents, and enable them to work within the organization according to their own needs and risk appetite."
3
The recommendation to automate mandatory tasks through AI browsers may offer productivity gains, but the security trade-offs remain significant as the technology matures and malicious commands become more sophisticated.Summarized by
Navi
[2]
1
Technology

2
Technology

3
Science and Research
