Gartner urges companies to block all AI browsers as cybersecurity risks escalate

Reviewed byNidhi Govil

3 Sources

Share

Research giant Gartner and the UK's National Cyber Security Centre are warning organizations to block AI browsers immediately, citing serious cybersecurity risks. The browsers, which automate web tasks through AI agents, are vulnerable to prompt injection attacks that can trick them into executing malicious commands. Default settings prioritize user experience over security, putting sensitive data at risk.

Gartner Issues Urgent Warning on AI Browsers

Research giant Gartner has issued a stark warning to organizations: block all AI browsers immediately. In a report published last week, the firm declared that "AI browsers are nascent and innovative, yet too risky for general adoption by most organizations."

1

The advisory comes as more tech companies roll out AI-powered browsers designed to streamline web surfing by automating tasks like online searches and email writing through AI agents.

Source: TechRadar

Source: TechRadar

The cybersecurity risks stem from fundamental vulnerabilities in how these browsers operate. Gartner specifically calls out browsers like Perplexity's Comet and OpenAI's Atlas, which can automate various functions but remain susceptible to exploitation. The firm recommends that all chief information security officers (CISOs) "block all AI browsers in the foreseeable future to minimize risk exposure."

1

Prompt Injection Attacks Pose Critical Threat

The primary concern centers on prompt injection attacks, which exploit AI chatbots' inability to distinguish between legitimate user requests and malicious commands. These attacks can turn browsers against users by embedding harmful instructions in websites or emails. The UK's National Cyber Security Centre reinforced these concerns on Monday, warning that "there's a good chance prompt injection will never be properly mitigated" in the same way as SQL injection flaws. The agency stated that "the best we can hope for is reducing the likelihood or impact of attacks."

1

Source: PC Magazine

Source: PC Magazine

Gartner's document, authored by VP Dennis Xu, senior director analyst Evgeny Mirolyubov, and VP analyst John Watts, highlights how AI browsers are vulnerable to "indirect prompt-injection-induced rogue agent actions, inaccurate reasoning-driven erroneous agent actions, and further loss and abuse of credentials if the AI browser is deceived into autonomously navigating to a phishing website."

2

Default Settings Prioritize User Experience Over Security

A critical observation from the Gartner researchers is that default AI browser settings prioritize user experience over security.

3

AI sidebars mean that "sensitive user data - such as active web content, browsing history, and open tabs - is often sent to the cloud-based AI back end, increasing the risk of data exposure unless security and privacy settings are deliberately hardened and centrally managed."

2

Many agentic browsers lack the ability to use AI functions within a local LLM, forcing all data processing through cloud services. This architecture creates sensitive data exposure risks, particularly when users have confidential information open in browser tabs while using AI assistance features.

3

Autonomous Actions on Malicious Websites Create New Vulnerabilities

The agentic transaction capabilities of these browsers allow them to autonomously navigate, interact with, and complete tasks on websites, especially within authenticated web sessions. While this functionality aims to boost efficiency, it opens doors to exploitation. Gartner warns that employees "might be tempted to use AI browsers and automate mandatory tasks that are mandatory, repetitive, and less interesting," such as cybersecurity training sessions.

2

The analysts envision scenarios where LLMs make costly mistakes when exposed to internal procurement tools. "A form could be filled out with incorrect information, a wrong office supply item might be ordered... or a wrong flight might be booked," they warn.

2

Tech Industry Pushback and Mitigation Efforts

Despite these warnings, pushback from the tech industry is expected. Microsoft, OpenAI, and Perplexity have all been developing safeguards to counter prompt injection, including treating all web content as untrusted and requiring user permission before executing sensitive commands. On Monday, Google announced its own effort to prevent such attacks in Chrome, which taps the Gemini chatbot for its AI capabilities.

1

For organizations that choose to proceed with AI browsers despite the risks, Gartner recommends they conduct extensive risk assessments of the back-end AI services powering these browsers. Even after approval, companies should educate users that "anything they are viewing could potentially be sent to the AI service back end to ensure they do not have highly sensitive data active on the browser tab while using the AI browser's sidebar."

2

Javvad Malik, Lead Security Awareness Advocate at KnowBe4, noted that "blanket bans are rarely sustainable long-term strategies," suggesting instead that organizations focus on risk assessments that evaluate specific AI services. "As we find more and more AI agents making their way into every aspect of technology, organizations need to have playbooks in place to assess, and protect AI agents, and enable them to work within the organization according to their own needs and risk appetite."

3

The recommendation to automate mandatory tasks through AI browsers may offer productivity gains, but the security trade-offs remain significant as the technology matures and malicious commands become more sophisticated.

Today's Top Stories

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2025 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo