Gartner Urges Businesses to Block All AI Browsers, Citing Critical Cybersecurity Risks

Reviewed byNidhi Govil

6 Sources

Share

Research giant Gartner has issued a stark warning to organizations: block all AI browsers immediately. The advisory highlights critical cybersecurity risks posed by agentic browsers from OpenAI, Perplexity, and others, including prompt injection attacks and sensitive data exposure. With the UK's National Cyber Security Centre echoing similar concerns, businesses face pressure to prioritize security over the convenience these tools promise.

Gartner Issues Urgent Warning on AI Browsers

Research firm Gartner has delivered a stark message to businesses worldwide: block all AI browsers for the foreseeable future. In a report titled "Cybersecurity Must Block AI Browsers for Now," analysts Dennis Xu, Evgeny Mirolyubov, and John Watts warned that while agentic browsers promise to streamline workflows and boost efficiency, they introduce critical cybersecurity risks that organizations cannot afford to ignore

1

. The advisory comes as tech companies including OpenAI and Perplexity roll out AI-powered browsers designed to automate tasks and enhance the web-surfing experience

2

.

Source: TechRadar

Source: TechRadar

The core concern centers on how these AI browsers operate. Chief Information Security Officers (CISOs) are being urged to take immediate action because "default AI browser settings prioritize user experience over security," according to Gartner analysts

1

. This fundamental design choice creates vulnerabilities that could expose organizations to data breaches and security incidents. The UK's National Cyber Security Centre has reinforced these concerns, stating that "there's a good chance prompt injection will never be properly mitigated" in the same way as other coding vulnerabilities

2

.

Understanding Prompt Injection Attacks and Data Theft Risks

The threat of prompt injection attacks represents one of the most significant dangers posed by AI browsers. These attacks exploit a fundamental weakness in how AI agents process information: they struggle to distinguish between legitimate user requests and malicious commands hidden in websites or emails

2

. An AI chatbot could unintentionally interact with a malicious website, effectively turning the browser against the user and executing harmful actions without explicit authorization

1

.

Source: PC Magazine

Source: PC Magazine

Gartner specifically called out Perplexity's Comet and OpenAI's Atlas as browsers that companies should approach with caution due to their ability to automate various functions

2

. The sensitive data exposure risk extends beyond direct attacks. Many agentic browsers send user data—including active web content, browsing history, and open tabs—to a cloud backend for processing

3

. If an employee submits confidential corporate data to an AI assistant without understanding where this information is stored, and if the cloud backend lacks adequate security measures, the organization faces potential data breaches

1

.

Agentic Capabilities Create Unexpected Vulnerabilities

The autonomous nature of agentic browsers introduces another layer of complexity. These tools can independently navigate websites, interact with authenticated web sessions, and complete tasks on behalf of users

3

. While this automation can improve productivity, Gartner analysts warn that employees might be tempted to use AI browsers to automate mandatory but repetitive tasks—including cybersecurity training sessions—without actually learning the material

1

5

.

Source: MediaNama

Source: MediaNama

The potential for LLMs to make costly mistakes adds another dimension to the risk profile. Gartner analysts envision scenarios where AI agents exposed to internal procurement tools could fill out forms with incorrect information, order wrong office supplies, or book incorrect flights

3

. These errors, while seemingly minor, could accumulate into significant operational and financial problems for organizations.

Risk Assessments Offer a Path Forward

Gartner recommends that businesses conduct thorough risk assessments on AI solutions and their associated backend systems before allowing their use

1

. These security audits should evaluate whether the specific AI services powering these browsers meet an organization's cybersecurity and privacy standards

5

. Even if an AI browser passes initial security evaluations, organizations must educate users that anything they view could potentially be sent to the AI service backend, ensuring employees don't have highly sensitive data active in browser tabs while using AI features

3

.

Javvad Malik, Lead Security Awareness Advocate at KnowBe4, offered a nuanced perspective on the situation. While acknowledging that blanket bans are rarely sustainable long-term strategies, he emphasized that "we are still in early stages where the risks are not well understood and default configurations prioritize convenience over security"

1

. The tech industry, including Microsoft, OpenAI, and Perplexity, has been developing safeguards to counter these threats, including treating all web content as untrusted and requiring user permission before executing sensitive commands

2

. Google recently announced its own effort to prevent such attacks in Chrome, leveraging the Gemini chatbot for AI capabilities

2

. However, the fundamental question remains: can these safeguards evolve quickly enough to address the rapidly expanding threat landscape that AI browsers create?

Today's Top Stories

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2026 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo