6 Sources
6 Sources
[1]
Gartner urges businesses to 'block all AI browsers' - what's behind the dire warning
Automation is useful, but convenience shouldn't replace security. Businesses should steer clear of agentic browsers for now, analysts from research firm Gartner have warned. Agentic browsers, otherwise known as AI browsers, are changing how we use browsers to access the internet, perform search queries, and manage workflows. Also: Scammers are poisoning AI search results to steer you straight into their traps - here's how As AI has now embedded itself into almost every industry, it was only a matter of time before these technologies infiltrated the web browser market. Not only are well-known browser developers exploring the benefits and potential applications of artificial intelligence in search functionality, user assistance, support, and security, but interest in agentic browsers has also led to a crop of smaller outfits appearing, with AI browsers now being offered from organizations including OpenAI and Perplexity. Also: Are AI browsers worth the security risk? Why experts are worried For research purposes, content summarization, and personalization, AI browsers can be beneficial. They can save you time and may increase your efficiency -- but this doesn't mean that every query LLM-based chat assistant's answer is correct, nor does it mean that AI browsers are inherently secure. In an analyst report and advisory dubbed "Cybersecurity Must Block AI Browsers for Now," published earlier this month, as reported by The Register, Gartner analysts said that while agentic browsers have the potential to revolutionize how we interface with websites and perform online activities, they also introduce "critical cybersecurity risks." "CISOs must block all AI browsers in the foreseeable future to minimize risk exposure," the research agency said. (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) According to analysts Dennis Xu, Evgeny Mirolyubov, and John Watts, the main issue is that "default AI browser settings prioritize user experience over security." As AI browsers are able to operate independently, interact with websites, and perform tasks on a user's behalf, as well as display content that could prompt the user to take specific actions -- even when this content is malicious -- there are demonstrable risks to their use, in both a consumer and business context. Also: I've been testing the top AI browsers - here's which ones actually impressed me For example, an AI chatbot could unintentionally interact with a malicious website, or an employee might submit confidential, sensitive corporate data to an AI assistant and be unaware of where this information is stored. If an AI's cloud backend is not secure, this could eventually result in a data breach or security incident that places businesses at risk. In addition, Gartner said that employees could be tempted to use AI browsers or assistants to automate repetitive tasks. While AI can benefit workflows and improve efficiency, using it in this way may also have unintended effects -- such as a member of staff using it to complete cybersecurity training while not learning anything at all. Gartner is far from the only organization raising the alarm about the potential cybersecurity implications of AI browsers and assistants. Security experts are watching the AI browser evolution closely and have warned us of many potential cybersecurity challenges involved in regulating them, including the emergence of prompt injection attacks, personal data theft and exposure, and the increased risk of surveillance that compromises our personal security. Gartner recommends that businesses conduct risk assessments on AI solutions as well as any associated backend systems, as such security audits can reveal whether or not AI browsers are deemed an acceptable security risk. Considering how new and untried they are, the answer is likely no. Also: Use an AI browser? 5 ways to protect yourself from prompt injections - before it's too late If this is the case, Gartner said that CISOs should consider blocking AI browser use entirely. However, in either case, employees should be trained on the dangers of relying too heavily on AI -- and why they shouldn't part with sensitive information when they are interacting with AI programs. "While agentic browsers promise many features to enhance user experience, we are still in early stages where the risks are not well understood and default configurations prioritise convenience over security, something we see in many technologies," commented Javvad Malik, lead security awareness advocate at KnowBe4. "However, blanket bans are rarely sustainable long-term strategies. Instead, the focus should be on risk assessments that evaluate the specific AI services powering these browsers. This can allow for measured adoption while maintaining necessary oversight."
[2]
Security Experts Warn Companies to 'Block All AI Browsers Now'
The cybersecurity pushback against AI-powered browsers is escalating, with research giant Gartner and a UK government agency flagging them as potential threats. "Cybersecurity must block AI browsers now," Gartner writes in a new report published last week. "AI browsers are nascent and innovative, yet too risky for general adoption by most organizations." Gartner is sounding the alarms as more tech companies are starting to offer AI browsers that promise to streamline the web-surfing experience. The programs let you outsource and automate tasks, such as online searches or writing an email, to an AI agent. The only problem is that these same AI capabilities can be tricked into executing malicious commands hidden in websites or emails, effectively turning the browser against the user. The security community refers to this threat as "prompt injection attacks," which exploit the fact that AI chatbots can't always distinguish between a user's request and a malicious one. Gartner calls out Perplexity's Comet and OpenAI's Atlas as browsers that companies should be careful with since they can automate various functions. Gartner goes as far as to advise that all chief information security officers (CISOs) "block all AI browsers in the foreseeable future to minimize risk exposure," adding that the default settings "are optimized for end-user experience rather than cybersecurity best practices or privacy." On Monday, the UK's National Cyber Security Centre also warned, "There's a good chance prompt injection will never be properly mitigated" in the same way as SQL injection flaws, which can result from improper computer coding. "The best we can hope for is reducing the likelihood or impact of attacks," the agency wrote on Monday. However, the public can expect pushback from the tech industry, including Microsoft, OpenAI, and Perplexity, which have all been developing safeguards to counter prompt injection attacks. This includes assuming all content read from the web is untrusted and requires user permission before executing more sensitive commands. On Monday, Google announced its own effort to prevent such attacks in Chrome, which taps the Gemini chatbot for its AI capabilities. In our own testing, we've found AI browsers to be underwhelming, with poor performance. So, users should be well aware of the limitations, including the privacy and security compromises.
[3]
Block all AI browsers for the foreseeable future: Gartner
An agentic transaction capability that allows the browser to autonomously navigate, interact with, and complete tasks on websites, especially within authenticated web sessions. Gartner's document warns that AI sidebars mean "Sensitive user data - such as active web content, browsing history, and open tabs - is often sent to the cloud-based AI back end, increasing the risk of data exposure unless security and privacy settings are deliberately hardened and centrally managed." The document suggests it's possible to mitigate those risks by assessing the back-end AI services that power an AI browser to understand if their security measures present an acceptable risk to your organization. If that process leads to approval for use of a browser's back-end AI, Gartner advises organizations should still "Educate users that anything they are viewing could potentially be sent to the AI service back end to ensure they do not have highly sensitive data active on the browser tab while using the AI browser's sidebar to summarize or perform other autonomous actions." Gartner's fears about the agentic capabilities of AI browser relate to their susceptibility to "indirect prompt-injection-induced rogue agent actions, inaccurate reasoning-driven erroneous agent actions, and further loss and abuse of credentials if the AI browser is deceived into autonomously navigating to a phishing website." The authors also suggest that employees "might be tempted to use AI browsers and automate certain tasks that are mandatory, repetitive, and less interesting" and imagine some instructing an AI browser to complete their mandatory cybersecurity training sessions. Another scenario they consider is exposing agentic browsers to internal procurement tools, then watching LLMs make mistakes that cause organizations to buy things they don't want or need. "A form could be filled out with incorrect information, a wrong office supply item might be ordered... or a wrong flight might be booked," they imagine. Again, the analysts recommend some mitigations, such as ensuring agents can't use email, as that will limit their ability to perform some actions. They also suggest using settings that ensure AI browsers can't retain data. But overall, the trio of analysts think AI browsers are just too dangerous to use without first conducting risk assessments and suggest that even after that exercise you'll likely end up with a long list of prohibited use cases - and the job of monitoring an AI browser fleet to enforce the resulting policies. ®
[4]
Security Experts Warn Companies to 'Block All AI Browsers Now' - General Chat
The cybersecurity pushback against AI-powered browsers is escalating, with research giant Gartner and a UK government agency flagging them as potential threats. "Cybersecurity must block AI browsers now," Gartner writes in a new report published last week. "AI browsers are nascent and innovative, yet too risky for general adoption by most organizations." Gartner is sounding the alarms as more tech companies are starting to offer AI browsers that promise to streamline the web-surfing experience. The programs let you outsource and automate tasks, such as online searches or writing an email, to an AI agent. The only problem is that these same AI capabilities can be tricked into executing malicious commands hidden in websites or emails, effectively turning the browser against the user. he security community refers to this threat as "prompt injection attacks," which exploit the fact that AI chatbots can't always distinguish between a user's request and a malicious one. Gartner calls out Perplexity's Comet and OpenAI's Atlas as browsers that companies should be careful with since they can automate various functions. Gartner goes as far as to advise that all chief information security officers (CISOs) "block all AI browsers in the foreseeable future to minimize risk exposure," adding that the default settings "are optimized for end-user experience rather than cybersecurity best practices or privacy."
[5]
Gartner advisory states AI browsers are NOT your friend and are putting your business at risk
Employees could use AI browsers to complete cybersecurity training Analyst firm Gartner has advised organizations to block AI browsers from use, warning of the potential for data-exposure or autonomous actions performed by agentic browsers on malicious websites. The main observation given by researcher VP Dennis Xu, senior director analyst Evgeny Mirolyubov, and VP analyst John Watts was that, "Default AI browser settings prioritize user experience over security." AI browsers, such as OpenAI's ChatGPT Atlas, are often employed to boost efficiency by using autonomous navigation, workflows, and data collection - but they can be tricked by malicious webpages into collecting and transferring sensitive information such as bank account details, credentials, and emails. The Gartner analysts summarized an agentic browser to include two key features: Many agentic browsers do not allow for the use of AI functions within a local LLM, which means that user data, ranging from web content to browsing history and open tabs, "is often sent to the cloud-based AI back end, increasing the risk of data exposure unless security and privacy settings are deliberately hardened and centrally managed." Ultimately, it is down to each individual organization to perform an assessment of the AI browser's back end services to see if they are compliant with an organization's cybersecurity and data protection policies. But even if they pass, they can still be used in a way that presents further risks to the organization. In this case, the user themselves may provide the browser with an unnecessary amount of sensitive information just by having sensitive data open in the same web browser window while using the browser's AI assistant. Furthermore, as agentic browsers can complete actions autonomously, Gartner warns that employees could be "tempted to use AI browsers and automate certain tasks that are mandatory, repetitive, and less interesting," such as cybersecurity training. Gartner suggests that organizations who continue to use agentic browsers should "Educate users that anything they are viewing could potentially be sent to the AI service back-end to ensure they do not have highly sensitive data active on the browser tab while using the AI browser's sidebar to summarize or perform other autonomous actions." Commenting on the Gartner advisory, Javvad Malik, Lead Security Awareness Advocate at KnowBe4 said: "AI features have introduced tension in cybersecurity, requiring people to assess the trade-off between productivity and security risks. While agentic browsers promise many features to enhance user experience, we are still in early stages where the risks are not well understood and default configurations prioritize convenience over security, something we see in many technologies." "However, blanket bans are rarely sustainable long-term strategies. Instead, the focus should be on risk assessments that evaluate the specific AI services powering these browsers. This can allow for measured adoption while maintaining necessary oversight. As we find more and more AI agents making their way into every aspect of technology, organizations need to have playbooks in place to assess, and protect AI agents, and enable them to work within the organization according to their own needs and risk appetite".
[6]
AI Browsers Could Leak Data, Misfire Tasks, Warns Gartner
Analyst firm Gartner warned that AI-powered web browsers pose serious security risks and advised most organisations to block them until they develop proper safeguards. In a new advisory titled "Cybersecurity Must Block AI Browsers for Now," the firm argued that these browsers could expose sensitive data, cause automated mistakes, and allow attackers to manipulate them, according to a report by The Register. According to the advisory, AI browsers, such as Perplexity's Comet and OpenAI's ChatGPT Atlas, combine an AI sidebar with autonomous "agentic" capabilities. These features allow a browser to summarise or translate web pages, and in some cases, automatically perform tasks like filling forms, navigating websites, or making purchases while logged into private accounts. Gartner analysts Dennis Xu, Evgeny Mirolyubov, and John Watts warned in the report that "prioritise user experience over security." The advisory says that AI sidebars routinely send active browser content, such as open tabs, page data, and browsing history, to cloud-based AI systems operated by browser developers. This, Gartner notes, can result in unintentional exposure of corporate information if security settings are not configured properly. The document explains that organisations can theoretically reduce the risk by evaluating how securely each AI service handles user data. Even when organisations clear an AI browser's back-end for use, the analysts advise that employees should assume the browser could transmit anything displayed on their screen to an external AI system. A larger concern, according to Gartner, is the possibility of autonomous browser actions going wrong. The report flags multiple risks: The analysts also said employees could misuse AI browsers to avoid completing required tasks, adding that some workers might instruct the browser to finish mandatory cybersecurity training on their behalf. The report imagines that autonomous systems inside corporate environments might even make procurement errors. It warns that "A form could be filled out with incorrect information, a wrong office supply item might be ordered... or a wrong flight might be booked." While Gartner lists some partial protections, such as disabling email access for AI agents, restricting what they can store, and applying strict settings, the advisory concludes that these steps may not be enough. The analysts recommend that organisations block AI browsers unless they complete a detailed risk assessment. Even when a company approves a browser, the analysts expect it will have to enforce usage policies and monitor AI browser activity. Gartner's warning comes amid increasing questions about the safety of agentic AI systems, which can take independent actions on the internet. Some technology companies argue these systems increase productivity, but critics say they also introduce unpredictable behaviour and new attack surfaces. For now, Gartner clearly urges organisations to act cautiously, review the risks thoroughly, and block AI browsers until they better understand their security implications. In India, regulators have also started assessing the broader risks posed by AI and agentic systems. The Competition Commission of India (CCI) recently told a Parliamentary panel that it is studying how AI could enable "algorithmic collusion, data monopolisation, and anti-competitive market dynamics." The Ministry of Corporate Affairs informed the committee that "Agentic AI is an evolving concept" and said the CCI's ongoing market study on AI and competition is in its "advanced stage". India's proposed Digital Competition Bill (DCB) focuses on nine core digital services but currently does not cover AI agents or virtual assistants, an omission the Parliamentary Standing Committee has flagged. The Committee recommended expanding the bill's scope to align it with global practices and bring advanced AI systems, including agentic tools, under regulatory oversight. The warnings come at a time when major AI companies are launching their own browsers to gain more control over users' online activity. OpenAI recently introduced Atlas, while Perplexity launched Comet. These browsers integrate AI at the system level, allowing tasks like summarising emails, generating responses, and analysing pages directly inside the browser. The move is driven by distribution, data access, and competition. Browsers are the most-used apps on phones and give AI companies direct visibility into what people read, click, or search. According to MediaNama Editor Nikhil Pahwa, controlling the browser helps AI firms become the user's default AI assistant and reduces dependence on Google's Chrome ecosystem. The report also notes that Google is shifting its search interface towards "AI Mode", limiting traditional search results. AI companies fear losing visibility in Chrome, and building their own browsers helps them hedge against this shift. These AI-native browsers also double as workspaces, performing actions across email, calendars, and websites, blurring the line between browser and operating system. However, users may hesitate to trust AI browsers with such deep access to personal data, especially when these tools track behaviour across the web for model improvement. With AI browsers positioning themselves as alternatives to Chrome, Safari, and Edge, Gartner's warning adds a layer of security concern to what has already become a highly competitive and fast-changing browser market.
Share
Share
Copy Link
Research giant Gartner has issued a stark warning to organizations: block all AI browsers immediately. The advisory highlights critical cybersecurity risks posed by agentic browsers from OpenAI, Perplexity, and others, including prompt injection attacks and sensitive data exposure. With the UK's National Cyber Security Centre echoing similar concerns, businesses face pressure to prioritize security over the convenience these tools promise.
Research firm Gartner has delivered a stark message to businesses worldwide: block all AI browsers for the foreseeable future. In a report titled "Cybersecurity Must Block AI Browsers for Now," analysts Dennis Xu, Evgeny Mirolyubov, and John Watts warned that while agentic browsers promise to streamline workflows and boost efficiency, they introduce critical cybersecurity risks that organizations cannot afford to ignore
1
. The advisory comes as tech companies including OpenAI and Perplexity roll out AI-powered browsers designed to automate tasks and enhance the web-surfing experience2
.
Source: TechRadar
The core concern centers on how these AI browsers operate. Chief Information Security Officers (CISOs) are being urged to take immediate action because "default AI browser settings prioritize user experience over security," according to Gartner analysts
1
. This fundamental design choice creates vulnerabilities that could expose organizations to data breaches and security incidents. The UK's National Cyber Security Centre has reinforced these concerns, stating that "there's a good chance prompt injection will never be properly mitigated" in the same way as other coding vulnerabilities2
.The threat of prompt injection attacks represents one of the most significant dangers posed by AI browsers. These attacks exploit a fundamental weakness in how AI agents process information: they struggle to distinguish between legitimate user requests and malicious commands hidden in websites or emails
2
. An AI chatbot could unintentionally interact with a malicious website, effectively turning the browser against the user and executing harmful actions without explicit authorization1
.
Source: PC Magazine
Gartner specifically called out Perplexity's Comet and OpenAI's Atlas as browsers that companies should approach with caution due to their ability to automate various functions
2
. The sensitive data exposure risk extends beyond direct attacks. Many agentic browsers send user data—including active web content, browsing history, and open tabs—to a cloud backend for processing3
. If an employee submits confidential corporate data to an AI assistant without understanding where this information is stored, and if the cloud backend lacks adequate security measures, the organization faces potential data breaches1
.The autonomous nature of agentic browsers introduces another layer of complexity. These tools can independently navigate websites, interact with authenticated web sessions, and complete tasks on behalf of users
3
. While this automation can improve productivity, Gartner analysts warn that employees might be tempted to use AI browsers to automate mandatory but repetitive tasks—including cybersecurity training sessions—without actually learning the material1
5
.
Source: MediaNama
The potential for LLMs to make costly mistakes adds another dimension to the risk profile. Gartner analysts envision scenarios where AI agents exposed to internal procurement tools could fill out forms with incorrect information, order wrong office supplies, or book incorrect flights
3
. These errors, while seemingly minor, could accumulate into significant operational and financial problems for organizations.Related Stories
Gartner recommends that businesses conduct thorough risk assessments on AI solutions and their associated backend systems before allowing their use
1
. These security audits should evaluate whether the specific AI services powering these browsers meet an organization's cybersecurity and privacy standards5
. Even if an AI browser passes initial security evaluations, organizations must educate users that anything they view could potentially be sent to the AI service backend, ensuring employees don't have highly sensitive data active in browser tabs while using AI features3
.Javvad Malik, Lead Security Awareness Advocate at KnowBe4, offered a nuanced perspective on the situation. While acknowledging that blanket bans are rarely sustainable long-term strategies, he emphasized that "we are still in early stages where the risks are not well understood and default configurations prioritize convenience over security"
1
. The tech industry, including Microsoft, OpenAI, and Perplexity, has been developing safeguards to counter these threats, including treating all web content as untrusted and requiring user permission before executing sensitive commands2
. Google recently announced its own effort to prevent such attacks in Chrome, leveraging the Gemini chatbot for AI capabilities2
. However, the fundamental question remains: can these safeguards evolve quickly enough to address the rapidly expanding threat landscape that AI browsers create?Summarized by
Navi
[3]
[4]
1
Policy and Regulation

2
Technology

3
Technology
