AI Browsers Face Critical Security Crisis as Prompt Injection Attacks Expose User Data

Reviewed byNidhi Govil

6 Sources

Share

New AI-powered browsers from OpenAI, Perplexity, and Opera are vulnerable to prompt injection attacks that can steal user data, access sensitive accounts, and execute malicious code. Security researchers warn these browsers are failing to detect over 90% of phishing attempts.

AI Browser Security Crisis Unfolds

The rollout of AI-powered browsers has introduced a new category of cybersecurity threats that researchers warn could fundamentally compromise user safety online. Major technology companies including OpenAI, Perplexity, and Opera have launched AI browsers featuring autonomous agents capable of browsing, summarizing content, and taking actions on behalf of users

1

. However, security testing has revealed critical vulnerabilities that expose users to unprecedented risks.

Source: MakeUseOf

Source: MakeUseOf

Prompt Injection Attacks Target AI Agents

The primary threat facing AI browsers comes from prompt injection attacks, where malicious instructions are embedded in web content to hijack AI agents

2

. These attacks can be executed through various methods, including invisible text on websites, hidden commands in images, and fake CAPTCHA screens designed specifically to fool AI systems.

Source: NBC News

Source: NBC News

Researchers at Brave Software discovered that Opera's Neon browser could be compromised by simply including invisible text on a webpage

4

. When users asked the AI agent to summarize such sites, hidden instructions could trigger the agent to access user accounts and exfiltrate email addresses to attackers.

Security firm LayerX conducted comprehensive testing of multiple AI browsers and found alarming failure rates in detecting malicious content . OpenAI's Atlas browser stopped only 5.8% of malicious web pages, while Perplexity's Comet browser managed just 7% detection rate. In contrast, traditional browsers like Edge and Chrome blocked 53% and 47% of attacks respectively.

Cross-Site Vulnerabilities Expose User Data

Beyond prompt injection, AI browsers face additional security challenges through cross-site request forgery attacks

2

. These attacks allow malicious websites to send commands to AI agents as if they were authenticated users, potentially accessing sensitive data across multiple sessions and devices.

Researchers demonstrated that attackers could manipulate ChatGPT's memory function through these vulnerabilities, creating persistent compromises that affect users across different browsing sessions

2

. This represents a significant escalation from traditional browser security threats.

Extension-Based Attacks Create New Vectors

Security researchers from SquareX identified another attack vector through malicious browser extensions that can overlay fake AI sidebars onto legitimate browsing interfaces

3

. These counterfeit interfaces can intercept user inputs and return malicious instructions while appearing completely legitimate to users.

Source: TechRadar

Source: TechRadar

The attack technique uses standard extension permissions to inject JavaScript into web pages, creating overlays that capture user actions and credentials

3

. Because these extensions request commonly granted permissions, traditional security measures struggle to detect the deceptive overlays.

Industry Response and Ongoing Challenges

OpenAI's Chief Information Security Officer Dane Stuckey acknowledged prompt injection as "a frontier, unsolved security problem" and warned that adversaries will invest significant resources in exploiting these vulnerabilities

4

. The company has implemented red-teaming exercises to identify vulnerabilities before public release, but researchers continue discovering new attack vectors.

Professor Hamed Haddadi from Imperial College London emphasized that the rapid market deployment of AI browsers has created "a vast attack surface" without adequate security testing

1

. The competitive pressure to release AI browser features quickly has resulted in insufficient security validation, according to cybersecurity experts.

Security researchers recommend treating AI browser assistants as experimental features and avoiding sensitive data handling through these platforms until stronger security measures are implemented

3

.

Today's Top Stories

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2025 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo