

2 Sources
2 Sources
[1]

AI browsers are a cybersecurity time bomb
Web browsers are getting awfully chatty. They got even chattier last week after OpenAI and Microsoft kicked the AI browser race into high gear with ChatGPT Atlas and a "Copilot Mode" for Edge. They can answer questions, summarize pages, and even take actions on your behalf. The experience is far from seamless yet, but it hints at a more convenient, hands-off future where your browser does lots of your thinking for you. That future could also be a minefield of new vulnerabilities and data leaks, cybersecurity experts warn. The signs are already here, and researchers tell The Verge the chaos is only just getting started. Atlas and Copilot Mode are part of a broader land grab to control the gateway to the internet and to bake AI directly into the browser itself. That push is transforming what were once standalone chatbots on separate pages or apps into the very platform you use to navigate the web. They're not alone. Established players are also in the race, such as Google, which is integrating its Gemini AI model into Chrome; Opera, which launched Neon; and The Browser Company, with Dia. Startups are also keen to stake a claim, such as AI startup Perplexity -- best known for its AI-powered search engine, which made its AI-powered browser Comet freely available to everyone in early October -- and Sweden's Strawberry, which is still in beta and actively going after "disappointed Atlas users." In the past few weeks alone, researchers have uncovered vulnerabilities in Atlas allowing attackers to take advantage of ChatGPT's "memory" to inject malicious code, grant themselves access privileges, or deploy malware. Flaws discovered in Comet could allow attackers to hijack the browser's AI with hidden instructions. Perplexity, through a blog, and OpenAI's chief information security officer, Dane Stuckey, acknowledged prompt injections as a big threat last week, though both described them as a "frontier" problem that has no firm solution. "Despite some heavy guardrails being in place, there is a vast attack surface," says Hamed Haddadi, professor of human-centered systems at Imperial College London and chief scientist at web browser company Brave. And what we're seeing is just the tip of the iceberg. With AI browsers, the threats are numerous. Foremost, they know far more about you and are "much more powerful than traditional browsers," says Yash Vekaria, a computer science researcher at UC Davis. Even more than standard browsers, Vekaria says "there is an imminent risk from being tracked and profiled by the browser itself." AI "memory" functions are designed to learn from everything a user does or shares, from browsing to emails to searches, as well as conversations with the built-in AI assistant. This means you're probably sharing far more than you realise and the browser remembers it all. The result is "a more invasive profile than ever before," Vekaria says. Hackers would quite like to get hold of that information, especially if coupled with stored credit card details and login credentials often found on browsers. Another threat is inherent to the rollout of any new technology. No matter how careful developers are, there will inevitably be weaknesses hackers can exploit. This could range from bugs and coding errors that accidentally reveal sensitive data to major security flaws that could let hackers gain access to your system. "It's early days, so expect risky vulnerabilities to emerge," says Lukasz Olejnik, an independent cybersecurity researcher and visiting senior research fellow at King's College London. He points to the "early Office macro abuses, malicious browser extensions, and mobiles prior to [the] introduction of permissions" as examples of previous security issues linked to the rollout of new technologies. "Here we go again." Some vulnerabilities are never found -- sometimes leading to devastating zero-day attacks, named as there are zero days to fix the flaw -- but thorough testing can slash the number of potential problems. With AI browsers, "the biggest immediate threat is the market rush," Haddadi says. "These agentic browsers have not been thoroughly tested and validated." But AI browsers' defining feature, AI, is where the worst threats are brewing. The biggest challenge comes with AI agents that act on behalf of the user. Like humans, they're capable of visiting suspect websites, clicking on dodgy links, and inputting sensitive information into places sensitive information shouldn't go, but unlike some humans, they lack the learned common sense that helps keep us safe online. Agents can also be misled, even hijacked, for nefarious purposes. All it takes is the right instructions. So-called prompt injections can range from glaringly obvious to subtle, effectively hidden in plain sight in things like images, screenshots, form fields, emails and attachments, and even something as simple as white text on a white background. Worse yet, these attacks can be very difficult to anticipate and defend against. Automation means bad actors can try and try again until the agent does what they want, says Haddadi. "Interaction with agents allows endless 'try and error' configurations and explorations of methods to insert malicious prompts and commands." There are simply far more chances for a hacker to break through when interacting with an agent, opening up a huge space for potential attacks. Shujun Li, a professor of cybersecurity at the University of Kent, says "zero-day vulnerabilities are exponentially increasing" as a result. Even worse: Li says as the flaw starts with an agent, detection will also be delayed, meaning potentially bigger breaches. It's not hard to imagine what might be in store. Olejnik sees scenarios where attackers use hidden instructions to get AI browsers to send out personal data or steal purchased goods by changing the saved address on a shopping site. To make things worse, Vekaria warns it's "relatively easy to pull off attacks" given the current state of AI browsers, even with safeguards in place. "Browser vendors have a lot of work to do in order to make them more safe, secure, and private for the end users," he says. For some threats, experts say the only real way to keep safe using AI browsers is to simply avoid the marquee features entirely. Li suggests people save AI for "only when they absolutely need it" and know what they're doing. Browsers should "operate in an AI-free mode by default," he says. If you must use the AI agent features, Vekaria advises a degree of hand-holding. When setting a task, give the agent verified websites you know to be safe rather than letting it figure them out on its own. "It can end up suggesting and using a scam site," he warns.
[2]

Hidden browser extensions might be quietly recording every move you make
AI browsers risk turning helpful automation into channels for silent data theft New "agentic" browsers which offer an AI-powered sidebar promise convenience but may widen the window for deceptive attacks, experts have warned. Researchers from browser security firm SquareX found a benign-looking extension can overlay a counterfeit sidebar onto the browsing surface, intercept inputs, and return malicious instructions that appear legitimate. This technique undermines the implicit trust users place in in-browser assistants and makes detection difficult because the overlay mimics standard interaction flows. The attack uses extension features to inject JavaScript into web pages, rendering a fake sidebar that sits above the genuine interface and captures user actions. Reported scenarios include directing users to phishing sites and capturing OAuth tokens through fake file-sharing prompts. It also recommends commands that install remote access backdoors on victims' devices. The consequences escalate quickly when these instructions involve account credentials or automated workflows. Many extensions request broad permissions, such as host access and storage, that are commonly granted to productivity tools, which reduces the value of permission analysis as a detection method. Conventional antivirus suites and browser permission models were not designed to recognize a deceptive overlay that never modifies the browser code itself. As more vendors integrate sidebars across major browser families, the collective attack surface expands and becomes harder to secure. Users should treat in-browser AI assistants as experimental features and avoid handling sensitive data or authorizing account linkages through them, because doing so can greatly raise the risk of compromise. Security teams should tighten extension governance, implement stronger endpoint controls, and monitor for abnormal OAuth activity to reduce risk. The threat also links directly to identity theft when fraudulent interfaces harvest credentials and session tokens with convincing accuracy. Agentic browsers introduce new convenience while also creating new vectors for social engineering and technical abuse. Therefore, vendors need to build interface integrity checks, improve extension vetting, and provide clearer guidance about acceptable use. Until those measures are widely established and audited, users and organizations should remain skeptical about trusting sidebar agents with any tasks involving sensitive accounts. Security teams and vendors must prioritize practical mitigations, including mandatory code audits for sidebar components and transparent update logs that users and administrators can review regularly. Via BleepingComputer
Share
Share
Copy Link
New AI-powered browsers from OpenAI, Microsoft, and other companies are introducing significant security vulnerabilities, including prompt injection attacks and malicious extensions that can steal user data and hijack browser functions.
The race to integrate artificial intelligence into web browsers has taken a dangerous turn, with cybersecurity experts warning of a "cybersecurity time bomb" as major vulnerabilities emerge across multiple AI-powered browsing platforms
1
. Recent weeks have seen the discovery of critical security flaws in prominent AI browsers, including OpenAI's ChatGPT Atlas, Microsoft's Copilot Mode for Edge, and Perplexity's Comet browser.
Source: The Verge
Researchers have uncovered vulnerabilities in Atlas that allow attackers to exploit ChatGPT's memory function to inject malicious code, grant unauthorized access privileges, and deploy malware
1
. Similarly, flaws in Comet could enable attackers to hijack the browser's AI with hidden instructions, fundamentally compromising the user experience.The current security crisis stems from a broader industry push to control the gateway to the internet by embedding AI directly into browsers
1
. Beyond OpenAI and Microsoft, established players like Google are integrating Gemini AI into Chrome, Opera has launched Neon, and The Browser Company has introduced Dia. Startups are also competing aggressively, with Perplexity making its Comet browser freely available and Sweden's Strawberry targeting "disappointed Atlas users."This rapid expansion has created what Hamed Haddadi, professor at Imperial College London and chief scientist at Brave, describes as "a vast attack surface" despite heavy guardrails being in place
1
.A particularly concerning threat has emerged through malicious browser extensions that can create convincing fake AI sidebars
2
. Researchers from SquareX discovered that benign-looking extensions can overlay counterfeit sidebars onto browsing surfaces, intercept user inputs, and return malicious instructions that appear legitimate.
Source: TechRadar
These attacks use extension features to inject JavaScript into web pages, rendering fake sidebars that capture user actions while mimicking standard interaction flows
2
. The technique can direct users to phishing sites, capture OAuth tokens through fake file-sharing prompts, and recommend commands that install remote access backdoors.Related Stories
AI browsers pose unprecedented privacy risks due to their extensive data collection capabilities. Yash Vekaria, a computer science researcher at UC Davis, notes that these browsers are "much more powerful than traditional browsers" and create "an imminent risk from being tracked and profiled"
1
.AI memory functions are designed to learn from everything users do, including browsing habits, emails, searches, and conversations with built-in AI assistants. This creates what Vekaria describes as "a more invasive profile than ever before," which becomes particularly valuable to hackers when coupled with stored credit card details and login credentials
1
.Cybersecurity experts point to the rapid market deployment as a primary concern. Lukasz Olejnik, an independent cybersecurity researcher at King's College London, warns that "it's early days, so expect risky vulnerabilities to emerge," drawing parallels to previous security issues with Office macros, malicious browser extensions, and early mobile platforms
1
.Hamed Haddadi identifies "the market rush" as the biggest immediate threat, noting that "these agentic browsers have not been thoroughly tested and validated"
1
. This rushed deployment increases the likelihood of zero-day attacks, where vulnerabilities remain undiscovered until exploited maliciously.Summarized by

Navi
[1]