2 Sources
[1]
How AI platforms rank on data privacy in 2025
A new report from Incogni evaluates the data privacy practices of today's most widely used AI platforms. As generative AI and large language models (LLMs) become deeply embedded in everyday tools and services, the risk of unauthorized data collection and sharing has surged. Incogni's researchers analyzed nine leading platforms against 11 criteria to understand which systems offer the most privacy-friendly experience. Their findings reveal significant gaps in transparency, data control, and user protection across the industry.

While generative AI platforms offer clear productivity benefits, they often expose users to complex data privacy risks that are hard to detect. These risks stem from two sources: the data used to train the models and the personal information exposed during user interactions. Most platforms do not clearly communicate what data is collected, how it is used, or whether users can opt out. With LLMs deployed in products for content creation, search, code generation, and digital assistants, users frequently share sensitive information without realizing it may be retained or used to train future models. Incogni's report addresses this gap by offering a standardized framework for scoring and ranking AI platforms according to their privacy practices.

According to Incogni's ranking, Le Chat (Mistral AI) is the least invasive AI platform in terms of data privacy. It limits data collection and performed well across most of the 11 measured criteria. ChatGPT (OpenAI) ranked second, followed by Grok (xAI). These platforms offer relatively clear privacy policies and give users a way to opt out of having their data used in model training.

At the bottom of the ranking are Meta AI, Gemini (Google), and Copilot (Microsoft). These platforms were found to be the most aggressive in data collection and the least transparent about their practices. DeepSeek also performed poorly, particularly on the ability to opt out of model training and on vague policy language.

The report delves into several key questions about how user data is used for model training. Incogni found that some platforms explicitly allow users to opt out of training: ChatGPT, Copilot, Le Chat, and Grok fall into this group. Others, such as Gemini, DeepSeek, Pi AI, and Meta AI, do not appear to provide a way to opt out. Claude (Anthropic) was the only platform that claims to never use user inputs for training.

Most platforms share prompts with a defined set of third parties, including service providers, legal authorities, and affiliated companies. However, Microsoft and Meta allow sharing with advertisers or affiliates under broader terms. Anthropic and Meta also disclose sharing with research collaborators. These policies raise questions about the limits of data control once prompts leave the platform.

All platforms train their models on publicly accessible data, and many also use user feedback or prompts to improve performance. OpenAI, Meta, and Anthropic provided the most detailed explanations of training data sources, although even these were limited in scope. No platform offered a way for users to remove their personal data from existing training sets.

Beyond the policies themselves, Incogni also evaluated how transparent platforms are about their data practices. OpenAI, Mistral, Anthropic, and xAI made it easy to determine how prompts are used for training; these platforms offered searchable support content or detailed FAQ sections. Meta and Microsoft, on the other hand, required users to search through unrelated documentation, and DeepSeek, Pi AI, and Google's Gemini offered the least clarity. Platforms fell into three levels of transparency: OpenAI, Mistral, Anthropic, and xAI provided accessible documentation; Microsoft and Meta made this information somewhat difficult to find; and Gemini, DeepSeek, and Inflection offered limited or fragmented disclosures, requiring users to parse multiple documents to get answers.

Incogni used the Dale-Chall formula to assess readability. All policies required at least college-level reading ability. Meta, Microsoft, and Google provided long and complex privacy documents that covered multiple products, while Inflection and DeepSeek offered very short policies that lacked clarity and depth. OpenAI and xAI were noted for offering helpful support articles, though these must be maintained over time to remain accurate.

The investigation also uncovered details about what specific data is collected and with whom it might be shared. Meta and DeepSeek share personal information across corporate entities, and Meta and Anthropic share information with research partners. In several cases, vague terms like "affiliates" were used, making it unclear who exactly receives user data. Microsoft's policy also permits sharing with advertisers under specific conditions.

Most platforms collect data during account setup or user interaction, but Incogni found evidence that some also gather data from additional sources. Pi AI appears to use the fewest external sources, focusing mainly on direct input and public data, while Microsoft stated that it may also use data from brokers.

Incogni also examined how the platforms' iOS and Android apps collect and share user data. Le Chat had the lowest privacy risk, followed by Pi AI and ChatGPT. Meta AI was the most aggressive, collecting data such as usernames, emails, and phone numbers and sharing much of it with third parties. Gemini and Meta AI collect exact user locations; Pi AI, Gemini, and DeepSeek collect phone numbers; and Grok shares photos and app interaction data, while Claude shares app usage and email addresses. Notably, Microsoft's Copilot Android app claimed not to collect or share any user data. Because this was inconsistent with its iOS app disclosures, Incogni scored both apps based on the iOS version.

Privacy risks vary widely between generative AI platforms. The best performers offered clear privacy policies, opt-out controls, and minimal data collection; the worst offenders lacked transparency and shared user data broadly without clear justification. Incogni concludes that AI platforms must make privacy documentation easier to read, provide modular privacy policies for each product, and avoid relying on broad umbrella policies. Companies should also maintain up-to-date support resources that answer data-handling questions in plain language.
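For context, the Dale-Chall formula scores text by combining average sentence length with the share of words not found on a list of roughly 3,000 familiar words; scores of 9.0 and above correspond to college-level reading, which is where Incogni placed every policy it reviewed. The following is a minimal Python sketch of the metric; the tiny familiar-word set is a stand-in for the full Dale-Chall list, so outputs here are only indicative.

```python
import re

# Placeholder: the real Dale-Chall list contains roughly 3,000 familiar words.
FAMILIAR_WORDS = {
    "the", "a", "an", "and", "to", "of", "in", "is", "it", "you",
    "that", "we", "for", "on", "are", "with", "as", "your", "can", "not",
}

def dale_chall_score(text: str) -> float:
    """Compute the (new) Dale-Chall readability score for `text`."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    if not sentences or not words:
        return 0.0
    pct_difficult = 100 * sum(w not in FAMILIAR_WORDS for w in words) / len(words)
    avg_sentence_len = len(words) / len(sentences)
    score = 0.1579 * pct_difficult + 0.0496 * avg_sentence_len
    if pct_difficult > 5:  # adjustment term from the 1995 revision of the formula
        score += 3.6365
    return score

# Scores of 9.0+ map to 13th-grade (college-level) reading ability and above.
print(round(dale_chall_score("Your data may be shared with affiliates."), 2))
```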
[2]
Meta, Copilot Flunk AI Privacy 101 -- Big Tech Is Playing Chicken With Your Privacy - Meta Platforms (NASDAQ:META)
The risk of unauthorized data collection and privacy breaches has surged as generative AI and large language models (LLMs) become embedded in everyday tools and services. Here's a look at some of the least invasive AI platforms, as well as the most aggressive data collectors.

What To Know: According to a recent report from Incogni, Mistral AI's Le Chat emerged as the least invasive AI platform in terms of data privacy, limiting data collection and performing well across most criteria. ChatGPT (OpenAI) ranked second, followed by Grok (xAI), with both generative AI platforms offering relatively clear privacy policies and the ability for users to opt out of having their data used in model training.

At the other end of the spectrum, Meta Platforms' Meta AI, Alphabet's Gemini, and Microsoft's Copilot were found to be the most aggressive in data collection and the least transparent about their practices. DeepSeek also performed poorly, particularly regarding opt-out options and policy clarity. Anthropic and Meta also disclose sharing with research collaborators, which could raise concerns about data control once prompts leave the platform.

Mobile app data collection practices were also scrutinized. Le Chat, Pi AI, and ChatGPT had the lowest privacy risk on mobile. Meta AI's mobile app was the most aggressive and collected usernames, emails, and phone numbers. Gemini and Meta AI collect exact user locations, and Pi AI, Gemini, and DeepSeek collect phone numbers. Grok shares photos and app interaction data, while Anthropic's Claude shares app usage and email addresses.

User Beware: Overall, privacy risks vary widely between generative AI platforms. The best performers offered clear privacy policies, opt-out controls, and minimal data collection, while the worst offenders lacked transparency and shared user data broadly without clear justification. Users should research generative AI platform privacy policies, and understand and use privacy controls, in order to limit security risks and maintain control of personal data.
A new report from Incogni evaluates the data privacy practices of leading AI platforms, revealing significant gaps in transparency and user protection across the industry. Le Chat, ChatGPT, and Grok top the list for privacy-friendly practices, while Meta AI, Gemini, and Copilot rank at the bottom.
In 2025, as generative AI and large language models (LLMs) become increasingly integrated into everyday tools and services, the risk of unauthorized data collection and privacy breaches has surged. A new report from Incogni has evaluated the data privacy practices of nine leading AI platforms, revealing significant disparities in transparency, data control, and user protection across the industry [1].

Source: Dataconomy
According to Incogni's ranking, Le Chat (Mistral AI) emerged as the least invasive AI platform in terms of data privacy. It limits data collection and performed well across most of the 11 measured criteria. ChatGPT (OpenAI) secured the second position, followed by Grok (xAI) [1]. These platforms offer relatively clear privacy policies and provide users with options to opt out of having their data used in model training.

At the bottom of the ranking are Meta AI, Gemini (Google), and Copilot (Microsoft). These platforms were found to be the most aggressive in data collection and the least transparent about their practices. DeepSeek also performed poorly, particularly in the ability to opt out of model training and in vague policy language [2].

The report delves into several crucial aspects of how user data is utilized:
Opt-out Options: ChatGPT, Copilot, Le Chat, and Grok allow users to opt out of training. Others, such as Gemini, DeepSeek, Pi AI, and Meta AI, do not appear to provide this option. Claude (Anthropic) claims to never use user inputs for training [1].

Data Sharing: Most platforms share prompts with a defined set of third parties. However, Microsoft and Meta allow sharing with advertisers or affiliates under broader terms. Anthropic and Meta also disclose sharing with research collaborators [1].

Training Data Sources: All platforms train their models on publicly accessible data. Many also use user feedback or prompts to improve performance. OpenAI, Meta, and Anthropic provided the most detailed explanations about training data sources [1].

The report also evaluated the transparency and readability of platform privacy policies:

Transparency Levels: OpenAI, Mistral, Anthropic, and xAI provided easily accessible documentation. Microsoft and Meta made this information somewhat difficult to find. Gemini, DeepSeek, and Inflection offered limited or fragmented disclosures [1].

Readability: All policies required at least college-level reading ability. Meta, Microsoft, and Google provided long and complex privacy documents covering multiple products [1].
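Incogni's exact scoring and weighting scheme is not published in these summaries, but criteria like the ones above lend themselves to an aggregate ranking. The sketch below is purely illustrative, assuming an equal-weight rubric; the criterion names, platform names, and scores are hypothetical, not Incogni's data.

```python
# Hypothetical equal-weight privacy rubric: each criterion is scored
# from 0 (worst) to 1 (best) and platforms are ranked by mean score.
CRITERIA = ["opt_out_of_training", "policy_clarity", "limited_sharing"]

platform_scores = {
    "Platform A": {"opt_out_of_training": 1.0, "policy_clarity": 0.8, "limited_sharing": 0.9},
    "Platform B": {"opt_out_of_training": 0.0, "policy_clarity": 0.3, "limited_sharing": 0.2},
}

def privacy_rank(scores):
    """Rank platforms by their unweighted mean across all criteria."""
    means = {name: sum(vals[c] for c in CRITERIA) / len(CRITERIA)
             for name, vals in scores.items()}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

for name, mean in privacy_rank(platform_scores):
    print(f"{name}: {mean:.2f}")
```

A real methodology would likely weight criteria unevenly (an opt-out option arguably matters more than FAQ quality), which is exactly the kind of detail the report's readers would need Incogni to disclose.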
.Source: Benzinga
Incogni also examined how iOS and Android apps collect and share user data. Le Chat had the lowest privacy risk on mobile, followed by Pi AI and ChatGPT. Meta AI's app was the most aggressive, collecting data such as usernames, emails, and phone numbers and sharing much of it with third parties. Gemini and Meta AI collect exact user locations; Pi AI, Gemini, and DeepSeek collect phone numbers; and Grok shares photos and app interaction data, while Claude shares app usage and email addresses [1][2].
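Incogni's mobile scoring method is likewise not spelled out here, but the app-store disclosures it compared can be modeled simply. The sketch below encodes a few of the findings above and counts disclosed data types as a naive risk signal; the double weight on shared data is our assumption for illustration, not Incogni's methodology.

```python
from dataclasses import dataclass, field

@dataclass
class AppDisclosure:
    """Data types an AI app's store listing says it collects or shares."""
    name: str
    collects: set = field(default_factory=set)
    shares: set = field(default_factory=set)

# Populated with findings reported above; the risk count is a toy metric.
apps = [
    AppDisclosure("Meta AI",
                  collects={"username", "email", "phone number", "exact location"},
                  shares={"username", "email", "phone number"}),
    AppDisclosure("Gemini", collects={"exact location", "phone number"}),
    AppDisclosure("Grok", shares={"photos", "app interactions"}),
    AppDisclosure("Claude", shares={"app usage", "email"}),
]

def naive_risk(app):
    # Weight shared data double: once data leaves the platform it is
    # harder to control (our assumption, not Incogni's actual rule).
    return len(app.collects) + 2 * len(app.shares)

for app in sorted(apps, key=naive_risk, reverse=True):
    print(f"{app.name}: risk score {naive_risk(app)}")
```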
As generative AI becomes more prevalent in everyday tools, users face complex data privacy risks that are often hard to detect. These risks stem from both the data used to train the models and the personal information exposed during user interactions. Most platforms do not clearly communicate what data is collected, how it is used, or whether users can opt out [1].

Users are advised to research generative AI platform privacy policies, understand and use privacy controls, and limit the sharing of sensitive information to maintain control of their personal data [2]. As the AI landscape continues to evolve, data privacy and user protection remain critical concerns for both developers and consumers.