4 Sources
[1]
Does your generative AI protect your privacy? New study ranks them best to worst
Le Chat and Grok are the most respectful of your privacy. So which ones are the worst offenders?

Most generative AI companies rely on user data to train their chatbots, turning to public or private data to do so. Some services are less invasive about scooping up data from their users. Others, not so much.

A new report from data removal service Incogni looks at the best and the worst of AI when it comes to respecting your personal data and privacy. For its report "Gen AI and LLM Data Privacy Ranking 2025," Incogni examined nine popular generative AI services and applied 11 different criteria to measure their data privacy practices. The criteria covered questions about how the models are trained, what data is collected, and with whom it is shared. The providers and AIs included in the research were Mistral AI's Le Chat, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Inflection AI's Pi, DeepSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each AI did well with some questions and not as well with others.

Also: Want AI to work for your business? Then privacy needs to come first

As one example, Grok earned a good grade for how clearly it conveys that prompts are used for training, but didn't do so well on the readability of its privacy policy. As another, the grades given to ChatGPT and Gemini for their mobile app data collection differed quite a bit between the iOS and Android versions.

Across the group, however, Le Chat took top prize as the most privacy-friendly AI service. Though it lost a few points for transparency, it still fared well in that area. Plus, its data collection is limited, and it scored high points on other AI-specific privacy issues.

ChatGPT ranked second. Incogni researchers were slightly concerned with how OpenAI's models are trained and how user data interacts with the service. But ChatGPT clearly presents the company's privacy policies, lets you understand what happens with your data, and provides clear ways to limit its use.
(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Grok came in third place, followed by Claude and Pi. Each had trouble spots in certain areas but overall did fairly well at respecting user privacy.

"Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind," Incogni said in its report. "These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training and had a clear privacy policy."

As for the bottom half of the list, DeepSeek took the sixth spot, followed by Copilot and then Gemini. That left Meta AI in last place, rated the least privacy-friendly AI service of the bunch.

Also: How Apple plans to train its AI on your data without sacrificing your privacy

Copilot scored the worst of the nine services on AI-specific criteria, such as what data is used to train the models and whether user conversations can be used in the training. Meta AI took home the worst grade for its overall data collection and sharing practices.

"Platforms developed by the biggest tech companies turned out to be the most privacy invasive, with Meta AI (Meta) being the worst, followed by Gemini (Google) and Copilot (Microsoft)," Incogni said. "Gemini, DeepSeek, Pi AI, and Meta AI don't seem to allow users to opt out of having prompts used to train the models."

In its research, Incogni found that the AI companies share data with different parties, including service providers, law enforcement, member companies of the same corporate group, research partners, affiliates, and third parties.
"Microsoft's privacy policy implies that user prompts may be shared with 'third parties that perform online advertising services for Microsoft or that use Microsoft's advertising technologies,'" Incogni said in the report. "DeepSeek's and Meta's privacy policies indicate that prompts can be shared with companies within its corporate group. Meta's and Anthropic's privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators."

With some services, you can prevent your prompts from being used to train the models. This is the case with ChatGPT, Copilot, Mistral AI, and Grok. With other services, however, stopping this type of data collection doesn't seem to be possible, according to their privacy policies and other resources. These include Gemini, DeepSeek, Pi AI, and Meta AI. On this issue, Anthropic said that it never collects user prompts to train its models.

Also: Your data's probably not ready for AI - here's how to make it trustworthy

Finally, a transparent and readable privacy policy goes a long way toward helping you figure out what data is being collected and how to opt out.

"Having an easy-to-use, simply written support section that enables users to search for answers to privacy related questions has shown itself to drastically improve transparency and clarity, as long as it's kept up to date," Incogni said. "Many platforms have similar data handling practices, however, companies like Microsoft, Meta, and Google suffer from having a single privacy policy covering all of their products and a long privacy policy doesn't necessarily mean it's easy to find answers to users' questions."
[2]
Generative AI and privacy are best frenemies - a new study ranks the best and worst offenders
[3]
Forget ChatGPT and Gemini -- this lesser-known chatbot just ranked No. 1 for privacy
If you use AI every single day, you are likely giving up a lot of personal data, more than you might realize. It has not always been entirely clear which AI chatbots are best when it comes to your privacy. While some options have never exactly pretended to be too worried about privacy (looking at you, DeepSeek), others sit in somewhat murky waters.

Well, now we have a better understanding thanks to a new report, which ranks AI and large language models based on their data privacy. It covers nine of the biggest AI systems, including all of the names you'll know well, plus some lesser-known ones. Not only does the report provide a No. 1 option for privacy (a surprising one at that), but it also ranks the services across a number of more specific privacy categories.

So which is the best AI chatbot for your privacy? It's Le Chat. Not heard of it? You're not alone. While Mistral has built up a cult following, it hasn't had the same commercial success as the likes of OpenAI or DeepSeek. The French AI company was founded in 2023 and has quickly made a mark. It has received funding from Microsoft and was founded by three French AI researchers, including a former employee of Google DeepMind.

According to the research, Le Chat is limited in its data collection and, unlike most of its competitors, is incredibly limited in who it will share data with. While Le Chat doesn't have the same financial backing or amount of testing data as the likes of OpenAI, it is a rapidly growing option. In our testing, we've been especially impressed with its speed of response. It does, however, struggle with more detailed responses.

It's good news for the world's most popular chatbot, too: ChatGPT landed just behind Le Chat in the rankings. While Le Chat shares user prompts only with service providers, OpenAI can also share them with affiliates. OpenAI was, however, the highest rated in terms of transparency on data privacy and scored highly for its low level of data collection.
On the opposite end of the spectrum, Meta AI was the least private chatbot in 9th place, followed by Gemini in 8th and Copilot in 7th, with DeepSeek in 6th. In terms of data collection and sharing, Meta AI was the worst by quite some way, almost doubling the score of the next worst, Gemini.

If privacy is a big concern for you when it comes to AI, the good news is that plenty of great AI chatbots scored well here. Mistral is a great option if you're willing to try something new, but ChatGPT is just behind it across the board. Two of the other biggest competitors came next, with Grok in 3rd and Anthropic's Claude in 4th. All four not only score high on privacy tests but also happen to be some of the best-performing AI chatbots available right now.

It is surprising to see big names like Meta AI, Gemini, and Copilot so far down the list. The report explains that this is mostly down to how much data they share and how unclear their data privacy policies are.
[4]
Which AI chatbot is the best at protecting your privacy?
A new study has found the best AI model for protecting your data, and it comes from Europe.

Mistral AI's Le Chat is the least privacy-invasive generative artificial intelligence model when it comes to data privacy, a new analysis has found. Incogni, a personal information removal service, used a set of 11 criteria to assess the privacy risks of large language models (LLMs), including OpenAI's ChatGPT, Meta AI, Google's Gemini, Microsoft's Copilot, xAI's Grok, Anthropic's Claude, Inflection AI's Pi AI, and China-based DeepSeek. Each platform was then scored on that list of criteria from zero (the most privacy-friendly) to one (the least privacy-friendly).

The research aimed to identify how the models are trained, their transparency, and how data is collected and shared. Among the criteria, the study looked at the data sets used by the models, whether user-generated prompts could be used for training, and what data, if any, could be shared with third parties.

The analysis showed that French company Mistral AI's Le Chat model is the least privacy-invasive platform because it collects "limited" personal data and does well on AI-specific privacy concerns. Le Chat is also one of the few AI assistant chatbots in the study, along with Pi AI, that provides user-generated prompts only to its service providers.

OpenAI's ChatGPT comes second in the overall ranking because the company has a "clear" privacy policy that explains to users exactly where their data is going. However, the researchers noted some concerns about how the models are trained and how user data "interacts with the platform's offerings". xAI, the company run by billionaire Elon Musk that operates Grok, came in third place because of transparency concerns and the amount of data collected. Meanwhile, Anthropic's Claude model performed similarly to xAI but raised more concerns about how models interact with user data, the study said.
At the bottom of the ranking is Meta AI, the most privacy-invasive model, followed by Gemini and Copilot. Many of the companies at the bottom of the ranking don't seem to let users opt out of having the prompts they generate used to further train their models, the analysis said.
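The 0-to-1, multi-criteria scoring the study describes can be illustrated with a short sketch. The criterion names, the per-platform scores, and the equal weighting below are all hypothetical assumptions for illustration only; Incogni's actual data, criteria weights, and aggregation method are not reproduced here.

```python
# Minimal sketch of a multi-criteria privacy ranking in the style Incogni
# describes: each platform gets a 0-to-1 score per criterion
# (0 = most privacy-friendly, 1 = least), and an overall score aggregates them.
from statistics import mean

# Hypothetical per-criterion scores (0 = best, 1 = worst) -- illustrative only.
scores = {
    "Le Chat": {"training_optout": 0.0, "data_collection": 0.1, "sharing": 0.1},
    "ChatGPT": {"training_optout": 0.0, "data_collection": 0.2, "sharing": 0.3},
    "Meta AI": {"training_optout": 1.0, "data_collection": 0.9, "sharing": 0.9},
}

def overall(platform_scores: dict[str, float]) -> float:
    """Equal-weight average across criteria (an assumption, not Incogni's method)."""
    return mean(platform_scores.values())

# Lower overall score ranks higher (more privacy-friendly).
ranking = sorted(scores, key=lambda p: overall(scores[p]))
print(ranking)  # most to least privacy-friendly
```

With these made-up numbers, Le Chat ranks first and Meta AI last, mirroring the study's top and bottom placements; real per-criterion weights could reorder the middle of the field.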
A new study by Incogni ranks popular AI chatbots based on their privacy practices, with Mistral AI's Le Chat emerging as the most privacy-friendly and Meta AI as the least.
A new study conducted by data removal service Incogni has shed light on the privacy practices of popular generative AI chatbots. The report, titled "Gen AI and LLM Data Privacy Ranking 2025," evaluated nine AI services using 11 different criteria to measure their data privacy practices [1][2].
Mistral AI's Le Chat emerged as the most privacy-friendly AI service, earning the top spot in the rankings. The French AI company, founded in 2023, impressed researchers with its limited data collection and high scores on AI-specific privacy issues [1][3]. Le Chat's privacy-conscious approach includes sharing user prompts only with service providers, setting it apart from many competitors [4].
OpenAI's ChatGPT secured the second position, praised for its clear presentation of privacy policies and transparent communication about data usage. The platform also offers users clear ways to limit the use of their data [1][2]. Following closely in third place was xAI's Grok, which performed well despite some transparency concerns [3].
Source: euronews
Anthropic's Claude and Inflection AI's Pi rounded out the top five, demonstrating relatively good respect for user privacy despite some areas of concern [1][2]. DeepSeek took the sixth spot, falling in the middle of the pack [1].
Interestingly, AI platforms developed by major tech companies ranked lower in privacy protection. Microsoft's Copilot and Google's Gemini placed seventh and eighth, respectively [1][2]. Copilot received the lowest score based on AI-specific criteria, such as data used for model training and the use of user conversations in training [1].
Meta AI, developed by Facebook's parent company Meta, ranked last and was deemed the least privacy-friendly AI service among those studied. It received the worst grade for overall data collection and sharing practices [1][2][3].
Source: Tom's Guide
The study revealed that AI companies share data with various parties, including service providers, law enforcement, corporate group members, research partners, and third parties. Some concerning practices were highlighted, such as Microsoft's policy implying that user prompts may be shared with third-party advertisers [1][2].
User control over data usage varies significantly among the services. ChatGPT, Copilot, Mistral AI, and Grok allow users to prevent their prompts from being used for model training. However, Gemini, DeepSeek, Pi AI, and Meta AI do not seem to offer this option [1][2].
Source: ZDNet
The research emphasized the importance of transparent and readable privacy policies. Companies like Microsoft, Meta, and Google were criticized for having a single, lengthy privacy policy covering all their products, making it difficult for users to find specific information [1][2].
This study highlights the growing importance of privacy considerations in the rapidly evolving field of generative AI. As these technologies become more integrated into daily life, users are encouraged to be more aware of how their data is being collected, used, and shared [3][4]. The findings may also push AI companies to improve their privacy practices to remain competitive and trustworthy in the eyes of privacy-conscious users.
Summarized by Navi