Curated by THEOUTPOST
On Sat, 1 Feb, 12:06 AM UTC
3 Sources
[1]
LinkedIn Lawsuit On User Data Misuse for AI Training Dismissed
A class action lawsuit accusing Microsoft-owned networking platform LinkedIn of violating the privacy of its users has been dismissed, Reuters reported on January 31. The lawsuit alleged that LinkedIn used data from users' personal messages to train its AI models, claiming the company did so by introducing privacy settings in August and September 2024 that automatically opted people in to sharing their personal data with third parties. The lawsuit had sought $1,000 per user for the alleged violations.

In the latest development, Reuters reported that the plaintiff, Alessandro De La Torre, filed a notice of dismissal without prejudice in the San Jose, California federal court. "LinkedIn has shown us evidence that it did not use their private messages to do that," said Eli Wade-Scott, managing partner at Edelson PC, which represented De La Torre.

The race to train AI models

Over the last two years, the AI industry has been abuzz with developments at a dizzying pace. Consider the launch of the Chinese AI model DeepSeek just two weeks ago: it single-handedly caused large swings in the market and shook the industry's confidence, especially in the US. The episode showed not only the cut-throat competition in the space but also its volatility.

Once the hardware is in place, the most important factor for companies trying to stay ahead in the AI race is access to as much training data as possible, which makes social media sites a gold mine. "What's written there is conversational, something AI chatbots consistently strive to be. Social media posts include human slang that might be useful for the tools to use themselves. And news feeds are generally a source of real-time happenings," CNN noted.

Over the last two years, companies have simply informed their users that content posted on these platforms will be used to train their AI. While some allow users to opt out, others, such as Meta (Instagram and Facebook) and Reddit, do not offer clear-cut opt-out options. Against this backdrop, cases such as this one, which pressed LinkedIn on whether it trains its AI on users' personal data, are crucial for keeping track of how much control users are being asked to give up over their data.
[2]
LinkedIn Lawsuit Over Use of Customer Data for AI Models Is Dismissed
(Reuters) - A proposed class action accusing Microsoft's LinkedIn of violating the privacy of millions of Premium customers by disclosing their private messages to train generative artificial intelligence models has been dismissed.

The plaintiff Alessandro De La Torre on Thursday filed a notice of dismissal without prejudice in the San Jose, California federal court, nine days after suing LinkedIn, and after the company said the lawsuit had no merit.

De La Torre accused the business-focused social media platform of breaking a promise to use personal customer data only to improve its services, by sharing customers' messages with third parties involved in AI. The complaint said LinkedIn revealed the unauthorized sharing when it updated its privacy policy in September, and said a new account setting to prevent data sharing would not affect previous AI training.

"LinkedIn's belated disclosures here left consumers rightly concerned and confused about what was being used to train AI," Eli Wade-Scott, managing partner at Edelson PC, which represented De La Torre, said in an email on Friday. "Users can take comfort, at least, that LinkedIn has shown us evidence that it did not use their private messages to do that," he added. "We appreciate the professionalism of LinkedIn's team."

In a LinkedIn post on Thursday, Sarah Wight, a lawyer and vice president for the company, confirmed that LinkedIn did not disclose customers' private messages for AI training. "We never did that," she said.

(Reporting by Jonathan Stempel in New York; Editing by Bill Berkrot)
[3]
LinkedIn lawsuit over use of customer data for AI models is dismissed
(Reuters) - A proposed class action accusing Microsoft's LinkedIn of violating the privacy of millions of Premium customers by disclosing their private messages to train generative artificial intelligence models has been dismissed.

The plaintiff Alessandro De La Torre on Thursday filed a notice of dismissal without prejudice in the San Jose, California federal court, nine days after suing LinkedIn, and after the company said the lawsuit had no merit.

De La Torre accused the business-focused social media platform of breaking a promise to use personal customer data only to improve its services, by sharing customers' messages with third parties involved in AI. The complaint said LinkedIn revealed the unauthorized sharing when it updated its privacy policy in September, and said a new account setting to prevent data sharing would not affect previous AI training.

"LinkedIn's belated disclosures here left consumers rightly concerned and confused about what was being used to train AI," Eli Wade-Scott, managing partner at Edelson PC, which represented De La Torre, said in an email on Friday. "Users can take comfort, at least, that LinkedIn has shown us evidence that it did not use their private messages to do that," he added. "We appreciate the professionalism of LinkedIn's team."

In a LinkedIn post on Thursday, Sarah Wight, a lawyer and vice president for the company, confirmed that LinkedIn did not disclose customers' private messages for AI training. "We never did that," she said.

(Reporting by Jonathan Stempel in New York; Editing by Bill Berkrot)
A class action lawsuit against LinkedIn, alleging misuse of user data for AI training, has been dismissed after the company provided evidence refuting the claims. The case highlights growing concerns over data privacy in AI development.
A class action lawsuit against Microsoft-owned LinkedIn, accusing the company of misusing user data for AI training, has been dismissed. The plaintiff, Alessandro De La Torre, filed a notice of dismissal without prejudice in the San Jose, California federal court, Reuters reported on January 31, 2025 [1][2][3].
The lawsuit alleged that LinkedIn violated user privacy by using personal messages to train AI models. It claimed that the company introduced privacy settings in August and September 2024 that automatically opted users into sharing their personal data with third parties [1]. The plaintiff sought $1,000 per user for the alleged violations.
However, LinkedIn provided evidence refuting these claims. Eli Wade-Scott, managing partner at Edelson PC, representing De La Torre, stated, "LinkedIn has shown us evidence that it did not use their private messages to do that" [1][2][3].
Sarah Wight, a lawyer and vice president for LinkedIn, confirmed in a LinkedIn post that the company did not disclose customers' private messages for AI training, stating emphatically, "We never did that" [2][3].
This case highlights the growing concerns surrounding data privacy in the rapidly evolving AI industry. Over the past two years, social media platforms have become valuable sources of training data for AI models, particularly for developing conversational abilities and understanding real-time events [1].
The dismissal comes amid intense competition in the AI industry, exemplified by the recent launch of the Chinese AI model, DeepSeek. This development caused significant market fluctuations and highlighted the industry's volatility, especially in the United States [1].
Many companies have informed users that their content may be used for AI training, with some offering opt-out options. However, platforms like Meta (Instagram and Facebook) and Reddit do not provide clear opt-out choices [1].
While the voluntary dismissal ends the case without any finding against LinkedIn, it underscores the importance of transparency in data usage policies. As AI development continues to accelerate, cases like this play a crucial role in maintaining oversight of how user data is controlled and protected in the digital age.
LinkedIn is embroiled in a class-action lawsuit accusing the platform of using private messages from Premium users to train AI models without consent, raising concerns about data privacy and ethical AI development practices.
18 Sources
LinkedIn has stopped collecting UK users' data for AI training following regulatory scrutiny. This move highlights growing concerns over data privacy and the need for transparent AI practices in tech companies.
8 Sources
LinkedIn faces scrutiny over its use of user data for AI training without explicit consent. The company's actions have sparked debates about data privacy and ethical AI development practices.
3 Sources
LinkedIn has been using user data to train its AI systems, sparking privacy concerns. The platform now offers an opt-out option for users who wish to exclude their data from AI training.
19 Sources
LinkedIn, with its 930 million users, is using member data to train AI models, sparking a debate on data privacy and the need for transparent opt-out options. This practice has raised concerns among privacy advocates and users alike.
4 Sources