Curated by THEOUTPOST
On Thu, 19 Sept, 12:05 AM UTC
3 Sources
[1]
LinkedIn's Privacy Policy: Is Your Data Fueling AI?
For years, people have voiced dissent, arguing that LinkedIn was effectively selling user data; now the company has made that concern concrete. The professional networking site, owned by Microsoft, has updated its privacy policy to allow users' data to be used to train AI models. The new policy takes effect on Monday, marking a major change in how user data is handled on the platform. Specifically, LinkedIn has added explicit language to its Privacy Policy covering the use of personal data to create AI-based services and the sharing of information with affiliates.
[2]
LinkedIn Is Training AI on User Data Before Updating Its Terms of Service
LinkedIn is using its users' data to improve the social network's generative AI products, but has not yet updated its terms of service to reflect this data processing, according to posts from various LinkedIn users and a statement from the company to 404 Media. Instead, the company says it will update its terms "shortly." The move is unusual: LinkedIn appears to have gone ahead with training AI on its users' data, even creating a new option in its settings, without first updating its terms of service, which is traditionally one of the main documents explaining how users' data is collected and used.
[3]
LinkedIn scraped user data for training before updating its terms of service | TechCrunch
LinkedIn may have trained AI models on user data without updating its terms. LinkedIn users in the US -- but not the EU, EEA, or Switzerland, likely due to those regions' data privacy rules -- have an opt-out toggle in their settings screen disclosing that LinkedIn scrapes personal data to train "content creation AI models." The toggle isn't new. But, as first reported by 404 Media, LinkedIn initially didn't refresh its privacy policy to reflect the data use.

The terms of service have now been updated, but ordinarily that occurs well before a big change like using user data for a new purpose. The idea is that this gives users time to make account changes or leave the platform if they don't like the changes. Not this time, it seems.

So what models is LinkedIn training? Its own, the company says in a Q&A, including models for writing suggestions and post recommendations. But LinkedIn also says that generative AI models on its platform may be trained by "another provider," like its corporate parent Microsoft.

"As with most features on LinkedIn, when you engage with our platform we collect and use (or process) data about your use of the platform, including personal data," the Q&A reads. "This could include your use of the generative AI (AI models used to create content) or other AI features, your posts and articles, how frequently you use LinkedIn, your language preference, and any feedback you may have provided to our teams. We use this data, consistent with our privacy policy, to improve or develop the LinkedIn services."

LinkedIn previously told TechCrunch that it uses "privacy enhancing techniques, including redacting and removing information, to limit the personal information contained in datasets used for generative AI training."
To opt out of LinkedIn's data scraping, head to the "Data Privacy" section of the LinkedIn settings menu on desktop, click "Data for Generative AI improvement," and then toggle off the "Use my data for training content creation AI models" option. You can also attempt to opt out more comprehensively via this form, but LinkedIn notes that any opt-out won't affect training that's already taken place.

The nonprofit Open Rights Group (ORG) has called on the Information Commissioner's Office (ICO), the U.K.'s independent regulator for data protection rights, to investigate LinkedIn and other social networks that train on user data by default. Earlier this week, Meta announced that it was resuming plans to scrape user data for AI training after working with the ICO to make the opt-out process simpler.

"LinkedIn is the latest social media company found to be processing our data without asking for consent," Mariano delli Santi, ORG's legal and policy officer, said in a statement. "The opt-out model proves once again to be wholly inadequate to protect our rights: the public cannot be expected to monitor and chase every single online company that decides to use our data to train AI. Opt-in consent isn't only legally mandated, but a common-sense requirement."

Ireland's Data Protection Commission (DPC), the supervisory authority responsible for monitoring compliance with the GDPR, the EU's overarching privacy framework, told TechCrunch that LinkedIn informed it last week that clarifications to its global privacy policy would be issued today. "LinkedIn advised us that the policy would include an opt-out setting for its members who did not want their data used for training content generating AI models," a spokesperson for the DPC said. "This opt-out is not available to EU/EEA members as LinkedIn is not currently using EU/EEA member data to train or fine tune these models."

TechCrunch has reached out to LinkedIn for comment. We'll update this piece if we hear back.
The demand for more data to train generative AI models has led a growing number of platforms to repurpose or otherwise reuse their vast troves of user-generated content. Some have even moved to monetize this content -- Tumblr owner Automattic, Photobucket, Reddit, and Stack Overflow are among the networks licensing data to AI model developers. Not all of them have made it easy to opt out. When Stack Overflow announced that it would begin licensing content, several users deleted their posts in protest -- only to see those posts restored and their accounts suspended.
LinkedIn faces scrutiny over its use of user data for AI training without explicit consent. The company's actions have sparked debates about data privacy and ethical AI development practices.
LinkedIn, the professional networking platform owned by Microsoft, has come under fire for its data handling practices, particularly concerning the use of user information for artificial intelligence (AI) training. The controversy stems from the company's decision to utilize user data for AI development before updating its terms of service to explicitly allow such usage [1].
Reports suggest that LinkedIn had been using member data to train AI models without obtaining explicit consent from its users. This practice reportedly began before the company updated its privacy policy and terms of service to include provisions for AI-related data usage [2]. The revelation has raised concerns about the ethical implications of such actions and the potential violation of user trust.
In response to the growing controversy, LinkedIn has recently updated its privacy policy. The new policy now includes language that allows the company to use member data for AI model training. However, this update came after the company had already begun using the data for AI purposes, leading to questions about the retroactive nature of the consent [3].
The situation has sparked a broader debate about data privacy in the age of AI. Critics argue that LinkedIn's actions highlight the need for more transparent and ethical practices in handling user data, especially when it comes to emerging technologies like AI. There are concerns about the potential misuse of personal and professional information shared on the platform [1].
LinkedIn's case is not isolated, as other tech giants have faced similar scrutiny over their data practices and AI development. This incident has brought attention to the need for clearer regulations and guidelines governing the use of user data for AI training across the tech industry [2].
In light of these developments, there are calls for LinkedIn and other platforms to provide users with more control over their data. This includes the ability to opt out of AI training programs and greater transparency about how personal information is used. The incident has also reignited discussions about the importance of robust data protection laws and their enforcement in the digital age [3].