Curated by THEOUTPOST
On Fri, 20 Sept, 4:04 PM UTC
8 Sources
[1]
LinkedIn has stopped grabbing U.K. users' data for AI
The U.K.'s data protection watchdog has confirmed that Microsoft-owned LinkedIn has stopped processing user data for AI model training, for now. Stephen Almond, executive director of regulatory risk for the Information Commissioner's Office (ICO), wrote in a statement on Friday: "We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its U.K. users. We welcome LinkedIn's confirmation that it has suspended such model training pending further engagement with the ICO."

Eagle-eyed privacy experts had already spotted a quiet edit LinkedIn made to its privacy policy after a backlash over grabbing people's info to train AIs: adding the U.K. to the list of European regions where it does not offer an opt-out, as it says it is not processing local users' data for this purpose. "At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not provide the setting to members in those regions until further notice," LinkedIn general counsel Blake Lawit wrote in an updated company blog post originally published on September 18.

The professional social network had previously specified it was not processing information of users located in the European Union, EEA or Switzerland, where the bloc's General Data Protection Regulation (GDPR) applies. However, U.K. data protection law is still based on the EU framework, so when it emerged that LinkedIn was not extending the same courtesy to U.K. users, privacy experts were quick to cry foul.

U.K. digital rights non-profit the Open Rights Group (ORG) channelled its outrage at LinkedIn's action into a fresh complaint to the ICO about consentless data processing for AI. But it was also critical of the regulator for failing to stop yet another AI data heist. In recent weeks, Meta, the owner of Facebook and Instagram, lifted an earlier pause on processing its own local users' data for training its AIs and returned to harvesting U.K. users' info by default. That means users with accounts linked to the U.K. must once again actively opt out if they don't want Meta using their personal data to enrich its algorithms. Despite the ICO previously raising concerns about Meta's practices, the regulator has so far stood by and watched the ad tech giant resume this data harvesting.

In a statement put out on Wednesday, ORG's legal and policy officer, Mariano delli Santi, warned about the imbalance of letting powerful platforms do what they like with people's information so long as they bury an opt-out somewhere in settings. Instead, he argued, they should be required to obtain affirmative consent up front. "The opt-out model proves once again to be wholly inadequate to protect our rights: the public cannot be expected to monitor and chase every single online company that decides to use our data to train AI," he wrote. "Opt-in consent isn't only legally mandated, but a common-sense requirement."

We've reached out to the ICO and Microsoft with questions and will update this report if we get a response.
[2]
LinkedIn Halts AI Data Processing in UK Amid Privacy Concerns Raised by ICO
The U.K. Information Commissioner's Office (ICO) has confirmed that professional social networking platform LinkedIn has suspended processing users' data in the country to train its artificial intelligence (AI) models.

"We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its U.K. users," Stephen Almond, executive director of regulatory risk, said. "We welcome LinkedIn's confirmation that it has suspended such model training pending further engagement with the ICO."

Almond also said the ICO intends to keep a close eye on companies that offer generative AI capabilities, including Microsoft and LinkedIn, to ensure that they have adequate safeguards in place and take steps to protect the information rights of U.K. users.

The development comes after the Microsoft-owned company admitted to training its own AI on users' data without seeking their explicit consent as part of an updated privacy policy that went into effect on September 18, 2024, 404 Media reported.

"At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not provide the setting to members in those regions until further notice," LinkedIn said.

The company also noted in a separate FAQ that it seeks to "minimize personal data in the data sets used to train the models, including by using privacy enhancing technologies to redact or remove personal data from the training dataset."

Users who reside outside Europe can opt out of the practice by heading to the "Data privacy" section in account settings and turning off the "Data for Generative AI Improvement" setting. "Opting out means that LinkedIn and its affiliates won't use your personal data or content on LinkedIn to train models going forward, but does not affect training that has already taken place," LinkedIn noted.

LinkedIn's decision to quietly opt all users into training its AI models comes only days after Meta acknowledged that it has scraped non-private user data for similar purposes going as far back as 2007. The social media company has since resumed training on U.K. users' data.

Last August, Zoom abandoned its plans to use customer content for AI model training after concerns were raised over how that data could be used, in response to changes in the app's terms of service.

The latest development underscores the growing scrutiny of AI, specifically surrounding how individuals' data and content could be used to train large AI language models. It also comes as the U.S. Federal Trade Commission (FTC) published a report that essentially said large social media and video streaming platforms have engaged in vast surveillance of users with lax privacy controls and inadequate safeguards for kids and teens. Users' personal information is then often combined with data gleaned from artificial intelligence, tracking pixels, and third-party data brokers to create more complete consumer profiles before being monetized by selling it to other willing buyers.

"The companies collected and could indefinitely retain troves of data, including information from data brokers, and about both users and non-users of their platforms," the FTC said, adding that their data collection, minimization, and retention practices were "woefully inadequate." "Many companies engaged in broad data sharing that raises serious concerns regarding the adequacy of the companies' data handling controls and oversight. Some companies did not delete all user data in response to user deletion requests."
[3]
LinkedIn suspends use of UK data for AI after watchdog questions
LinkedIn said it welcomes the chance to engage with the ICO further. "We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its UK users," said the ICO's executive director, Stephen Almond.

Many big tech firms, including LinkedIn, are looking to user-generated content on their platforms as a fresh source of data for training AI tools. "Generative" AI tools, such as chatbots like OpenAI's ChatGPT or image generators like Midjourney, learn from huge volumes of text and image data.

But a LinkedIn spokesperson told BBC News that the company believes users should have control over their data. As such, it has given UK users a way to opt out of having their data used to train its AI models. "We've always used some form of automation in LinkedIn products, and we've always been clear that users have the choice about how their data is used," they added.

Social platforms where users post about their lives or jobs can provide rich material to help tools sound more natural. "The reality of where we're at today is a lot of people are looking for help to get that first draft of that resume... to help craft messages to recruiters to get that next career opportunity," LinkedIn's spokesperson said. "At the end of the day, people want that edge in their careers and what our gen-AI services do is help give them that assist."

The company says in its global privacy policy that user data will be used to help develop its AI services, and in a help article it states that data will also be processed when users interact with tools that offer post writing suggestions, for example. This will now not apply to users in the UK, alongside those in the European Union (EU), European Economic Area and Switzerland.

Meta and X (formerly known as Twitter) are among the platforms that, like LinkedIn, want to use content posted on their platforms to help develop their generative AI tools. But they have faced regulatory hurdles in the UK and EU, with strict privacy rules placing limits on how and when personal data can be collected.

Meta halted its plans to use UK adults' public posts, comments and images to train its AI tools in June, following criticism and concerns raised by the ICO. The company recently began re-notifying UK users of Facebook and Instagram about its plans and clarified its process for opting out after engaging with the data watchdog. LinkedIn will now likely face a similar process before it can resume plans to train its tools with UK users' data.

"In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset," said the ICO's Mr Almond. He said the regulator would "continue to monitor" developers such as Microsoft and LinkedIn to ensure they are protecting UK users' data rights.
[4]
The LinkedIn AI saga shows us the need for EU-like privacy regulations
If you are on LinkedIn, you might have come across users complaining about the platform using their data to train a generative AI tool without their consent. People began noticing this change in the settings on Wednesday, September 18, when the Microsoft-owned social media platform started training its AI on user data before updating its terms and conditions.

LinkedIn certainly isn't the first social media platform to begin scraping user data to feed an AI tool without asking for consent beforehand. What's curious about the LinkedIn AI saga is the decision to exclude the EU, the EEA (Iceland, Liechtenstein, and Norway), and Switzerland. Is this a sign that only EU-like privacy laws can fully protect our privacy?

Before LinkedIn, both Meta (the parent company behind Facebook, Instagram, and WhatsApp) and X (formerly known as Twitter) started to use their users' data to train their newly launched AI models. While these social media giants initially extended the plan to European countries as well, they had to halt their AI training after encountering strong backlash from EU privacy institutions.

Let's go in order. The first to test the waters were Facebook and Instagram back in June. According to their new privacy policy, which came into force on June 26, 2024, the company can now use years of personal posts, private images, and online tracking data to train its Meta AI. After Austria's digital rights advocacy group noyb filed 11 privacy complaints with various Data Protection Authorities (DPAs) in Europe, the Irish DPA requested that the company pause its plans to use EU/EEA users' data. Meta was said to be disappointed about the decision, dubbing it a "step backward for European innovation" in AI, and decided to cancel the launch of Meta AI in Europe, not wanting to offer "a second-rate experience."

Something similar occurred at the end of July, when X automatically enabled the training of its Grok AI on all its users' public information, European accounts included. Just a few days after the launch, on August 5, consumer organizations filed a formal privacy complaint with the Irish Data Protection Commission (DPC), lamenting how X's AI tool violated GDPR rules. The Irish court has since dropped the privacy case against X, as the platform agreed to permanently halt the collection of EU users' personal data to train its AI model.

While tech companies have often criticized the EU's strong regulatory approach toward AI (a group of organizations even recently signed an open letter asking for better regulatory certainty on AI to foster innovation), privacy experts have welcomed the proactive approach. The message is strong: Europe isn't willing to sacrifice its strong privacy framework.

Despite LinkedIn having now updated its terms of service, the silent move attracted strong criticism around privacy and transparency outside Europe. It's you, in fact, who must actively opt out if you don't want your information and posts to be used to train the new AI tool. As mentioned earlier, both X and Meta used similar tactics when feeding their own AI models with users' personal information, photos, videos, and public posts. Nonetheless, according to some experts, the fact that other companies in the industry act without transparency doesn't make it right to do so.

"We shouldn't have to take a bunch of steps to undo a choice that a company made for all of us," tweeted Rachel Tobac, ethical hacker and CEO of SocialProof Security. "Organizations think they can get away with auto opt-in because 'everyone does it'. If we come together and demand that organizations allow us to CHOOSE to opt-in, things will hopefully change one day."

As explained in the LinkedIn FAQs (which, at the time of writing, were updated one week ago): "Opting out means that LinkedIn and its affiliates won't use your personal data or content on LinkedIn to train models going forward, but does not affect training that has already taken place." In other words, the data already scraped cannot be recovered, but you can still prevent the social media giant from using more of your content in the future.
[5]
LinkedIn to use personal data of users for training AI models
LinkedIn is using account holders' data to train its AI models, as per the company's updated privacy policy, which came into effect on September 18. The policy mentions that the company is also relying on user data to develop, provide, and personalise its services with AI. LinkedIn says that it also uses people's interactions with its generative AI features for training purposes. It further mentions that whenever it relies on user data for training purposes, it seeks to minimise the personal information in said data. To do so, it relies on privacy-enhancing technologies to redact or remove personal data.

The EU, the UK, the European Economic Area, and Switzerland are the only regions whose data LinkedIn is currently not using for training purposes. Users elsewhere can opt out of having their personal data and the content they create used for training generative AI. However, until they do so, their data remains fair game. Opting out means that LinkedIn and its affiliates won't use your personal data or content on LinkedIn to train AI models going forward, but it won't affect models that have already been trained on your data. LinkedIn retains the data users provide to its generative AI features until they delete it. Users can access information about what data LinkedIn has used for training through its data access tool.

The primary concern with this approach is that it doesn't give users a chance to give informed consent: users are not explicitly made aware that their data will be used for AI training purposes. The advocacy group noyb (None Of Your Business) flagged this exact concern in the context of Meta, which also follows a silent opt-in approach to training its AI models. Earlier, in June, noyb filed complaints against Meta in multiple EU countries for using people's data since 2007 to train AI models. This data includes things like posts or photos and their captions.

Another concern is that by only allowing users to opt out, LinkedIn and Meta are shifting the burden of action onto their users. They are assuming consent by default, so if a user doesn't actively watch out for updates to the companies' privacy policies, they won't know that the companies are using their data to train AI models. Further, while the tech giants are telling users what data they are using for model training, they don't tell users what specific AI models their data contributes to or how these models are applied. This again indicates a lack of transparency in the approach to using people's data for training purposes.

While India's Digital Personal Data Protection Act (DPDPA, 2023) doesn't specifically mention AI, it allows companies to process publicly available personal data without any consent and without adhering to the provisions of the act. This means that companies don't necessarily have to give users the option to opt out before using their data for training AI models. As such, Meta, which also uses Indian users' data for training its models, does not provide an opt-out option to users in the country.
[6]
LinkedIn is training AI with your personal data. Here's how to stop it
Your information, and how you interact with LinkedIn, is helping to train AI. If you don't want that to happen, you can opt out and check what the platform already knows.

Artificial intelligence (AI) models are only as good as the data that trains them, and if you use LinkedIn, your data is part of that training. Fortunately, there's a way out.

LinkedIn said today that it's updating its privacy policy to clarify how it uses your information. "We may use your personal data to improve, develop, and provide products and services, develop and train artificial intelligence (AI) models," the updated verbiage says, and to "develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others."

This data includes your use of generative AI (gen AI) or other AI features on the site (powered by Microsoft's Azure OpenAI service), anything you post, articles you share, how frequently you use LinkedIn, your language preference, and any feedback you may have provided to the LinkedIn team.

The most interesting part is that although LinkedIn has updated its privacy policy to spell out what it's doing, it was already using your data before this new agreement. LinkedIn has this setting turned on by default. However, LinkedIn did add that when it trains AI models, it tries to minimize personal data use and applies privacy-enhancing technologies to redact certain information from the training dataset.

If you don't want your data to train AI, LinkedIn has included an opt-out. To exclude your data, use the Data for Generative AI Improvement member setting, where you'll see a single toggle. When you opt out, LinkedIn says neither it nor its affiliates will use your information to train models going forward. This setting doesn't affect training that has already taken place. When you engage with a gen AI-powered feature on LinkedIn, the company says, it stores any data you provide until you delete it.

If you're curious about what information LinkedIn has about you, including connections, contacts, account history, and information LinkedIn infers about you based on your profile and activity, go to the settings page, then "Data privacy," and choose "Get a copy of your data."
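LinkedIn doesn't detail which privacy-enhancing technologies it applies, but redaction of the kind it describes is commonly implemented as a scrubbing pass that strips identifiable fields from text before it enters a training corpus. The short Python sketch below is purely illustrative under that assumption; the regex patterns and the redact_personal_data function are hypothetical stand-ins, not LinkedIn's actual pipeline.

    import re

    # Illustrative only: a toy redaction pass of the sort a privacy-enhancing
    # technology might run before text enters a training set. These patterns
    # and names are hypothetical, not LinkedIn's implementation.
    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def redact_personal_data(text: str) -> str:
        """Replace email addresses and phone numbers with placeholder tokens."""
        text = EMAIL_RE.sub("[EMAIL]", text)
        return PHONE_RE.sub("[PHONE]", text)

    # Example: a member post is scrubbed before any training use.
    post = "Reach me at jane.doe@example.com or +44 20 7946 0958 about the role."
    print(redact_personal_data(post))
    # Prints: Reach me at [EMAIL] or [PHONE] about the role.

A production system would likely pair pattern matching like this with named-entity recognition and review, since regexes alone miss names, employers, and other free-text identifiers.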
[7]
LinkedIn is using your data to train its AI. Here's how to opt out
(NEXSTAR) -- LinkedIn confirmed that it is using personal user data to train its artificial intelligence models after being accused of opting members in without properly notifying them.

The Microsoft-owned company announced in a blog post on Wednesday that it recently updated its privacy policy to clarify how it uses personal data to train its AI-powered tools, which can generate writing suggestions and post recommendations.

When members use the professional networking platform, it collects data on their activity, such as their posts, language preferences, login frequency, and any feedback they may provide. LinkedIn said it is using this information to "fine-tune" its AI products and those belonging to "its affiliates." Beyond Microsoft, who those other affiliates are is unclear.

Forbes reported that LinkedIn automatically opted users into training these AI models, while the independent tech publication 404 Media reported this occurred before the company updated its terms of service agreement. Nexstar has reached out to LinkedIn for comment.

Meanwhile, LinkedIn spokesman Greg Snapper told USA Today that "we've always been clear in our terms of service" and emphasized that members have options regarding the use of their data.

Users can turn off the data-sharing setting on both mobile devices and desktop. Just go to Settings, click "Data Privacy," then select "Data for Generative AI Improvement." From there, toggle the feature off. "Opting out means that LinkedIn and its affiliates won't use your personal data or content on LinkedIn to train models going forward, but does not affect training that has already taken place," the company explained on its website.

LinkedIn said in its Wednesday blog post that new updates to its user agreement -- regarding AI features, content moderation practices, and more -- will go into effect on Nov. 20.
[8]
LinkedIn has (quietly) started scraping your posts for AI
LinkedIn has quietly updated its user agreement and privacy policies, revealing that the popular employment-focused social media platform has started collecting user data to train artificial intelligence models.

By default, all users outside the European Union and Switzerland, including all Australian users, have had AI data sharing turned on. Any user who does not want their data shared needs to change their settings to switch off the option.

In a statement published on the company's official blog on Wednesday (Thursday AEST), general counsel Blake Lawit wrote: "Today we're updating our User Agreement and clarifying some practices covered by our Privacy Policy ... [W]e have added language to clarify how we use the information you share with us to develop the products and services of LinkedIn and its affiliates, including by training AI models used for content generation ('generative AI') and through security and safety measures."

LinkedIn is a wholly owned subsidiary of Microsoft, which runs its own proprietary AI under the Copilot brand and is a financial backer of ChatGPT parent OpenAI. Microsoft uses OpenAI technology to power its Azure products. LinkedIn and Microsoft were contacted for comment about the change.

LinkedIn's Help function was also updated to clarify that "artificial intelligence models that LinkedIn uses to power generative AI features may be trained by LinkedIn or another provider. For example, some of our models are provided by Microsoft's Azure OpenAI service".
LinkedIn has stopped collecting UK users' data for AI training following regulatory scrutiny. This move highlights growing concerns over data privacy and the need for transparent AI practices in tech companies.
LinkedIn, the professional networking platform owned by Microsoft, has announced a halt to its data collection practices for AI training in the United Kingdom. This decision comes in response to growing scrutiny from UK regulators and mounting concerns over user privacy [1].
The UK's Information Commissioner's Office (ICO) had been investigating LinkedIn's data practices, particularly focusing on the company's approach to user consent for AI training. LinkedIn's previous policy allowed for the collection and use of user data for AI model training without explicit user permission, raising significant privacy concerns [2].
While the data collection pause is currently limited to the UK, it has sparked a global conversation about data privacy and user rights in the age of AI. The move underscores the growing importance of transparent AI practices and the need for clear user consent mechanisms in tech companies worldwide [3].
The LinkedIn AI saga has highlighted the need for stronger privacy regulations, similar to those implemented in the European Union. The EU's General Data Protection Regulation (GDPR) has set a gold standard for data protection, and many experts argue that similarly comprehensive regulations are necessary globally to protect user privacy in the face of advancing AI technologies [4].
Despite the setback in the UK, LinkedIn has expressed its intention to continue training AI models using user data in other regions. The company claims that this data utilization will enhance user experiences and improve platform features. However, the "silent opt-in" approach has raised concerns among privacy advocates and users alike [5].
In response to the controversy, LinkedIn has stated that it will provide users with more control over their data. The company is working on implementing clearer opt-out mechanisms and improving transparency regarding how user data is utilized for AI training. These steps are seen as crucial for maintaining user trust and complying with evolving data protection regulations [1].
The LinkedIn case serves as a wake-up call for the tech industry, highlighting the delicate balance between innovation and user privacy. As AI technologies continue to advance, companies are under increasing pressure to ensure that their data practices are ethical, transparent, and compliant with regional regulations [3].