Curated by THEOUTPOST
On Thu, 19 Sept, 12:05 AM UTC
19 Sources
[1]
LinkedIn is using your data to train its AI. Here's how to opt out
(NEXSTAR) -- LinkedIn confirmed that it is using personal user data to train its artificial intelligence models after being accused of opting members in without properly notifying them. The Microsoft-owned company announced in a blog post on Wednesday that it recently updated its privacy policy to clarify how it uses personal data to train its AI-powered tools, which can generate writing suggestions and post recommendations.

When members use the professional networking platform, it collects data on their activity, such as their posts, language preferences, login frequency, and any feedback they may provide. LinkedIn said it is using this information to "fine-tune" its AI products and those belonging to "its affiliates." Beyond Microsoft, the other affiliates are unclear.

Forbes reported that LinkedIn automatically opted users into training these AI models, while the independent tech publication 404 Media claimed this occurred before the company updated its terms of service agreement. Nexstar has reached out to LinkedIn for comment. Meanwhile, LinkedIn spokesman Greg Snapper told USA Today that "we've always been clear in our terms of service" and emphasized members have options regarding the use of their data.

Users can easily turn off the AI tool on their mobile devices and desktop. Just go to Settings, click "Data Privacy," then select "Data for Generative AI Improvement." From there, toggle the feature off.

"Opting out means that LinkedIn and its affiliates won't use your personal data or content on LinkedIn to train models going forward, but does not affect training that has already taken place," the company explained on its website.

LinkedIn said in its Wednesday blog post that new updates to its user agreement -- regarding AI features, content moderation practices, and more -- will go into effect on Nov. 20.
[2]
LinkedIn sneaks user data for AI training, later offers opt-out option for users
LinkedIn has recently made headlines for quietly opting its users into a new privacy setting that allows the platform to use account data for training generative AI models. According to a report by 404Media, this change came about without direct user consent.

Following the introduction of this privacy setting, LinkedIn updated its privacy policy to clarify that data from the platform may be used to enhance and develop various products and services, including generative AI. The revised policy explicitly states, "We may use your personal data to improve, develop, and provide products and Services, develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences." This announcement aligns with LinkedIn's goal of using generative AI to power features such as writing assistance.

For users who prefer not to have their data used in this manner, LinkedIn has provided an option to opt out. Users can head to the Data Privacy tab in their account settings and find the toggle for "Data for Generative AI Improvement." By switching this toggle to "off," users can prevent LinkedIn and its affiliates from using their personal information for future AI model training. However, it is important to note that opting out does not retroactively affect any training that has already occurred. LinkedIn says: "Opting out means that LinkedIn and its affiliates won't use your personal data or content on LinkedIn to train models going forward, but does not affect training that has already taken place."

LinkedIn has reassured users that it employs "privacy enhancing technologies" to redact or remove personal data from its training datasets. Moreover, the company has indicated that it does not train its models using data from individuals residing in the EU, EEA, or Switzerland.
[3]
LinkedIn Has Been Using Your Data To Train Its AI System: Here's How You Can Stop It - News18
LinkedIn has been using its users' data without their consent, and after being found out, the platform is now changing its policies. LinkedIn is a platform that helps professionals network and make connections that could even land you a job someday. However, the platform has allegedly been scraping user data, without consent, to train the generative AI models behind its content-creation tools. Yes, that's right: LinkedIn has taken your data to train its AI models without permission, and that data is even available to its affiliates, who can use your personal information for the same purpose.

These worrying details surfaced in a 404Media report this week, and after independently verifying the claims, we can confirm that LinkedIn has enabled the feature without most people knowing about it. In fact, the platform appears to have switched on other options by itself in the 'how LinkedIn uses your data' section of the settings.

LinkedIn isn't the first company to use the data of millions of its users, but the worst part is that it has done so without informing them. Even our account had the data-scraping option enabled, which LinkedIn says is for "Generative AI improvement." When you open the tab, you will notice a toggle button switched on in green, which means your data has already been scraped and used for AI training.

So, what is the platform doing with this data, and which AI features are using it? LinkedIn's help section (also tucked away under "learn more") says that tools like AI-powered writing assistants are among the features being developed with your data. Ironically, LinkedIn's help page titles this section "Control whether LinkedIn uses your data to train generative AI models that are used for content creation on LinkedIn." We don't really see where the control is when the setting is enabled by default.
What's gone is gone, but thankfully you have the option to disable this data-scraping feature. Here's how: in the "Generative AI improvement" section of your settings, turn off the option "Use my data for training content creation AI models." LinkedIn will then stop using your data to train its AI models. As we mentioned, LinkedIn isn't the first company to engage in this unethical practice and won't be the last. Meta has already been accused of similar actions involving Instagram and Facebook users in select regions, and we may see more brands outed as culprits.
[4]
LinkedIn Is Using Your Data to Train AI (but You Can Stop It)
Generative AI models aren't born out of a vacuum. In a sense, these systems are built piece by piece using massive amounts of training data, and always need more and more information to keep improving. As the AI race heats up, companies are doing whatever they can to feed their models more data -- and many are using our data to do so, sometimes without asking for our explicit permission first.

LinkedIn is the latest apparent perpetrator of this practice: It seems everyone's "favorite" career-focused social media platform has been using our data to train its AI models without asking for permission or disclosing the practice first. Joseph Cox from 404Media initially reported the story, but you don't need to be a journalist to investigate it for yourself.

Just head to LinkedIn, click your profile, and go to Settings & Privacy. Here, you'll notice an interesting field: Data for Generative AI Improvement. This setting asks, "Can LinkedIn and its affiliates use your personal data and content you create on LinkedIn to train generative AI models that create content?" Oh, what's this? It's set to On by default? Thanks for asking, LinkedIn.

If you click the Learn More link, you'll see LinkedIn's explanation for what it's doing with your data. When enabled, your profile data and the content of your posts can be used to train or "fine-tune" the generative AI models of both LinkedIn and its affiliates. Who are these affiliates? Well, LinkedIn says some of its models are provided by Microsoft's Azure OpenAI service, but it doesn't elaborate beyond that, as far as I can tell. The company makes a point in this explanation to say that only the AI models trained for generating content, such as its AI-powered writing assistant, use your data, rather than the AI models responsible for personalizing LinkedIn per user, or those used for security.
It also says it aims to "minimize" the personal data used for training sets, including by using "privacy enhancing technologies" to obscure or remove personal data from these databases, but doesn't say how it does so or to what degree. That said, the company offers a form for opting out of having your data used for "non-content generating GAI models." So, which is it, LinkedIn? Interestingly, Adobe took the opposite approach when users complained about its policy of accessing users' work to train AI models: It was adamant user data wasn't used for generative AI models, but for other types of AI models. Either way, these companies don't seem to get that people would prefer their data to be omitted from all AI training sets -- especially when they weren't asked about it in the first place.

LinkedIn says it keeps your data as long as you do: If you delete your data from LinkedIn, either by deleting a post or through LinkedIn's data access tool, the company will delete it from its end, and thus stop using it for training AI. The company also clarifies that it does not use the data of users in the EU, EEA, or Switzerland.

In my view, this practice is ridiculous. I think it's unconscionable to opt your users into training AI models with their data without presenting them with the option first, before even updating the terms of service. I don't care that weak data privacy laws allow companies like LinkedIn to store everything we post or upload on their platforms: If you want to use someone's post to make your writing bot better, ask them first. I've reached out to LinkedIn, specifically asking about some of the inconsistencies in its policy, and to learn how long this has been going on. For context, the support articles I've referenced in this piece were updated seven days ago at the time of writing.

To continue using LinkedIn without handing over your data for training its AI models, head back to Settings & Privacy > Data for Generative AI Improvement.
Here, you can switch the toggle to Off to opt out. You can also use this form to "object to or request the restriction of" processing your data for "non-content generating GAI models." This is not retroactive: Any training that has already occurred cannot be undone, so LinkedIn won't be removing the effects of your data from its models. When you opt out, your data can still be processed by the AI, but only when you interact with the AI models: LinkedIn says it can use your inputs to process your request and include any of the data in that input in the AI's output, but that's just how AI models work. If LinkedIn couldn't access this data after you opt out, the model would largely be useless.
[5]
LinkedIn is using your data to train generative AI models. Here's how to opt out.
LinkedIn user data is being used to train artificial intelligence models, leading some social media users to call out the company for opting members in without consent. The professional networking platform said on its website that when users log on, data is collected for details such as their posts and articles, how frequently they use LinkedIn, language preferences and any feedback users have sent to the company. The data is used to "improve or develop the LinkedIn services," LinkedIn said.

Some have taken issue with the feature, particularly the decision to auto-enroll users into it. "LinkedIn is now using everyone's content to train their AI tool -- they just auto opted everyone in," wrote X user and Women In Security and Privacy Chair Rachel Tobac. "I recommend opting out now (AND that orgs put an end to auto opt-in, it's not cool)." In a series of tweets, Tobac argued that social media users "shouldn't have to take a bunch of steps to undo a choice that a company made for all of us" and encouraged members to demand that organizations give them the option to choose whether they opt in to programs beforehand. Others chimed in with similar sentiments.

LinkedIn began notifying users about AI training this week

LinkedIn said on its website this week that it is updating its user agreement and changes will go into effect on Nov. 20. The company said it has clarified practices covered by its privacy policy and added a new opt-out setting for training AI models. The post also included a video featuring LinkedIn's Chief Privacy Officer Kalinda Raina. In the video, Raina said personal data is used so LinkedIn and its affiliates can "improve both security and our products in the generative AI space and beyond." A spokesperson for LinkedIn confirmed to USA TODAY Thursday afternoon that the company started notifying users about data being used to train generative AI this week.
"The reality of where we're at today is a lot of people are looking for help to get that first draft of that resume, to help write the summary on their LinkedIn profile, to help craft messages to recruiters to get that next career opportunity," said LinkedIn spokesman Greg Snapper. "At the end of the day, people want that edge in their careers and what our gen-AI services do is help give them that assist." He stressed that users have choices when it comes to how their data is used and the company has always been up-front about it. "We've always been clear in our terms of service," he said. "Gen-AI is the newest phase of how companies everywhere are using AI."He also said LinkedIn has always used some form of automation in its products. To turn off the feature via the LinkedIn app, do the following: Examples of data LinkedIn may use to train AI models include articles that users post. If a user posts an article about advice they've received from mentors while also naming those mentors, LinkedIn's generative writing suggestions feature may include those names. The user can then edit or revise the post before publishing, the company said on its website. Users who try the profile writing suggestions feature, the AI model will use data from their profiles to generate the text. How to request your personal data According to LinkedIn's website, opting out prevents LinkedIn and its affiliates from using personal data and content to train models in the future but it doesn't undo or impact training that has already taken place. "We are initially making this setting available to members whose profile location is outside of the EU, EEA, or Switzerland," the company said on its website. "If you live in these regions, we and our affiliates will not use your personal data or content on LinkedIn to train or fine-tune generative AI models for content creation without further notice." 
The company said it uses privacy-enhancing technology to redact or remove personal data from the datasets it uses to train AI. LinkedIn said that for members who use the generative AI-powered feature to create content, any information they provide and information generated by their prompts will be stored until the member deletes the data. To see what data LinkedIn has stored on them, users can use the company's data access tool. Members can also delete data LinkedIn has stored or LinkedIn activity by filling out a deletion form.
[6]
LinkedIn Is Quietly Training AI on Your Data -- Here's How to Stop It
About a week ago, LinkedIn quietly updated a post that reveals it's using your data to train its AI models. But many LinkedIn users may not be aware that their data is being swiped for AI training in the first place. LinkedIn and "its affiliates" are using your profile page's data, posts, and other LinkedIn content to train AI models, including the ones LinkedIn uses to power its various AI features. LinkedIn does not specify who exactly its "affiliates" are, but LinkedIn is owned by Microsoft, which has close financial ties to OpenAI.

At the time of writing, LinkedIn's Pages Terms, User Agreement, Privacy Policy, and Copyright Policy do not contain the words "AI" or "artificial intelligence" in any capacity. But LinkedIn's terms state: "You and LinkedIn agree that we may access, store, process and use any information and personal data that you provide." LinkedIn's own policies also prohibit any user-instigated "software, devices, scripts, robots" or crawlers from trawling its site. It also bars its own users from selling or otherwise monetizing any data published on the work-focused social media platform.

But LinkedIn announced Wednesday that it's rolling out changes to these policies, specifically its User Agreement and Privacy Policy, to include a disclosure on its use of your data for AI. LinkedIn SVP and General Counsel Blake Lawit writes: "We have added language to clarify how we use the information you share with us to develop the products and services of LinkedIn and its affiliates, including by training AI models used for content generation."

Notably, EU users (or those with VPNs that make it look like they're based in the EU) get more protections from AI training on LinkedIn than users elsewhere. Lawit says EU users, unlike the rest of LinkedIn's members, are automatically opted out. So LinkedIn won't be scraping and training AI on EU- or Switzerland-based user data "until further notice."
UK- and US-based LinkedIn users noticed the site's AI training toggle popped up this week, with some arguing that Microsoft should pay LinkedIn users for scraping their data. "Turn this off!" exclaimed VectorField founder and CEO Ido Banai in a post warning LinkedIn users about the toggle. "In the age of AI every time you add data into a platform and it's used for [machine learning] training you should get paid, it's a no-brainer!" If you don't want Microsoft, OpenAI, or LinkedIn using your LinkedIn data and posts going forward, you can disable the setting by navigating to Settings > Data Privacy > Data for Generative AI Improvement.
[7]
LinkedIn is training AI models on your data
If you're on LinkedIn, then you should know that the social network has, without asking, opted accounts into training generative AI models. 404Media reports that LinkedIn introduced the new privacy setting and opt-out form before rolling out an updated privacy policy saying that data from the platform is being used to train AI models. As TechCrunch notes, it has since updated the policy, which now reads: "We may use your personal data to improve, develop, and provide products and Services, develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others."

LinkedIn writes on a help page that it uses generative AI for purposes like writing assistant features. You can revoke permission by heading to the Data privacy tab in your account settings and clicking on "Data for Generative AI Improvement" to find the toggle. Turn it to "off" to opt out. According to LinkedIn: "Opting out means that LinkedIn and its affiliates won't use your personal data or content on LinkedIn to train models going forward, but does not affect training that has already taken place." The FAQ posted for its AI training says it uses "privacy enhancing technologies to redact or remove personal data" from its training sets, and that it doesn't train its models on those who live in the EU, EEA, or Switzerland.

That setting is for data used to train generative AI models, but LinkedIn has other machine learning tools at work for things like personalization and moderation that don't generate content. To opt your data out of being used to train those, you'll also have to fill out the LinkedIn Data Processing Objection Form. LinkedIn's apparent silent opt-in of all, or at least most, of its platform's users comes only days after Meta admitted to having scraped non-private user data for model training going as far back as 2007.
[8]
LinkedIn is scraping your data to train AI -- here's how to opt-out
AI is making its way into social media in new ways every day, and to stay relevant, LinkedIn has jumped on the bandwagon too. The Microsoft-owned company, used predominantly by networking professionals, has introduced several AI-driven features. These features, mostly available with LinkedIn Premium, include a job-seeker chatbot, AI-powered writing assistance for profiles, and tools for recruiters, including automated messaging and AI-generated job descriptions.

While users are accustomed to LinkedIn making site enhancements to improve their networking endeavors, it's safe to say that most of us were not expecting to be left in the dark regarding the company's data practices. The current backlash stems from LinkedIn's Senior VP and General Counsel, Blake Lawit, providing a "trust and safety" update explaining that LinkedIn takes the data from users' profiles and posts to improve its AI tools. Unfortunately, this data collection was happening automatically -- without explicit user consent. You can read it for yourself here.

Despite LinkedIn's efforts to improve its platform with AI, including features that many users enjoy, the automatic data collection has left users feeling blindsided. The platform, with roughly 830 million members, had not been upfront about how deeply user data was being integrated into AI training models, and this has triggered a wave of concern around privacy and data security.

On the bright side, LinkedIn members in the European Union can rest easy. LinkedIn has not been collecting data from EU users for the purpose of AI training, thanks to stringent data privacy laws, particularly the General Data Protection Regulation (GDPR). In fact, LinkedIn has stated that it has no plans to collect such data from EU users in the foreseeable future. LinkedIn's use of user data for AI training is part of a larger trend in the tech industry, where data from millions of users is used to power AI advancements.
As AI grows and evolves, more companies will likely employ similar tactics. This raises important questions about transparency, consent, and data privacy. Now seems like as good a time as any to read the small print and keep a close eye on what you're sharing on social media platforms. Although there's no taking back the data that has already been shared, opting out of LinkedIn's data collection for AI training going forward is relatively simple. If you're concerned about how your data is used, head to Settings, select "Data Privacy," and turn off the "Data for Generative AI Improvement" toggle to prevent LinkedIn from further using your information to train its AI models.
[9]
How to stop LinkedIn from training AI on your data
LinkedIn limits opt-outs to future training, warns AI models may spout personal data.

LinkedIn admitted Wednesday that it has been training AI on many users' data without seeking consent. Now there's no way for users to opt out of training that has already occurred, as LinkedIn limits opt-out to only future AI training.

In a blog detailing updates coming on November 20, LinkedIn general counsel Blake Lawit confirmed that LinkedIn's user agreement and privacy policy will be changed to better explain how users' personal data powers AI on the platform. Under the new privacy policy, LinkedIn now informs users that "we may use your personal data... [to] develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others."

An FAQ explained that the personal data could be collected any time a user interacts with generative AI or other AI features, as well as when a user composes a post, changes their preferences, provides feedback to LinkedIn, or uses the platform for any amount of time. That data is then stored until the user deletes the AI-generated content. LinkedIn recommends that users use its data access tool if they want to delete or request to delete data collected about past LinkedIn activities.

LinkedIn's AI models powering generative AI features "may be trained by LinkedIn or another provider," such as Microsoft, which provides some AI models through its Azure OpenAI service, the FAQ said. A potentially major privacy risk for users, LinkedIn's FAQ noted, is that users who "provide personal data as an input to a generative AI powered feature" could end up seeing their "personal data being provided as an output."
LinkedIn claims that it "seeks to minimize personal data in the data sets used to train the models," relying on "privacy enhancing technologies to redact or remove personal data from the training dataset." While Lawit's blog avoids clarifying if data already collected can be removed from AI training data sets, the FAQ affirmed that users who were automatically opted in to sharing personal data for AI training can only opt out of the invasive data collection "going forward." Opting out "does not affect training that has already taken place," the FAQ said.

A LinkedIn spokesperson told Ars that it "benefits all members" to be opted in to AI training "by default." "People can choose to opt out, but they come to LinkedIn to be found for jobs and networking and generative AI is part of how we are helping professionals with that change," LinkedIn's spokesperson said. By allowing opt-outs of future AI training, LinkedIn's spokesperson additionally claimed that the platform is giving "people using LinkedIn even more choice and control when it comes to how we use data to train our generative AI technology."

How to opt out of AI training on LinkedIn

Users can opt out of AI training by navigating to the "Data privacy" section in their account settings, then turning off the option allowing collection of "data for generative AI improvement" that LinkedIn otherwise automatically turns on for most users. The only exception is for users in the European Economic Area or Switzerland, who are protected by stricter privacy laws that either require consent from platforms to collect personal data or require platforms to justify the data collection as a legitimate interest. Those users will not see an option to opt out, because they were never opted in, LinkedIn repeatedly confirmed.
Additionally, users can "object to the use of their personal data for training" generative AI models not used to generate LinkedIn content -- such as models used for personalization or content moderation purposes, The Verge noted -- by submitting the LinkedIn Data Processing Objection Form.

Last year, LinkedIn shared AI principles, promising to take "meaningful steps to reduce the potential risks of AI." One risk that the updated user agreement specified is that using LinkedIn's generative features to help populate a profile or generate suggestions when writing a post could generate content that "might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes." Users are advised that they are responsible for avoiding sharing misleading information or otherwise spreading AI-generated content that may violate LinkedIn's community guidelines. And users are additionally warned to be cautious when relying on any information shared on the platform. "Like all content and other information on our Services, regardless of whether it's labeled as created by 'AI,' be sure to carefully review before relying on it," LinkedIn's user agreement says.

Back in 2023, LinkedIn claimed that it would always "seek to explain in clear and simple ways how our use of AI impacts people," because users' "understanding of AI starts with transparency." But for users shocked to find that LinkedIn only notified users of AI training after their data was collected, LinkedIn's commitment to transparency may ring hollow. On Reddit, some users joked about messing with LinkedIn's AI models, threatening to feed them "so much bullshit they'll not know up from down." Others seemed to be more seriously considering deleting their accounts or finding alternative networking platforms because of the breach of trust.
[10]
LinkedIn is using your data to train AI - here's how to turn it off
LinkedIn users, listen up. Did you know the social networking site is using your data to train AI, and that it's on by default? This morning, my LinkedIn account was swamped with users highlighting the website's "Data for Generative AI Improvement" setting, which seems to automatically opt users into "Use my data for training content creation AI models." It certainly sounds a bit ominous, and in a world where training AI models is a grey area, most of us don't want to give up our data to make these models smarter. LinkedIn says it uses this personal data and the content you create on the website to help train AI models that create content. Yep... sign me out.

Fellow tech journalist Prakhar Khanna brought this setting to my attention, sharing on LinkedIn: "I received no notification about my data being used to train AI. As far as I know, this toggle was recently added and slyly turned on without my knowledge." I decided to look into it myself and, sure enough, my data had been used to train AI without my being notified. In an effort to spread awareness, here's how to turn off LinkedIn's "Data for Generative AI Improvement" feature.

It's pretty easy to turn off LinkedIn's AI training setting, but without knowing of its existence you'd never even realize you'd been signed up for it. Luckily, this isn't like South Park's HumancentiPad episode, where signing up to iTunes gives Apple full control over your life. Instead, you just need to find the toggle and switch it off: head to Settings & Privacy, open "Data Privacy," select "Data for Generative AI Improvement," and turn the toggle off.

That's all there is to it; you've now opted out of LinkedIn's AI training. While your LinkedIn content is safe from being used to train AI for now, this does raise a bigger concern about more settings like this being introduced in the future. So just remember: it's well worth checking the fine print next time you read a privacy policy for any AI-related tool, like one of the best AI image generators.
[11]
LinkedIn is training AI with your data. Here's how to opt out ASAP
As an objective journalistic observer of the tech and business world, I'd like to state the inarguable, quantifiable, unavoidable fact that LinkedIn sucks. It sucks really hard. LinkedIn is a terrible fusion of the worst parts of social networks, job boards, and office culture -- and it's about to suck even harder with the help of AI. Like seemingly every large tech company these days, LinkedIn is injecting generative AI into its platform. (You may have already spotted it on prompts that "help" users write posts or messages.) But the Microsoft-owned website is now scraping its user data to train its artificial intelligence systems. Naturally, you're opted into sharing your data with LinkedIn's AI for free, without any kind of message or alert. (Unless you're in the EU, where this kind of sneaky behavior is illegal.) In fact, LinkedIn started using your data before updating its often-intentionally-nebulous Privacy Policy, as spotted by 404 Media. The policy has subsequently been updated to include legalese: "We may use your personal data to improve, develop, and provide products and Services, develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others." The data that LinkedIn has already collected is part of the training model, and there's nothing you can do about that, according to The Verge.
[12]
LinkedIn is using your data to train AI. Here's how to turn it off.
LinkedIn has been training generative AI with user data -- a quiet change the public noticed on Wednesday. Users of the professional social networking platform owned by Microsoft were the first to notice a new option pop up in their data privacy settings called "Data for Generative AI Improvement." The setting comes with an explanation saying that this feature gives "LinkedIn and its affiliates" permission to "use your personal data and content you create on LinkedIn to train generative AI models that create content." The setting is turned on by default. In addition, as 404 Media discovered in its initial report, LinkedIn appears to have launched its AI training without updating its terms of service to inform users. Don't want LinkedIn and other third parties using your LinkedIn data to train their generative AI to create content using your posts? Here's how to turn it off. Thanks to the EU's strong data privacy laws, LinkedIn is not using EU users' data for its AI training. "We are initially making this setting available to members whose profile location is outside of the EU, EEA, or Switzerland," reads a statement posted on the company's AI training FAQ page. "If you live in these regions, we and our affiliates will not use your personal data or content on LinkedIn to train or fine-tune generative AI models for content creation without further notice."
[13]
LinkedIn uses your personal data to train AI but who doesn't?
LinkedIn has quietly opted its users into training generative AI models without explicitly asking for consent, raising concerns about data privacy on the platform. According to a report by 404 Media, LinkedIn made changes to its privacy policy, stating that user data can be used to train AI models. The platform has since updated the policy, now allowing users to opt out of this practice. Updated LinkedIn policy reveals personal data usage in AI training The updated policy states that LinkedIn may use personal data to "improve, develop, and provide products and Services," as well as to train AI models. Generative AI is used for features like writing assistants, but LinkedIn claims it employs privacy-enhancing technologies to redact personal information. Users who prefer not to participate can opt out by navigating to the "Data privacy" tab in their account settings and turning off the "Data for Generative AI Improvement" toggle. However, opting out will only stop LinkedIn from using your data for future model training. Data that has already been used remains unaffected. Additionally, LinkedIn clarifies that users in the EU, EEA, or Switzerland are not included in AI model training. If you're concerned about other machine learning tools used for personalization and moderation, LinkedIn requires users to fill out a "Data Processing Objection Form" to opt out of those uses as well. LinkedIn's silent opt-in move echoes similar actions from Meta, which recently admitted to scraping non-private user data for AI training dating back to 2007. The timing of LinkedIn's move comes at a moment when other major tech players, like OpenAI, are also facing backlash for similar practices. This pattern of quietly enrolling users in AI training without clear and prominent notifications creates a sense of unease. It's not just about data being used for AI -- it's about who gets to decide and how informed that decision is.
The tech industry has long faced criticism for operating in the shadows when it comes to data collection, and the growing push for generative AI is only amplifying those concerns. Can machines forget your personal data? Another key issue is that opting out only affects future use of personal data. Any data that has already been fed into AI models remains in the system, and that lack of retroactive control may leave many users feeling powerless. The industry is also exploring "machine unlearning," techniques for removing the influence of specific data from already-trained AI models. The fact that LinkedIn uses "privacy-enhancing technologies" to anonymize data is somewhat reassuring, but it doesn't address the deeper problem: the need for more proactive, user-centered privacy standards. Ultimately, this situation highlights the need for stronger, clearer regulations that put control back in the hands of users. The idea that tech companies can use our personal data without clear consent doesn't sit well at a time when privacy is becoming increasingly valuable.
[14]
LinkedIn is taking user data to train its AI models
The company is facing a backlash for automatically opting users into its data collection plan as it seeks to enhance its AI models. Microsoft-owned LinkedIn is the latest company attempting to boost its AI models with user data, though most of Europe is being avoided. The company has updated its terms of service and confirmed it will use the data of its members to train generative AI models - or models used for "content generation". These updates will come into effect on 20 November 2024. LinkedIn's updated privacy policy states that the platform may use your data to develop and train AI models, as well as "gain insights with the help of AI, automated systems and inferences, so that our services can be more relevant and useful to you and others." The company says its AI training uses "privacy enhancing technologies to redact or remove personal data" from its training sets. LinkedIn also offers an opt-out option for its users, though they are opted in by default. The move has caused some backlash among users, with various people turning to social media to share their issues with the decision - particularly around the quiet update of LinkedIn's policies and the fact users are opted in by default. But LinkedIn has chosen not to collect the data of users in the European Economic Area or Switzerland. Other companies have faced legal pressure from the EU when attempting to take user data in the bloc. In June, Meta paused plans to train its large language models using public content shared by adults on Facebook and Instagram, following intensive discussion with the Irish Data Protection Commission (DPC). Earlier this month, the DPC concluded legal proceedings it brought against X over the usage of EU citizen data to train its AI chatbot, Grok. X agreed to suspend its processing of the personal data of the social media platform's EU and EEA users on a permanent basis.
X first suspended this processing last month, shortly after the DPC launched the legal action against the company.
[15]
LinkedIn users say their data is being collected for generative AI training without permission
LinkedIn users in countries including the U.S. and India have raised complaints about the employment-centric social media platform using their data for training AI models without obtaining consent first. Users pointed to a section in their LinkedIn settings, under the menu heading 'Data for Generative AI Improvement.' Upon opening this section, a setting that gave LinkedIn and its affiliates permission to use an individual's personal data and content to train AI models was turned on by default. The AI models in question would be used for content creation, per LinkedIn. Users have the option of manually turning it off, thus opting out of letting LinkedIn use their content and data for AI training. [Screenshot of the new generative AI training option on LinkedIn, turned on by default | Photo Credit: LinkedIn] LinkedIn users took to other social media platforms to express their unease about the new setting that was activated without their consent. Others raised plagiarism concerns. It was not yet clear whether EU privacy laws will stop LinkedIn from harvesting the data of its customers in the region for AI training. Microsoft acquired LinkedIn for $26.2 billion in 2016, and is also facing legal action over its alleged use of people's data for AI training. Published - September 19, 2024 09:48 am IST
[16]
LinkedIn scraping user content for its AI without asking
You'll have to opt out if you don't like it - EU and a few others excepted LinkedIn started harvesting user-generated content to train its AI without asking for permission, angering netizens. Microsoft's self-help network on Wednesday published a "trust and safety" update in which senior veep and general counsel Blake Lawit revealed LinkedIn's use of people's posts and other data for both training and using its generative AI features. In doing so, he said the site's privacy policy had been updated. We note this policy links to an FAQ that was updated sometime last week also confirming the automatic collecting of posts for training - meaning it appears LinkedIn started gathering up content for its AI models, and opting in users, well before Lawit's post and the updated privacy policy advised of the changes today. The FAQ says the site's built-in generative AI features may use your personal info to do things like automatically suggest stuff to write if and when you ask it to; and that your data will be used to train the models behind those features, which you'll have to opt out of if you don't like it. We're also told that using LinkedIn means the outfit will "collect and use (or process) data about your use of the platform, including personal data ... your posts and articles, how frequently you use LinkedIn, your language preference, and any feedback you may have provided to our teams." There's some good news for users in the EU, Iceland, Norway, Liechtenstein (both of them!) and Switzerland as their data isn't being used to train LinkedIn's AI at all and won't for the foreseeable future. The document also states that LinkedIn seeks "to minimize personal data in the datasets used to train the models, including by using privacy enhancing technologies to redact or remove personal data from the training dataset." 
But the FAQ also contains a warning that the system may provide someone else's info if asked in a certain way. The Microsoft social media outfit also last week emitted an article titled: "Control whether LinkedIn uses your data to train generative AI models that are used for content creation on LinkedIn." That text explains how it's possible to opt out of AI scraping, and points to a setting called Data for Generative AI Improvement that offers a single button marked: "Use my data for training content creation AI models." That button is in the "On" position until users move it to "Off." Big Tech has mostly used a "scrape-first, settle the lawsuits for a pittance later" approach to finding the content it needs to build AI models. LinkedIn could not have been ignorant of the backlash to that approach, making its decision curious. User anger cannot therefore be surprising. On LinkedIn it's not hard to find the service's move described as a breach of trust, along with a rash of posts advising users how to turn off the scraping. Which thankfully isn't hard to do: Click on your LinkedIn Profile, select "Settings", then "Data Privacy" and then look for an item labelled "Data for Generative AI improvement." Click the single button there to opt out, then go back to wading through the rest of LinkedIn. ®
[17]
LinkedIn is already training its AI on user data, says it will update ToS soon
According to recent reports by 404 Media, some LinkedIn users have noticed a new setting revealing that the social networking platform is using their data to train generative AI models without prior consent. LinkedIn asserts that the data usage aims to enhance features like writing assistance, and while users can disable the feature under the 'Data for Generative AI Improvement' tab in their account settings, it appears to be on by default. "Privacy enhancing techniques" are said to be used to anonymize data and safeguard personally identifiable information; however, many will be unhappy that sensitive company information may be used automatically. Citizens of the EU, EEA and Switzerland are also exempt from this data use due to stricter regulations and privacy rules. Despite enabling user data processing, LinkedIn's terms of service had not been updated to reflect the change. The Microsoft-owned company told 404 Media that it would update its terms "shortly." TechRadar Pro has asked LinkedIn to confirm details of how it uses user data to train its AI and to share an official response to its delayed terms of service update. We did not receive an immediate response. However, user data processing doesn't stop there. LinkedIn also uses other machine learning tools for functions like personalization and content moderation. These are also on an opt-out basis, with a separate Data Processing Objection Form required. While LinkedIn has now updated its terms of service, the late disclosure and clarification have raised privacy and transparency concerns, with users seemingly unhappy about the implementation of a significant change without proper notice.
[18]
LinkedIn trains generative AI models on user data, offers opt-out option
LinkedIn has reportedly enrolled accounts in training its generative AI models prior to updating its Terms of Service. According to 404 Media, LinkedIn is utilising user data to enhance its generative AI products but has not yet updated its terms to reflect this change. The company has indicated that it will update its terms shortly. LinkedIn has introduced a new privacy setting and opt-out form before releasing an updated privacy policy, which states that data from the platform is being used to train AI models. According to TechCrunch, LinkedIn has since updated the policy. On its help page, LinkedIn states that generative AI is employed for features such as writing assistance.
[19]
Microsoft-owned LinkedIn is training AI model on user data: What the company said and how to opt out - Times of India
Click on "Data for Generative AI Improvement" to find the toggle.
LinkedIn has been using user data to train its AI systems, sparking privacy concerns. The platform now offers an opt-out option for users who wish to exclude their data from AI training.
LinkedIn, the professional networking platform owned by Microsoft, has been utilizing user data to train its artificial intelligence (AI) systems, a practice that has recently come to light. This revelation has raised concerns about data privacy and user consent in the rapidly evolving landscape of AI technology [1].
The company has been employing a wide range of user-generated content for AI training purposes, including profile information, posts, and comments on the platform. LinkedIn's AI initiatives aim to enhance various features such as job recommendations, content curation, and search functionality [2].
The discovery of LinkedIn's data practices has sparked debates about user privacy and the extent to which tech companies should be allowed to utilize personal information for AI development. Critics argue that the lack of explicit consent from users regarding the use of their data for AI training purposes raises ethical questions [3].
In response to the growing concerns, LinkedIn has introduced an opt-out feature for users who wish to exclude their data from AI training. This move aims to provide users with more control over their personal information. The opt-out process involves navigating through the platform's settings, allowing users to make an informed choice about their data usage [4].
LinkedIn's case is not isolated, as it reflects a broader trend in the tech industry where companies are increasingly leveraging user data for AI development. This situation highlights the need for clearer regulations and industry standards regarding data usage for AI training, balancing innovation with user privacy rights [5].
The introduction of the opt-out feature represents a step towards greater user empowerment. However, it also raises questions about whether opt-out should be the default setting, rather than requiring users to actively exclude themselves from data collection practices. This debate extends beyond LinkedIn, touching on fundamental issues of data ownership and control in the digital age.
Reference
[2] The Financial Express | LinkedIn sneaks user data for AI training, later offers opt-out option for users
[3]
© 2024 TheOutpost.AI All rights reserved