Curated by THEOUTPOST
On Tue, 24 Sept, 12:05 AM UTC
4 Sources
[1]
LinkedIn's 930 Million Users Unknowingly Train AI, Sparking Data Privacy Concerns | PYMNTS.com
LinkedIn has thrust its 930 million users into an unexpected role: unwitting AI trainers, igniting a firestorm over data privacy and consumer trust. The professional networking giant's recent User Agreement and Privacy Policy update, which takes effect on Nov. 20, has caused concern in the business community. LinkedIn admitted it has been using users' data to train its AI without consent, and while users can opt out of future training, there is no way to undo past data use. The revelation has experts warning of a growing tension between AI innovation and user privacy.

"Data is the new oil. When the data being sifted through contains personal information, that's where privacy questions come into play," David McInerney, commercial manager for data privacy at Cassie, told PYMNTS.

LinkedIn's move could force businesses to reconsider their digital footprint, balancing the need for professional connectivity against the risk of compromising sensitive information. McInerney emphasized the stakes: "A whopping 93% [of consumers] are concerned about the security of their personal information online."

While LinkedIn offers an opt-out setting for generative AI training, the company noted that it will not use data from users in the European Economic Area, Switzerland and the United Kingdom for AI training. This geographic distinction highlights the disparity between European data protection standards and the less regulated U.S. landscape.

As LinkedIn's parent company, Microsoft, navigates this controversy, McInerney pointed out a fundamental challenge: "Businesses like Microsoft can say they trained their AI, and it made an automated decision. But a fundamental piece of GDPR is your right to challenge an automated decision." This principle, he noted, becomes problematic when "nobody at a company knows how the algorithms work because they've become so complicated."

The debate underscores a broader trend in the tech industry, where companies are racing to leverage AI capabilities while grappling with ethical considerations and user trust. "Compliance is good -- ethics are better," McInerney said. "By prioritizing your customers, it's proven to create stronger relationships, increased brand loyalty and higher sales."

Concerns over privacy in AI training data have grown as AI systems become more powerful and widespread. Central to the issue is how AI models, especially large language models like OpenAI's GPT-4 or Google's Gemini, are trained on vast amounts of publicly available information scraped from the internet, including websites, social media and databases, often without explicit consent.

In recent lawsuits, authors including George R.R. Martin and Sarah Silverman filed complaints against OpenAI and Meta, claiming that their copyrighted works were used to train AI models without permission. The cases raised alarms about how AI companies collect and use personal and proprietary data; the central argument is that AI companies have scraped this data en masse, sidestepping intellectual property rights and individual privacy.

Controversy also erupted when Clearview AI, a facial recognition startup, was discovered to have been scraping billions of images from social media platforms to train its AI system without users' knowledge. Privacy advocates expressed concern that such practices could lead to violations of personal privacy, particularly when sensitive information is used to profile or track individuals.
The European Union's AI Act specifically addresses these concerns by regulating high-risk AI applications and requiring transparency in data usage. This regulatory framework may be a harbinger of more stringent laws as lawmakers recognize the need to protect personal data from being used in AI training without consent.

As the Nov. 20 deadline approaches, businesses and individual users alike are left to ponder the implications of their professional data potentially fueling AI systems, and whether the benefits of enhanced services outweigh the privacy concerns in an increasingly AI-driven world.
[2]
LinkedIn is training AI on you -- unless you opt out with this setting
You might have used LinkedIn to hunt for a new job, or to keep in touch with colleagues from the early days of your career. But LinkedIn has been using you, too.

Last week, the professional network added a new data privacy setting that caught many by surprise. By default, it granted itself permission to use information shared on the service to train its artificial intelligence. Unless you toggle this new setting to off, LinkedIn considers everything fair game -- your posts, articles, even your videos.

To opt out, log into your LinkedIn account, tap or click on your headshot, and open the settings. Then select "Data privacy" and turn off the option under "Data for generative AI improvement." Flipping that switch will prevent the company from feeding your data to its AI, with a key caveat: the results aren't retroactive. LinkedIn says it has already begun training its AI models with user content, and that there's no way to undo it.

Spokesman Greg Snapper said LinkedIn uses people's data to train AI to "help people all over the world create economic opportunity" by fleshing out tools to help them find new jobs and learn new skills. "If we get this right, we can help a lot of people at scale," he said.

LinkedIn, then, would clearly love it if its AI features landed you a job where you were fairly valued for the quality of your work. But it's hard not to think of it the other way, too: Is LinkedIn fairly valuing the work you've contributed to improving its AI? Work for which you were not directly compensated, and which you may not have been told was being used?

For some, that will seem like a fair trade-off. Others are unsettled by how LinkedIn handled the situation. "Hard-to-find opt-out tools are almost never an effective way to allow users to exercise their privacy rights," said F. Mario Trujillo, a staff attorney at the Electronic Frontier Foundation. "If companies really want to give users a choice, they should present users with a clear 'yes' or 'no' consent choice."

LinkedIn isn't alone in turning user data into AI training material. Your chats with OpenAI's ChatGPT and Google's Gemini are used to improve those chatbots' performance over time, and similarly require you to opt out rather than in. And during a recent hearing in the Australian Parliament, Meta's director of privacy policy, Melinda Claybaugh, confirmed that the company had been scraping public photos and text on Facebook and Instagram to train its AI models for years longer than expected.

LinkedIn, which is owned by Microsoft, says it has been notifying users about its AI data policy through emails, text messages and banners on its website. But the change still caught many users off guard -- and the move appears to give users less time to respond than even its parent company has offered. In August, Microsoft announced that it would begin training its AI Copilot tool on people's interactions with it, along with data collected from usage of its Bing search engine and its Microsoft Start news feed. But unlike LinkedIn, Microsoft said it would inform consumers of the option to opt out of data collection in October, and would begin training its AI models only 15 days after the option is made available.

Why didn't LinkedIn do a similarly informed rollout? Snapper wouldn't say whether one had been considered. "As a company, we're really just focused on 'How can we do this better next time?'" he said.
[3]
LinkedIn is training AI on you -- unless you opt out with this setting
The professional network now by default grants itself permission to use anything you post to train its artificial intelligence.
[4]
How to stop LinkedIn, Facebook, and Instagram from hoovering up your data to train their AI
You don't have to let social media services harvest your data to train their generative AI, as many companies do these days. You can opt out -- if you can figure out how.

The problem was hammered home last week when tech news site 404Media reported that LinkedIn had started training its AI on its users' posts by default, without letting users know about the change. LinkedIn is relying on the data to train AI that will help users write their posts or recommend content to them.

A social media platform hoovering up posts and personal information for its AI systems isn't entirely new. Meta has been harvesting Facebook and Instagram user data since last year, while X, formerly Twitter, has done the same since July. TikTok, whose data policies are under scrutiny amid a possible U.S. ban, hasn't clearly stated whether it harvests user data for any generative AI tools. Social media companies have long been criticized for how they collect, use, and share user data. In a report on Thursday, the Federal Trade Commission called out social media services for their "vast surveillance" of users and "woefully inadequate" data controls.

LinkedIn opted hundreds of millions of its users outside of the EU and the UK into its AI training push without specifically telling them about it. The company told 404Media on Wednesday that it will add new language to its terms of service "shortly." Earlier this month, LinkedIn made a broader update to its user policy, saying that when training AI, it will "seek to minimize personal data in the data sets used to train the models" by redacting or removing personal information from data such as suggested posts or messages, according to a LinkedIn explanation of the policy. LinkedIn said in a statement to Fortune that it doesn't harvest direct messages, or messages sent privately between users.

LinkedIn spokesperson Greg Snapper told Fortune that the company believes users should have control over their data. "We've always used some form of automation in LinkedIn products, and we've always been clear that users have the choice about how their data is used," he said.

Opting out of LinkedIn's data harvesting for AI requires visiting the service's "Settings & Privacy" pages. They are accessible by clicking on your headshot in the upper right on a desktop computer and then clicking on "Account." You must then click on "Data privacy" in the left sidebar, which brings up several options, including "Data for Generative AI improvement." To opt out, toggle the switch next to the line that reads "Use my data for training content creation AI models." LinkedIn has turned this option on by default, meaning your personal data and LinkedIn content are used to train "content creation AI models" -- and not just for LinkedIn, but for its "affiliates" as well. LinkedIn is owned by Microsoft, which has also partnered on AI with OpenAI through Microsoft's multibillion-dollar investment in the maker of ChatGPT. Some of LinkedIn's models are provided by Microsoft's Azure OpenAI service, according to LinkedIn's FAQ.

But opting out of AI training does nothing to reverse the company's prior collection of your data, and LinkedIn did not specify when it started harvesting user data to train AI. "Opting out means that LinkedIn and its affiliates won't use your personal data or content on LinkedIn to train models going forward, but does not affect training that has already taken place," according to LinkedIn's FAQ page.
If the multi-step process is too much, Cassie Kozyrkov, former chief decision scientist at Google, shared a one-click opt-out link with her followers that lets users avoid having to hunt for the appropriate page. In her post, Kozyrkov, whom LinkedIn has designated a "top voice" and who has more than 600,000 followers, criticized LinkedIn for its data harvesting, saying it could "clone your posts without crediting you."

As for other social media platforms, opting out of data harvesting involves a similarly complicated process.

To opt out of X using your data to train its AI chatbot Grok, go to "Settings" on a desktop computer and select "Privacy and safety." Then, under "Data sharing and Personalization," select Grok. Here you can uncheck the box that permits data sharing; you can also select an option to delete your conversation history with Grok. On this page, X explains that your posts, as well as your interactions, prompts, and results with Grok, are fair game for training and fine-tuning its AI model, and that they may be shared with Elon Musk's xAI, which provides the chatbot to X. The chatbot is available only to paid users but is trained on any public posts.

It's a trickier situation for Meta, which just last week acknowledged that it has used every public Facebook and Instagram post from non-EU users since 2007 to train its AI. Since June, Meta has also been scraping interactions with chatbots on Facebook, Instagram, Threads, and WhatsApp to train AI. Currently, if you live in a country without a data privacy law, the only way to opt out of Meta using your posts for AI training is to set your account to private. This does not protect against data harvesting of any public posts you appear in, or of your own previous posts that were already scraped.

TikTok did not respond to Fortune's requests for comment about whether it builds generative AI models with user data. This summer, TikTok launched a genAI advertising tool for brands to create videos, and TikTok's corporate owner, the Chinese tech company ByteDance, has launched various AI apps for users outside of China. Under "Account and privacy settings," TikTok says that it collects and analyzes face and voice information "to improve safety and user experience, to recommend and moderate content, and for analytics and demographic classification." To stop your face and voice from being used, TikTok tells users not to upload any photos or videos of themselves and to delete any they've already posted.
LinkedIn, with its 930 million users, is using member data to train AI models, sparking a debate on data privacy and the need for transparent opt-out options. This practice has raised concerns among privacy advocates and users alike.
LinkedIn, the professional networking platform boasting 930 million users, has been utilizing its vast trove of user data to train artificial intelligence (AI) models, a practice that has recently come under scrutiny [1]. This revelation has ignited a fierce debate about data privacy and the ethical implications of using personal information for AI development without explicit user consent.

The Microsoft-owned platform has been leveraging a wide range of user-generated content, including posts, articles, and videos, to enhance its AI capabilities, though the company says private messages are excluded from training data [2]. This practice extends to improving search functions and content recommendations, and potentially to developing new AI-powered features for the platform.

In response to growing concerns, LinkedIn has introduced an opt-out setting for users who wish to exclude their data from AI training [3]. However, critics argue that this option is not prominently displayed and that many users remain unaware of its existence. The process to opt out requires navigating through several menu options, raising questions about the transparency of LinkedIn's data practices.

LinkedIn's approach to AI training is not unique in the tech industry. Other social media giants, including Facebook, Instagram, X, and TikTok, have also been scrutinized for similar practices [4]. This trend highlights a broader issue in the digital landscape, where user data has become a valuable asset for companies developing AI technologies.

The use of user data for AI training has raised legal questions, particularly in regions with strict data protection laws, such as the European Union with its General Data Protection Regulation (GDPR). Privacy advocates argue that companies should obtain explicit consent before using personal data for purposes beyond the primary function of their services [1].

As the debate continues, there is a growing call for increased transparency and user control over personal data. Experts suggest that platforms like LinkedIn should make opt-out options more accessible and provide clearer information about how user data is being used [2]. This would empower users to make informed decisions about their digital footprint and their participation in AI development.

The controversy surrounding LinkedIn's AI training practices underscores the complex relationship between technological advancement and individual privacy rights. As AI continues to evolve, striking a balance between innovation and data protection will remain a critical challenge for tech companies and policymakers alike [3].
References
[1] "LinkedIn's 930 Million Users Unknowingly Train AI, Sparking Data Privacy Concerns," PYMNTS.com
[2] "LinkedIn is training AI on you -- unless you opt out with this setting"
[3] "LinkedIn is training AI on you -- unless you opt out with this setting"
[4] "How to stop LinkedIn, Facebook, and Instagram from hoovering up your data to train their AI," Fortune
LinkedIn has been using user data to train its AI systems, sparking privacy concerns. The platform now offers an opt-out option for users who wish to exclude their data from AI training.
19 Sources
LinkedIn has stopped collecting UK users' data for AI training following regulatory scrutiny. This move highlights growing concerns over data privacy and the need for transparent AI practices in tech companies.
8 Sources
LinkedIn faces scrutiny over its use of user data for AI training without explicit consent. The company's actions have sparked debates about data privacy and ethical AI development practices.
3 Sources
LinkedIn is embroiled in a class-action lawsuit accusing the platform of using private messages from Premium users to train AI models without consent, raising concerns about data privacy and ethical AI development practices.
18 Sources
A class-action lawsuit against LinkedIn, alleging misuse of user data for AI training, has been dismissed after the company provided evidence refuting the claims. The case highlights growing concerns over data privacy in AI development.
3 Sources