Curated by THEOUTPOST
On Thu, 23 Jan, 12:08 AM UTC
18 Sources
[1]
LinkedIn hit with lawsuit alleging private messages were used to train AI models
LinkedIn is facing a class-action lawsuit over allegations that it used private messages to train its AI models. The lawsuit, filed in the U.S. District Court for the Northern District of California, accuses the Microsoft-owned professional networking site of "unlawfully disclosing its Premium customers' private messages to third parties" and "concealing" its practices by "stealthily altering its privacy policies and statements." A key part of the lawsuit accuses LinkedIn of disclosing private InMail messages to third parties to train its model. A spokesperson for LinkedIn said, "we are not using member messages to train models as alleged in the complaint." The issue of obtaining training data for AI models is a contentious one, and LinkedIn is not the first company to be accused of misconduct. Google, Microsoft, and OpenAI have all faced lawsuits on behalf of users for using their personal data without prior knowledge or consent. The lawsuit against LinkedIn is on behalf of paying LinkedIn Premium users who ostensibly pay for enhanced privacy features. The allegations center on a privacy setting introduced in August 2024 that let LinkedIn users opt out of sharing their personal data to train its AI models; however, the data-sharing setting was toggled on by default. A month later, LinkedIn updated its privacy policy to say the company can use user data to train its models and that the data might be shared with third parties. The lawsuit accuses LinkedIn of violating data privacy laws and breaching its contract by training on user data, including InMail messages, without knowledge or consent and "cover[ing] its tracks" by retroactively changing its privacy policy. On behalf of LinkedIn Premium users, the lawsuit seeks damages of $1,000 per plaintiff.
[2]
LinkedIn sued for sharing user data without consent to train AI models
The lawsuit focuses on LinkedIn Premium users who sent or received private InMail messages. LinkedIn, the professional social media platform owned by Microsoft, is facing a lawsuit filed by Premium customers. The customers allege that LinkedIn shared their private messages with third parties without their consent to train generative AI models. The proposed class-action lawsuit was filed on Tuesday, representing millions of LinkedIn Premium users. According to the lawsuit, LinkedIn introduced a privacy setting last August that allowed users to enable or disable the sharing of their personal data, reports Reuters. However, the plaintiffs claim that LinkedIn updated its privacy policy on September 18, 2024, to include a clause stating that user data could be used for training AI models. They also allege that a "frequently asked questions" section linked to the updated policy mentioned that opting out "does not affect training that has already taken place." The complaint argues that this move shows LinkedIn was fully aware it violated customers' privacy and its promise to use personal data only to support and improve its platform. It also claims LinkedIn made these changes discreetly to avoid public backlash and legal consequences. The lawsuit focuses on LinkedIn Premium users who sent or received private InMail messages. It alleges their private information was shared with third parties for AI training purposes before the policy update in September. The plaintiffs are seeking damages for breach of contract, violations of California's unfair competition law, and $1,000 per person under the federal Stored Communications Act. LinkedIn has not yet responded to the allegations, but the lawsuit raises significant concerns about how personal data is used to develop AI technologies.
It highlights the growing tension between user privacy and the demands of advanced AI training.
[3]
LinkedIn Sued for Alleged Misuse of User Data For AI Training
Microsoft-owned networking platform LinkedIn is facing a lawsuit in the US over allegedly sharing its users' private messages with other companies to train AI models, BBC reported. The lawsuit accuses LinkedIn of introducing a privacy setting in August 2024 that automatically opted people in to sharing their personal data with third parties for AI training purposes. Soon after, in September, the company updated its privacy policy to reflect that account holders' data could be used for AI training purposes. The report says that the lawsuit has sought $1,000 per user for LinkedIn's alleged violations of the US federal Stored Communications Act. The privacy policy explicitly states that users have the option to opt out of allowing their personal data and content to be used for training generative AI. However, until users actively choose to opt out, the company will continue using their data for training purposes. Further, opting out will prevent LinkedIn and its affiliates from using a user's future personal data and content to train AI models, but it won't retroactively remove data already used to develop existing AI models. Despite what the lawsuit claims, LinkedIn's privacy policy FAQ does not appear to mention that the company will use people's messages for training purposes. It says that the company will use people's interactions with its generative AI features, their posts and articles, and feedback they have provided the LinkedIn team for training purposes. The lawsuit comes at a time when AI companies seem to be running out of data sources. An MIT research study found that over 28% of the most critical and maintained sources in C4, a massive internet text dataset used by AI companies for model training, are now restricted from use. Further, a range of organisations now charge AI companies for access to their data. For instance, Reddit has entered into licensing agreements with Google and OpenAI to give them access to its data for training purposes.
Similarly, OpenAI has signed a deal with News Corp giving it access to the content of The Wall Street Journal, New York Post, and The Daily Telegraph. Given the necessity of data in developing models, AI companies that run social media platforms are turning to their own platforms for data as well. Just like LinkedIn, Meta changed its privacy policy in June last year to allow it to access users' posts, likes, and comments for AI training.
[4]
LinkedIn sued for allegedly using private messages to train AI
LinkedIn's actions are especially egregious as Premium users pay the company for heightened privacy, the lawsuit claims. A legal complaint has been filed against LinkedIn alleging that the professional networking platform unlawfully disclosed private messages from its Premium subscribers to third parties in order to train generative artificial intelligence (AI) models. The plaintiff, a LinkedIn Premium user, filed the lawsuit in the U.S. District Court for the Northern District of California earlier this week on behalf of himself and other paying users of the service, accusing the company of disclosing "incredibly sensitive and potentially life-altering information" regarding employment, compensation and other private communications to third-party "affiliates" within its owner Microsoft's corporate structure without their permission. Moreover, the lawsuit claims that by using private discussions to train AI models, the company has "permanently embedded" customer data in its AI systems, exposing customers to future unauthorised usage. Last year, LinkedIn updated its terms of service to confirm that it will use member data to train generative AI models. In its updated privacy policy, the company specified that it will use "privacy enhancing technologies to redact or remove personal data" from its training sets, and while it offers users an opt-out option, the opt-out is turned off by default, meaning data sharing is enabled unless users change it. EU users, though, remained unaffected by these changes. However, the lawsuit states that LinkedIn initially "unilaterally" disclosed its user data for AI training in August 2024, "discreetly" introducing an opt-out option only after news reports surfaced in mid-September, prompting "harsh" public backlash, and adds that the company did not offer to delete the allegedly non-consensually acquired user data from existing AI models.
The lawsuit claims that LinkedIn's alleged actions are especially serious since Premium members pay fees for their subscriptions, which include heightened privacy protections, and requests the court to order LinkedIn to delete all AI models trained using Premium users' private messages and pay $1,000 in damages per user affected by the company's alleged actions. Responding to a media request, a LinkedIn spokesperson told SiliconRepublic.com that the lawsuit contains "false claims with no merit". Last October, the Irish Data Protection Commission, concluding a nearly six-year long investigation into LinkedIn, fined the company €310m after finding that LinkedIn's data processing practices infringed on multiple articles of the EU General Data Protection Regulation (GDPR).
[5]
LinkedIn facing lawsuit over accusations private messages used to train AI
LinkedIn made changes to its FAQs and privacy policy to "cover its tracks," the lawsuit alleges. LinkedIn has been accused of sharing private messages and user data with third parties for AI training in a US lawsuit. The Microsoft-owned job site has increasingly looked to position itself as a standard social media platform and has introduced a number of AI tools and features for LinkedIn Premium users. The lawsuit claims LinkedIn "quietly" introduced a new privacy setting in August 2024 which opted users into sharing their data with third parties for AI training purposes. The lawsuit goes on to state that after this change to privacy settings, the company updated its privacy policy to say that user information could be used for AI training, and that the FAQ section was also changed to state users could choose not to have their data shared with third parties for AI training, but that opting out would not affect data that had already been used for AI training. "This behaviour suggests that LinkedIn was fully aware that it had violated its contractual promises and privacy standards and aimed to minimise public scrutiny," alleges the lawsuit, which seeks $1,000 per user for violations of the US federal Stored Communications Act, and an unspecified amount for violations of California's unfair competition law and breach of contract. A spokesperson for LinkedIn addressed the allegations, stating, "these are false claims with no merit" (via BBC). The changes to the LinkedIn privacy policy were not enacted for users in the UK, European Economic Area, and Switzerland, according to an email sent to users last year. In 2024, LinkedIn settled a class action lawsuit for $6.625 million after being accused of overcharging advertisers by artificially inflating the number of views video adverts received between January 2015 and May 2023.
[6]
LinkedIn sued for allegedly training AI on private messages
Microsoft's IG-for-suits insists lawsuit's claims are without merit Microsoft's LinkedIn was this week accused of providing its third-party partners with access to Premium customers' private InMail messages for AI model training. A lawsuit [PDF], filed on behalf of Alessandro De La Torre in a California federal court, alleges InMail messages were fed to neural networks, based on LinkedIn's disclosure last year. The Microsoft-owned goliath announced policy changes reflecting its use of member posts and personal data to train AI models and its provision of said data to third parties for that purpose. LinkedIn exempted customers in Canada, the EU, EEA, UK, Switzerland, Hong Kong, or Mainland China from having their LinkedIn data used "to train content-generating AI models." Customers in the US, where there's still no federal privacy law, were offered a setting, enabled by default, titled "Data for Generative AI Improvement." LinkedIn explains, "This setting controls the training of generative AI models used to create content. When this setting is on LinkedIn and its affiliates may use your personal data and content you create on LinkedIn for that purpose." So LinkedIn acknowledges it will use "personal data and content you create" for AI training and will offer that data to third parties for model training. The question raised by the lawsuit is whether LinkedIn has been including the contents of private InMail messages, available to paying subscribers, as part of the personal data being shared. The lawsuit claims, "LinkedIn breached its contractual promises by disclosing its Premium customers' private messages to third parties to train generative artificial intelligence ('AI') models. Given its role as a professional social media network, these communications include incredibly sensitive and potentially life-altering information about employment, intellectual property, compensation, and other personal matters."
It focuses on Premium customers - those paying for Premium tier subscriptions (Premium Career, Premium Business, Sales Navigator, and Recruiter Lite) - because subscribers agreed to a separate contract, the LinkedIn Subscription Agreement (LSA), which makes specific privacy commitments not extended to non-paying LinkedIn members. "In Section 3.2 of the LSA, LinkedIn promises not to disclose its Premium customers' confidential information to third parties," the complaint notes, alleging a violation of the US Stored Communications Act, breach of contract, and unfair competition under California law. But the complaint offers no indication that the plaintiffs have any evidence of InMail contents being shared. Rather, the legal filing appears to assume InMail messages have been included in AI training data based on LinkedIn's alleged attempts to cover its tracks through a series of unannounced policy language changes and on the company's failure to publicly declare that it never accessed InMail contents for training. "[T]o date, LinkedIn has never publicly denied that it disclosed the contents of its Premium customers' InMail messages to third parties for the purpose of training generative AI models," the complaint says. The Register asked Edelson PC, the law firm representing the plaintiff, whether anyone there has reason to believe, or evidence, that LinkedIn has actually provided private InMail messages to third parties for AI training. Though our inquiry was acknowledged, we've not heard back. LinkedIn denied the allegations. "These are false claims with no merit," a LinkedIn spokesperson said. ®
[7]
LinkedIn Sued for Sharing Customer Data to Train AI Models: Report
LinkedIn denies the claims, calling them false and lacking merit. LinkedIn is facing a proposed class action lawsuit from its Premium customers, who reportedly claim that the business-focused social media platform disclosed their private messages to third parties for AI training without consent. Filed in federal court in San Jose, California, the lawsuit alleges that LinkedIn quietly introduced a privacy setting in August 2024 allowing users to control the sharing of their personal data, according to a report by Reuters. Customers said that LinkedIn then discreetly updated its privacy policy on September 18 to say that data could be used to train AI models. A "frequently asked questions" hyperlink also explained that opting out "does not affect training that has already taken place," the report said. This attempt to "cover its tracks" suggests LinkedIn was fully aware it violated customers' privacy and its promise to use personal data only to support and improve its platform, in order to minimise public scrutiny and legal fallout, the complaint said, according to the report. The lawsuit seeks unspecified damages for breach of contract, unfair competition, and violations of the federal Stored Communications Act, with a potential USD 1,000 fine per person. LinkedIn, owned by Microsoft, denied the claims, calling them "false." LinkedIn reportedly said in a statement: "These are false claims with no merit." This legal action comes several hours after US President Donald Trump announced a joint venture involving Microsoft-backed OpenAI, Oracle, and SoftBank aimed at developing AI infrastructure in the United States.
[8]
LinkedIn Sued for Disclosing Customer Information to Train AI Models
Microsoft's LinkedIn has been sued by Premium customers who said the business-focused social media platform disclosed their private messages to third parties without permission to train generative Artificial Intelligence (AI) models. According to a proposed class action filed on Tuesday night on behalf of millions of LinkedIn Premium customers, LinkedIn quietly introduced a privacy setting last August that let users enable or disable the sharing of their personal data. Customers said LinkedIn then discreetly updated its privacy policy on Sept. 18 to say data could be used to train AI models, and in a "frequently asked questions" hyperlink said opting out "does not affect training that has already taken place." This attempt to "cover its tracks" suggests LinkedIn was fully aware it violated customers' privacy and its promise to use personal data only to support and improve its platform, in order to minimize public scrutiny and legal fallout, the complaint said. The lawsuit was filed in the San Jose, California, federal court on behalf of LinkedIn Premium customers who sent or received InMail messages, and whose private information was disclosed to third parties for AI training before September 18. It seeks unspecified damages for breach of contract and violations of California's unfair competition law, and $1,000 (roughly Rs. 86,492) per person for violations of the federal Stored Communications Act. LinkedIn said in a statement: "These are false claims with no merit." A lawyer for the plaintiffs had no immediate additional comment. The lawsuit was filed several hours after U.S. President Donald Trump announced a joint venture among Microsoft-backed OpenAI, Oracle and SoftBank, with a potential $500 billion (roughly Rs. 43,25,275 crore) of investment, to build AI infrastructure in the United States. The case is De La Torre v. LinkedIn Corp, U.S. District Court, Northern District of California, No. 25-00709. © Thomson Reuters 2025
[9]
Premium users sue LinkedIn for abusing their data to train AI models
A hot potato: LinkedIn users have recently filed a class action lawsuit against the Microsoft-owned business and employment social network. Premium subscribers claim the company violated their privacy by using personal data to train AI algorithms without their knowledge or permission. Millions of LinkedIn Premium customers intend to fight the social network in court. A lawsuit filed in a San Jose, California federal court states that users are seeking compensation after discovering that LinkedIn used their private personal messages for AI training purposes. The lawsuit states that LinkedIn tried to cover its tracks and was aware of the unlawful practice against its customers. The class action comes after LinkedIn quietly updated its privacy policy in September 2024, stating that it uses people's data to train machine learning algorithms. The lawsuit alleges that the social network began using personal data, including employment history, individual details, and private messages, even before announcing the policy change. "These are false claims with no merit," a LinkedIn spokesperson told BBC News. LinkedIn says the alleged inappropriate data use did not involve customers in Europe, the UK, or Switzerland, and the company changed the website's FAQ section to explain that users could opt out of data sharing. However, opting out would not apply to AI training that had already taken place. The lawsuit says LinkedIn's behavior indicates a pattern of attempting to cover its tracks: according to the complaint, the company knew it was violating the law and breaching the contractual promises of the Premium subscription and its privacy standards, and made a blatant attempt to minimize public scrutiny of the alleged data abuse. The plaintiffs seek $1,000 in compensation per user, accusing LinkedIn of violating the US Stored Communications Act. Furthermore, the lawsuit asks that the company pay an additional, unspecified amount for breach of contract and violation of California's unfair competition law.
LinkedIn has shown no interest in settling the potentially damaging complaint, fully maintaining that the suit is frivolous. Like most tech companies, LinkedIn is pouring money into AI models and generative AI to boost profits and future business prospects. Ironically, LinkedIn has disavowed any inaccurate, misleading, or fake content its AI model produces, warning that such information is the user's responsibility.
[10]
LinkedIn accused of using private messages to train AI
A US lawsuit filed on behalf of LinkedIn Premium users accuses the social media platform of sharing their private messages with other companies to train artificial intelligence (AI) models. It alleges that in August last year, the world's largest professional social networking website "quietly" introduced a privacy setting, automatically opting users in to a programme that allowed third parties to use their personal data to train AI. It also accuses the Microsoft-owned company of concealing its actions a month later by changing its privacy policy to say user information could be disclosed for AI training purposes. A LinkedIn spokesperson told BBC News that "these are false claims with no merit."
[11]
Microsoft's LinkedIn Sued Over Using DMs to Train AI
LinkedIn is now facing a lawsuit over using customer data and communications to train its AI models, according to a new proposed class-action filing submitted in California this week. A LinkedIn Premium subscriber is suing the Microsoft-owned networking platform because they believe LinkedIn has been "unlawfully disclosing" their private direct messages (DMs) to third parties for the purpose of training AI models. The lawsuit accuses LinkedIn of violating the Stored Communications Act and California's Unfair Competition Law, and of breach of contract. Last year, LinkedIn quietly started training AI models on your LinkedIn data -- and added a button in the settings menu that lets you opt out of your data being used to train AI. But this setting is on by default, meaning some users may not have been aware that their data was being used automatically. Shortly after this announcement, UK regulators raised user data privacy concerns, and LinkedIn stopped training on UK-based users' data. LinkedIn previously said it also isn't using EU user data or Switzerland-based user data. At the time, LinkedIn didn't specify exactly who it's sharing user data with to train said AI tools, instead stating that its "affiliates" would be given the data. When I previously pressed LinkedIn for clarification, a spokesperson told me that "affiliates" refers to any Microsoft-owned company (but not Microsoft-backed OpenAI). That said, Microsoft has acquired more than 270 companies since 1986, including five AI companies, so it remains unclear who exactly is using this data. The lawsuit suggests that "another provider" is using the LinkedIn customer data for AI training.
"Private discussions could surface in other Microsoft products, and customers' data is now permanently embedded in AI systems without their consent, exposing them to future unauthorized use of their personal information," the complaint argues, adding: "LinkedIn has not offered to delete the data from the existing AI models or retrain them to eliminate their reliance on the disclosed information." LinkedIn, however, denies the claims in the lawsuit, telling multiple news outlets: "these are false claims with no merit." The filing is named De La Torre v LinkedIn Corp, US District Court, Northern District of California, No. 25-00709. The Plaintiff is seeking $1,000 in damages and potentially other relief as compensation. If you want to opt out from LinkedIn using your data to train AI, you can turn off this setting by going to Settings > Data Privacy > Data for Generative AI Improvement and turn the toggle off.
[12]
LinkedIn may snoop on your private messages to train AI
Sharing isn't always caring, which seems to be the theme of a lawsuit accusing LinkedIn of sharing users' private messages with other companies to train AI models, according to the BBC. A LinkedIn Premium user filed the lawsuit in California on behalf of himself and "all others" in the same situation. The lawsuit claims LinkedIn was aware of its actions, saying, "This behavior suggests that LinkedIn was fully aware that it had violated its contractual promises and privacy standards and aimed to minimize public scrutiny." The lawsuit also says, "LinkedIn's actions... indicate a pattern of attempting to cover its tracks." However, a LinkedIn spokesperson told BBC News that "these are false claims with no merit." The lawsuit alleges that in August of last year LinkedIn added a privacy setting that automatically opted users into a program that allowed third parties to use their personal information to train AI. Not only that, but the lawsuit also claims LinkedIn tried to cover its tracks by changing its privacy policy to say user information might be disclosed for AI training purposes. It further accuses LinkedIn of changing the frequently asked questions section to say users can choose not to share their data for AI purposes, but that opting out doesn't affect already completed training. The lawsuit seeks $1,000 per user for alleged violations of the U.S. federal Stored Communications Act and an unspecified amount for the alleged breach of contract and violation of California's unfair competition law. In addition, an email LinkedIn sent to users last year says data sharing for AI purposes was not enabled in countries such as Switzerland, the UK, and the European Economic Area. This isn't the first class-action lawsuit LinkedIn has faced: it previously paid $13 million to settle a suit over complaints that the company was sending too many emails to users without permission, which resulted in checks of about $20 for class members. This time, however, LinkedIn is not likely to be sending $20 checks.
[13]
Microsoft's LinkedIn sued for disclosing customer information to train AI models
(Reuters) - Microsoft's LinkedIn has been sued by Premium customers who said the business-focused social media platform disclosed their private messages to third parties without permission to train generative artificial intelligence models. According to a proposed class action filed on Tuesday night on behalf of millions of LinkedIn Premium customers, LinkedIn quietly introduced a privacy setting last August that let users enable or disable the sharing of their personal data. Customers said LinkedIn then discreetly updated its privacy policy on Sept. 18, 2024 to say data could be used to train AI models, and in a "Frequently Asked Questions" hyperlink said opting out "does not affect training that has already taken place." This attempt to "cover its tracks" suggests LinkedIn was "fully aware" it violated customers' privacy, and its promise to use personal data only to support and improve its platform, to minimize public scrutiny and legal fallout, the complaint said. The lawsuit was filed in the San Jose, California, federal court on behalf of LinkedIn Premium customers who sent or received InMail messages, and whose private information was disclosed to third parties for AI training before Sept. 18. It seeks unspecified damages for breach of contract and violations of California's unfair competition law, and $1,000 per person for violations of the federal Stored Communications Act. Microsoft did not immediately respond on Wednesday to requests for comment. A lawyer for the plaintiffs had no immediate additional comment. The lawsuit was filed several hours after U.S. President Donald Trump announced a joint venture among Microsoft-backed OpenAI, Oracle and SoftBank, with a potential $500 billion of investment, to build AI infrastructure in the United States. The case is De La Torre v LinkedIn Corp, U.S. District Court, Northern District of California, No. 25-00709. (Reporting by Jonathan Stempel in New York; Editing by Richard Chang)
[14]
Lawsuit alleges LinkedIn shared DMs for third-party AI training
Summary LinkedIn is being sued for allegedly sharing private user data for AI training without proper consent. Suit seeks $1,000 per affected user. LinkedIn denies wrongdoing and claims the lawsuit's allegations have "no merit." How training data for AI is sourced has been a point of contention for as long as companies have been training AI models. Corporations assert that training AI on publicly accessible information on the internet constitutes fair use of that information, because the models that incorporate data parsed from YouTube videos or blogs, for example, are transformative -- in other words, that they fundamentally change the data they've absorbed before redistributing it, creating legally distinct works. Whether that argument can withstand long-term legal scrutiny remains to be seen. Microsoft-owned LinkedIn has found itself at the center of a different kind of AI training data controversy. As reported by the BBC (and spotted by TechRadar), a California lawsuit accuses LinkedIn of sharing private user data -- including user-to-user direct messages -- with third parties as fodder for AI training, without adequately notifying users or giving them the opportunity to opt out of the arrangement. According to the suit, LinkedIn "quietly" introduced a new privacy setting that automatically opted users into a program that shares user data with third parties for the purposes of training AI. The suit also alleges that LinkedIn updated its FAQ section to say that users had the option not to share data in this way, but also that opting out wouldn't have any effect on data that had already been shared. LinkedIn denies any wrongdoing The suit alleges that LinkedIn's policies described here were in violation of the Stored Communications Act. It's seeking damages of $1,000 for each user affected.
In part, the lawsuit reads that LinkedIn's actions "indicate a pattern of attempting to cover its tracks." For LinkedIn's part, a company spokesperson told BBC that the suit's assertions "are false claims with no merit."
[15]
Microsoft's LinkedIn sued for disclosing customer information to...
Microsoft's LinkedIn has been sued by Premium customers who said the business-focused social media platform disclosed their private messages to third parties without permission to train generative artificial intelligence models. According to a proposed class action filed on Tuesday night on behalf of millions of LinkedIn Premium customers, LinkedIn quietly introduced a privacy setting last August that let users enable or disable the sharing of their personal data. Customers said LinkedIn then discreetly updated its privacy policy on Sept. 18, 2024, to say data could be used to train AI models, and in a "Frequently Asked Questions" hyperlink said opting out "does not affect training that has already taken place." This attempt to "cover its tracks" suggests LinkedIn was "fully aware" it violated customers' privacy, and its promise to use personal data only to support and improve its platform, to minimize public scrutiny and legal fallout, the complaint said. The lawsuit was filed in the San Jose, Calif., federal court on behalf of LinkedIn Premium customers who sent or received InMail messages, and whose private information was disclosed to third parties for AI training before Sept. 18. It seeks unspecified damages for breach of contract and violations of California's unfair competition law, and $1,000 per person for violations of the federal Stored Communications Act. Microsoft did not immediately respond on Wednesday to requests for comment. A lawyer for the plaintiffs had no immediate additional comment. The lawsuit was filed several hours after President Donald Trump announced a joint venture among Microsoft-backed OpenAI, Oracle and SoftBank, with a potential $500 billion of investment, to build AI infrastructure in the United States. The case is De La Torre v LinkedIn Corp, U.S. District Court, Northern District of California, No. 25-00709.
[16]
LinkedIn used private DMs to train AI, lawsuit says
LinkedIn has been sued by its Premium customers for allegedly disclosing personal information to train AI models without their consent. The lawsuit, filed Tuesday in the U.S. District Court for the Northern District of California, seeks class-action status on behalf of millions of LinkedIn Premium users. The suit alleges that LinkedIn, starting in August 2024, introduced a privacy setting that automatically enrolled users in a data-sharing program aimed at training AI models. The plaintiffs argue that this move was made without adequate user consent and was disguised under a privacy policy update on Sept. 18, allowing the platform to access and use private messages and other user data for AI development purposes. According to the lawsuit, LinkedIn's Premium customers, particularly those who exchanged messages using the platform's InMail feature, claim that their confidential information was shared with third-party entities to assist in AI development. The legal filing suggests that LinkedIn's actions were not only intentional but were also kept from users, contravening the original promise to use personal information solely for platform enhancements. LinkedIn, which is owned by Microsoft, has called the allegations baseless. A company spokesperson dismissed the accusations as unfounded, maintaining that LinkedIn adheres to strict privacy standards to protect user data. The plaintiffs, who are seeking unspecified financial damages, have indicated that if successful, each participant in the case may be awarded $1,000 in compensation. This legal challenge reflects a broader trend of increasing scrutiny of how tech companies handle user data, especially with the burgeoning use of generative AI tools across sectors such as finance and retail. The lawsuit also raises significant questions about user consent and transparency.
It puts a spotlight on the industry's practices of leveraging vast datasets, often compiled through seemingly innocuous user interactions, to enhance AI capabilities. Recent years have seen growing pressure on major tech firms to ensure transparency and obtain explicit consent from users before using their data for purposes beyond the immediate scope of service. As AI continues to integrate more deeply into corporate strategies, balancing technological advancement with ethical considerations around data privacy remains a critical challenge. Should the court rule against LinkedIn, it may set a precedent in how user consent for data usage must be approached. It would also underscore the necessity for tech companies to clearly communicate privacy policies and opt-in mechanisms for data usage.
[17]
LinkedIn: Proposed Class Action Lawsuit's Claims Have 'No Merit' | PYMNTS.com
LinkedIn is facing a proposed class action lawsuit that alleges that the social media platform disclosed Premium customers' private InMail messages to third parties without permission to train generative artificial intelligence (AI) models. The proposed class action was filed Tuesday (Jan. 21) night, Reuters reported Wednesday (Jan. 22). It alleges that LinkedIn was "fully aware" it violated users' privacy and attempted to "cover its tracks" when it updated its privacy policy in September to say that data could be used to train AI models, and added in an FAQ link that using a privacy setting introduced in August to opt out of sharing personal data "does not affect training that has already taken place," according to the report. The proposed class action seeks damages for breach of contract, violation of California's unfair competition law and violations of the federal Stored Communications Act, per the report. Reached by PYMNTS, a LinkedIn spokesperson said in an emailed statement: "These are false claims with no merit." LinkedIn's privacy policy update and use of users as unwitting AI trainers ignited a firestorm over data privacy and consumer trust, PYMNTS reported in September. The company's move could force businesses to reconsider their digital footprint due to the risk of compromising sensitive information, David McInerney, commercial manager for data privacy at Cassie, told PYMNTS at the time. "A whopping 93% [of consumers] are concerned about the security of their personal information online," McInerney said. Apple recently agreed to pay $95 million to settle a privacy lawsuit that alleged that when its voice assistant Siri was activated unintentionally, it shared private discussions it overheard with Apple, and that Apple shared these communications with third parties without users' consent. The company told 9to5Mac at the time that it uses "Siri data to improve Siri."
"Siri data has never been used to build marketing profiles and it has never been sold to anyone for any purpose," the company said, per the report. "Apple settled this case to avoid additional litigation so we can move forward from concerns about third-party grading that we already addressed in 2019."
[18]
Microsoft's LinkedIn Hit With Lawsuit For Violating User Privacy, Allegedly Sharing Private Data For AI Model Training - Microsoft (NASDAQ:MSFT)
LinkedIn, owned by Microsoft Corporation MSFT, has been accused by Premium users of sharing private data for AI model training without their consent. What Happened: The class action lawsuit, filed in San Jose, California, on Tuesday, seeks damages for breach of contract and violations of California's unfair competition law. It also demands $1,000 per person under the federal Stored Communications Act, Reuters reported. The lawsuit alleges that LinkedIn quietly introduced a privacy setting last August, allowing users to control data sharing. However, the platform updated its privacy policy on Sept. 18, stating data could be used for AI training. A "frequently asked questions" hyperlink stated that opting out would not impact training that had already occurred. The complaint accuses LinkedIn of attempting to "cover its tracks" by updating its policy, suggesting the company knowingly violated privacy agreements. LinkedIn has dismissed the claims as "false" and "without merit." Why It Matters: AI startups OpenAI and Anthropic have also faced allegations of disregarding web scraping regulations, and Elon Musk's social media platform, X, formerly Twitter, was revealed to be sharing user posts with xAI's Grok for training purposes. Last year, LinkedIn reported a 10% year-over-year revenue increase in the first quarter. The platform highlighted the role of its AI-powered tools in changing how professionals sell, learn, and hire. In sales, new AI features are helping teams emulate top performers and achieve more profitable growth. In hiring, LinkedIn introduced its first AI agent, Hiring Assistant, designed to streamline recruitment by automating time-intensive tasks.
Microsoft reported first-quarter revenue of $65.60 billion, reflecting a 16% year-over-year increase. Price Action: Microsoft's stock climbed 4.13% on Wednesday, closing at $446.20. However, in after-hours trading, the shares dipped 0.27% to $445, according to data from Benzinga Pro.
LinkedIn is embroiled in a class-action lawsuit accusing the platform of using private messages from Premium users to train AI models without consent, raising concerns about data privacy and ethical AI development practices.
LinkedIn, the Microsoft-owned professional networking platform, is facing a class-action lawsuit alleging the unauthorized use of private messages from Premium users to train artificial intelligence (AI) models. The lawsuit, filed in the U.S. District Court for the Northern District of California, accuses LinkedIn of violating user privacy and breaching its contract with paying customers [1].
The legal complaint centers on LinkedIn's introduction of a new privacy setting in August 2024. According to the lawsuit, this setting automatically opted users into sharing their personal data with third parties for AI training purposes [2]. The plaintiffs allege that LinkedIn subsequently updated its privacy policy on September 18, 2024, to include a clause stating that user data could be used for training AI models [3].
The lawsuit accuses LinkedIn of "covering its tracks" by retroactively altering its privacy policies and statements [1]. It claims that LinkedIn made these changes discreetly to avoid public backlash and legal consequences [2]. The complaint argues that this behavior suggests LinkedIn was fully aware it was violating its contractual promises and privacy standards [5].
The lawsuit specifically focuses on LinkedIn Premium users, who pay for enhanced privacy features [4]. The plaintiffs argue that LinkedIn's actions are particularly egregious because Premium members pay for subscriptions that include heightened privacy protections [4]. The lawsuit seeks damages of $1,000 per plaintiff under the federal Stored Communications Act, as well as unspecified amounts for violations of California's unfair competition law and breach of contract [1][5].
A LinkedIn spokesperson has denied the allegations, stating, "We are not using member messages to train models as alleged in the complaint" [1]. The company maintains that the claims in the lawsuit are false and without merit [5].
This lawsuit comes at a time when AI companies are facing challenges in sourcing training data. A recent MIT study found that over 28% of critical sources in a major internet text dataset used for AI training are now restricted [3]. As a result, some organizations have begun charging AI companies for access to their data, while others, like Meta, have also updated their privacy policies to allow the use of user data for AI training [3].
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved