Curated by THEOUTPOST
On Thu, 10 Oct, 12:02 AM UTC
4 Sources
[1]
Be careful about sharing false or AI-generated content on LinkedIn: the responsibility will be yours alone - Softonic
LinkedIn will continue to offer features that can generate automated content, but it is now the user's responsibility

LinkedIn is shifting the responsibility for sharing misleading or inaccurate information created by its own AI tools onto users, rather than the tools themselves, in an unexpected but logical move to absolve itself of liability. A November 2024 update to its Service Agreement will hold users accountable for sharing any AI-created misinformation that violates its policies.

Since no one can guarantee that AI-generated content is truthful or correct, companies protect themselves by placing the responsibility on users to moderate the content they share. The update follows in the footsteps of LinkedIn's parent company, Microsoft, which in early 2024 updated its terms of service to remind users not to take AI services too seriously and to address the limitations of AI, warning that "it is not designed to be used as a substitute for professional advice."

LinkedIn will continue to offer features that can generate automated content, but with the warning that it may not be reliable. The new policy reminds users that they must verify all information and edit it when necessary to comply with community guidelines. "Please review and edit such content before sharing it with others. As with all the content you share on our Services, you are responsible for ensuring that it complies with our Professional Community Policies, including not sharing misleading information," says LinkedIn.

The social network is likely hoping that its genAI models will improve in the future, especially since it now uses user data to train its models by default, requiring users to opt out if they do not want their data to be used.
[2]
LinkedIn says if you share fake or false AI-generated content, that's on you
LinkedIn is passing the responsibility for sharing misleading or inaccurate information made by its own AI tools onto users, rather than the tools themselves. A November 2024 update to its Service Agreement will hold users accountable for sharing any AI-created misinformation that violates its policies.

Since no one can guarantee that the content generative AI produces is truthful or correct, companies are covering themselves by putting the onus on users to moderate the content they share. The update follows in the footsteps of LinkedIn's parent company Microsoft, which earlier in 2024 updated its terms of service to remind users not to take AI services too seriously and to address the AI's limitations, advising that it is "not designed, intended, or to be used as substitutes for professional advice."

LinkedIn will continue to provide features which can generate automated content, but with the caveat that it may not be trustworthy. "Generative AI Features: By using the Services, you may interact with features we offer that automate content generation for you. The content that is generated might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes," the updated passage will read.

The new policy reminds users to double-check any information and make edits where necessary to adhere to community guidelines: "Please review and edit such content before sharing with others. Like all content you share on our Services, you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information."

The social network site is probably expecting its genAI models to improve in future, especially since it now uses user data to train its models by default, requiring users to opt out if they don't want their data used. There was pretty significant backlash against this move, as GDPR concerns clash with generative AI models across the board, but the recent policy update suggests the models still need a fair bit of training.
[3]
LinkedIn warns: you are responsible for sharing inaccurate content created by our AI
A hot potato: Companies that offer generative AI tools tend to advise users that the content being created might be inaccurate. Microsoft's LinkedIn has a similar disclaimer, though it goes slightly further by warning that any users who share this misinformation will be held responsible for it.

Microsoft recently updated its Service Agreement with a disclaimer emphasizing that its Assistive AI is not designed, intended, or to be used as a substitute for professional advice. As reported by The Reg, LinkedIn is updating its User Agreement with similar language. In a section that takes effect on November 20, 2024, the platform states that users might interact with features that automate content generation, and that this content might be inaccurate, incomplete, delayed, misleading, or not suitable for their purposes. So far, so standard.

But the next section is something we don't often see. LinkedIn states that users must review and edit the content its AI generates before sharing it with others. It adds that users are responsible for ensuring this AI-generated content complies with its Professional Community Policies, which include not sharing misleading information.

It seems somewhat hypocritical that LinkedIn strictly enforces policies against users sharing fake or inauthentic content that its own tools can potentially generate. Repeat violators of its policies might be punished with account suspensions or even account terminations.

The Reg asked LinkedIn whether it intends to hold users responsible for sharing AI content that violates its policies, even if the content was created by its own tools. Not really answering the question, a spokesperson said the company is making an opt-out setting available for training AI models used for content generation in the countries where it does this.

"We've always used some form of automation in LinkedIn products, and we've always been clear that users have the choice about how their data is used," the spokesperson continued. "The reality of where we're at today is a lot of people are looking for help to get that first draft of that resume, to help write the summary on their LinkedIn profile, to help craft messages to recruiters to get that next career opportunity. At the end of the day, people want that edge in their careers and what our GenAI services do is help give them that assist."

Another eyebrow-raising part in all this is that LinkedIn announced the upcoming changes on September 18, around the same time the platform revealed it had started to harvest user-generated content to train its AI without asking people to opt in first. The outcry and investigations led to LinkedIn later announcing that it would not enable AI training on users' data from the European Economic Area, Switzerland, and the UK until further notice. Those in the US still have to opt out.
[4]
LinkedIn: If our AI gets it wrong, that's your problem
Artificial intelligence still no substitute for the real thing

Microsoft's LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that's inaccurate or misleading.

LinkedIn thus takes after its parent, which recently revised its Service Agreement to make clear that its Assistive AI should not be relied upon. LinkedIn, however, has taken its denial of responsibility a step further: it will hold users responsible for sharing any policy-violating misinformation created by its own AI tools.

The relevant passage, which takes effect on November 20, 2024, reads: "Generative AI Features: By using the Services, you may interact with features we offer that automate content generation for you. The content that is generated might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes."

In short, LinkedIn will provide features that can produce automated content, but that content may be inaccurate. Users are expected to review and correct false information before sharing said content, because LinkedIn won't be held responsible for any consequences. The platform's Professional Community Policies direct users to "share information that is real and authentic" - a standard to which LinkedIn is not holding its own tools.

Asked to explain whether the intent of LinkedIn's policy is to hold users responsible for policy-violating content generated with the company's own generative AI tools, a spokesperson chose to address a different question: "We believe that our members should have the ability to exercise control over their data, which is why we are making available an opt-out setting for training AI models used for content generation in the countries where we do this.

"We've always used some form of automation in LinkedIn products, and we've always been clear that users have the choice about how their data is used. The reality of where we're at today is a lot of people are looking for help to get that first draft of that resume, to help write the summary on their LinkedIn profile, to help craft messages to recruiters to get that next career opportunity. At the end of the day, people want that edge in their careers and what our GenAI services do is help give them that assist."

The business-oriented social networking site announced the pending changes on September 18, 2024 - around the time the site also disclosed that it had begun harvesting user posts to use for training AI models without prior consent. The fact that LinkedIn began doing so by default - requiring users to opt out of feeding the AI beast - didn't go over well with the UK's Information Commissioner's Office (ICO), which subsequently won a reprieve for those in the UK. A few days later, LinkedIn said it would not enable AI training on member data from the European Economic Area, Switzerland, and the UK until further notice. In the laissez-faire US, LinkedIn users have had to find the proper privacy control to opt out.

The consequences for violating LinkedIn's policies vary with the severity of the infraction. Punishment may involve limiting the visibility of content, labeling it, or removing it. Account suspensions are possible for repeat offenders, and one-shot account removal is reserved for the most egregious stuff.

LinkedIn has not specified which of its features might spawn suspect AI content, but prior promotions of its AI-enhanced services may provide some guidance. LinkedIn uses AI-generated messages in LinkedIn Recruiter to create personalized InMail messages based on candidate profiles. It also lets recruiters enhance job descriptions with AI. It provides users with AI writing help for their About and Headline sections.
And it attempts to get people to contribute to "Collaborative articles" for free by presenting them with an AI-generated question. Salespeople also have access to LinkedIn's AI-assisted search and Account IQ, which help them find sales prospects.

Asked to comment on LinkedIn's disavowal of responsibility for its generative AI tools, Kit Walsh, senior staff attorney at the Electronic Frontier Foundation, said, "It's good to see LinkedIn acknowledging that language models are prone to generating falsehoods and repeating misinformation. The fact that these language models are not reliable sources of truth should be front-and-center in the user experience so that people don't make the understandable mistake of relying on them.

"It's generally true that the people choosing to publish a specific statement are responsible for what it says, but you're not wrong to point out the tension between lofty claims of the power of language models versus language like this in user agreements protecting companies from the consequences of how unreliable the tools are when it comes to the truth." ®
LinkedIn updates its User Agreement, making users accountable for sharing AI-generated content that violates platform policies, raising questions about AI reliability and user responsibility.
LinkedIn, the Microsoft-owned professional networking platform, is set to implement a significant update to its User Agreement on November 20, 2024. This change will shift the responsibility for sharing potentially inaccurate or misleading AI-generated content from the platform to its users [1].
The new agreement states that users may interact with features that automate content generation, but warns that such content "might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes" [2]. LinkedIn emphasizes that users must review and edit AI-generated content before sharing it, ensuring compliance with the platform's Professional Community Policies [3].
This policy update places a significant burden on users to verify and potentially correct AI-generated content. Failure to do so could result in policy violations, with consequences ranging from content removal to account suspension or termination for repeat offenders [4].
LinkedIn offers various AI-enhanced services, including:
- AI-generated personalized InMail messages in LinkedIn Recruiter, based on candidate profiles
- AI-assisted enhancement of job descriptions for recruiters
- AI writing help for users' About and Headline profile sections
- AI-generated questions inviting contributions to "Collaborative articles"
- AI-assisted search and Account IQ for salespeople [4]
The platform has also begun using user-generated content to train its AI models by default, requiring users to opt out if they don't want their data used [2].
This move has sparked controversy, with critics pointing out the apparent contradiction of LinkedIn strictly enforcing policies against users sharing inauthentic content while potentially generating such content through its own AI tools [3]. The Electronic Frontier Foundation's Kit Walsh noted the tension between "lofty claims of the power of language models versus language like this in user agreements protecting companies from the consequences of how unreliable the tools are" [4].
LinkedIn's data usage practices have faced scrutiny, particularly in regions with strict data protection laws. The platform has suspended AI training on user data from the European Economic Area, Switzerland, and the UK following investigations and public outcry [3]. However, users in other regions, including the US, must still opt out if they don't want their data used for AI training [4].
LinkedIn's approach aligns with a broader industry trend of companies distancing themselves from the potential inaccuracies of their AI tools. Microsoft, LinkedIn's parent company, updated its terms of service earlier in 2024 to remind users not to take AI services too seriously and to acknowledge the limitations of AI [2].