14 Sources
[1]
WeTransfer Backtracks on AI File Training After Backlash: What You Need to Know
The company has updated its policies after some users objected to new terms. WeTransfer, the service that allows users to send large files to others, is explaining itself to users and updating its terms of service after a backlash related to training AI models. The company published a blog post, "WeTransfer Terms of Service -- What's Really Changing," that details the updates it made to its policies after users noticed that recent changes seemed to suggest WeTransfer was training AI models on the files users transfer. In the blog post, the company says: "First things first. Your content is always your content." The post goes on to say, "we don't use machine learning or any form of AI to process content shared via WeTransfer." WeTransfer explains that its use of AI would be to improve content moderation and enhance its ability to prevent the distribution of harmful content across its platform. The company adds that those AI tools aren't being used and haven't been built yet. "To avoid confusion," it says, "we've removed this reference." A representative for WeTransfer did not immediately return an email seeking further comment. The backlash over the terms prompted users such as political correspondent Ava Santina to write on X, "Time to stop using WeTransfer who from 8th August have decided they'll own anything you transfer to power AI." Anxieties are high about whether information that users share or store in services such as social media accounts is being accessed by companies to train AI models. WeTransfer may be used for highly sensitive file transfers, raising fears that private information might be accessed by AI. This isn't the case, according to the company.
[2]
WeTransfer: We Won't Use Your Files to Train AI
WeTransfer has clarified that it won't use the files you share to train AI models. In the last day, the company has seen significant backlash after users noticed new terms said it would be able to train AI models on any files shared directly with WeTransfer. The company acted quickly to update its terms and conditions to reassure users that it won't use files to train AI models. In a statement to the BBC, a WeTransfer spokesperson said, "We don't use machine learning or any form of AI to process content shared via WeTransfer, nor do we sell content or data to any third parties." Since around late June or early July, the company's terms have shown changes expected to take effect on August 8. WeTransfer then changed its conditions with a further update on July 15. The first change in terms that sparked the backlash read, "You hereby grant us a perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable license to use your Content for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy." The terms also granted WeTransfer "the right to reproduce, distribute, modify, prepare derivative works based upon, broadcast, communicate to the public, publicly display, and perform Content." They also said users wouldn't see any payment when their content is used. In WeTransfer's statement to the BBC, it said it had now "made the language easier to understand." The company says it originally changed its terms to "include the possibility of using AI to improve content moderation" rather than it being designed to use submissions to train AI models. The company was swift to update its policy, and there's now no mention of AI models within this section of the policy. 
The shorter section now reads, "You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy." WeTransfer has also begun replying to users on social media who were upset about the changes. In its boilerplate messaging to users it notes "We do not use AI to process your files."
[3]
WeTransfer ToS adding 'machine learning' caused freakout
WeTransfer added the magic words "machine learning" to its ToS and users reacted predictably. Analysis: WeTransfer this week denied claims it uses files uploaded to its ubiquitous cloud storage service to train AI, and rolled back changes it had introduced to its Terms of Service after they deeply upset users. The topic? Granting licensing permissions for an as-yet-unreleased LLM product. Agentic AI, GenAI, AI service bots, AI assistants to legal clerks, and more are washing over the tech space like a giant wave as the industry paddles for its life hoping to surf on a neural networks breaker. WeTransfer is not the only tech giant refreshing its legal fine print - any new product that needs permissions-based data access - not just for AI - is going to require a change to its terms of service. In the case of WeTransfer, the passage that aroused ire was the clause granting it a license to use content "to improve performance of machine learning models that enhance our content moderation process." In a statement released during the backlash, WeTransfer insisted it had zero intention of abusing anyone's intellectual property, saying it had made the change to cover an upcoming moderation service. It said it was merely considering the "possibility of using AI to improve content moderation and further enhance our measures to prevent the distribution of illegal or harmful content on the WeTransfer platform." The feature hasn't been built or used "in practice," but it was "under consideration," said the file transfer tool. "To avoid confusion, we've removed this reference." But users were not happy about the wording, which has since been removed, and took to social media and The Register to say so, with one telling us: "Given that one of the common use cases is to securely transfer sensitive content between users, this is a gross violation of privacy and they should be called out until they roll back this change or lose all their customers." Coming in for special ire was the phrase: "You will not be entitled to compensation for any use of Content by us under these Terms." 
WeTransfer, for its part, arguably didn't need the ToS tweak at all. In fact, the cloud storage company told us this morning: "In retrospect, we would have excluded the mention of machine learning entirely as we don't use machine learning or any form of AI to process content shared via WeTransfer." It added: "We regret that our terms caused unnecessary confusion. We recognize AI is a sensitive and important topic for the creative community that can elicit strong reactions." You can read the complete old and new clause, and a fuller explanation from the company, here. Speaking to us about ToS tweaks more generally, specialist senior solicitor Neil Brown, who runs tech-savvy English law firm decoded.legal, told us: "In terms of the company, if it wants to do something which requires more permissions from the user (e.g. a copyright licence) than its terms currently provide, then the company may well reasonably conclude that the best thing to do, for its own protection, is to ensure that the rights it is granted under its contract with its users covers what it needs to do." When we asked whether cloud services generally need permissions when it comes to copyright just to store and process files, Brown told The Reg: "I can't speak for all jurisdictions / regimes around the world, and the position may vary, but at least in the UK, we have the notion of an 'implied license.' "So if a company provided - say - a hosted storage offering, and did not seek an explicit grant of a licence from the user for any copying inherent in providing that service, the company is likely to claim that, nevertheless, it had an implied license from the user for this purpose." He added: "The challenge with implied licenses comes down to mis-matching expectations: is what the company wants to do with the copyright works what the user intends the company to do? 
If not, there is the possibility of the user claiming that the company has acted unlawfully - that it has infringed the user's copyright. "So, in practice, most companies will try to include some sort of language in their terms which grants them all rights necessary to provide the services, or something like that." He added: "Some organizations will try to be more specific, but others will see that as a potential barrier to changing the services, if they would then also need to change their terms of service." But since techies tend to watch these things closely, being told explicitly what is happening matters, and seeing a change without a full explanation can create more trouble for companies. Back in 2023, WeTransfer's file-sharing rival, Dropbox, also had to fend off claims it was using uploaded files to train LLMs when a customer - a user named Werner Vogels who also happens to be the CTO of Amazon - noticed a toggle switch users could opt into to "use AI from third-party partners" to "work faster in Dropbox." After the backlash, Dropbox CEO Drew Houston set Vogels straight, responding: "Third-party AI services are only used when customers actively engage with Dropbox AI features which themselves are clearly labeled." Nonetheless, as The Register said at the time, the move fed into the so-called "AI trust crisis," where developer Simon Willison mused that many people no longer trust what big tech or AI entities say. As Willison says, trust matters. "People both overestimate and underestimate what companies are doing, and what's possible. This isn't helped by the fact that AI technology means the scope of what's possible is changing at a rate that's hard to appreciate even if you're deeply aware of the space." Others took issue with more than just the legal aspects, with open standards boffin Terence Eden saying that maybe netizens should stop flinging files at each other altogether, in a post titled "We've got to stop sending files to each other."
[4]
WeTransfer says files not used to train AI after backlash
The firm has now updated its terms, saying it has "made the language easier to understand" to avoid confusion. WeTransfer said the clause was initially added to "include the possibility of using AI to improve content moderation" and to identify harmful content. It appears to have been changed in late June or early July, according to snapshots taken on the Internet Archive. The terms had said WeTransfer could use content for purposes "including to improve performance of machine learning models that enhance our content moderation process". It also included the right for WeTransfer to "reproduce, distribute, modify," or "publicly display" files uploaded to the service. Some users on social media interpreted this as WeTransfer giving itself the right to share or sell the files uploaded by users to AI companies. People working in the creative industries, including an illustrator and an actor, posted on X to say they used the service to send work and were considering changing to alternative providers. WeTransfer said it updated the clause on Tuesday, "as we've seen this passage may have caused confusion for our customers." The clause in the terms of service now says: "You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy." The rival file-sharing platform Dropbox also had to clarify it was not using files uploaded to its service to train AI models, after social media outcry in December 2023.
[5]
How WeTransfer reignited fears about training AI on user data
The controversy underscores growing trust issues between users and tech firms over AI Dutch file-sharing service WeTransfer is under fire after users spotted sweeping updates to its terms of service that appeared to let the company train AI models on their uploaded files. The company has now removed the controversial language, but users remain outraged. Here's what's going on -- and why it matters. WeTransfer users discovered this week that the service had updated its policy with a clause granting it a perpetual, royalty-free license to use user-uploaded content, including for "improving machine learning models that enhance content moderation." The changes were due to come into effect on August 8. That language was vague enough that many users -- including children's book writer Sarah McIntyre and comedian Matt Lieb -- felt it opened the door for WeTransfer to use or even sell their files to train AI without permission or compensation. On Tuesday afternoon, WeTransfer scrambled to douse the flames, saying in a press release that it doesn't use user content to train AI, nor does it sell or share files with third parties. The company says it considered using AI to "improve content moderation" in the future, but that such a feature "hasn't been built or deployed in practice." WeTransfer has also now amended its terms of service, removing any mentions of machine learning. The updated version states that users grant the company "a royalty-free license" to use their content for "operating, developing, and improving the service." But the damage to user trust may already be done. WeTransfer joins a growing list of companies that have attracted criticism for training machine learning systems on user data. Adobe, Zoom, Slack, Dropbox, and others have also recently walked back or clarified similar AI-related policies after public outcry. 
All these incidents tap into wider frustrations around copyright and consent in the AI age -- and point to trust issues between users and tech firms. WeTransfer has long marketed itself as a creative-friendly, privacy-conscious file-sharing service. So it's perhaps unsurprising that the vague wording around AI and sweeping license rights felt like a betrayal to its users, particularly artists and freelancers worried their work could be quietly fed into machine learning models without consent. While WeTransfer did clarify its terms, for many users of the service, the damage was already done. In replies to WeTransfer's official announcement on X, some said that it looked like the service had tested the waters with broader AI permissions, got swift public backlash, and then quickly walked it back. WeTransfer is unlikely to be the last tech firm caught up in this kind of controversy. As AI fever spreads, user data is becoming the new fuel.
[6]
WeTransfer issues flurry of promises that it's not using your data to train AI models after its new terms of service aroused suspicion
The company moved fast to assure users it does not use uploaded content for AI training File-sharing platform WeTransfer spent a frantic day reassuring users that it has no intention of using any uploaded files to train AI models, after an update to its terms of service suggested that anything sent through the platform could be used for making or improving machine learning tools. The offending language buried in the ToS said that using WeTransfer gave the company the right to use the data "for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy." That part about machine learning and the general broad nature of the text seemed to suggest that WeTransfer could do whatever it wanted with your data, without any specific safeguards or clarifying qualifiers to alleviate suspicions. Perhaps understandably, a lot of WeTransfer users, who include many creative professionals, were upset at what this seemed to imply. Many started posting their plans to switch away from WeTransfer to other services in the same vein. Others began warning that people should encrypt files or switch to old-school physical delivery methods. WeTransfer noted the growing furor around the language and rushed to try and put out the fire. The company rewrote the section of the ToS and shared a blog explaining the confusion, promising repeatedly that no one's data would be used without their permission, especially for AI models. "From your feedback, we understood that it may have been unclear that you retain ownership and control of your content. We've since updated the terms further to make them easier to understand," WeTransfer wrote in the blog. 
"We've also removed the mention of machine learning, as it's not something WeTransfer uses in connection with customer content and may have caused some apprehension." While still granting a standard license for improving WeTransfer, the new text omits references to machine learning, focusing instead on the familiar scope needed to run and improve the platform. If this feels a little like deja vu, that's because something very similar happened about a year and a half ago with another file transfer platform, Dropbox. A change to the company's fine print implied that Dropbox was taking content uploaded by users in order to train AI models. Public outcry led to Dropbox apologizing for the confusion and fixing the offending boilerplate. The fact that it happened again in such a similar fashion is interesting not because of the awkward legal language used by software companies, but because it implies a knee-jerk distrust in these companies to protect your information. Assuming the worst is the default approach when there's uncertainty, and the companies have to make an extra effort to ease those tensions. Creative professionals are especially sensitive to even the appearance of data misuse. In an era where tools like DALL·E, Midjourney, and ChatGPT train on the work of artists, writers, and musicians, the stakes are very real. The lawsuits and boycotts by artists over how their creations are used, not to mention suspicions of corporate data use, mean the kinds of reassurances offered by WeTransfer are probably going to be something tech companies will want to have in place early on, lest they face the misplaced wrath of their customers.
[7]
WeTransfer clarifies it won't use your files to train AI amid user backlash
WeTransfer says it will not use your data to train AI models. WeTransfer was forced to respond this week after changes to its terms of service (TOS) triggered major backlash from users who believed the new language granted the service access to users' files to train AI. "We don't use machine learning or any form of AI to process content shared via WeTransfer, nor do we sell content or data to any third parties," a WeTransfer spokesperson told BBC News on Tuesday. WeTransfer clarified this after users noticed recent changes to its TOS page, which initially said the following policy would go into effect in August (via Wayback Machine on July 14, 2025). You hereby grant us a perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable license to use your Content for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy. The language seemed to imply that WeTransfer could use data and files from users to train AI models, either their own or those of a third party. Outrage from users swiftly followed, many of whom are independent artists who use WeTransfer to send large files like film footage or music. Users took to social media to call attention to the change, with some vowing to switch to another service. The issue of users' content being used as AI training data is a contentious one that has become increasingly widespread as companies look to develop their own AI models and features, particularly since these tools can automate creative work and have already impacted job markets. 
Users are wary of freshly updated terms of service, since they could mean signing their data away to AI models and automating themselves right out of a job. Similar confusion happened with other platforms, like CapCut's policy update, which sounded alarming but is actually pretty standard. And Adobe had to clarify its policy changes last year, when the update made it sound like it was using creators' content to train its Firefly model without permission. However, companies like Google and Meta rely on user data to train their models, following the whole "if something is free, you're the product" doctrine. But WeTransfer has changed the language in the content section of the policy after acknowledging to BBC News that the previous update "may have caused confusion for our customers." The company further clarified to the outlet that the original language was intended to "include the possibility of using AI to improve content moderation" for the purposes of identifying harmful content. The section now reads: You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy. In both versions of WeTransfer's TOS, the company states that users own and retain all rights to their work. That should clear up any confusion over whether WeTransfer was trying to take ownership of the work. Although WeTransfer does have a license to users' files, it's for the ostensible purpose of content moderation -- not to train AI models.
[8]
WeTransfer Changes Policy After Concern It Could Train AI on User's Photos
WeTransfer, a popular file-sharing platform used by photographers, has been forced into a climbdown after it faced a large backlash for changing its terms and conditions in a way that might allow the company to use customers' work to train AI models. The controversy began after a recent update to WeTransfer's terms appeared to grant the company broad rights over user content, including a clause referencing the use of data to "improve performance of machine learning models that enhance our content moderation process." This language raised alarms among creative professionals, including photographers, some of whom interpreted the terms as giving WeTransfer permission to use, sell, or share their files with AI companies. In response to growing concern on social media, WeTransfer issued multiple clarifications and revised the disputed section of its terms. "We don't use machine learning or any form of AI to process content shared via WeTransfer, nor do we sell content or data to any third parties," a company spokeswoman tells the BBC. WeTransfer says that the original clause was intended to cover the potential use of AI in internal content moderation processes, not for training commercial AI models. "From your feedback, we understood that it may have been unclear that you retain ownership and control of your content," the company writes in a blog post. "We've since updated the terms further to make them easier to understand. We've also removed the mention of machine learning, as it's not something WeTransfer uses in connection with customer content and may have caused some apprehension." The updated terms, which take effect for existing users on August 8, now state: "You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy." 
Previously, the terms also included the right to "reproduce, distribute, modify," or "publicly display" user content, which fueled further concerns that WeTransfer could share or monetize uploaded files. The company says the revised language is intended to prevent misinterpretation.
[9]
WeTransfer says user content will not be used to train AI after backlash
Firm revises new terms of service that had suggested uploaded files could be used to 'improve machine learning models' The popular filesharing service WeTransfer has said user content will not be used to train artificial intelligence after a change in its service terms had triggered a public backlash. The company, which is regularly used by creative professionals to transfer their work online, had suggested in new terms that uploaded files could be used to "improve machine learning models". The clause had previously said the service had a right to "reproduce, modify, distribute and publicly display" content, and the updated version caused confusion among users. A WeTransfer spokesperson said user content had never been used, even internally, to test or develop AI models and that "no specific kind of AI" was being considered for use by the Dutch company. The firm said: "There's no change in how WeTransfer handles your content in practice." WeTransfer revised the new terms of service on Tuesday to "make the language easier to understand" and removed any mention of machine learning or AI. The spokesperson added: "We hope that amending our legal terms to remove mention of machine learning and make the licensing conditions clearer will reassure those among our customers who were wondering what the update meant for them." The relevant section in the terms of service now reads: "You hereby grant us a royalty-free license to use your content for the purposes of operating, developing, and improving the service, all in accordance with our privacy & cookie policy." Some users of the service, including a voice actor, a film-maker and a journalist, had shared their discontent with the new terms on X and threatened to cancel their subscriptions. 
The use of copyright-protected work by AI companies has become a particularly sensitive issue for the creative industries, which argue that using their output without permission endangers their livelihoods by denying them income and also potentially helping to create tools that compete with their own work. The Writers' Guild of Great Britain said it was glad to learn that WeTransfer had "provided clarity" and said: "Members' work should never be used to train AI systems without their permission." WeTransfer said: "As a company with deep roots in the creative community, we hold our customers and their work in the highest regard. We will continue to work to make sure WeTransfer is the best product it can be for our customers." The company was founded in 2009 to allow users to send large files via email without creating an official account. The service is now used by 80 million monthly users across 190 countries.
[10]
WeTransfer backlash highlights need for smarter AI practices
A recent update to WeTransfer's terms of service caused consternation after some of its customers feared that it meant content from files uploaded to the popular file-sharing service would automatically be used to train AI models. But the Netherlands-based company insisted on Tuesday that this is not the case, saying in a statement that it "does not sell user content to third parties," nor does it "use AI in connection with customer content." The updated terms of service that prompted the criticism were sent to its customers earlier this month and marked as going into effect on August 8, 2025. The text stated that WeTransfer could use content shared on its service for purposes "including to improve performance of machine learning models that enhance our content moderation process." The new wording was widely interpreted as granting WeTransfer the right to use customer-uploaded files to train AI models. Many users reacted strongly, accusing WeTransfer of giving itself the right to share or sell customer content to AI companies hungry for fresh data to train their AI technologies. On Tuesday, WeTransfer tried to reassure its users by saying in a statement that "your content is always your content," and that "we don't use machine learning or any form of AI to process content shared via WeTransfer." It continued: "The passage that caught most people's eye was initially updated to include the possibility of using AI to improve content moderation and further enhance our measures to prevent the distribution of illegal or harmful content on the WeTransfer platform. Such a feature hasn't been built or used in practice, but it was under consideration for the future." It said that it had removed the mention of machine learning from its terms, "as it's not something WeTransfer uses in connection with customer content and may have caused some apprehension." 
The revised section now states: "You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy." The controversial episode highlights the growing sensitivity among people toward having their content used for AI model training. Artists, musicians, and writers, for example, have been protesting strongly against AI companies using their work to train AI models without asking for permission or offering compensation. The troubling episode is also a lesson for other online companies to be clearer about how they're handling user data, as misunderstandings over AI can, as we've seen, quickly escalate into a major backlash.
[11]
Is WeTransfer Using Your Content to Train Its AI?
Perhaps the biggest user privacy issue of our time has to do, of course, with AI. AI models have an insatiable appetite for data, as the only way to improve them is to feed them new, high-quality information. As such, companies that develop these models turn to the most convenient pool of data they have access to -- which, unfortunately, belongs to their own users. WeTransfer is the latest company to come under fire for the practice. You may have seen the discourse already. On social media sites like Bluesky, angry WeTransfer users are blasting the company for a recent change in its terms of service. It's not hard to see why: The language in the new terms appears to clearly say that the company reserves the right to use your content to improve its AI models. "You hereby grant us a perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable license to use your Content for the purposes of...improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process." There aren't many other ways to interpret that. Apparently, though, we did "misinterpret" the situation. A WeTransfer spokeswoman told BBC News as much, stating, "We don't use machine learning or any form of AI to process content shared via WeTransfer, nor do we sell content or data to any third parties." WeTransfer claims the wording in the new terms of service was meant to "include the possibility of using AI to improve content moderation," as well as identify "harmful content." Sure, Jan. Following the backlash, the company has changed the terms, and has "made the language easier to understand." By that, they must mean removing all references to using your content to train AI models, because that language simply doesn't exist anymore. Unrelated to AI use, the original terms also appeared to give WeTransfer the right to do whatever they want with your content. 
"Such license includes the right to reproduce, distribute, modify, prepare derivative works based upon, broadcast, communicate to the public, publicly display, and perform Content. You will not be entitled to compensation for any use of Content by us under these Terms." This language has been changed as well, to the following: "You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy." Interestingly, the original language around licensing (but not AI training) does appear in regard to feedback you may provide WeTransfer. (Sure, WeTransfer, take a royalty-free license to do whatever you want with my feedback with no expectation of providing compensation. Just don't take my content to train your crummy AI.) WeTransfer's new terms go into effect Aug. 8, unless you're a "new user." Time will tell how many users, old and new, decide to ditch WeTransfer over the scandal.
[12]
Something for the weekend - WeTransfer sends itself into a nosedive, with a public nudge from AI
Despite the technology's promise, there are many bleak aspects of the AI age. A 'grimmest hits' of these might include Spotify pushing fake AI acts to listeners while CEO Daniel Ek leads a $600 million investment round in German AI weapons company, Helsing. (No jokes about vampires, please.) Or take Mikey Schulman, CEO of generative music giant Suno, who has admitted to scraping most high-res audio files off the internet to create a platform that gives users the illusion of authoring broadcast-quality songs. All based on the scraped work of real artists and producers. Or what about those publishers whose ChatGPT-authored articles reduce editorial costs but leave readers asking why they would spend even five seconds reading words that no one could be bothered to write? But perhaps the biggest tragedy for human agency, expertise, and talent is those cloud companies that are emboldened by the likes of Meta and other Big Techs' rapacious attitudes to our intellectual property. They see Zuckerberg's firm scraping millions of books from pirate libraries rather than licensing that content -- in the view of Judge Chhabria in his recent copyright judgement. They see Google walling off the world's digitised content so they can dribble it back to us in AI-powered search, without linking to the data's source. And they see OpenAI... well, being OpenAI: a target of copyright lawsuits worldwide. Such wannabe vendors reasonably ask, if some of the richest corporations on the planet can stage a coup on creators' data -- without rightsholders' consent, credit, or remuneration -- then why can't they do it too? After all, this is the age of industrialised copyright theft, of data laundering at global scale, enabled by naïve policymakers who receive vendor press releases as gospel. Surely, ambitious 'me too' players could just grab and monetise our copyrighted content as well? Hell, perhaps they could charge users for the privilege of having their life's work scraped!
On the face of it, the latest of these wannabes would appear to be file-sending platform, WeTransfer -- or rather, that is the strong impression that many users have formed from the company's recent statements. WeTransfer has shot itself in the foot several times this week, though it is not certain that the wounds are fatal. Even so, it is haemorrhaging customers and battling a PR catastrophe, while providing an object lesson in how not to handle a self-inflicted crisis. So, what has happened, and why are customers up in arms? A few days ago, the company updated its Ts and Cs. And those terms included two troubling paragraphs. The first of these stated: [By using the service] You grant us a perpetual, worldwide, non-exclusive, royalty-free, transferable, sublicensable licence to use your Content for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process. Yes, here was a company saying that not only were we licensing our content to WeTransfer, in perpetuity, worldwide -- and paying for the privilege of doing so -- but WeTransfer could also license it to parties unknown. Why would it need to do that? (Doesn't a transfer delete after a set period? The 'perpetuity' clause suggests it must still exist on a server.) At this point we should remind ourselves who WeTransfer's customers are: creatives who use it to disseminate rich-media files -- images, video, audio, music, book layouts, and more -- to trusted professional contacts. Some of that work may be finished, but some very much 'in progress'. All of it may be copyrighted, and much of it confidential. The odd part was the machine learning clause. Of course, any cloud company must run automated checks that users aren't disseminating illegal material. But what else might a machine learning or generative AI system use our content for?
And what about a technology third party: a lucrative source of high-res training data, perhaps? Note the vagueness of WeTransfer's wording: "developing, commercializing, and improving the Service or new technologies or services". Or is that reading too much into a boilerplate clause? Not according to the next paragraph, which was jaw-dropping in its hubris and arrogance: Such licence includes the right to reproduce, distribute, modify, prepare derivative works based upon, broadcast, communicate to the public, publicly display, and perform Content. You will not be entitled to compensation for any use of Content by us under these Terms. Wow. Why would any file transfer service need a perpetual, global licence for that? In our minds and trusting hearts, we are merely giving the company permission to move our content from A to B, nothing more. Not to publicly display it, perform it, modify it, or even broadcast it. Astonishing. Remember: WeTransfer is referring to copyrighted works. And creatives use its platform to send them privately to trusted contacts. Yet the wording of its terms might suggest to some that it is a giant mailroom in which workers rip open the parcels and sell the contents in the street. Rightly or not, this created a storm of outrage from the world's harried, devalued, and stressed-out creatives, who took to X, LinkedIn, and other platforms to express their anger and, in many cases, say they were cancelling their accounts. On LinkedIn, film and broadcast sound editor and mixer Andrei Chirosca observed: That text practically means 'we can fill every desktop, billboard, cinema, mesh, metro and t-shirt with your art or derivatives of it, we can sublicense (making available, commercializing) it to Palantir, The New York Post, Yachting Monthly, Vogue, Beyond The Bale, Alt2600 magazine, LeMonde, pqrnhub, CCP, NASA, The Boring company, and everyone else. He added: But it is still yours. Stripped of every right imaginable once it passes through our service.
You retain the right to say, 'I made this' on social media. Quite. And his comment raised another point. Professionals like Chirosca often provide services to larger projects, such as a movie or TV show, the copyright of which might be owned by litigious multinationals. WeTransfer's terms -- as described in the company's words -- appear to drive a tank through the notion of discreet, professional services, and thus not only put others' copyrighted data at risk, but also leave freelancers in legal peril. So far, so horrifying. But there is a problem with the hysteria at WeTransfer's updated Ts and Cs. According to the company this week, most of those contractual terms already existed; only the clause about machine learning had been added. So, it seems that many of its users had never read the small print. But at least they have now! This tells us something else: creatives fear AI's impact on their careers. They worry that copyright will soon be meaningless, and that the financial benefits of their talent will accrue only in the coffers of rapacious corporations. And those deep, understandable concerns -- which I share -- have finally made them pay attention to their existing cloud providers. Seeking comment from WeTransfer, I rang the company yesterday and found the phone lines disconnected and the Press page on its website giving me a 'Not Found' message. The only option, therefore, was raising a Support ticket via a dropdown menu -- the kind normally reserved for a technical problem, rather than a wholesale breach of professional trust. (Is that a Subscription issue? Or does it fall under 'Other'?) At this point, however, WeTransfer's social team stepped in across the many platforms where users were venting their anger. But unfortunately, they took the time-honoured "sorry if you were offended / sorry if you misunderstood" approach, so beloved of companies that have been caught doing something dodgy. 
(Even if users had long ago consented to those terms without reading the small print first.) Your content is YOURS, the team explained to their remaining customers. Thanks, but what about that worldwide licence, in perpetuity, granted to do whatever the company wanted -- a licence so all-encompassing that it appeared to include broadcasting our work from the rooftops and selling to anyone who wanted it? The company said: We've seen that the new wording of our terms and conditions may have caused confusion. Here's what we meant. We wanted to clarify that we don't use machine learning or any form of AI to process content shared via WeTransfer. The updated Terms of Service mentioned machine learning to cover the future possibility of using AI to improve content moderation to further enhance our measures to prevent the distribution of illegal or harmful content on the WeTransfer platform -- not to take the rights to artists' work and use it as we wish. Oh, really? So, what about: Such license includes the right to reproduce, distribute, modify, prepare derivative works based upon, broadcast, communicate to the public, publicly display, and perform Content. You will not be entitled to compensation for any use of Content by us under these Terms. The social team continued: WeTransfer has always been a platform that supports the vision of creatives, and that has not changed. We also wanted to clarify that the license conditions to allow WeTransfer to provide its service didn't change in substance compared to the previous terms. Because your content is yours, we need a licence from you to provide you with our service. In order to allow us to operate, provide you with, and improve the Service and our technologies (and to develop new ones), we must obtain from you certain rights related to Content that is covered by intellectual property rights. OK, so what do the re-updated Ts and Cs say now?
You hereby grant us a perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable license to use your Content for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services. Such license includes the right to reproduce, distribute, modify, prepare derivative works based upon, broadcast, communicate to the public, publicly display, and perform Content. You will not be entitled to compensation for any use of Content by us under these Terms. We don't use machine learning or any form of AI to process content shared via WeTransfer. The passage that caught most people's eye was initially updated to include the possibility of using AI to improve content moderation and further enhance our measures to prevent the distribution of illegal or harmful content on the WeTransfer platform. Such a feature hasn't been built or used in practice, but it was under consideration for the future. To avoid confusion, we've removed this reference. For now, at least -- by the company's own admission. But the all-encompassing terms of its contract -- which manage to give WeTransfer sweeping powers over our content, while being vague about their purpose -- don't shut the door on ML or AI training. Far from it: all future possibilities would seem to be allowed. Even so, while the storm is understandable and reveals creatives' wider fears about AI, it is also the case that many users had simply failed to read the small print. They weren't presented with a dodgy contract this week, but ages ago instead. So, watch and learn, folks. This is what happens when Big Tech scrapes the Web and gets away with it. And when craven policymakers cave in to their demands and propose changing copyright laws to opt us into this hellish new world. And it is also why an 'opt-out' is the kind of solution that only a congenital idiot would think might work.
The reality is, there is almost no public cloud platform that creatives can meaningfully trust -- and that situation will only get worse if the 50 or so copyright lawsuits worldwide fail to find for the plaintiffs. And as for WeTransfer? I wish it luck for the future. But would I use it, having read its Ts and Cs? Well, would you?
[13]
WeTransfer Says it Won't Use Your Files to Train AI Models After Criticism
WeTransfer says that handling of content remains unchanged on the platform WeTransfer has issued a clarification that it will not use files uploaded by users to train artificial intelligence models, after users criticised the company's changes to its terms of service. Earlier this month, the file transfer service updated its terms of service to state that WeTransfer could use AI to improve its content moderation and "reproduce, distribute, modify" users' files that were uploaded on the platform. WeTransfer says it has made changes to its terms again, removing references to the use of machine learning. In a blog post on Tuesday, the platform attempted to clarify its updated terms of service that are set to come into effect on August 8. At the time, section 6.3 of the document stated that WeTransfer users granted the company a "perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable license" that would be used to operate, develop, commercialise, and improve the file transfer service. [Image: The amended version of WeTransfer's terms of service (removals in red)] However, the updated terms of service also stated the license would allow the company to "reproduce, distribute, modify, prepare derivative works based upon, broadcast, communicate to the public, publicly display, and perform" original content uploaded by users to the platform. Meanwhile, the company would not be required to compensate creators for the use of the content, as described in the terms of service. Several WeTransfer customers, including content creators and creative professionals, expressed concern about the modified terms, and some said they would stop using the service. Following user backlash, the company explained in its blog post that section 6.2 of its terms of service (Ownership of Content) states that it doesn't claim ownership rights over user content.
The service also stated that all "right, title, and interest, including all intellectual property rights" are held by the creator of the content and their licensors. Meanwhile, the company has modified section 6.3 of its terms of service, removing mentions of commercialising content and training machine learning models. It also deleted the portion that allowed the company to modify or reuse user content without compensating creators. WeTransfer says that handling of content remains unchanged on the platform, even after the new terms of service come into effect next month. The platform says the portion of the terms that mentioned machine learning was due to the "possibility of using AI to improve content moderation", but added that such a feature does not exist at the moment. While the company was forced into a climbdown due to user backlash following changes to its terms of service, the issue highlights how online platforms can quickly get access to user data by modifying their terms of service. Companies like Dropbox and Adobe had to issue clarifications in 2023 and 2024, respectively, with regard to accessing user content.
[14]
WeTransfer Clarifies Terms Of Service Amid AI Training Backlash
Amid backlash about using customer files to train AI models, file-sharing service WeTransfer has recently amended its terms and conditions to reflect that users continue to retain ownership of their content. The backlash came after the company originally updated its terms and conditions on July 1 to state that people using WeTransfer grant it a "perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable license" to use their content. The purpose of this license was for WeTransfer to operate and improve its service, including the performance of machine learning models that enhance the company's content moderation process. Further, the terms and conditions stated that with this license, WeTransfer would have the right to "reproduce, distribute, modify, prepare derivative works based upon, broadcast, communicate to the public, publicly display, and perform" the content that users send via its service. "You will not be entitled to compensation for any use of Content by us under these Terms," the company added. Users were concerned that this would allow the company to train AI models on the data they send using the service, and about the company's rights to sell and distribute their work. In a blog post addressing user concerns, the company emphasised that users retain ownership over their content.

So what do the amended terms say? Given the backlash, the company's amended terms now state that users grant WeTransfer a royalty-free license to use content "for the purposes of operating, developing, and improving" its services as per its privacy policy. The privacy policy further details these purposes, some of which include:

* Providing distribution and payment collection features, as well as facilitating and processing orders.
* Sending content recipients a link to download user content.
* Serving and measuring the effectiveness of both personalised and non-personalised ads.
* Analysing the WeTransfer service and its user base to improve the development of its services (like advertising), business activities (like marketing), and technologies.
* Enforcing the terms of service.
* Maintaining the safety and security of the service, including detecting and acting against suspicions of illegal or unwanted activity or user content, such as Child Sexual Abuse Material (CSAM), malware, fraud, and copyright infringement.

The rationale behind the original terms and conditions: WeTransfer explains that it does not use machine learning or any form of AI to process the content that users share via its service. It adds that it included the section about machine learning in its original update to "include the possibility of using AI to improve content moderation." However, it has not built such a feature yet. With regard to the licensing terms specified under the terms of service, the company explains that it has not made any changes to how WeTransfer handles content. "The change in wording was meant to simplify the terms while ensuring our customers can enjoy WeTransfer's features and services as they were built to be used," it said.

Why it matters: This isn't the first time users have raised concerns about a tech company mentioning AI model training in its updated terms of service. For instance, in June 2024, in a very similar situation, Adobe updated its terms and conditions to say that it may access user content through automated and manual methods. The updated terms also said that users provide it a license to use, reproduce, publicly display, distribute, modify, and create derivative works based on users' content for operating and improving its service. Soon enough, Adobe clarified that it doesn't rely on users' content to train any generative AI tool.
The swift backlash against both WeTransfer and Adobe demonstrates that consumers are becoming more vigilant about protecting their intellectual property and personal data from being used to develop AI systems sans explicit consent.
WeTransfer, a popular file-sharing service, found itself at the center of controversy after updating its terms of service with language that appeared to allow AI training on user files. The company has since clarified its stance and revised its policy, but the incident has reignited concerns about data privacy and AI training practices in the tech industry 1.
The initial update to WeTransfer's terms of service, set to take effect on August 8, granted the company a "perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable license" to use user content for various purposes, including "improving performance of machine learning models that enhance our content moderation process" 2. This broad language sparked immediate concern among users, particularly those in creative industries who regularly use the service to transfer sensitive or copyrighted material.
The backlash was swift and intense, with users expressing their concerns on social media platforms. Political correspondent Ava Santina wrote on X, "Time to stop using WeTransfer who from 8th August have decided they'll own anything you transfer to power AI" 1.
In response to the outcry, WeTransfer quickly issued clarifications and updated its terms. The company stated, "We don't use machine learning or any form of AI to process content shared via WeTransfer, nor do we sell content or data to any third parties" 2. WeTransfer explained that the original change was intended to cover the possibility of using AI for content moderation in the future, but this feature had not been built or implemented 3.
WeTransfer has since removed all mentions of machine learning from its terms of service. The updated clause now reads, "You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy" 4.
Despite these changes, some users remain skeptical. The incident has tapped into wider frustrations around copyright and consent in the AI age, highlighting growing trust issues between users and tech companies 5.
WeTransfer's experience is not unique in the tech industry. Companies like Adobe, Zoom, Slack, and Dropbox have faced similar backlash over AI-related policies 5. These incidents underscore the importance of clear communication and transparency when it comes to data usage and AI implementation.
Legal expert Neil Brown of decoded.legal explains that companies often update their terms of service to ensure they have the necessary permissions for new features or technologies. However, he emphasizes the importance of matching user expectations with company intentions to avoid potential legal issues 3.