3 Sources
[1]
Patreon rejects "fair use" claims for AI training, calls for creator compensation
LLM Paying Machine: Many AI startups and Big Tech players try to justify the theft of data and user-generated content used in LLM training as fair use. Patreon, a company designed to fairly compensate human creators, completely rejects that argument. AI corporations must pay up - something they are already doing for major journalism outlets - or face the consequences. Jack Conte created Patreon to earn extra income from his YouTube videos. The musician-turned-businessman now manages a platform with 3 million monthly active users, and he has plenty to say to the big corporations operating chatbots and other AI platforms. First and foremost, these AI companies should stop crying foul and start paying content creators. Conte discussed AI business ventures during a recent SXSW conference in Austin, Texas. He described LLMs as yet another transformative moment for computer technology, on par with major transitions such as the shift from downloading music on iTunes to streaming. Change is not inherently bad, Conte said, adding that artists will survive the chatbot revolution and even thrive in the future. However, AI-focused corporations are doing something the Patreon founder doesn't like at all: amassing huge troves of data to train or fine-tune their language models without compensating the people who created it in the first place. OpenAI and other companies, including tech giants such as Microsoft and Google, have traditionally tried to justify this content theft as fair use. Conte thinks the argument is "bogus" because these same corporations are signing multimillion-dollar deals with major rights holders and publishers. Disney, Condé Nast, Vox, and Warner Music have already secured their "fair" compensation from OpenAI and other AI ventures.
"Why pay them and not creators - not the millions of illustrators and musicians and writers - whose work has been consumed by these models to build hundreds of billions of dollars of value for these companies?" Conte said. The Patreon CEO is clearly trying to join the fray, positioning his platform to let its community of writers, artists, and programmers capture some of the AI-derived compensation. Conte says he has not taken an anti-AI stance, since that would amount to opposing technology or change itself. Change happens either way, and Conte thinks LLMs and chatbots are here to stay. However, we should think about a future where artists can still be compensated even as AI's text-prediction machines flatten semantic nuance. In the end, the Patreon founder believes humans will keep enjoying other humans' work and artistic expression for the foreseeable future - no matter how complex and convincing AI lies become.
[2]
The Hypocrisy at the Heart of the AI Industry
Tech companies believe in intellectual property, but not yours. In April 2024, Eric Schmidt, the former Google CEO and a current AI evangelist, gave a closed-door lecture to a group of Stanford students. If these young people hoped to be Silicon Valley entrepreneurs, Schmidt explained, then they should be prepared to breach some ethical boundaries. At that point, 19 lawsuits had been filed against generative-AI companies for copyright infringement, alleging that Anthropic, OpenAI, and others had stolen books and other media to train their generative models. Yet Schmidt told the students to go ahead and download whatever they need to build an accurate "test" version of their AI product. If the product takes off, "then you hire a whole bunch of lawyers to go clean the mess up," he said. "If nobody uses your product, then it doesn't matter that you stole all the content." Stanford posted a video of the talk on YouTube in August 2024, but it was removed a day later. (Stanford did not respond to my request for comment about the removal.) When I recently obtained a copy, I was struck by Schmidt's readiness to say the quiet part out loud. He was articulating an attitude that is common in Silicon Valley but is usually stated as a legal or philosophical argument. When I reached one of Schmidt's spokespeople, they defended his position by telling me that Schmidt believes that the "fair use" of copyrighted work drives innovation. Others in the industry have cited the techno-libertarian idea that "information wants to be free," a frequently misunderstood credo that portrays information as a natural resource that should flow without restriction to whoever can use it. But the credo never seems to apply to Silicon Valley's own information, whether it's the troves of personal data that companies have collected about us or the software they write. Photoshop, for example, doesn't want to be free. 
In fact, Photoshop is one of thousands of tech-industry products that are protected by patents. Inventions such as Google's original search algorithm and even design details, such as the "rounded rectangle" shape of Apple's iPhone, have also been patented, and companies employ teams of high-end attorneys to prosecute infringements. The industry has long been a kind of intellectual-property battle zone, where damages in lawsuits frequently exceed nine figures. In 2017, for example, Waymo, Google's self-driving-car company, alleged that a former employee had stolen "confidential files and trade secrets, including blueprints, design files and testing documentation" for self-driving cars that were eventually shared with Uber. The case was settled for roughly $245 million. In the 2010s, Apple sued Samsung for copying elements of the iPhone and was initially awarded more than $1 billion in a patent-infringement battle that lasted seven years. Apple and Qualcomm have sued each other over IP in so many jurisdictions that it's hard to track. In the pursuit of generative AI, tech companies have recently turned their aggressive strategies toward less prepared industries. As my reporting has shown, many top AI models have been trained on data sets containing massive numbers of copyrighted books, videos, and other works. This large-scale piracy has been excused in a number of ways: OpenAI (which has a corporate partnership with The Atlantic's business team) has claimed that the company uses "publicly available information" to train its models; Anthropic has said that it has used books, but not in any commercial products; and Meta admits that it has used books in commercial products, but that doing so was "quintessential fair use." Even as they claim the right to train their models on work belonging to other people, the AI companies have rejected similar reasoning when it comes to their own products. 
Consider OpenAI's terms of service for ChatGPT, which forbid use of the bot's "output to develop models that compete with OpenAI." Anthropic, Google, and xAI have similar clauses forbidding people from using the material generated by their chatbots to train competing products. In other words: We can train on your work, but you can't train on ours. In the current economic environment, it's not surprising that companies vying for market dominance would operate with standards that serve their bottom line. But it's striking nonetheless how sharply their actions can contradict their professed values. Meta apparently does not want copies of its models on the web, even though it claims those models are "open," a word that typically means software is free and publicly available, and that implies a degree of goodwill or generosity on the part of the creator. It has reportedly sent notices demanding the deletion of such copies from online platforms. (Meta did not respond to a request for comment.) Companies also know the value of training data, and at least one of them foresaw the backlash that taking such data might create. In 2021, one year before OpenAI released ChatGPT and two years before my reporting first revealed what was being used as AI-training data, Anthropic CEO Dario Amodei wrote an internal memo titled "An Economic Model for Compensating Data Producers." (It was recently unsealed in a copyright-infringement lawsuit against the company.) In the document, Amodei acknowledges that AI could be "an increasingly extractive concentrator of wealth" and that creators might eventually "grumble" or "get mad" as this fact becomes apparent. Resistance from creators might slow down AI progress, Amodei writes, and for this reason, he suggests compensating them "with a fraction of the profits from the model produced." Giving creators equity in the company could be a "great fit" for Anthropic's "public benefit orientation," Amodei wrote. 
Today, Anthropic still claims to provide a public benefit, but it has argued in court that using copyrighted books is "fair use" -- meaning, essentially, that the authors are entitled to nothing. Anthropic declined to comment when I reached out for this article. Companies argue that AI training is fair use because their AI models produce original work that is not derived from the sources they use for training. This is not necessarily true: My reporting has shown that chatbots and image generators can produce near-exact copies of media they were trained on, spitting out near-complete copies of Harry Potter and the Sorcerer's Stone, for example, or rendering images that are fuzzy copies of existing artwork. But companies have tried to downplay this fact and focus the copyright discussion elsewhere, even invoking geopolitics and the idea of an international "AI race" as a sort of trump card. "Without fair use access, the race for AI is effectively over. America loses," OpenAI wrote to the Office of Science and Technology Policy last year. Not everyone in the AI industry is on the same page. Ed Newton-Rex, a former VP of audio at Stability AI, quit his job in November 2023 and wrote on X that, regardless of fair use, which "wasn't designed with generative AI in mind," he didn't see how current AI-training practices "can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright." Newton-Rex started a nonprofit called Fairly Trained, which certifies AI models that are trained on properly acquired data. It's worth noting that Silicon Valley has itself regularly been a victim of IP theft, in the form of software piracy. Partially in response to that problem, major companies have changed how software is distributed. Today, you cannot just buy Adobe Photoshop: Instead, you pay a rental fee to access the program, which verifies your license every time you use it. 
Microsoft has taken a similar approach with the 365 version of its Office suite, and Google's office software can't be downloaded at all. These companies have made their IP harder to steal by developing new methods of controlling access -- an option that is not realistically available to the artists, authors, and open-source-software developers they take material from. Given the double standard, it's difficult to tell whether Silicon Valley's arguments about fair use are genuine or just legally expedient. On one hand, generative AI is a new technology that raises new questions about the use of copyrighted work. On the other hand, the AI industry's aggressive approach is business as usual for Silicon Valley: moving fast and breaking things. And betting that the lawyers can "clean the mess up."
[3]
The CEO of Patreon blasts AI companies for the 'bogus excuse' they're using to not pay artists | Fortune
Patreon CEO Jack Conte is tired of watching AI companies strike deals with huge corporations like Disney while ignoring the myriad smaller creators who contribute to their models. Speaking at the South by Southwest conference this week, Conte, whose company allows people to pay their favorite creators directly, argued AI companies should view creators' work the same way they view that of Disney, Condé Nast, or Warner Music, aiming to reach agreements with them rather than using their content without permission. He attacked the legal doctrine of "fair use," which allows someone to use copyrighted material without permission or payment depending on the purpose and character of the use, the nature of the original work, how much of the work was used, and whether the use harms the market. AI companies have cited fair use to justify using content to train or contribute to their models without paying. These companies often argue they are using copyrighted content in a "transformative" way rather than regurgitating it verbatim. For Conte, this legal "fair use" loophole is utter quackery. "The AI companies are claiming fair use, but this argument is bogus," Conte said during the conference. "It's bogus because while they claim it's fair to use the work of creators as training data, they do multimillion-dollar deals with rights holders and publishers like Disney, and Condé Nast, and Vox, and Warner Music." Conte pointed out that the large licensing deals these AI companies have reached with intellectual property owners in recent years demonstrate a double standard. While AI companies recognize that some copyrighted content requires permission and agreements, the same doesn't seem to be true for creator-made content.
In the past several years, AI companies like OpenAI have made waves for the deals they have struck with some content owners while staving off lawsuits from others like the New York Times, which in 2023 accused OpenAI of training ChatGPT on millions of its articles without permission. In December, OpenAI, the AI giant led by CEO Sam Altman, struck a deal that saw Disney invest $1 billion in the company and license more than 200 characters to OpenAI so they could be featured in the company's video app, Sora. OpenAI has also signed licensing deals with Condé Nast, which owns The New Yorker, and with Vox Media, which owns New York Magazine. In November, Warner Music Group struck two separate licensing deals with music-focused AI companies Suno and Udio, after settling copyright suits with the companies. Conte mentioned these deals specifically to highlight the hypocrisy AI companies demonstrate when deciding who gets a licensing agreement and who doesn't. Smaller creators, he claims, are being left out. "If it's legal to just use it, why pay?" Conte asked the crowd, according to TechCrunch. "Why pay them and not creators -- not the millions of illustrators and musicians and writers -- whose work has been consumed by these models to build hundreds of billions of dollars of value for these companies?" The AI companies' fair use claims have been called into question several times as AI models have become increasingly popular. The New York Times filed a lawsuit in 2023 claiming OpenAI used millions of its articles without permission and that its large language model ChatGPT was in some cases regurgitating entire Times articles, potentially striking a blow to OpenAI's fair use argument. A date for the trial has not yet been set, but if the Times wins it could be owed billions in damages. More recently, dictionary makers Encyclopaedia Britannica and Merriam-Webster sued OpenAI after it rebuffed the companies' offer of a licensing agreement in 2024.
The publishers claimed in the lawsuit that OpenAI's ChatGPT is cutting into their search traffic and ad revenue by absorbing the content created by their hundreds of human writers and editors. OpenAI rival Anthropic also settled a class action lawsuit brought by a group of authors to the tune of $1.5 billion in September. In that case, the judge ruled that training an AI model on pirated books -- as the authors accused Anthropic of doing -- did not qualify as "fair use," but that training an AI model on purchased books qualified as legal transformative use. Conte said he was not against AI generally and noted that change is inevitable, but he maintained that humans will continue to enjoy human-created content long into the future. "Still, the AI companies should pay creators for our work, not because the tech is bad -- but because a lot of it is good, or it will be soon -- and it's going to be the future. And when we plan for humanity's future, we should plan for society's artists, too, not just for their sake, but for the sake of all of us. Societies that value and incentivize creativity are better for it," he said.
Patreon CEO Jack Conte has publicly challenged AI companies' fair use arguments, calling them bogus as firms like OpenAI strike multimillion-dollar licensing deals with Disney and Warner Music while using content from millions of smaller creators without payment. Speaking at SXSW, Conte demanded AI companies compensate content creators whose work trains models worth hundreds of billions.
Jack Conte, CEO of Patreon, has emerged as a vocal critic of how AI companies approach creator compensation, arguing that fair use claims mask a troubling double standard. Speaking at South by Southwest (SXSW) in Austin, Texas, Conte directly challenged the legal justifications used by OpenAI, Anthropic, and other tech giants for training large language models on content created by millions of artists, writers, and musicians without payment [1][3].
The Patreon founder, who built his platform to help creators earn income from their work, described the AI industry's fair use argument as "bogus" because these same companies are simultaneously signing multimillion-dollar licensing deals with major rights holders. OpenAI has struck agreements with Disney for $1 billion, licensing over 200 characters for its video app Sora, while also securing licensing deals with Condé Nast and Vox Media. Warner Music Group reached separate licensing agreements with AI companies Suno and Udio after settling copyright suits [3].

The contradiction between how AI companies treat their own intellectual property versus that of content creators reveals deeper issues within the industry. While claiming that using copyrighted material for AI training constitutes transformative fair use, these same companies fiercely protect their own proprietary technologies through patents and restrictive terms of service [2]. OpenAI's terms explicitly forbid using ChatGPT output to develop competing models, and Meta has reportedly sent notices demanding deletion of copies of its models despite claiming they are "open" [2].

Former Google CEO Eric Schmidt's leaked 2024 Stanford lecture exposed this mindset directly, advising aspiring entrepreneurs to download whatever AI training data they need and "hire a whole bunch of lawyers to go clean the mess up" if the product succeeds [2]. This approach treats smaller creators differently than major corporations, raising questions about why companies recognize that some copyrighted content requires licensing agreements while treating creator-made content as freely available.
The legal landscape surrounding AI training has become increasingly contentious. The New York Times filed a lawsuit in 2023 claiming OpenAI used millions of its articles without permission, with ChatGPT allegedly regurgitating entire Times articles in some cases. The trial could result in billions in damages if the Times prevails [3]. Encyclopaedia Britannica and Merriam-Webster also sued OpenAI after the company rebuffed their licensing agreement offer in 2024, claiming ChatGPT cuts into their search traffic and ad revenue [3].

Anthropic settled a class action lawsuit by authors for $1.5 billion in September, with a judge ruling that training an AI model on pirated books does not qualify as fair use, though training on purchased books could constitute legal transformative use [3]. By April 2024, 19 lawsuits had been filed against generative-AI companies for copyright infringement [2].
Conte clarified that Patreon's stance on AI is not anti-technology but rather focused on ensuring creators capture value from systems built on their work. With Patreon serving 3 million monthly active users, the platform is positioned to advocate for the millions of illustrators, musicians, and writers whose work has been consumed to build hundreds of billions of dollars of value for large language models (LLMs) [1][3].

"Why pay them and not creators -- not the millions of illustrators and musicians and writers -- whose work has been consumed by these models?" Conte asked, highlighting the selective approach AI companies take when deciding who receives a licensing agreement [1]. While acknowledging that change is inevitable and that LLMs are here to stay, Conte emphasized that societies should plan for a future where artists can still be compensated and creativity remains incentivized. The debate over whether AI companies will be required to compensate content creators or can continue to disregard copyright laws will likely shape the industry's trajectory and determine whether smaller creators receive the same treatment as major media corporations like Disney when it comes to intellectual property protections.