2 Sources
[1]
AI's Napster Moment May Be Next
Two years ago this month, the Writers Guild of America went on strike for, among other key issues, a set of landmark AI protections to safeguard our writing and our finished work. It took six months of sustained solidarity, picketing and an outpouring of public support -- from fellow unions, from politicians and, yes, even from Elon Musk -- before the studios finally met our demands.

Today, writers, journalists and creatives across every medium are facing a new kind of existential threat to their professions from the interests of Silicon Valley. Tech companies have been openly lobbying the White House to rewrite copyright law so they can freely train their models on films, television shows, articles, books and beyond without paying so much as a dime to studios, publishers or writers. OpenAI has been referring to this as "the freedom to learn." In New Jersey, it's just called theft.

Three weeks ago, Shira Perlmutter, the Director of the U.S. Copyright Office, was fired shortly after publishing a new report on Generative AI Training. The report advocated preserving current copyright law and signaled a setback for Sam Altman's craven attempt to expand the Fair Use doctrine into a smash-and-grab campaign on decades of copyrighted material. In other words, tech companies cannot use stolen intellectual property in their models.

In the meantime, dozens of ongoing lawsuits against OpenAI (ChatGPT), Anthropic (Claude), Google (Gemini) and Meta (Llama) are making their way through the courts. Without a sympathetic judge willing to distort fair use, these companies could find themselves liable for billions of dollars in theft, potentially even more in damages, and in some cases face total bankruptcy.

But we've been down this road before with Napster. The company launched as a free file-sharing app in 1999, and almost overnight people around the world could download entire music libraries in seconds. It was a groundbreaking use of technology that, at the time, felt too good to be true.
And that's because it was. Napster was quickly sued out of existence for copyright infringement and racketeering. Since then, every major media tech platform -- YouTube, Spotify and, until recently, even OpenAI -- has been making deals with studios, publishers and labels to compensate them for using their copyrighted work. While many of these agreements have been deeply flawed and exploitative, the firing of Shira Perlmutter represents a pivot to something much worse.

Silicon Valley is now asking the government for permission to steal our copyrighted work in order to escape its pending lawsuits. These companies are knowingly trying to devalue our professions and countless others in order to freely enrich themselves, without caring about the consequences.

In the case of screenwriters, copyright of an original script is sold to the studios in exchange for core benefits like health care, a pension and residuals. This is one of the bedrock principles that has helped sustain writers through the highs and lows of this business. But if copyright protections were stripped away, so would begin the unraveling of the entire value proposition that union members past and present have fought so hard to maintain in the film and television industry.

In April, Sam Altman appeared on stage with Chris Anderson, the head of TED, who pointed out, "At first glance, this (ChatGPT) looks like I.P. theft." The audience erupted in applause until Sam fired back, "Clap about that all you want. Enjoy." And then Sam said something far more revealing: "We do need to figure out some new model around the economics for compensating creative output. Exactly what that looks like I'm not sure."

But the copyright system isn't broken. Companies like OpenAI simply don't want to negotiate consent and pay. So Sam can shrug about "the economics for compensating creative output" all he wants. Enjoy. The solution will always be the same: stop stealing writers' work and follow the law.
[2]
The dirty secret Big Tech doesn't want you to know: AI runs on theft
Artificial intelligence is one of the fastest-growing, most exciting industries in America. Many in the tech industry are confident that as AI continues to improve, it will become increasingly important in our everyday lives. But its growth has come at a cost -- and if we're not careful, AI's expansion could end up crippling other critical sectors of the American economy.

Big Tech's dirty secret is that the success of its AI tools has been built almost entirely on theft. These companies are scraping enormous amounts of copyrighted content, without permission or compensation, to fuel their AI products -- and in the process dangerously undermining content creators' businesses.

Instead of paying for access to copyrighted material -- everything from magazine columns to President Trump's own book "The Art of the Deal" -- most AI companies have made the conscious choice to steal it. They argue that all content, even content registered for protection with the US Copyright Office, should be considered "fair use" when used to build and operate AI models.

To gather the data that powers their large language models, Big Tech companies have consistently bypassed paywalls, ignored websites' directives asking users not to copy material, and worse. Meta, for instance, used the illegal Russia-based pirate site LibGen to copy the contents of at least 7.5 million books to train its Llama AI model -- an egregiously unlawful, copyright-violating workaround.

Relying on questionable sources for AI training poses a variety of serious problems, perhaps even to US national security. Recently, an online data watchdog found that many of the most popular AI chatbots have absorbed millions of articles designed to spread Russian propaganda and outright falsehoods.
Now infected by a Russian disinformation network known as "Pravda," these chatbots, including Grok, ChatGPT and Gemini, mimic Kremlin talking points when asked about certain topics -- and spread false narratives about 33% of the time.

Content creators, meanwhile, face existential problems. In addition to seeing their content stolen for training purposes, publishers are now forced to watch as Big Tech companies make billions using that stolen content in ways that directly compete with publishers' business models. With retrieval-augmented generation, in which an AI model references outside sources before responding to user inquiries, many AI products now give users real-time information pulled directly from recently published news articles. Those same AI companies run ads against that content -- generating revenue that should belong to those who invested in its creation.

A user who gets all the information contained within an article directly through an AI chatbot has almost no reason to click through to the original text -- much less to buy a subscription, if the item is behind a paywall. The data on this is clear: AI chatbots drive referral traffic at a 96% lower rate than traditional Google search, itself an already shrunken relic of the past. For every 1,000 people using an AI chatbot, fewer than four will click through to see the original source of the information they're reading. As AI replaces traditional search for many users, this drop in referral traffic will cut deeply into both subscription and advertising revenue for publishers -- depriving them of the funds they need to produce the content consumers (and AI companies) rely on.

AI companies are lobbying to legitimize this behavior, but Washington should take care. Tilting the scales in Big Tech's favor would undermine centuries of intellectual-property protections that have paid tremendous dividends for the United States, giving us countless advancements -- and a competitive edge on the world stage.
Blessing the theft of American content would instantly erode our country's foundation as an innovation powerhouse.

The news media industry supports a balanced approach. Many publications and journalists, in fact, now use AI to better serve their customers. But it's important to develop AI products responsibly and in coordination with the creators of the content they use, with the long-term sustainability of both AI companies and creators in mind. If AI companies' theft drives creators out of business, everyone ends up worse off.

To protect their work, over a dozen members of the News/Media Alliance recently sued Cohere, Inc., a growing AI company, for unauthorized use of their content. They joined a number of other publishers, including News Corp and The New York Times, that are suing various AI companies to enforce their rights.

Some in Big Tech are clearly beginning to recognize the problem with unfettered content theft. Over the last year, we've seen a rapid proliferation of licensing agreements, in which AI companies pay publishers to use their content. A News/Media Alliance collective is currently licensing content at scale. But without reinforced legal protections against content theft, bad actors will continue to exploit publishers and creators -- undermining America's creative industries to further tech's own commercial interests.
The AI industry faces accusations of copyright infringement as tech giants lobby for changes in copyright law, sparking debates on fair use and the future of content creation.
The artificial intelligence (AI) industry is facing a significant challenge as it grapples with accusations of copyright infringement and debates over fair use. Tech giants are lobbying for changes in copyright law that would allow them to freely train their AI models on copyrighted content without compensation, a move that has sparked intense controversy and legal battles 1.
Companies like OpenAI have been advocating for what they call "the freedom to learn," arguing that AI models should be allowed to train on copyrighted materials without payment or permission. This stance has been met with strong opposition from content creators and copyright holders who view it as theft of intellectual property 1.
The debate has intensified with the firing of Shira Perlmutter, the Director of the U.S. Copyright Office, shortly after publishing a report on Generative AI Training that advocated for preserving current copyright law. This move has been seen as a setback for tech companies seeking to expand the Fair Use doctrine 1.
Meanwhile, numerous lawsuits against major AI companies such as OpenAI, Anthropic, Google, and Meta are making their way through the courts. These legal challenges could potentially result in billions of dollars in liability for the tech giants 1.
Source: New York Post
Some observers draw parallels between the current situation and the Napster controversy of the early 2000s. Like Napster, which disrupted the music industry with its file-sharing technology, AI companies are now challenging established norms in content creation and distribution 1.
Content creators, including writers, journalists, and artists, are expressing concern about the potential devaluation of their work. The Writers Guild of America, which previously secured AI protections in a landmark strike, sees this as a new existential threat to creative professions 1.
Source: The Hollywood Reporter
Critics argue that AI companies are using questionable methods to gather training data. For instance, Meta allegedly used a Russia-based pirate site to copy millions of books for its Llama AI model, raising concerns about copyright violation and potential national security risks 2.
The AI industry's practices are not only raising legal and ethical questions but also threatening the economic models of content creators. AI-powered tools that provide information directly from copyrighted sources without driving traffic to original content could significantly impact publishers' revenue streams 2.
Some tech companies are beginning to recognize the need for a more balanced approach, with a growing number of licensing agreements being established between AI companies and content creators. However, without stronger legal protections, concerns remain about the sustainability of creative industries in the face of AI advancement 2.
As the debate continues, the outcome of ongoing lawsuits and potential changes in copyright law will likely shape the future relationship between AI technology and content creation, with far-reaching implications for both industries.