Curated by THEOUTPOST
On Fri, 14 Mar, 8:05 AM UTC
25 Sources
[1]
OpenAI declares AI race "over" if training on copyrighted works isn't fair use
OpenAI is hoping that Donald Trump's AI Action Plan, due out this July, will settle copyright debates by declaring AI training fair use -- paving the way for AI companies' unfettered access to training data that OpenAI claims is critical to defeat China in the AI race. Currently, courts are mulling whether AI training is fair use, as rights holders say that AI models trained on creative works threaten to replace them in markets and water down humanity's creative output overall. OpenAI is just one AI company fighting with rights holders in several dozen lawsuits, arguing that AI transforms copyrighted works it trains on and alleging that AI outputs aren't substitutes for original works. So far, one landmark ruling favored rights holders, with a judge declaring that AI training is not fair use, as AI outputs clearly threatened to replace Thomson Reuters' legal research platform Westlaw in the market, Wired reported. But OpenAI now appears to be looking to Trump to avoid a similar outcome in its lawsuits, including a major suit brought by The New York Times. "OpenAI's models are trained to not replicate works for consumption by the public. Instead, they learn from the works and extract patterns, linguistic structures, and contextual insights," OpenAI claimed. "This means our AI model training aligns with the core objectives of copyright and the fair use doctrine, using existing works to create something wholly new and different without eroding the commercial value of those existing works." Providing "freedom-focused" recommendations on Trump's plan during a public comment period ending Saturday, OpenAI suggested Thursday that the US should end these court fights by shifting its copyright strategy to promote the AI industry's "freedom to learn." Otherwise, the People's Republic of China (PRC) will likely continue accessing copyrighted data that US companies cannot access, supposedly giving China a leg up "while gaining little in the way of protections for the original IP creators," OpenAI argued. "The federal government can both secure Americans' freedom to learn from AI and avoid forfeiting our AI lead to the PRC by preserving American AI models' ability to learn from copyrighted material," OpenAI said. In its policy recommendations, OpenAI made it clear that it thinks funneling as much data as possible to AI companies -- regardless of rights holders' concerns -- is the only path to global AI leadership. "If the PRC's developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over," OpenAI claimed. "America loses, as does the success of democratic AI. Ultimately, access to more data from the widest possible range of sources will ensure more access to more powerful innovations that deliver even more knowledge."
OpenAI asks Trump for more legal protections
Currently, US-based AI companies are strained, OpenAI suggested, as hundreds of state laws attempt to regulate the entire AI industry. One legislative tracker from MultiState flagged 832 laws introduced in 2025 alone. Some of these laws, OpenAI warned, are modeled after strict European Union laws that OpenAI claimed the federal government should avoid replicating due to alleged limits on innovation. Altogether, the patchwork of laws "could impose burdensome compliance requirements that may hinder our economic competitiveness and undermine our national security" since they will likely be harder to enforce against Chinese companies, OpenAI said.
If Chinese models become more advanced and more widely used by Americans, China could manipulate the models or ignore harms to American users from "illicit and harmful activities such as identity fraud and intellectual property theft," OpenAI alleged. (OpenAI has accused DeepSeek of improperly using OpenAI's data for training.) To prevent the threatened setbacks to US innovation and risks to national security, OpenAI urged Trump to enact a federal law that preempts state laws attempting to regulate AI threats to things like consumer privacy or election integrity, like deepfakes or facial recognition. That federal law, OpenAI suggested, should set up a "voluntary partnership between the federal government and the private sector," where AI companies trade industry knowledge and model access for federal "relief" and "liability protections" from state laws. Additionally, OpenAI wants protections from international laws that it claims risk slowing down America's AI development. The US should be "shaping international policy discussions around copyright and AI and working to prevent less innovative countries from imposing their legal regimes on American AI firms and slowing our rate of progress," OpenAI said. OpenAI suggested that this effort should also include the US government "actively assessing the overall level of data available to American AI firms and determining whether other countries are restricting American companies' access to data and other critical inputs." According to OpenAI, the Trump administration must urgently adopt these recommendations and others regarding rapidly adopting AI in government and methodically building out AI infrastructure, as China's open-sourced advanced AI model DeepSeek "shows that our lead is not wide and is narrowing." "The rapid advances seen with the PRC's DeepSeek, among other recent developments, show that America's lead on frontier AI is far from guaranteed," OpenAI said.
[2]
Google agrees with OpenAI that copyright has no place in AI development
In spite of sky-high costs and little in the way of profits, generative AI systems continue to proliferate. The Trump administration has called for a national AI Action Plan to guide America's burgeoning AI industry, and OpenAI was happy to use that as an opportunity to decry the negative effect of copyright enforcement on AI development. Google has also released its policy proposal, which agrees with OpenAI on copyright while also prompting the government to back the AI industry with funding and policy changes. Like OpenAI, Google has been accused of piping copyrighted data into its models, but content owners are wising up. Google is fighting several lawsuits, and the New York Times' lawsuit against OpenAI could set the precedent that AI developers are liable for using that training data without permission. Google wants to avoid that. It calls for "balanced copyright rules," but its preference doesn't seem all that balanced. The dearth of available training data is a well-known problem in AI development. Google claims that access to public, often copyrighted, data is critical to improving generative AI systems. Google wants to be able to use publicly available data (free or copyrighted) for AI development without going through "unpredictable, imbalanced, and lengthy negotiations." The document claims any use of copyrighted material in AI will not significantly impact rightsholders. According to Google's position, the federal government's investment in AI should also extend to modernizing the nation's energy infrastructure. Google says AI firms need more reliable power to keep training and running inference to advance AI. The company projects global data center power demand will rise by 40 gigawatts from 2024 to 2026. It claims the current US infrastructure and permitting processes are not up to the task of supplying the AI industry. If the government truly supports AI, according to Google, it will also begin implementing these tools at the federal level. Google wants the feds to "lead by example" by adopting AI systems with a multi-vendor approach that focuses on interoperability. It hopes to see the government release data sets for commercial AI training and help fund early-stage AI development and research. It also calls for an increase in public-private partnerships and greater cooperation with federally funded research institutions with initiatives like government-funded competitions and prizes for AI innovation.
[3]
OpenAI calls for US government to codify 'fair use' for AI training | TechCrunch
In a proposal for the U.S. government's "AI Action Plan," the Trump Administration's initiative to reshape American AI policy, OpenAI called for a U.S. copyright strategy that "[preserves] American AI models' ability to learn from copyrighted material." "America has so many AI startups, attracts so much investment, and has made so many research breakthroughs largely because the fair use doctrine promotes AI development," OpenAI wrote. It's not the first time OpenAI, which has trained many of its models on openly available web data, often without the data owners' knowledge or consent, has argued for more permissive laws and regulations around AI training. Last year, OpenAI said in a submission to the U.K.'s House of Lords that limiting AI training to public domain content "might yield an interesting experiment, but would not provide AI systems that meet the needs of today's citizens." The content owners who've sued OpenAI for copyright infringement will no doubt take issue with the company's latest reassertion of this stance.
[4]
Google calls for weakened copyright and export rules in AI policy proposal | TechCrunch
Google, following on the heels of OpenAI, published a policy proposal in response to the Trump Administration's call for a national "AI Action Plan." The tech giant endorsed weak copyright restrictions on AI training, as well as "balanced" export controls that "protect national security while enabling U.S. exports and global business operations." "The U.S. needs to pursue an active international economic policy to advocate for American values and support AI innovation internationally," Google wrote in the document. "For too long, AI policymaking has paid disproportionate attention to the risks, often ignoring the costs that misguided regulation can have on innovation, national competitiveness, and scientific leadership -- a dynamic that is beginning to shift under the new Administration." One of Google's more controversial recommendations pertains to the use of IP-protected material. Google argues that "fair use and text-and-data mining exceptions" are "critical" to AI development and AI-related scientific innovation. Like OpenAI, the company seeks to codify the right for it and rivals to train on publicly available data -- including copyrighted data -- largely without restriction. "These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders," Google wrote, "and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation." Google, which has reportedly trained a number of models on public, copyrighted data, is battling lawsuits with data owners who accuse the company of failing to notify and compensate them before doing so. U.S. courts have yet to decide whether fair use doctrine effectively shields AI developers from IP litigation. In its AI policy proposal, Google also takes issue with certain export controls imposed under the Biden Administration, which it says "may undermine economic competitiveness goals" by "imposing disproportionate burdens on U.S. cloud service providers." That contrasts with statements from Google competitors like Microsoft, which in January said that it was "confident" it could "comply fully" with the rules. Importantly, the export rules, which seek to limit the availability of advanced AI chips in disfavored countries, carve out exemptions for trusted businesses seeking large clusters of chips. Elsewhere in its proposal, Google calls for "long-term, sustained" investments in foundational domestic R&D, pushing back against recent federal efforts to reduce spending and eliminate grant awards. The company said the government should release data sets that might be helpful for commercial AI training, and allocate funding to "early-market R&D" while ensuring computing and models are "widely available" to scientists and institutions. Pointing to the chaotic regulatory environment created by the U.S.' patchwork of state AI laws, Google urged the government to pass federal legislation on AI, including a comprehensive privacy and security framework. Just over two months into 2025, the number of pending AI bills in the U.S. has grown to 781, according to an online tracking tool. Google cautions the U.S. government against imposing what it perceives to be onerous obligations around AI systems, like usage liability obligations. In many cases, Google argues, the developer of a model "has little to no visibility or control" over how a model is being used and thus shouldn't bear responsibility for misuse. 
Historically, Google has opposed laws like California's defeated SB 1047, which laid out clearly what would constitute reasonable precautions an AI developer should take before releasing a model and in which cases developers might be held liable for model-induced harms. "Even in cases where a developer provides a model directly to deployers, deployers will often be best placed to understand the risks of downstream uses, implement effective risk management, and conduct post-market monitoring and logging," Google wrote. Google also called disclosure requirements like those being contemplated by the EU "overly broad," and said the U.S. government should oppose transparency rules that require "divulging trade secrets, allow competitors to duplicate products, or compromise national security by providing a roadmap to adversaries on how to circumvent protections or jailbreak models."
[5]
OpenAI wants to trade gov't access to AI models for fewer regulations
The company pitches its proposal as a way to counter China's AI advances. Google makes a similar argument for weakening copyright restrictions. OpenAI wants the government to review its AI models -- in exchange for a break from state AI regulations. On Thursday, the company released a 15-page policy advisory in response to the Trump administration's request for input, which will inform the administration's forthcoming AI Action Plan. OpenAI offered to voluntarily let the federal government review its models in exchange for being exempted from state-specific regulations. The company positioned its proposal as a way to counter China's AI advances because of how it would allow American companies to speed ahead in AI. "We propose a holistic approach that enables voluntary partnership between the federal government and the private sector," the company said. The Trump administration's forthcoming policy will replace former President Biden's AI executive order and related efforts, which Trump rescinded on his first day in office. The administration has carried out related firings and funding cuts in recent weeks. While AI policy remains unclear at the federal level, individual states have been exploring their own legislation, which OpenAI's advisory called "overly burdensome." Much of this legislation deals with data privacy. Interestingly, Chris Lehane, OpenAI's vice president of global affairs, told Bloomberg that he thinks the US AI Safety Institute -- created under Biden's executive order -- could be the liaison between the government and private sector. This proposal, if formalized, would change the current course of the Institute, which has been a rumored target for layoffs and funding cuts under Trump in recent weeks. The Biden-appointed head of the Institute, Elizabeth Kelly, stepped down shortly after Trump took office. Lehane's comments echo voluntary agreements the Biden administration had previously brokered with AI companies and indicate renewed interest from the private sector in being regulated by the Institute (or an equivalent at the federal level). It's unclear how that would fit into the Trump administration's efforts to deregulate AI across the board. OpenAI also proposed "digitizing government data currently in analog form" in order to make it "machine-readable." The company said this "could help American AI developers of all sizes, especially those working in fields where vital data is often government-held." "In exchange, developers using this data could work with governments to unlock new insights that help develop better public policies. For example, government agencies can build on the work of the US National Archives and Records Administration in using Optical Character Recognition for text searchability and AI-driven metadata tagging," the proposal continued. The proposal also called for changes to US copyright law to "avoid forfeiting our AI lead to the PRC [People's Republic of China] by preserving American AI models' ability to learn from copyrighted material." OpenAI has been sued in many instances over copyright infringement, most notably by publishers like The New York Times, authors, and artists. On the same day OpenAI made its request, Google published a similar set of requests for lessening copyright law.
"For too long, AI policymaking has paid disproportionate attention to the risks, often ignoring the costs that misguided regulation can have on innovation, national competitiveness, and scientific leadership -- a dynamic that is beginning to shift under the new Administration," Google stated. This sentiment is consistent with several tonal and literal changes major AI companies have made away from safety concerns and regulations in recent months. Both policy proposals follow recent expanded partnerships between AI companies and the US government, including an agreement allowing the National Labs to test frontier models, as well as Project Stargate.
[6]
OpenAI and Google ask the government to let them train AI on content they don't own
OpenAI and Google are pushing the US government to allow their AI models to train on copyrighted material. Both companies outlined their stances in proposals published this week, with OpenAI arguing that applying fair use protections to AI "is a matter of national security." The proposals come in response to a request from the White House, which asked governments, industry groups, private sector organizations, and others for input on President Donald Trump's "AI Action Plan." The initiative is supposed to "enhance America's position as an AI powerhouse," while preventing "burdensome requirements" from impacting innovation. In its comment, OpenAI claims that allowing AI companies to access copyrighted content would help the US "avoid forfeiting" its lead in AI to China, while calling out the rise of DeepSeek. "There's little doubt that the PRC's [People's Republic of China] AI developers will enjoy unfettered access to data -- including copyrighted data -- that will improve their models," OpenAI writes. "If the PRC's developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over." Google, unsurprisingly, agrees. The company's response similarly states that copyright, privacy, and patents policies "can impede appropriate access to data necessary for training leading models." It adds that fair use policies, along with text and data mining exceptions, have been "critical" to training AI on publicly available data. "These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation," Google says. Anthropic, the AI company behind the AI chatbot Claude, also submitted a proposal - but it doesn't mention anything about copyright. Instead, it asks the US government to develop a system to assess an AI model's national security risks and to strengthen export controls on AI chips. Like Google and OpenAI, Anthropic also suggests that the US bolster its energy infrastructure to support the growth of AI. Many AI companies have been accused of ripping copyrighted content to train their AI models. OpenAI currently faces several lawsuits from news outlets, including The New York Times, and has even been sued by well-known names like Sarah Silverman and George R.R. Martin. Apple, Anthropic, and Nvidia have also been accused of scraping YouTube subtitles to train AI, which YouTube has said violates its terms.
[7]
OpenAI wants all the data and for US law to apply everywhere
The rest of the world doesn't think 'fair use' is fair, but we should make 'em
OpenAI wants the US government to ensure it has access to any data it wants to train GenAI models, and to stop foreign countries from trying to enforce copyright rules against it and other American AI firms. The ChatGPT developer submitted an open letter full of proposals to the White House Office of Science and Technology Policy (OSTP) regarding the Trump administration's AI Action Plan, currently under development. It outlines the company's views on how the White House can support the US AI industry. This includes putting in place a regulatory regime - but one that "ensures the freedom to innovate," of course; an export strategy to let America exert control over its allies while locking out enemies like China; and adopting measures to drive growth, including for federal agencies to "set an example" on adoption. The suggestions regarding copyright display a certain amount of hubris. It talks up the "longstanding fair use doctrine" of American copyright law, and claims this is "even more critical to continued American leadership on AI in the wake of recent events in the PRC," presumably referring to the interest generated by China's DeepSeek earlier this year. America has so many AI startups because the fair use doctrine promotes AI development, OpenAI says, while "rigid copyright rules are repressing innovation and investment" in other markets, singling out the European Union for allowing "opt-outs" for rights holders. The company previously claimed it would be "impossible" to build top-tier AI models that meet today's needs without using people's copyrighted work. It proposes that the US government "take steps to ensure that our copyright system continues to support American AI leadership," and that it shapes international policy discussions around copyright and AI, "to prevent less innovative countries from imposing their legal regimes on American AI firms and slowing our rate of progress." Not content with that, OpenAI wants the US government to actively assess the level of data available to American AI firms and "determine whether other countries are restricting American companies' access to data and other critical inputs." Dr Ilia Kolochenko, CEO at ImmuniWeb and an Adjunct Professor of Cybersecurity at Capitol Technology University in Maryland, expressed concern over OpenAI's proposals. "Arguably, the most problematic issue with the proposal - legally, practically, and socially speaking - is copyright," Kolochenko told The Register. "Paying a truly fair fee to all authors - whose copyrighted content has already been or will be used to train powerful LLM models that are eventually aimed at competing with those authors - will probably be economically unviable," he claimed, as AI vendors "will never make profits." Advocating for a special regime or copyright exception for AI technologies is a slippery slope, he argues, adding that US lawmakers should regard OpenAI's proposals with a high degree of caution, mindful of the long-lasting consequences they may have on the American economy and legal system. OpenAI also proposes maintaining the three-tiered AI diffusion rule framework, but with some alterations to encourage other nations to commit "to deploy AI in line with democratic principles set out by the US government." The stated aim of this strategy is "to encourage global adoption of democratic AI principles, promoting the use of democratic AI systems while protecting US advantage."
OpenAI talks of expanding market share in Tier I countries (US allies) through the use of "American commercial diplomacy policy," banning the use of China-made equipment (think Huawei) and so on. The company also proposes "AI Economic Zones" to be created in America by local, state, and federal governments together with industry, which sounds similar to the UK government's "AI Growth Zones." These would be intended to "speed up the permitting for building AI infrastructure like new solar arrays, wind farms, and nuclear reactors," and would allow exclusions from the National Environmental Policy Act, which requires federal agencies to evaluate the environmental impacts of their actions. Finally, OpenAI proposes that federal agencies should "lead by example" on AI adoption. Uptake in federal departments and agencies remains "unacceptably low," the company says, and it wants to see the "removal of known blockers to the adoption of AI tools, including outdated and lengthy accreditation processes, restrictive testing authorities, and inflexible procurement pathways."
[8]
OpenAI and Google ask for a government exemption to train their AI models on copyrighted material
They also advocate for broad government adoption of AI tools. OpenAI is calling on the Trump administration to give AI companies an exemption to train their models on copyrighted material. In a blog post spotted by The Verge, the company this week published its response to President Trump's AI Action Plan. Announced at the end of February, the initiative saw the White House seek input from private industry, with the goal of eventually enacting policy that will work to "enhance America's position as an AI powerhouse" and enable innovation in the sector. "America's robust, balanced intellectual property system has long been key to our global leadership on innovation. We propose a copyright strategy that would extend the system's role into the Intelligence Age by protecting the rights and interests of content creators while also protecting America's AI leadership and national security," OpenAI writes in its submission. "The federal government can both secure Americans' freedom to learn from AI, and avoid forfeiting our AI lead to the [People's Republic of China] by preserving American AI models' ability to learn from copyrighted material." In the same document, the company recommends the US maintain tight export controls on AI chips to China. It also says the US government should broadly adopt AI tools. Incidentally, OpenAI began offering a version of ChatGPT designed for US government use earlier this year. This week, Google also published its own list of recommendations for the president's AI Action Plan. Like OpenAI, the search giant says it should be able to train AI models on copyrighted material. "Balanced copyright rules, such as fair use and text-and-data mining exceptions, have been critical to enabling AI systems to learn from prior knowledge and publicly available data, unlocking scientific and social advances," Google writes. "These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation." Last year, OpenAI said it would be "impossible to train today's leading AI models without using copyrighted materials." The company currently faces numerous lawsuits accusing it of copyright infringement, including ones involving The New York Times and a group of authors led by George R.R. Martin and Jonathan Franzen. At the same time, the company recently accused Chinese AI startups of trying to copy its technologies.
[9]
OpenAI seeks Trump administration's help in AI fair use debate
The big picture: With at least one court ruling that AI training does not qualify as fair use, OpenAI is looking to the Trump administration's upcoming AI Action Plan to help resolve ongoing copyright disputes. Due out in July, the plan could potentially classify AI training as fair use, granting AI companies unrestricted access to critical training data. OpenAI argues that such a move is essential for the US to maintain its competitive edge in the AI race against China. US courts are currently grappling with whether AI training qualifies as fair use, with rights holders arguing that AI models pose a market threat by potentially replacing them and diluting creative output. OpenAI and several other AI companies are embroiled in multiple lawsuits, contending that AI transforms copyrighted works rather than serving as a direct substitute. However, a landmark ruling has already favored rights holders, with a judge determining that AI training does not constitute fair use, citing its direct threat to Thomson Reuters' legal research platform Westlaw. The debate is also unfolding internationally, as countries seek to balance copyright protections with the growing demand for AI training. OpenAI asserts that its models do not replicate copyrighted works for public consumption but instead extract patterns and insights to generate new and distinct content. The company argues that this approach aligns with the fundamental principles of copyright and fair use doctrine. "OpenAI's models are trained to not replicate works for consumption by the public. Instead, they learn from the works and extract patterns, linguistic structures, and contextual insights," OpenAI explained. "This means our AI model training aligns with the core objectives of copyright and the fair use doctrine, using existing works to create something wholly new and different without eroding the commercial value of those existing works." During a public comment period, OpenAI urged the US to shift its copyright strategy to promote the AI industry's "freedom to learn," warning that restricting American companies from accessing copyrighted data - while Chinese firms face no such limitations - could cost the US its AI leadership. "The federal government can both secure Americans' freedom to learn from AI and avoid forfeiting our AI lead to the PRC by preserving American AI models' ability to learn from copyrighted material," OpenAI stated. The company also called for legal protections for AI firms, citing the strain caused by a patchwork of state regulations. In 2025 alone, legislative tracker MultiState has flagged 832 AI-related bills. OpenAI warned that mirroring the European Union's strict regulatory approach could stifle innovation, impose burdensome compliance costs, and weaken economic competitiveness and national security. Instead, it proposed a federal law that preempts state regulations, offering a voluntary public-private partnership where AI companies share industry knowledge in exchange for liability protections. OpenAI further urged the US to lead global discussions on copyright and AI to prevent less innovative countries from imposing restrictive legal frameworks on American firms. This includes assessing data availability and ensuring US companies retain access to critical training inputs. With China's rapid AI advancements, such as the open-sourced DeepSeek model, OpenAI cautioned that the US lead in AI is narrowing and requires urgent action to maintain.
Additionally, the company stressed that national security depends on unfettered access to AI training data and called for a balanced approach that fosters innovation while safeguarding intellectual property rights. As the US Copyright Office prepares to release further guidance on AI training in its upcoming report, the stakes are high for both AI firms and copyright holders.
[10]
OpenAI Calls on U.S. Government to Let It Freely Use Copyrighted Material for AI Training
OpenAI, known for its ChatGPT chatbot, today submitted AI recommendations to the Trump administration, calling for deregulation and policies that give AI companies free rein to train models on copyrighted material in order to compete with China on AI development. AI companies cannot freely innovate while having to comply with "overly burdensome state laws," according to OpenAI. The company claims that laws regulating AI are "easier to enforce" with domestic companies, imposing compliance requirements that "weaken the quality and level of training data available to American entrepreneurs." OpenAI suggests that the government provide "private sector relief" from 781+ AI-related bills introduced in various states. OpenAI outlines a "copyright strategy" that would preserve "American AI models' ability to learn from copyrighted material." OpenAI argues that AI models should be free to train on copyrighted data, because they are "trained not to replicate works for consumption by the public" and thus align with the fair use doctrine. With its AI copyright laws, OpenAI says that the European Union has repressed AI innovation and investment. OpenAI claims that if AI models are not provided with fair use access to copyrighted data, the "race for AI is effectively over" and "America loses." OpenAI asks that the government prevent "less innovative countries" from "imposing their legal regimes on American AI firms." For AI data sharing, OpenAI suggests a tiered system that would see AI tech shared with countries that follow "democratic AI principles," while blocking access to China and limiting access to countries that might leak data to China. The company also suggests government investment in utilizing AI technology and building out AI infrastructure. The use of copyrighted material for AI training has angered artists, journalists, writers, and other creatives who have had their work absorbed by AI. The New York Times, for example, has sued Microsoft and OpenAI for training AI models on news articles. Many AI tools assimilate and summarize content from news sites, driving users away from primary sources and oftentimes providing incorrect information. Image generation engines like DALL-E and Midjourney have been trained on hundreds of millions of images scraped from the internet, leading to lawsuits. OpenAI has submitted its proposals to the Office of Science and Technology Policy for consideration during the development of a new AI Action Plan that is meant to "make people more productive, more prosperous, and more free." The full text is available on OpenAI's website.
[11]
OpenAI Warns US to Let It Train on Copyrighted Material or China Wins AI Race
OpenAI has urged the US government to give the company unrestricted access to copyrighted material to train its AI models and points to China as the reason why it should escape copyright laws. OpenAI is asking the US government to make it easier for AI companies to learn from copyrighted material, citing a need to "strengthen America's lead" globally in advancing the technology. OpenAI, the start-up behind ChatGPT and DALL-E, submitted the suggestions to the U.S. government on Thursday as part of President Donald Trump's upcoming "AI Action Plan." President Trump ordered his administration advisors to formulate such a plan earlier this year and has asked for input from the private sector, government, and academia in the U.S. In a 15-page letter to the US government on Thursday, OpenAI urged the federal government to enact a series of "freedom-focused" policy ideas that would allow AI companies to train their models on copyrighted material -- including an approach that would no longer compel American AI developers to "comply with overly burdensome" state-level AI bills in the U.S. OpenAI says that if it is not able to train its models on copyrighted material, China will take the lead in the AI race. OpenAI described DeepSeek's latest model, R1, as a "noteworthy" advancement, highlighting China's expanding AI ambitions and the increasing competition between the two countries. "While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and is narrowing," Chris Lehane, OpenAI's vice president of global affairs, writes in a letter to the Office of Science and Technology Policy. OpenAI did not specify how it would achieve this but stated that promoting "fair use" policies and reducing intellectual property restrictions could help "[protect] the rights and interests of content creators while also protecting America's AI leadership and national security." The use of copyrighted material in AI training remains highly controversial, as many companies continue to train models on human-created content without consent or compensation.
[12]
AI firms say copyright limits hurt U.S. security
Driving the news: OpenAI and Google argue in their filings that being able to use copyrighted material is a matter of national security, saying that if they can't train on this material, Chinese companies will have an unfair advantage. Yes, but: Fair use permits limited use of copyrighted material without permission, but its role in AI training is at the center of the legal disputes. The other side: Groups representing actors, filmmakers and publishers (among other creative professionals) used their filings, public statements and editorials to reject those arguments. Zoom in: In its filing, startup Vermilio makes its own case for maintaining copyright protection. The company, which focuses on monetizing protected IP, uses OpenAI's ChatGPT to help bolster its case. The big picture: Publishers, writers, artists and others have filed suit against OpenAI, Microsoft, Google and other companies arguing their training and operation of generative AI systems violates intellectual property law. My thought bubble: There are options beyond not having access to copyrighted material or having totally free rein.
[13]
Why OpenAI's copyright plan will impact you more than you think
OpenAI is inconsistent in a lot of things -- is it a non-profit or a for-profit? Is Sam Altman fit to be CEO or not? But one thing the company has always been consistent about is its belief that it requires access to copyrighted material for AI training. Now, despite the many voices that disagree, OpenAI wants the U.S. government to approve such unrestricted access by ruling it as "fair use." The company argues that the U.S. will fall behind China in the AI race if companies don't have the freedom to train their models on copyrighted material -- claiming that "overly burdensome state laws" will slow the process and affect results. If you're a creator, this could impact you, too. Artists, writers, programmers, photographers, and filmmakers with online portfolios, for example, all own their work but if this plan goes through, you'll have no grounds to complain when your content is used to train AI. Even more physical creative pursuits like fashion design, jewelry-making, or sculpting aren't safe if you post photos of your work online. It seems like a cruel joke that OpenAI wants AI training to count as "fair" use of copyrighted work when the products it develops will be used to generate new mangled versions of personal creations. A particularly direct example of this happened just last month when the French cast of Apex Legends was reportedly asked to participate in training an AI model that would eventually be used to generate voice lines for the game. There are a lot of commercial uses for various kinds of creative content but the better AI models get at mimicking it, the harder it will be to make money as a creator. Companies have quite the track record of choosing the least expensive option in any situation, and there's little reason to believe this will change any time soon. It's hard to imagine what the solution will be for creators in this situation. Individuals who really care about protecting their work might start password-protecting their online portfolios, sacrificing just a few old examples to the training sets, and only sharing the rest upon human request. There would definitely be demand for a proper solution too -- some kind of new portfolio or creative sharing platform that only humans can access. It would need to have a pretty hardcore authentication process but there are definitely people out there who care enough about this to sacrifice some convenience. The White House hasn't responded to OpenAI's plan yet, so we'll have to wait and see how this develops.
[14]
OpenAI Says It's "Over" If It Can't Steal All Your Copyrighted Work
OpenAI says the US will lose the AI race if it's unable to scrape copyrighted materials -- and its favorite bogeyman, China, will take the crown instead. As Ars Technica reports, the Sam Altman-led company is begging President Donald Trump to instate federal regulations defining "fair use," the thorny standard at the heart of the copyright lawsuits lobbed against OpenAI by The New York Times and other companies. This policy proposal to the White House's Office of Science and Technology Policy comes amid increasing momentum on the state level to regulate AI. In the lengthy document the president probably will not read himself, OpenAI said that it could not compete with China -- which the company insists on calling the People's Republic of China, or PRC for short, throughout -- if regulations stymie AI access to copyrighted works for training data. "If the PRC's developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over," the company wrote in its policy proposal. "America loses, as does the success of democratic AI." Though OpenAI insists that using copyrighted materials will help it "ensure more access to more powerful innovations that deliver even more knowledge," publishers chagrined at their work being fed into AI training data disagree -- especially when those models spit out straight-up plagiarized outputs. In the same statement, the AI giant twisted the long-established "fair use doctrine," a legal framework allowing limited use of copyrighted materials without prior permission for quotations in articles and other normal, non-infringing usages. OpenAI also suggested that somehow, being unable to scrape copyrighted works is a matter of "national security." "Applying the fair use doctrine to AI is not only a matter of American competitiveness -- it's a matter of national security," the company insisted in its proposal. "The rapid advances seen with the PRC's DeepSeek, among other recent developments, show that America's lead on frontier AI is far from guaranteed." Ironically, OpenAI has, as Ars notes, accused DeepSeek of improperly using its data without permission -- a point the company failed to bring up in its policy proposal, probably because it feels embarrassingly hypocritical. "The federal government can both secure Americans' freedom to learn from AI, and avoid forfeiting our AI lead to the PRC by preserving American AI models' ability to learn from copyrighted material," OpenAI insisted. As of now, it's unclear whether Trump will fall for OpenAI's gambit. If Altman and his ilk butter him up enough, however, there's a good chance he might side with them -- at the expense of copyright holders the world over.
[15]
Google Joins OpenAI in Asking Trump to Relax AI Copyright Restrictions - Decrypt
Google has joined OpenAI in urging President Donald Trump's administration to ease restrictions on AI training, particularly around the use of copyrighted materials. Both tech giants submitted policy proposals on Thursday, advocating for a more flexible approach to artificial intelligence regulations as the U.S. government prepares its "AI Action Plan" by mid-2025. The proposals submitted by Google and OpenAI are in response to President Trump's new focus on artificial intelligence regulation, following his revocation of the previous administration's AI executive order in January 2025. OpenAI and Google argue that loosening intellectual property barriers is essential to fostering innovation and maintaining U.S. leadership in AI development. "Fair use and text-and-data mining exceptions" are critical for continued AI research, Google said, noting how strict copyright policies could hinder progress in artificial intelligence fields, including healthcare, scientific discovery, and economic development. In its proposal, Google welcomed the Trump administration's focus on artificial intelligence, saying the goal of developing a plan to "sustain and enhance America's global AI dominance" is crucial for U.S. leadership. In January, Trump unveiled a $500 billion Stargate Project, aimed at strengthening U.S. AI infrastructure, which includes OpenAI among major contributors, alongside other tech giants like Microsoft and Oracle. The call for relaxed copyright laws comes as both Google and OpenAI face numerous lawsuits over the use of copyrighted material in training their artificial intelligence models. OpenAI, for example, is currently embroiled in several high-profile lawsuits filed by authors and publishers accusing the company of using their copyrighted works without permission. Authors like Sarah Silverman and George R. R. Martin, along with other well-known figures, have joined forces to challenge OpenAI's use of their writings in training models like ChatGPT. Similarly, Google has faced accusations of using copyrighted content to train its AI models, including its AI-powered tools like YouTube's music recommendation system, which was shelved due to copyright concerns. Apart from copyright issues, both companies raised concerns about the fragmented state-level regulations that currently govern AI in the U.S. With more than 780 AI-related bills being considered at the state level, Google warned that the lack of a unified federal approach could create compliance chaos and stifle innovation. Google has called for a cohesive federal policy that sets a clear framework for artificial intelligence development, so that companies can operate across state lines without facing conflicting regulations. "While America currently leads the world in AI -- and is home to the most capable and widely adopted artificial intelligence models and tools -- our lead is not assured," Google warned, quoting Vice President Vance's remarks at the Artificial Intelligence Action Summit in Paris, France. Trump's new executive order prioritizes maintaining U.S. global dominance in artificial intelligence and mandates that a comprehensive "AI Action Plan" be presented to the president within 180 days.
[16]
If you don't let us scrape copyrighted content, we will lose out to China says OpenAI as it tries to influence US government
The company also tries to frame its scraping as covered by the 'fair use doctrine'. The Trump administration is still asking for public comment on its AI Action Plan. And, wouldn't you know it? OpenAI has more than a few thoughts it would like to share with the US government. Namely, it would quite like its AI products to continue to be allowed to scrape copyrighted material, please and thank you. Ahead of the March 15 deadline, OpenAI set out a number of proposals for the US government, which the company also shared in summary on its public blog. The point that stands out to me is titled "A copyright strategy that promotes the freedom to learn," which encourages the US government to "avoid forfeiting our AI lead to the [People's Republic of China] by preserving American AI models' ability to learn from copyrighted material." OpenAI, particularly ChatGPT, is no stranger to gobbling up copyrighted material as training data, with the company arguing last year there's just no way around it. The submitted proposal argues that OpenAI's models are not fully replicating copyrighted material for public consumption but are instead learning "patterns, linguistic structures, and contextual insights" from the works. OpenAI makes the case that, therefore, its "AI model training aligns with the core objectives of copyright and the fair use doctrine, using existing works to create something wholly new and different without eroding the commercial value of those existing works." OpenAI's proposal also broadly casts a dim view on AI legislation currently being discussed outside of the US. For example, OpenAI's proposal criticises the EU and UK's opt-out provisions for copyright holders, claiming, "Access to important AI inputs is less predictable and likely to become more difficult as the EU's regulations take shape. Unpredictable availability of inputs hinders AI innovation, particularly for smaller, newer entrants with limited budgets." I'm personally not buying what OpenAI is selling here; the company's 'fair use' argument largely sidesteps the point that, to build its AI models, copyrighted material has still been taken without the copyright holder's permission, and OpenAI has profited off of using copyrighted material as training data. This also isn't some plucky young creator repurposing big IP to create a genuinely transformative work; this is a multi-billion-dollar company hoovering up the work of creatives both big and small to fuel a 'yes, and' machine that is neither funny nor smart -- and don't even get me started on the currently in-development 'creative writing' model churning out purple prose. The proposal goes on to claim, "If the PRC's developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over. America loses, as does the success of democratic AI. Ultimately, access to more data from the widest possible range of sources will ensure more access to more powerful innovations that deliver even more knowledge." In the wake of DeepSeek going open-source, OpenAI is evidently feeling the pressure. Despite being developed at a fraction of the cost, the China-based AI model's performance is comparable to OpenAI's own ChatGPT -- so much so that there were suspicions that DeepSeek may have copied the homework of OpenAI's models. The Trump administration will likely be interested in a number of OpenAI's proposals, given that the current government is decidedly all-in on AI.
Besides nixing the Biden presidency's Executive Order 14110, which aimed to put some safety guardrails around the development of AI, there's the 'Stargate' AI infrastructure project. In a bid to support this AI vision with homegrown silicon, there was also that announcement of an eye-watering $100 billion investment to bring TSMC's operations stateside, though that's still under review by the Taiwanese government. Still, even without TSMC's most advanced tech, AI looks like it will have more than a toehold in the US.
[17]
OpenAI urges U.S. to allow AI models to train on copyrighted material
OpenAI is asking the U.S. government to make it easier for AI companies to learn from copyrighted material, citing a need to "strengthen America's lead" globally in advancing the technology. The proposal is part of a wider plan that the tech company behind ChatGPT submitted to the U.S. government on Thursday as part of President Donald Trump's coming "AI Action Plan." The administration solicited input from interested parties across the private sector, government and academia, framing the future policy as a shift that would "prevent unnecessarily burdensome requirements from hindering private sector innovation." In its proposal, OpenAI urged the federal government to enact a series of "freedom-focused" policy ideas, including an approach that would no longer compel American AI developers to "comply with overly burdensome state laws." Copyright in particular is an issue that has plagued AI developers, as many continue to train their models on human work without informing the original creators, obtaining consent or providing compensation. OpenAI has been sued by several news outlets including the Center for Investigative Reporting, The New York Times, the Chicago Tribune and the New York Daily News over claims of copyright infringement. Several authors and visual artists have also taken legal action against the company over unauthorized use of their copyrighted content. Still, OpenAI said it believes its strategy -- the encouragement of "fair use" policies and fewer intellectual property restrictions -- could "[protect] the rights and interests of content creators while also protecting America's AI leadership and national security." It did not elaborate on the former. Many leaders in the AI industry and members of the Trump administration have framed America's dominance in AI advancements as a matter of national security, comparing it to a high-stakes arms race. "The federal government can both secure Americans' freedom to learn from AI, and avoid forfeiting our AI lead to the PRC by preserving American AI models' ability to learn from copyrighted material," OpenAI's proposal states, using an abbreviation for China's formal name, the People's Republic of China. Shortly after he took office, Trump issued an executive order that revoked former President Joe Biden's policies on AI, stating the United States' previous directives acted "as barriers to American AI innovation." Biden's "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" executive order, issued in October 2023, stated that "irresponsible use [of AI] could exacerbate societal harms," including threats to national security. His first week in office, Trump also announced Stargate, a massive AI infrastructure investment venture unveiled at the White House in partnership with OpenAI, Oracle and SoftBank. Executives of those companies pledged to invest an initial $100 billion and up to $500 billion over the next four years in the project, which will be set up as a separate company. OpenAI called for more investment into the technology in its proposal, writing, "Sustaining America's lead on AI means building the necessary infrastructure to compete with the PRC and its commandeered resources."
This investment in AI infrastructure, it wrote, would create jobs, boost local economies, modernize the country's energy grid and prepare "an AI-ready workforce." Executives at OpenAI told reporters last month that as part of the ambitious project, the company is considering constructing new data center campuses in 16 states, CNBC reported. OpenAI also encouraged the government to focus on exporting American "democratic AI" to promote the adoption of U.S. technology abroad. OpenAI says this would start with adopting AI tools within the U.S. government as well. (The company previously launched ChatGPT Gov in January, a version of ChatGPT built specifically for government use.) The proposal directly points to DeepSeek R1 -- the AI model recently released by a small Chinese lab that temporarily took ChatGPT's No. 1 spot in the Apple App Store, became the talk of Silicon Valley and caused tech stocks to crash -- as a threat to the United States' global leadership on AI. "While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and is narrowing," the company said.
[18]
US AI Action Plan: Do OpenAI and Google want more for less?
OpenAI is proposing advancing US AI through looser regulations while placing higher restrictions on China. Late last month, the US government launched a public consultation, inviting policy ideas for its new Artificial Intelligence (AI) Action Plan. The tech strategy originates from President Donald Trump's AI executive order revoking former President Joe Biden's 2023 order, which aimed to create safeguards around the technology's development. With just days before the deadline for the consultation comes to a close, OpenAI and Google have both shared their proposals for the new AI policy, and both are pointing in the same direction - fewer regulations.
Looser reins on copyright
According to Google, copyright and privacy laws can "impede appropriate access to data", which it deems necessary for training leading AI models. "Balanced" copyright rules such as fair use and text and data mining exceptions "have been critical to enabling AI systems to learn from prior knowledge", the company says. It has also asked the US government to create a federal privacy regulatory framework to differentiate between publicly available data, which it can use, as opposed to personally identifying data. Meanwhile, OpenAI wants a copyright strategy that "promotes the freedom to learn". "We propose a copyright strategy that would extend the system's role into the 'Intelligence Age' by protecting the rights and interests of content creators while also protecting America's AI leadership and national security," it says. OpenAI says that its models are trained not to replicate work, but rather to extract patterns, linguistic structures and contextual insights. "This means our AI model training aligns with the core objectives of copyright and the fair use doctrine," it argues. Moreover, OpenAI says that the European Union's 'text and data mining exceptions' for AI training, as well as the ability for a copyright holder to opt out of providing access for training, will hinder AI innovation. "Access to important AI inputs is less predictable and likely to become more difficult as the EU's regulations take shape," it adds. Both companies are, however, facing a number of copyright-related lawsuits. Late last month, US edtech company Chegg alleged in a lawsuit that Google's AI Overviews severely hurt its online traffic, "materially impacting" the company's revenue and employees. Meanwhile, in a legal battle launched by The New York Times against OpenAI in 2023, the publisher claims that AI models such as ChatGPT have copied and used millions of its copyrighted news articles, in-depth investigations and other journalistic work.
More investments, more energy
OpenAI proposes that the Trump administration "ensure that sufficient capital flows into building AI infrastructure in the US". In its proposal, the start-up wants the government to consider investment vehicles like a sovereign wealth fund, government guarantees to adopt the technology, as well as tax credits and loans to provide AI companies with "credit enhancement". Last month, Trump signed an executive order directing the US treasury and commerce departments to create a US sovereign wealth fund. OpenAI is already benefiting from a Trump-proposed $500bn joint venture from private investors to develop its infrastructure over the next four years. Meanwhile, Google is asking for investments into AI to be "significantly" bolstered, "with a focus on speeding funding allocations".
"Lowering barriers to entry will ensure that the American research community remains keenly focused on innovation rather than struggling with resource acquisition," it says. "For too long, AI policymaking has paid disproportionate attention to the risks, often ignoring the costs that misguided regulation can have on innovation, national competitiveness, and scientific leadership." In addition, both companies claim that a lack of new energy supplies is a key constraint to expanding AI infrastructure. Google says that the current US energy infrastructure and permitting processes "appear inadequate". While OpenAI proposes a new legislation which would expand transmission, fibre connectivity and natural gas pipeline construction. The start-up wants the government to streamline the planning, permission and payment processes to "significantly speed up infrastructure projects". OpenAI is pitting itself as anti-China The launch of DeepSeek AI's reasoning model R1 shook the global AI community, further ramping up the AI race while pitting China as a firm competitor against the US. In its proposal, OpenAI wants the US to loosen up its laws and regulations and ramp up investments into AI, with a dual goal of maintaining American leadership in the tech while thwarting China from achieving success. "As America's world-leading AI sector approaches artificial general intelligence, with a Chinese Communist Party (CCP) determined to overtake us by 2030, the Trump administration's new AI Action Plan can ensure that American-led AI built on democratic principles continues to prevail over CCP-built autocratic, authoritarian AI." It is proposing that the US consider a tiered framework to differentiate between countries that will commit to America's "democratic AI principles" and China along with a "small cohort of countries" which would be prohibited from accessing US AI systems. In the proposal, OpenAI also wants the Trump administration to "coordinate global bans on CCP-aligned AI infrastructure, including Huawei chips". The start-up calls DeepSeek "state-subsidised (by China) and state-controlled, and argues that building on top of DeepSeek models in US critical infrastructure brings "significant risk". While Google wants the US to look into patents which were "granted in error". The company says that China's overall US patent grants grew by more than 30pc just last year - more than any other country. Citing a 2013 study, it argues that the US Patent and Trademark Office could have a nearly 40pc error rate when it comes to approving software-related technologies. The AI Action Plan is expected to be drafted by the US Office of Science and Technology Policy and submitted to president Trump by July. Don't miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic's digest of need-to-know sci-tech news.
[19]
Big Tech's AI pitch seeks license to steal | Opinion
OpenAI and Google, having long trained their ravenous bots on the work of newsrooms like ours at the Lake County Record-Bee, now want to throw out long-established copyright law by arguing, we kid you not, that the only way for the United States to defeat the Chinese Communist Party is for those tech giants to steal the content created with the sweat equity of America's human journalists. "With a Chinese Communist Party determined to overtake us by 2030," OpenAI wrote Thursday to the federal Office of Science and Technology Policy, "the Trump administration's new action plan can ensure that American-led A.I. built on democratic principles can prevail over CCP-built, autocratic, authoritarian AI." Built on democratic principles? More like built on outright theft. That's why news organizations, including eight of our sister newspapers and The New York Times, have sued OpenAI and its partner Microsoft for breaking copyright law by vacuuming up millions of newspaper articles without permission or payment - copyright infringement on a colossal scale. Now OpenAI comes back with the absurd argument that this was somehow necessary for national security. In their letter, Sam Altman's crew added a whole lot of obfuscating, self-serving blather about "scaling human ingenuity" and "freedom of learning and knowledge" while describing the innovations of ChatGPT as part of some great and glorious trajectory from domesticated horses to steam power to electricity to printing presses and the internet. You see the irony there? Printing presses. For generations, those presses sent out the work of America's reporters, the fruits of capital invested, and hard labor performed, in city halls and crime scenes and throughout all the communities they served. They amplified and distributed a news organization's work, as the internet does now. They didn't steal the work of someone else and then pass it off as their own. Gutting generations of copyright protections for the benefit of AI bots would have a chilling effect not just on news organizations but on all creative content creators, from novelists to playwrights to poets. That ironclad commitment to protecting the rights of owners of work they themselves created is precisely what distinguishes the United States from communist China, not the reverse. This country has dominated the world of news and information by respecting not just the precious freedom of the press but also its right to protect its work. Had it not done so, there would have been no economic base on which to build the kinds of news organizations that can, and still do, keep a check on the government. Heck, there would have been no economic basis to build anything creative whatsoever. Securing permission from, and fairly compensating, those publishers who created this great foundation of knowledge is the right, just and American thing to do. The government should reject these self-serving proposals and protect the work of artists, authors, photographers, journalists and all other creators and copyright holders who have been the victims of these companies.
[20]
OpenAI Wants Unrestricted Access To Copyrighted Material To Train Its Artificial Intelligence Models; Has Sought The Help Of The U.S. Government To Make This Possible
Artificial general intelligence, or AGI, appears to be OpenAI's 'magnum opus', and the company behind the chatbot ChatGPT is forming what it likely believes is a well-constructed plan to achieve that goal, but the journey will be riddled with controversies. For instance, the latest recommendations that the firm has sent to the U.S. government boil down to a wish to leverage copyrighted material to train its models, not just to develop AGI but also to compete with China. The proposals have been sent to the Trump administration and, in a nutshell, OpenAI wants free rein to sweep through copyrighted material without any checks and balances. This is most certainly a massive ask, but the AI firm may have an ace up its sleeve that could convince the U.S. government to yield to the company's terms: increasing America's lead over China in the AI race. Even though OpenAI believes that America is ahead of its rival in this area, this advantage could evaporate soon because DeepSeek has the benefit of 'copyright arbitrage being created by democratic nations that do not clearly protect AI training by statute, like the US, or that reduce the amount of training data through an opt-out regime for copyright holders, like the EU.' "While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and is narrowing. The AI Action Plan should ensure that American-led AI prevails over CCP-led AI, securing both American leadership on AI and a brighter future for all Americans." OpenAI claims that if AI models are not provided with fair use access to copyrighted data, the 'race for AI is effectively over' and 'America loses.' The company also mentions the possibility that 'less innovative countries' cannot be stopped from 'imposing their legal regimes on American AI firms' if OpenAI continues to be stifled by state laws. There has been no word on whether the Trump administration has taken a gander at OpenAI's recommendations, but they are sure to cause a firestorm in the coming days. After all, OpenAI has been the subject of controversy, as writers, artists, and other creative professionals have expressed their sheer anger that artificial intelligence is using their talent and efforts with impunity. The New York Times has also taken Microsoft and OpenAI to court for training their models on the publication's news articles. Whatever decision the U.S. government takes, there is no entirely pleasant outcome, because letting AI run rampant and allowing China to supersede the US technologically are two threats that the country wishes to avoid. Should there be any development in this regard, we will update our readers, so stay tuned.
[21]
Creators vs. AI: The Battle Over Intellectual Property Rights
Artificial intelligence (AI) is transforming industries at an extraordinary pace, but its rapid evolution has sparked intricate debates surrounding copyright and intellectual property. At the heart of these discussions lies the training of AI models, which rely on vast datasets often containing copyrighted material. Companies like OpenAI are advocating for policies that balance the need to foster innovation with the imperative to respect intellectual property rights. The stakes are significant, with potential consequences for creators, businesses, and global leadership in AI development. This isn't just a philosophical dilemma -- it's a pressing issue with real-world stakes. Companies like OpenAI argue that training AI on copyrighted material under fair use is essential for driving innovation and maintaining global competitiveness, particularly against nations like China. On the other hand, creators and copyright advocates worry about the erosion of intellectual property rights and the potential devaluation of original work. The tension between these perspectives highlights a complex, high-stakes challenge that demands thoughtful solutions. So, where do we go from here? AI Grid looks deeper into the nuances of this debate and explores what's at stake for creators, businesses, and the future of AI. AI models derive their capabilities from analyzing extensive datasets to identify patterns, generate insights, and produce outputs. These datasets frequently include publicly available content, some of which is protected by copyright. While this approach has driven remarkable advancements in generative AI, it has also raised ethical and legal concerns. Creators argue that their work is being used without consent or compensation, potentially undermining the value of original content. For instance, artists and writers have voiced frustration over AI-generated outputs that closely mimic their creative styles, even when the results are deemed technically transformative. This tension underscores the urgent need for clearer regulations to govern the use of copyrighted material in AI training. Without such guidelines, the line between innovation and infringement remains blurred, leaving creators vulnerable and the AI industry exposed to legal uncertainties. OpenAI has emerged as a strong advocate for applying fair use principles to AI training. The organization contends that generative AI models do not replicate original works but instead create transformative outputs by identifying patterns within the training data. According to OpenAI, imposing overly restrictive copyright policies could hinder innovation and jeopardize the United States' position as a global leader in AI development. The organization also highlights the competitive implications of restrictive policies. Countries like China, which face fewer limitations on data usage, could gain a significant advantage in the global AI race. OpenAI warns that limiting access to training data could place the U.S. at a disadvantage, urging policymakers to adopt a balanced approach that supports both technological progress and intellectual property protection. This perspective reflects the broader challenge of fostering innovation while respecting the rights of creators. Fair use lies at the core of the copyright debate in AI. Proponents argue that AI-generated outputs are inherently transformative and serve purposes such as research, education, and innovation.
They assert that these uses align with the principles of fair use, which permit limited use of copyrighted material under specific conditions without requiring explicit permission. Critics, however, challenge this interpretation. They argue that the use of copyrighted material in AI training diminishes the value of original works and threatens the livelihoods of creators. Many believe that AI companies should either obtain explicit permission or provide compensation for incorporating copyrighted content into their datasets. This debate reflects a broader tension between advancing technology and safeguarding intellectual property rights, with no clear resolution in sight. The copyright debate extends beyond ethical and legal considerations, influencing global competition in AI development. OpenAI has cautioned that restrictive copyright policies in the U.S. could enable other nations, particularly China, to outpace American innovation. China's relatively unrestricted access to training data positions it as a formidable competitor in the AI sector. To address this challenge, OpenAI advocates for policies that strike a balance between fostering innovation and protecting intellectual property. Policymakers face the difficult task of weighing the risks of stifling technological progress against the need to uphold creators' rights. The outcome of this debate will shape not only the trajectory of AI development but also the broader dynamics of global competition in this rapidly evolving field. The reliance on massive datasets for AI training may not remain a permanent feature of the industry. Emerging technologies, such as "test-time compute," and advancements in algorithms are paving the way for more efficient AI systems. These innovations could reduce the dependence on extensive data collection, shifting the focus from data-heavy approaches to computationally sophisticated methods. This shift holds the potential to address some of the ethical and legal concerns associated with data usage. By prioritizing algorithmic efficiency, the AI industry could continue to innovate while minimizing its reliance on copyrighted material. Such advancements may also encourage the development of new frameworks for ethical AI practices, making sure that progress is achieved without compromising intellectual property rights. The legal landscape for AI training remains uncertain, with ongoing lawsuits challenging the use of copyrighted material in training datasets. The outcomes of these cases could establish critical precedents for the future of AI development. In response to these challenges, some companies, such as Adobe, have adopted ethical data sourcing practices, making sure that their AI models are trained exclusively on licensed or publicly available content. However, many organizations continue to face scrutiny from both the public and legal systems. This highlights the pressing need for clearer guidelines and ethical standards to navigate the complex intersection of AI and intellectual property. Without such measures, the industry risks prolonged uncertainty and potential setbacks in innovation. The intersection of AI and copyright presents a multifaceted challenge that demands thoughtful and collaborative solutions. As AI technologies evolve, the balance between fostering innovation and protecting intellectual property will remain a pivotal issue. 
Collaboration among policymakers, creators, and AI developers is essential to establish frameworks that promote ethical practices while allowing technological progress. The resolution of this debate will not only shape the future of AI but also influence creative industries and global competition. By addressing these challenges proactively, stakeholders can ensure that AI continues to drive innovation while respecting the rights and contributions of creators.
[22]
OpenAI and Google Share Contrasting Proposals for Regulation in US 'AI Action Plan'
OpenAI and Google have shared their recommendations for the White House's upcoming "AI Action Plan." Although the two companies are aligned on their overall objectives, they offer differing views on the best way to achieve those goals. National Security and Export Control Both OpenAI and Google highlight the need for an AI policy that protects U.S. national security. However, they take subtly different views on the key issue of export controls. In its proposal, OpenAI leans heavily into anti-China sentiment that could find support with the Trump administration. OpenAI's position calls for a hardline stance prohibiting China and its closest allies from accessing "democratic AI systems." In contrast, Google barely mentions China directly and argues that blanket export controls introduced by the Biden administration risk undermining American competitiveness. Instead, the company seeks a more balanced, targeted approach that would "support legitimate market access for U.S. businesses." Preemption and Regulatory Fragmentation A major concern for Google and OpenAI is the emerging patchwork of state-level AI regulations. Both companies refer to the legal concept of "preemption," which occurs when a higher level of government removes or limits the authority of a lower level of government. OpenAI proposes a "tightly-scoped framework" to give AI companies preemption from state-level laws. However, it falls short of demanding specific legislation. Instead, OpenAI argues for voluntary collaboration and a sandbox environment where companies can experiment without incurring state-level liability. Google, on the other hand, supports federal legislation that would override the state AI laws. Infrastructure and Energy Policies Infrastructure and energy policy is another area where OpenAI and Google share the same goal but diverge on how to get there. OpenAI's infrastructure strategy is sweeping. It includes specific proposals for a "National Transmission Highway Act" and "AI Economic Zones" to support the American AI sector. Meanwhile, Google agrees that energy is a major limiting factor for AI data centers but frames solutions more generally and does not call for as much direct federal intervention. Workforce Development The proposals from Google and OpenAI emphasize the need to develop the American workforce for AI leadership. Both companies stress the need for AI training, but only Google touches on a topic that is a big focus for the Trump administration: immigration. "Where practicable, U.S. agencies should use existing immigration authorities to facilitate recruiting and retention of experts in occupations requiring AI-related skills," Google's submission to the White House states. OpenAI Out-MAGAs Google On some of the most important discussion points, including intellectual property law and government AI adoption, Google and OpenAI largely align in their views. Even in areas where they disagree on the specifics, the two companies share similar overarching goals. However, when it comes to reading the nation's political mood, OpenAI has taken the lead. By embracing a hardline stance against China and avoiding any reference to immigration, OpenAI's recommendations tap into key themes that could make them more appealing to the Trump administration. To that end, the language of the two documents tells two different stories.
While OpenAI mentions "America" or "U.S." 99 times, Google uses those words just 56 times. In contrast, Google makes repeated calls for international engagement and collaboration, which are entirely absent from OpenAI's proposal.
[23]
OpenAI, Google Push for AI Training on Copyrighted Content
OpenAI and Google are urging the US government to allow them to train their artificial intelligence (AI) models on copyrighted materials. This comes as part of the two companies' responses to the US government's consultation to formulate the country's AI Action Plan. This plan is supposed to define "priority policy actions" to defend the US's position in the AI space and "prevent unnecessarily burdensome requirements from hindering private sector innovation." Google argues for balanced copyright laws with text and data mining exceptions. "These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rights holders and avoiding often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation," the company says. OpenAI, on the other hand, highlights the significance of the fair use doctrine in American copyright law. This doctrine allows companies and creators to use copyrighted material in transformative ways. The company explains that the fair use doctrine promotes AI development, stating that its models do not replicate the works they consume but rather learn from them and create something wholly new. OpenAI pushes back against the UK's proposed changes to its copyright regime that allow rights holders to opt out of model training. Such approaches, OpenAI says, act as regulatory barriers to AI development. Why it matters: Given that India currently does not have a clear stance on AI regulation and protections against AI models training on copyrighted materials, international developments can help direct the country's regulatory direction. These developments are also relevant as the Delhi High Court decides the legal battle between OpenAI and the news agency ANI. It is also important to note that while OpenAI and Google believe that their suggested approaches balance copyright protections, artists seem to disagree. According to a report by Variety, a group of 400 actors, filmmakers, and musical artists, including Ben Stiller, Mark Ruffalo, Cynthia Erivo, and Paul McCartney, have written to the US government, specifically responding to OpenAI and Google's submissions. The artists say that the two companies "are arguing for a special government exemption so they can freely exploit America's creative and knowledge industries, despite their substantial revenues and available funds." They urged the government to ensure that AI leadership does not come at the expense of creative industries. Other key points from the companies' submissions: Pushback on DeepSeek: OpenAI focuses its attention on China and DeepSeek. The company highlights how DeepSeek has strategic advantages over democratic AI because, as an authoritarian state, China is able to provide the company with the resources it needs to develop models (be it energy, data or technical talent). OpenAI says that, just like with Huawei, China will facilitate the adoption of DeepSeek by coercing countries that need AI tools. DeepSeek is also able to skirt state-level AI regulations in the US because of the difficulties state governments face in implementing regulations on international AI models. US-based companies, on the other hand, will suffer from state-level regulations since they weaken the quality and amount of training data available to American entrepreneurs.
OpenAI claims that Chinese models are also unlikely to respect the intellectual property regimes of other countries. Export controls: Google urges the government to balance export controls that protect national security while enabling US exports and global business operations. In an effort to protect US AI dominance from being bypassed by Chinese AI models, OpenAI suggests the US government should divide countries into three categories within the global AI market. Federal regulation instead of state-level laws: Both OpenAI and Google argue against "patchwork regulations" at the state level, explaining that these laws slow down the rate of AI development. Google suggests that the federal regulatory framework should focus on frontier models and protect national security while "fostering an environment where American AI innovation can thrive." The company advocates for regulations only for those AI models that pose specific risks, adding that the government should intervene with regulations only if that is "demonstrably necessary." OpenAI proposes a framework for "voluntary partnership between the federal government and the private sector to protect and strengthen American national security." The company claims that the government should work with AI companies to stay informed about AI risks, evaluate American AI technology against competitors, and coordinate the development of technical standards for evaluating and safeguarding frontier models. Delineate roles and responsibilities in AI regulations: In the case of high-risk AI systems, the government should "clearly delineate the roles and responsibilities of AI developers, deployers, and end users," Google says. It argues that the person with the most scope of control over a specific step of the AI lifecycle should bear liability for that step. Discussing developer liability, Google adds that in many cases the developer has no visibility into how a model is deployed or used. As such, developers should not have to take responsibility for downstream risks or for misuse by the end user. Advocating for AI-focused copyright and privacy laws: OpenAI suggests that to ensure the US copyright system supports AI leadership, the government should focus on shaping international policy discussions around AI and copyright. The government should also actively assess the overall level of data available to American AI firms and determine whether other countries are restricting American companies' access to data and other critical inputs. Besides advocating for international AI standards based on American values, Google focuses its attention close to home. It adds that any national-level privacy regulation the government puts in place should treat anonymous data and publicly available data differently from personally identifiable information. This would prevent privacy laws from acting as an impediment to AI development. "Federal regulations can also encourage the use of AI-powered privacy-enhancing technologies to help protect Americans' data from malicious actors," it adds. Funding and support for AI projects: The government should foster AI innovation by speeding up funding allocations to early market research and development (R&D) and ensuring that scientists and research institutions have access to essential compute, high-quality datasets, and advanced models, Google suggests.
It also urges the government to incentivize partnerships with national labs to advance research in science, cybersecurity, and chemical, biological, radiological, and nuclear (CBRN) risks. OpenAI suggests that the government should create AI economic zones to speed up permits for building AI infrastructure like new solar arrays, wind farms, and nuclear reactors. Access to government data for training: Google and OpenAI seek access to government data for training purposes. "The U.S. government should make it easier for national security agencies and their partners to use commercial, unclassified storage and compute capabilities, and should take steps to release government datasets, which can be helpful for commercial training," Google argues. Similarly, OpenAI points out that a lot of government data today is in a non-digital format. Urging the government to digitise this data, it says that this could help AI developers in fields where the majority of training data is government-held. Access to government-held data will not only boost AI development but will also be helpful if "shifting copyright rules restrict American companies' access to training data," OpenAI explains. MediaNama has sent out questions to Google and OpenAI about the pushback from artists against their comments and how their proposed copyright approaches protect the interests of artists. We will update the story with their comments once we hear back from them.
[24]
Google, OpenAI want 'license to steal' from publishers with AI...
Big Tech giants like Google and ChatGPT maker OpenAI are seeking a "license to steal" as they push the White House to allow them to train AI models on copyrighted material without proper compensation, one of the nation's largest publishers warned. More than 60 newspapers owned by Alden Global Capital - whose properties include the New York Daily News, the Chicago Tribune and the Denver Post - published an editorial on Monday demanding that the Trump administration reject "self-serving proposals" that could destroy the news industry. "Gutting generations of copyright protections for the benefit of AI bots would have a chilling effect not just on news organizations but also on all creative content creators, from novelists to playwrights to poets," the editorial said. "That iron-clad commitment to protecting the rights of owners of work they themselves created is precisely what distinguishes the United States from communist China, not the reverse." The plea came days after Google and Sam Altman-led OpenAI argued in letters sent to the Trump administration that copyright laws - which are essential for newspapers and other content creators to stop others from ripping off their work - must be rolled back to protect national security and allow the US to dominate the global AI race. Big Tech's request was also met with derision by a coalition of high-profile Hollywood actors - including known Trump critics like Mark Ruffalo and Olivia Wilde - who asked the White House to ensure copyright protections remain in place. "We firmly believe that America's global AI leadership must not come at the expense of our essential creative industries," said the letter signed by more than 400 Hollywood creatives. "AI companies are asking to undermine this economic and cultural strength by weakening copyright protections for the films, television series, artworks, writing, music, and voices used to train AI models at the core of multi-billion dollar corporate valuations," the letter added. The Post reached out to the White House for comment. OpenAI and Google did not immediately return The Post's request for comment. Big Tech's proposals were submitted in response to the Trump White House's request for AI-related "action plans" that could be used to shape federal regulation. OpenAI tied its argument about loosening copyright law directly to national security - asserting that the US risked losing the AI race to China if it doesn't roll back protections. "The federal government can both secure Americans' freedom to learn from AI, and avoid forfeiting our AI lead to the PRC by preserving American AI models' ability to learn from copyrighted material," the Microsoft-backed company said. Meanwhile, Google pushed for what it called "balanced copyright rules" that would allow AI companies to train their models on protected work. "These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation," Google said in its letter. Industry advocates, such as the News Media Alliance - a nonprofit that represents more than 2,200 publishers, including The Post - have long argued that AI chatbots trained on copyrighted articles without proper credit or payment could cause "catastrophic" damage to cash-strapped publishers. 
In its own submission to the White House, the News Media Alliance noted that copyright-protected industries "contributed $2.09 trillion to the US GDP, amounting to almost 8% of the American economy." "AI companies rely on the long-criticized Chinese business practice of rampant copyright infringement to argue that we in America ought to abandon our historical commitment to protecting and promoting the development of intellectual property," the group said. "This argument wrongly suggests that American AI cannot compete without violating our laws. Nothing could be farther from the truth." Several Alden-owned newspapers are currently suing OpenAI and its chief backer Microsoft for copyright infringement. The New York Times has filed a similar lawsuit against the ChatGPT maker. News Corp, the media giant that owns The Post and the Wall Street Journal, believes "courtship is preferable to courtrooms," according to its CEO Robert Thomson. Last year, the company struck a content licensing deal with OpenAI reportedly worth more than $250 million that included guardrails to protect its work. "We would prefer to woo rather than sue, given that lawyers are the big winners in litigation," Thomson said last July. "But, be warned, if we don't woo you, we may very well sue you."
[25]
Don't buy Big Tech's 'we need to steal to beat China in AI' bull
Big Tech wants to use other people's work to make big money -- and not pay. Worse, it's hoping to get Washington to kosherize this theft as supposedly in the national interest. The specific issue at hand is the use of copyrighted material to train AI: The law is clear that techies looking to do that must pay for it. A federal court ruling in Thomson Reuters v. ROSS last month reaffirmed that understanding: Ross Intelligence, Inc. used Thomson's Westlaw product to train its own legal research programs, claiming it was covered under the "fair use" doctrine for free use of others' intellectual property. The court didn't buy that: "Fair use" covers, for example, us quoting a New York Times editorial for the purpose of debunking it, or a critic quoting a song from a musical to explain why the show is great or dreadful. It does not cover using someone's intellectual property in order to compete with them. In the wake of that decision, Google and OpenAI each wrote the White House urging a rollback of copyright law as part of an "action plan" for boosting US development of AI; otherwise, goes the claim, China might win the AI race. No doubt they'd like exemptions from every other inconvenient law -- along with free electricity, and unpaid use of other people's real estate: It's all about beating Beijing! Though the president tapped Silicon Valley insider David Sacks as his AI czar, we expect Team Trump as a whole to see through this bull: Investment cash is flowing to AI developers by the billions; they can afford to pay a fair price for other people's work product. Indeed, Post parent company News Corp last year inked a deal, reportedly for $250 million (and with firm intellectual-property protections), to license some content to OpenAI. Meanwhile, The New York Times is suing ChatGPT's maker for copyright infringement, as Alden Global Capital (which owns the Daily News, Denver Post, Chicago Tribune and other papers) pursues its own case against OpenAI and Microsoft. Vast amounts of wealth are going for AI research; Big Tech can easily afford to pay for what it needs from other fields -- it neither needs nor deserves some privileged right to steal.
OpenAI and Google advocate for looser copyright restrictions on AI training data in their proposals for the US government's AI Action Plan, citing the need to compete with China and promote innovation.
In a bold move, OpenAI and Google have submitted proposals to the Trump administration's AI Action Plan, calling for significant changes to copyright laws that would benefit AI development. Both tech giants argue that access to copyrighted material is crucial for training AI models and maintaining America's competitive edge in the global AI race 12.
OpenAI, embroiled in several copyright lawsuits, including a major one with The New York Times, is pushing for AI training to be declared fair use. The company claims that its models transform copyrighted works rather than replicate them, aligning with the core objectives of copyright law 1. OpenAI warns that without such provisions, China could gain a significant advantage in AI development, potentially compromising US national security 3.
Google, facing similar legal challenges, echoes OpenAI's sentiments. The tech giant advocates for "balanced copyright rules" that would allow AI companies to use publicly available data, including copyrighted material, without lengthy negotiations with rightsholders 24. Google argues that such use would not significantly impact the original creators' rights.
Both companies are proposing closer collaboration between the private sector and the federal government. OpenAI suggests a "voluntary partnership" where AI companies would provide access to their models in exchange for regulatory relief and liability protections from state laws 15. Google recommends increased public-private partnerships and greater cooperation with federally funded research institutions 2.
The proposals extend beyond copyright issues. Google emphasizes the need for modernizing the nation's energy infrastructure to support the growing power demands of AI development 2. Both companies call for increased government funding in AI research and development, with Google specifically urging the release of government-held data sets for AI training 4.
OpenAI and Google express concerns about the patchwork of state AI laws, with over 800 bills introduced in 2025 alone 14. They argue that these varied regulations could hinder innovation and economic competitiveness. Both companies are advocating for federal legislation to create a more unified regulatory environment 4.
The proposals also touch on international policy. OpenAI urges the US to shape global discussions around copyright and AI, preventing less innovative countries from imposing restrictive legal regimes on American AI firms 1. Google calls for "balanced" export controls that protect national security while enabling US exports and global business operations 4.
These proposals are likely to face significant opposition from content creators and rights holders who argue that unrestricted use of copyrighted material for AI training threatens their livelihoods and creative output 13. The ongoing lawsuits and court battles highlight the contentious nature of this debate, with one landmark ruling already favoring rights holders in a case involving Thomson-Reuters' Westlaw 1.
As the AI industry continues to evolve rapidly, the tension between innovation and copyright protection remains a critical issue. The Trump administration's forthcoming AI Action Plan will play a crucial role in shaping the future of AI development and regulation in the United States 5.