10 Sources
[1]
Microsoft sued by authors over use of books in AI training
June 25 (Reuters) - Microsoft (MSFT.O) has been hit with a lawsuit by a group of authors who claim the company used their books without permission to train its Megatron artificial intelligence model. Kai Bird, Jia Tolentino, Daniel Okrent and several others alleged that Microsoft used pirated digital versions of their books to teach its AI to respond to human prompts. Their lawsuit, filed in New York federal court on Tuesday, is one of several high-stakes cases brought by authors, news outlets and other copyright holders against tech companies including Meta Platforms, Anthropic and Microsoft-backed OpenAI over alleged misuse of their material in AI training. The complaint against Microsoft came a day after a California federal judge ruled that Anthropic made fair use under U.S. copyright law of authors' material to train its AI systems but may still be liable for pirating their books. It was the first U.S. decision on the legality of using copyrighted materials without permission for generative AI training. Spokespeople for Microsoft did not immediately respond to a request for comment on the lawsuit. An attorney for the authors declined to comment. The writers alleged in the complaint that Microsoft used a collection of nearly 200,000 pirated books to train Megatron, an algorithm that gives text responses to user prompts. The complaint said Microsoft used the pirated dataset to create a "computer model that is not only built on the work of thousands of creators and authors, but also built to generate a wide range of expression that mimics the syntax, voice, and themes of the copyrighted works on which it was trained." Tech companies have argued that they make fair use of copyrighted material to create new, transformative content, and that being forced to pay copyright holders for their work could hamstring the burgeoning AI industry. The authors requested a court order blocking Microsoft's infringement and statutory damages of up to $150,000 for each work that Microsoft allegedly misused. (Reporting by Blake Brittain in Washington; Editing by Alexia Garamfalvi and David Gregorio)
[2]
Group of high-profile authors sue Microsoft over use of their books in AI training
Writers alleged that the company used nearly 200,000 pirated books to train its Megatron artificial intelligence. A group of authors has accused Microsoft of using nearly 200,000 pirated books to create an artificial intelligence model, the latest allegation in the long legal fight over copyrighted works between creative professionals and technology companies. Kai Bird, Jia Tolentino, Daniel Okrent and several others alleged that Microsoft used pirated digital versions of their books to teach its Megatron AI to respond to human prompts. Their lawsuit, filed in New York federal court on Tuesday, is one of several high-stakes cases brought by authors, news outlets and other copyright holders against tech companies including Meta Platforms, Anthropic and Microsoft-backed OpenAI over alleged misuse of their material in AI training. The authors requested a court order blocking Microsoft's infringement and statutory damages of up to $150,000 for each work that Microsoft allegedly misused. Generative artificial intelligence products like Megatron produce text, music, images and videos in response to users' prompts. To create these models, software engineers amass enormous databases of media to program the AI to produce similar output. The writers alleged in the complaint that Microsoft used a collection of nearly 200,000 pirated books to train Megatron, an AI product that gives text responses to user prompts. The complaint said Microsoft used the pirated dataset to create a "computer model that is not only built on the work of thousands of creators and authors, but also built to generate a wide range of expression that mimics the syntax, voice, and themes of the copyrighted works on which it was trained". Spokespeople for Microsoft did not immediately respond to a request for comment on the lawsuit. An attorney for the authors declined to comment. The complaint against Microsoft came a day after a California federal judge ruled that Anthropic made fair use under US copyright law of authors' material to train its AI systems but may still be liable for pirating their books. It was the first US decision on the legality of using copyrighted materials without permission for generative AI training. The day the complaint against Microsoft was filed, a California judge ruled in favor of Meta in a similar dispute over the use of copyrighted books to train its AI models, though he attributed his ruling more to the plaintiffs' poor arguments than the strength of the tech giant's defense. The legal fight over copyright and AI began soon after the debut of ChatGPT and encompasses several different types of media. The New York Times has sued OpenAI for copyright infringement on its archive of articles; Dow Jones, parent company of the Wall Street Journal and the New York Post, has filed a similar suit against Perplexity AI. Major record labels have sued companies making AI-powered music generators. Photography company Getty Images has filed suit against Stability AI over the startup's text-to-image product. Just last week, Disney and NBC Universal sued Midjourney, which offers a popular AI image generator, for alleged misuse of some of the world's most famous movie and TV characters. Tech companies have argued that they make fair use of copyrighted material to create new, transformative content, and that being forced to pay copyright holders for their work could hamstring the burgeoning AI industry.
Sam Altman, CEO of OpenAI, said that the creation of ChatGPT would have been "impossible" without the use of copyrighted works.
[3]
Authors take Microsoft to court in yet another AI v copyright battle
The latest complaint comes as Meta and Anthropic both receive legal relief in similar copyright lawsuits. A group of authors have filed a lawsuit against Microsoft, accusing the tech giant of using copyrighted works to train its large language model (LLM). The class action complaint, filed by 10 authors and professors including Pulitzer Prize winner Kai Bird and Whiting Award winner Victor LaValle, claims that Microsoft ignored the law by downloading around 200,000 copyrighted works and feeding them to the company's Megatron-Turing Natural Language Generation model. The end result, the plaintiffs claim, is an AI model able to generate expressions that mimic the authors' manner of writing and the themes in their work. "Microsoft's commercial gain has come at the expense of creators and rightsholders," the lawsuit states. The complaint seeks to represent not just the plaintiffs but similar copyright holders under the US Copyright Act. The aggrieved party seeks damages of up to $150,000 per infringed work, as well as an injunction prohibiting Microsoft from using any of their works. This latest lawsuit is yet another that seeks to challenge how AI models are trained. Visual artists, news publishers and authors are just some of the classes of creators who claim that AI models infringe upon their rights. However, yesterday (25 June), a US court ruled that Meta's training of AI models on copyrighted books fell under the "fair use" doctrine of copyright law. The lawsuit was brought by authors Richard Kadrey, Christopher Golden and Sarah Silverman back in 2023. Earlier this year, the trio's counsel claimed that Meta allowed Llama, its LLM, to commit copyright infringement on pirated data and upload it for commercial gain. In the decision yesterday, the judge said that the plaintiffs "made the wrong arguments," ultimately failing to prove their case. However, he also added that the ruling does not mean that Meta's use of copyrighted materials to train its LLM is lawful. The judge ruled that in this case, Meta's use of copyrighted works was "transformative". In another blow to authors, a different US court earlier this week ruled that Anthropic's use of books to train Claude AI also qualifies as "fair use". This case was brought in 2024 by another trio of authors, Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, who claimed that Anthropic used pirated versions of various copyrighted material to train Claude, its flagship AI model. However, "Claude created no exact copy, nor any substantial knock-off. Nothing traceable to [the plaintiffs'] works," the judge wrote in his summary judgement. Big Tech companies do, at times, acknowledge the role copyright holders play in creating the primary data from which their AI models extrapolate. Last year, Bloomberg reported that Microsoft and publishing giant HarperCollins signed a content licensing deal where the tech giant could use some of HarperCollins' books for AI training. AI search engine Perplexity, which has repeatedly come under fire for allegedly scraping content from news publishers, has also launched a revenue-sharing platform with publishers after receiving backlash. Meanwhile, OpenAI has content-sharing deals for ChatGPT with more than 160 outlets in several languages.
Earlier this year, Thomson Reuters CPO David Wong told SiliconRepublic.com that not only is it possible to create AI systems that respect copyright, but that respecting copyright will further those systems and improve accessibility to information. Recent rulings seem to place Big Tech as the emerging winner in the AI fair-use battle. Still, companies such as OpenAI and Microsoft continue to battle similar lawsuits.
[4]
Microsoft sued by authors over alleged use of 200,000 pirated books to train AI
Microsoft (MSFT.O) has been hit with a lawsuit by a group of authors who claim the company used their books without permission to train its Megatron artificial intelligence model. Kai Bird, Jia Tolentino, Daniel Okrent and several others alleged that Microsoft used pirated digital versions of their books to teach its AI to respond to human prompts. Their lawsuit, filed in New York federal court on Tuesday, is one of several high-stakes cases brought by authors, news outlets and other copyright holders against tech companies including Meta Platforms, Anthropic and Microsoft-backed OpenAI over alleged misuse of their material in AI training. The complaint against Microsoft came a day after a California federal judge ruled that Anthropic made fair use under U.S. copyright law of authors' material to train its AI systems but may still be liable for pirating their books. It was the first U.S. decision on the legality of using copyrighted materials without permission for generative AI training. Spokespeople for Microsoft did not immediately respond to a request for comment on the lawsuit. An attorney for the authors declined to comment. The writers alleged in the complaint that Microsoft used a collection of nearly 200,000 pirated books to train Megatron, an algorithm that gives text responses to user prompts. The complaint said Microsoft used the pirated dataset to create a "computer model that is not only built on the work of thousands of creators and authors, but also built to generate a wide range of expression that mimics the syntax, voice, and themes of the copyrighted works on which it was trained." Tech companies have argued that they make fair use of copyrighted material to create new, transformative content, and that being forced to pay copyright holders for their work could hamstring the burgeoning AI industry. The authors requested a court order blocking Microsoft's infringement and statutory damages of up to $150,000 for each work that Microsoft allegedly misused.
[5]
Microsoft Sued by Authors Over Use of Books in AI Training
Tech firms have argued that they make fair use of copyrighted materials. Microsoft has been hit with a lawsuit by a group of authors who claim the company used their books without permission to train its Megatron Artificial Intelligence (AI) model. Kai Bird, Jia Tolentino, Daniel Okrent and several others alleged that Microsoft used pirated digital versions of their books to teach its AI to respond to human prompts. Their lawsuit, filed in New York federal court on Tuesday, is one of several high-stakes cases brought by authors, news outlets and other copyright holders against tech companies including Meta Platforms, Anthropic and Microsoft-backed OpenAI over alleged misuse of their material in AI training. The complaint against Microsoft came a day after a California federal judge ruled that Anthropic made fair use under US copyright law of authors' material to train its AI systems but may still be liable for pirating their books. It was the first US decision on the legality of using copyrighted materials without permission for generative AI training. Spokespeople for Microsoft did not immediately respond to a request for comment on the lawsuit. An attorney for the authors declined to comment. The writers alleged in the complaint that Microsoft used a collection of nearly 200,000 pirated books to train Megatron, an algorithm that gives text responses to user prompts. The complaint said Microsoft used the pirated dataset to create a "computer model that is not only built on the work of thousands of creators and authors, but also built to generate a wide range of expression that mimics the syntax, voice, and themes of the copyrighted works on which it was trained." Tech companies have argued that they make fair use of copyrighted material to create new, transformative content, and that being forced to pay copyright holders for their work could hamstring the burgeoning AI industry. The authors requested a court order blocking Microsoft's infringement and statutory damages of up to $150,000 (roughly Rs. 1.28 crore) for each work that Microsoft allegedly misused. © Thomson Reuters 2025
[6]
Microsoft sued by authors over use of books in AI training - The Economic Times
A group of authors sued Microsoft, alleging it used nearly 200,000 pirated books to train its Megatron AI without permission. Filed in New York, the lawsuit seeks damages and a ban on further use. It's part of broader legal challenges facing AI firms over unauthorised training data use. Microsoft has been hit with a lawsuit by a group of authors who claim the company used their books without permission to train its Megatron artificial intelligence model. Kai Bird, Jia Tolentino, Daniel Okrent and several others alleged that Microsoft used pirated digital versions of their books to teach its AI to respond to human prompts. Their lawsuit, filed in New York federal court on Tuesday, is one of several high-stakes cases brought by authors, news outlets and other copyright holders against tech companies including Meta Platforms, Anthropic and Microsoft-backed OpenAI over alleged misuse of their material in AI training. The complaint against Microsoft came a day after a California federal judge ruled that Anthropic made fair use under U.S. copyright law of authors' material to train its AI systems but may still be liable for pirating their books. It was the first U.S. decision on the legality of using copyrighted materials without permission for generative AI training. Spokespeople for Microsoft did not immediately respond to a request for comment on the lawsuit. An attorney for the authors declined to comment. The writers alleged in the complaint that Microsoft used a collection of nearly 200,000 pirated books to train Megatron, an algorithm that gives text responses to user prompts. The complaint said Microsoft used the pirated dataset to create a "computer model that is not only built on the work of thousands of creators and authors, but also built to generate a wide range of expression that mimics the syntax, voice, and themes of the copyrighted works on which it was trained." Tech companies have argued that they make fair use of copyrighted material to create new, transformative content, and that being forced to pay copyright holders for their work could hamstring the burgeoning AI industry. The authors requested a court order blocking Microsoft's infringement and statutory damages of up to $150,000 for each work that Microsoft allegedly misused.
[7]
AI and copyrights: The fight for fair use - The Economic Times
Big tech firms like Meta, Microsoft, OpenAI, and Anthropic face lawsuits over using copyrighted books to train AI without permission. Courts are examining "fair use" in AI training, with mixed rulings. Authors demand payment, while companies claim fair use, sparking ongoing legal battles over AI and copyright. Big tech companies Meta, Microsoft, OpenAI and Anthropic have been facing a growing number of lawsuits. Authors and creators say these companies are using their books and other creative works to train powerful AI without permission or payment. These cases highlight how "fair use" works in the age of artificial intelligence (AI). Recently, Meta won a 2023 lawsuit brought by a group of authors who claimed the tech major used their copyrighted books to train its AI without their permission. The judge, Vince Chhabria, sided with Meta, saying the authors didn't make the right arguments and didn't have enough proof. However, the judge also said that using copyrighted works to train AI could still be against the law in "many situations." This decision is similar to another case involving Anthropic, another AI firm. In that case, Judge William Alsup said Anthropic's use of books for training was "exceedingly transformative", meaning it changed the original work so much it fell under fair use. According to Fortune magazine's website, copyrighted material can be used without permission under the fair use doctrine if the use transforms the work, by serving a new purpose or adding new meaning, instead of merely copying the original. However, the judge also found Anthropic broke the law by keeping pirated copies of the books in a digital library and has ordered a separate trial on that matter, to determine its liability, if any. This was the first time a US court ruled on whether using copyrighted material without permission for AI training is legal. The legal battles continue in a new lawsuit in New York, in which authors, including Kai Bird, Jia Tolentino, and Daniel Okrent, are accusing Microsoft of using nearly 200,000 pirated digital books to train its Megatron AI. In April, OpenAI faced several copyright cases brought by prominent authors and news outlets. "We welcome this development and look forward to making it clear in court that our models are trained on publicly available data, grounded in fair use, and supportive of innovation," an OpenAI spokesperson said at that time, as reported by Reuters. These lawsuits show a big disagreement between tech companies and people who own copyrights. Companies often say their use is "fair use" to avoid paying for licences. But authors and other creators want to be paid when their work helps power these new AI systems.
[8]
Microsoft sued by authors over use of books in AI training
Microsoft has been hit with a lawsuit by a group of authors who claim the company used their books without permission to train its Megatron artificial intelligence model. Kai Bird, Jia Tolentino, Daniel Okrent and several others alleged that Microsoft used pirated digital versions of their books to teach its AI to respond to human prompts. Their lawsuit, filed in New York federal court on Tuesday, is one of several high-stakes cases brought by authors, news outlets and other copyright holders against tech companies including Meta Platforms, Anthropic and Microsoft-backed OpenAI over alleged misuse of their material in AI training. The complaint against Microsoft came a day after a California federal judge ruled that Anthropic made fair use under U.S. copyright law of authors' material to train its AI systems but may still be liable for pirating their books. It was the first U.S. decision on the legality of using copyrighted materials without permission for generative AI training. Spokespeople for Microsoft did not immediately respond to a request for comment on the lawsuit. An attorney for the authors declined to comment. The writers alleged in the complaint that Microsoft used a collection of nearly 200,000 pirated books to train Megatron, an algorithm that gives text responses to user prompts. The complaint said Microsoft used the pirated dataset to create a "computer model that is not only built on the work of thousands of creators and authors, but also built to generate a wide range of expression that mimics the syntax, voice, and themes of the copyrighted works on which it was trained." Tech companies have argued that they make fair use of copyrighted material to create new, transformative content, and that being forced to pay copyright holders for their work could hamstring the burgeoning AI industry. The authors requested a court order blocking Microsoft's infringement and statutory damages of up to US$150,000 for each work that Microsoft allegedly misused.
[9]
Microsoft sued by authors over use of books in AI training
(Reuters) - Microsoft has been hit with a lawsuit by a group of authors who claim the company used their books without permission to train its Megatron artificial intelligence model. Kai Bird, Jia Tolentino, Daniel Okrent and several others alleged that Microsoft used pirated digital versions of their books to teach its AI to respond to human prompts. Their lawsuit, filed in New York federal court on Tuesday, is one of several high-stakes cases brought by authors, news outlets and other copyright holders against tech companies including Meta Platforms, Anthropic and Microsoft-backed OpenAI over alleged misuse of their material in AI training. The complaint against Microsoft came a day after a California federal judge ruled that Anthropic made fair use under U.S. copyright law of authors' material to train its AI systems but may still be liable for pirating their books. It was the first U.S. decision on the legality of using copyrighted materials without permission for generative AI training. Spokespeople for Microsoft did not immediately respond to a request for comment on the lawsuit. An attorney for the authors declined to comment. The writers alleged in the complaint that Microsoft used a collection of nearly 200,000 pirated books to train Megatron, an algorithm that gives text responses to user prompts. The complaint said Microsoft used the pirated dataset to create a "computer model that is not only built on the work of thousands of creators and authors, but also built to generate a wide range of expression that mimics the syntax, voice, and themes of the copyrighted works on which it was trained." Tech companies have argued that they make fair use of copyrighted material to create new, transformative content, and that being forced to pay copyright holders for their work could hamstring the burgeoning AI industry. The authors requested a court order blocking Microsoft's infringement and statutory damages of up to $150,000 for each work that Microsoft allegedly misused. (Reporting by Blake Brittain in Washington, Editing by Alexia Garamfalvi and David Gregorio)
[10]
Microsoft accused of using 2 lakh copyrighted books for AI training: Here's what happened
Case could redefine fair use and reshape how tech giants train AI with written works. The race to dominate the AI sector is constantly evolving, but it seems Microsoft may have taken a few creative shortcuts, and a growing group of authors isn't letting it slide. On June 25, a lawsuit filed in a New York federal court accused the tech giant of using over 200,000 pirated books to train its AI models. The plaintiffs? A formidable lineup of writers including Kai Bird (Pulitzer Prize-winner), Jia Tolentino (New Yorker staffer), and Daniel Okrent (former NYT public editor). The charge is that Microsoft trained its powerful AI on their copyrighted works without permission, payment, or even a heads-up. The case strikes at a fundamental tension in the AI era: how do you teach machines to understand language without violating the rights of those who create it? The lawsuit claims Microsoft relied on a shadow dataset filled with pirated books, digital versions of published works scraped from the web. This content, according to the complaint, was used to fine-tune powerful large language models like Megatron and possibly others under Microsoft's umbrella. Unlike tech manuals or public domain novels, the plaintiffs say these were contemporary, copyrighted books, and the AI's output often mimics their structure, tone, and narrative style. One example cited in the suit allegedly shows AI-generated text that reflects the distinctive voice of a plaintiff author, suggesting the model didn't just learn from books, it absorbed them. The authors are demanding an injunction to stop further use of their works, plus damages that could hit $150,000 per title. Multiplied across thousands of books, the figure could balloon into the billions. This lawsuit comes on the heels of a significant ruling in California just a day earlier. In a case involving Anthropic, another AI firm, a judge ruled that training on lawfully obtained content might qualify as fair use, but pirated works definitely don't. That precedent may now come back to haunt Microsoft. The court drew a key line: it's one thing to train a model using licensed or publicly available texts. But once AI companies dip into the vast sea of pirated literature online, they cross into clearly illegal territory.
Why this case matters
So far, the AI industry has largely operated in a legal grey zone. Developers argue that ingesting vast amounts of text, images, and code is necessary for building capable models and that doing so is covered under fair use laws. But creators say it's outright theft. This lawsuit joins a rising tide of legal action against AI companies. The New York Times is suing Microsoft and OpenAI in a landmark case. Comedian Sarah Silverman and other authors have launched suits against Meta and OpenAI. And musicians and visual artists are demanding AI companies stop using their work to generate lookalikes. For Microsoft, the stakes are enormous. If courts start siding with creators, it may force the company and its competitors to fundamentally rethink how AI systems are trained. That could mean licensing fees, royalties, and legal accountability for every dataset, every model, every release. Beyond the legal aspect, there is a broader question: What is creativity worth in the AI age? Authors spend years writing books. AI can churn out paragraphs in seconds.
If machines are learning from writers but the writers are left out of the loop, financially and ethically, can the system ever be considered fair? Whether Microsoft will settle, fight, or lose remains to be seen. But one thing is certain: this lawsuit could become a defining moment in the ongoing battle between creators and coders. And this time, the book isn't closed yet.
A group of authors has filed a lawsuit against Microsoft, claiming the company used nearly 200,000 pirated books to train its Megatron AI model without permission, seeking damages and an injunction.
In a significant development at the intersection of artificial intelligence and copyright law, Microsoft has been hit with a lawsuit by a group of prominent authors. The plaintiffs, including Pulitzer Prize winner Kai Bird and Whiting Award recipient Victor LaValle, allege that the tech giant used pirated digital versions of their books to train its Megatron artificial intelligence model without permission [1].
The lawsuit, filed in New York federal court, claims that Microsoft utilized a collection of nearly 200,000 pirated books to train Megatron, an algorithm designed to generate text responses to user prompts. The authors argue that this practice has resulted in an AI model capable of mimicking their writing styles and themes [2].
The complaint states that Microsoft's actions have created a "computer model that is not only built on the work of thousands of creators and authors, but also built to generate a wide range of expression that mimics the syntax, voice, and themes of the copyrighted works on which it was trained" [3].
The plaintiffs are seeking substantial damages, requesting up to $150,000 for each work allegedly misused by Microsoft. Additionally, they are pursuing a court order to block further infringement by the company [4].
This lawsuit is part of a larger trend of legal challenges brought by authors, news outlets, and other copyright holders against tech companies, including Meta Platforms, Anthropic, and OpenAI. These cases center on the alleged misuse of copyrighted material in AI training [5].
The Microsoft case follows closely on the heels of two significant rulings in similar disputes:
A California federal judge ruled that Anthropic's use of authors' material to train its AI systems constituted fair use under U.S. copyright law, though the company may still face liability for pirating books.
In a separate case, a judge ruled in favor of Meta, finding that its use of copyrighted books to train AI models was "transformative" and fell under fair use [2].
Tech companies have consistently argued that their use of copyrighted material falls under fair use, asserting that they create new, transformative content. They contend that being required to pay copyright holders could significantly impede the growth of the AI industry [1].
This lawsuit against Microsoft, along with similar cases, highlights the ongoing tension between technological innovation and intellectual property rights. The outcome of these legal battles could have far-reaching implications for the development and deployment of AI technologies, potentially reshaping how companies approach AI training and data acquisition [3].