9 Sources
[1]
Getty's landmark UK lawsuit on copyright and AI set to begin
LONDON, June 9 (Reuters) - Getty Images' landmark copyright lawsuit against artificial intelligence company Stability AI begins at London's High Court on Monday, with the photo provider's case likely to set a key precedent for the law on AI. The Seattle-based company, which produces editorial content and creative stock images and video, accuses Stability AI of breaching its copyright by using its images to "train" its Stable Diffusion system, which can generate images from text inputs. Getty, which is bringing a parallel lawsuit against Stability AI in the United States, says Stability AI unlawfully scraped millions of images from its websites and used them to train and develop Stable Diffusion. Stability AI - which has raised hundreds of millions of dollars in funding and in March announced investment by the world's largest advertising company, WPP (WPP.L) - is fighting the case and denies infringing any of Getty's rights. A Stability AI spokesperson said that "the wider dispute is about technological innovation and freedom of ideas," adding: "Artists using our tools are producing works built upon collective human knowledge, which is at the core of fair use and freedom of expression." Getty's case is one of several lawsuits brought in Britain, the U.S. and elsewhere over the use of copyright-protected material to train AI models, after ChatGPT and other AI tools became widely available more than two years ago. WIDER IMPACT Creative industries are grappling with the legal and ethical implications of AI models that can produce their own work after being trained on existing material. Prominent figures including Elton John have called for greater protections for artists. Lawyers say Getty's case will have a major impact on the law, as well as potentially informing government policy on copyright protections relating to AI. "Legally, we're in uncharted territory. This case will be pivotal in setting the boundaries of the monopoly granted by UK copyright in the age of AI," Rebecca Newman, a lawyer at Addleshaw Goddard, who is not involved in the case, said. She added that a victory for Getty could mean that Stability AI and other developers will face further lawsuits. Cerys Wyn Davies, from the law firm Pinsent Masons, said the High Court's ruling "could have a major bearing on market practice and the UK's attractiveness as a jurisdiction for AI development". Reporting by Sam Tobin; Editing by Andrew Heavens
[2]
Getty Images and Stability AI face off in British copyright trial that will test AI industry
LONDON (AP) -- Getty Images is facing off against artificial intelligence company Stability AI in a London courtroom for the first major copyright trial of the generative AI industry. Opening arguments before a judge at the British High Court are scheduled for Monday. The trial could last for three weeks. Stability, based in London, owns a widely used AI image-making tool that sparked enthusiasm for the instant creation of AI artwork and photorealistic images upon its release in August 2022. OpenAI introduced its surprise hit chatbot ChatGPT three months later. Seattle-based Getty has argued that the development of the AI image maker, called Stable Diffusion, involved "brazen infringement" of Getty's photography collection "on a staggering scale." Tech companies have long argued that "fair use" or "fair dealing" legal doctrines in the United States and United Kingdom allow them to train their AI systems on large troves of writings or images. Getty was among the first to challenge those practices when it filed copyright infringement lawsuits in the United States and the United Kingdom in early 2023. "What Stability did was inappropriate," Getty CEO Craig Peters told The Associated Press in 2023. He said creators of intellectual property should be asked for permission before their works are fed into AI systems rather than having to participate in an "opt-out regime." Stability has argued that the case doesn't belong in the United Kingdom because the training of the AI model technically happened elsewhere, on computers run by U.S. tech giant Amazon. Similar cases in the U.S. have not yet gone to trial. Stable Diffusion's roots trace to Germany, where computer scientists at Ludwig Maximilian University of Munich worked with the New York-based tech company Runway to develop the original algorithms. The university researchers credited Stability AI for providing the servers that trained the models, which require large amounts of computing power.
[3]
London AI firm says Getty copyright case poses 'overt threat' to industry
A London-based artificial intelligence company, Stability AI, has claimed that a copyright case brought by the global photography agency Getty Images represents an "overt threat" to the generative AI industry. Getty's case against Stability AI for copyright and trademark infringement relating to its vast photography archives reached the high court in London on Monday. Stability allows users to generate images using text prompts, and its directors include James Cameron, the Oscar-winning film director of Avatar and Titanic. But Getty called the people who were training the AI system "a bunch of tech geeks" and claimed they were indifferent to the problems their innovation might create. Stability countered by alleging that Getty was using "fanciful" legal routes and spending approximately £10m to fight a technology it feared was "an existential threat" to its business. Getty syndicates the work of about 50,000 photographers to customers in more than 200 countries. It alleges Stability trained its image generation model on its vast database of copyrighted photographs. As a result the program, called Stable Diffusion, outputs images with Getty Images watermarks still on them. Getty alleges that Stability was "completely indifferent to what they fed into the training data". It told the court the system amounted to "sticking our trademark on pornography" and "AI rubbish". Lawyers for Getty said the dispute over the unlicensed use of thousands of its photographs, including images of celebrities, politicians and news events, "is not a battle between creatives and technology, where a win for Getty Images means the end of AI". They added: "The problem is when AI companies such as Stability want to use those works without payment." Lindsay Lane KC, representing Getty Images, said: "This was a bunch of tech geeks who were so excited by AI that they were indifferent to any of the dangers or problems it presents." In submissions to the court on Monday Getty claimed Stability had trained its image generation model on databases that contained child sexual abuse material. Stability is fighting the overall Getty claim and its lawyer said the allegations relating to child sexual abuse material were "repugnant". A spokesperson for Stability AI said it was committed to preventing misuse of its technology, "particularly in the creation and dissemination of harmful content, including CSAM [child sexual abuse material]". It said it had robust safeguards "to enhance our safety standards to protect against bad actors". The case comes amid a wider campaign from artists, writers and musicians, including Elton John and Dua Lipa, to protect their copyright from alleged theft by generative AI companies, which then use it to allow their customers to create new pictures, music or text. The UK parliament is locked in a similar dispute after the government proposed that copyright holders would have to opt out of their material being used to train algorithms and produce AI-generated content, otherwise it would be free to use by tech companies. "Getty Images, of course, recognises that the AI industry overall may be a force for good, but that doesn't justify allowing those developing AI models to ride roughshod over intellectual property rights," said Lane.
The trial, which is scheduled to run for several weeks, will focus in part on the use of images by celebrated photographers, including photos of the former Liverpool football coach Jürgen Klopp taken by the award-winning British sports photographer Andrew Livesey; a picture of the Chicago Cubs baseball team by Gregory Shamus, an American sports photographer; a photo of the actor and musician Donald Glover by Alberto Rodriguez; and photos of the actor Eric Dane and film director Christopher Nolan by Andreas Rentz. Seventy-eight thousand pages of evidence have been disclosed in the case and AI experts are being called to give evidence from the University of California, Berkeley and the University of Freiburg in Germany.
[4]
Getty's landmark AI scraping trial begins in UK
Generative AI is new enough that its training data seems to be constantly up for debate -- and that debate is heading to the courts in the UK in a new, sure-to-be landmark case between Getty Images and Stability AI. In January 2023, Getty Images announced it was suing Stability AI for allegedly using its photos to train AI models without permission, violating existing copyright law. Getty accused Stability AI of copying and processing "millions of images protected by copyright and the associated metadata owned or represented by Getty Images absent a license to benefit Stability AI's commercial interests and to the detriment of the content creators," the company said in a press release at the time. Getty Images noted that they have provided licenses to other tech companies seeking to use their photos "for purposes related to training artificial intelligence systems in a manner that respects personal and intellectual property rights." Stability AI, the company said, didn't pursue a license with them "and instead, we believe, chose to ignore viable licensing options and long-standing legal protections in pursuit of their stand-alone commercial interests," the press release read. In response, Stability AI told Out-Law at the time that it takes "these matters seriously" and that the company only learned about the lawsuit "via the press." Now, on June 9, London's High Court is finally hearing the case. Getty brought a similar lawsuit against Stability AI in the U.S., according to Reuters, but it has yet to hit the courts. Stability argues that the training didn't take place in the UK and that the images generated from the AI don't use Getty's copyrighted works, Hack Read reported. According to the Associated Press, Stability said just a "tiny portion" of the outputs from its image-generator "look at all similar" to Getty images. Reuters reported that a spokesperson for Stability AI said before the trial that "the wider dispute is about technological innovation and freedom of ideas." "Artists using our tools are producing works built upon collective human knowledge, which is at the core of fair use and freedom of expression," the spokesperson said, according to Reuters. Stability AI lawyer Hugo Cuddigan argued that Getty's lawsuit poses an "overt threat" to "the wider generative AI industry," but Getty's lawyers argue that this isn't about AI; it's about copyright law. "It is not a battle between creatives and technology, where a win for Getty Images means the end of AI," Getty's lawyer Lindsay Lane told the court, according to Reuters, adding: "The two industries can exist in synergistic harmony because copyright works and database rights are critical to the advancement and success of AI ... the problem is when AI companies such as Stability want to use those works without payment."
[5]
Getty Images Faces Off Against Stability in Court as First Major AI Copyright Trial Begins
The trial involving Getty Images and Stability AI began in London this week, and the AI company has claimed in its opening statement that the copyright case represents an "overt threat" to the entire industry. Getty rejects that notion, arguing in court yesterday that it "is not a battle between creatives and technology, where a win for Getty Images means the end of AI." Getty has accused Stability AI of using thousands of its copyrighted photographs to train its AI image generator model Stable Diffusion in a "brazen infringement" on a "staggering scale". In the past, Getty has shown that Stable Diffusion could reproduce an image with the Getty watermark on it. According to a report in The Guardian, Getty alleges that Stability engineers were "completely indifferent to what they fed into the training data". The photo agency says that it amounted to "sticking our trademark on pornography" and "AI rubbish". "Getty Images, of course, recognizes that the AI industry overall may be a force for good, but that doesn't justify allowing those developing AI models to ride roughshod over intellectual property rights," says Lindsay Lane KC, representing Getty Images. Getty says it is a problem that AI companies, such as Stability, use its work without payment. The Seattle-based company represents almost 600,000 content creators around the world -- a good chunk of those are photographers. "This was a bunch of tech geeks who were so excited by AI that they were indifferent to any of the dangers or problems it presents," adds Lane. The Guardian notes that the trial will focus on specific photos taken by famous photographers. Getty plans to bring up photos of the Chicago Cubs taken by sports photographer Gregory Shamus and photos of film director Christopher Nolan taken by Andreas Rentz. In all, 78,000 pages of evidence have been disclosed for the case and AI experts are being called in to give testimonies. Getty is also suing Stability AI in the United States in a parallel case. The trial in London is expected to run for three weeks and will be followed by a written decision from the judge at a later date. Getty also alleges that Stable Diffusion had child sexual abuse material within its training data as well as Getty's content, a claim that Stability's lawyer calls "repugnant", adding that the company has protections in place "to enhance our safety standards to protect against bad actors". Last month, Getty CEO Craig Peters revealed that the company is spending "millions and millions" of dollars on the case. Stability noted this in its opening statement alleging Getty is using "fanciful" legal routes to fight a technology that it fears is an "existential threat" to its business.
[6]
Getty's landmark UK lawsuit on copyright and AI set to begin
A landmark copyright lawsuit between Getty Images and Stability AI commenced at London's High Court, potentially setting a precedent for AI law. Getty accuses Stability AI of unlawfully using its images to train its Stable Diffusion system. Stability AI denies infringement, arguing its tools foster innovation and artistic freedom. Getty Images' landmark copyright lawsuit against artificial intelligence company Stability AI begins at London's High Court on Monday, with the photo provider's case likely to set a key precedent for the law on AI. The Seattle-based company, which produces editorial content and creative stock images and video, accuses Stability AI of breaching its copyright by using its images to "train" its Stable Diffusion system, which can generate images from text inputs. Getty, which is bringing a parallel lawsuit against Stability AI in the United States, says Stability AI unlawfully scraped millions of images from its websites and used them to train and develop Stable Diffusion. Stability AI -- which has raised hundreds of millions of dollars in funding and in March announced investment by the world's largest advertising company, WPP -- is fighting the case and denies infringing any of Getty's rights. A Stability AI spokesperson said that "the wider dispute is about technological innovation and freedom of ideas," adding: "Artists using our tools are producing works built upon collective human knowledge, which is at the core of fair use and freedom of expression." Getty's case is one of several lawsuits brought in Britain, the U.S. and elsewhere over the use of copyright-protected material to train AI models, after ChatGPT and other AI tools became widely available more than two years ago. Wider impact Creative industries are grappling with the legal and ethical implications of AI models that can produce their own work after being trained on existing material. Prominent figures including Elton John have called for greater protections for artists. Lawyers say Getty's case will have a major impact on the law, as well as potentially informing government policy on copyright protections relating to AI. "Legally, we're in uncharted territory. This case will be pivotal in setting the boundaries of the monopoly granted by UK copyright in the age of AI," Rebecca Newman, a lawyer at Addleshaw Goddard, who is not involved in the case, said. She added that a victory for Getty could mean that Stability AI and other developers will face further lawsuits. Cerys Wyn Davies, from the law firm Pinsent Masons, said the High Court's ruling "could have a major bearing on market practice and the UK's attractiveness as a jurisdiction for AI development".
[7]
Getty Sues Stability AI Over Copyrighted Image Scraping
The High Court in London has started hearing the lawsuit that Getty Images has filed against Stability AI, the developer of an AI text-to-image generator. Getty accuses Stability AI of unlawfully scraping millions of copyrighted images to train its image-generation models. Justice Joanna Smith, the British High Court judge hearing the case, described it as "highly complex and technical," raising "numerous novel issues for the court to consider." She made these remarks during one of the four hearings held earlier this year, before the trial began on June 9, 2025. Getty Images is a licensing platform for copyrighted multimedia content, including photos, illustrations, vectors, videos, and audio files. The lawsuit against Stability AI was filed in 2023. In January, Getty Images announced in a press release that it had initiated legal proceedings against Stability AI in the High Court of Justice in London. It claimed that the AI startup had infringed copyrights owned or represented by Getty Images. "Stability AI unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images," reads Getty's position. Getty stated that it has licensed its content to other companies for training artificial intelligence systems while respecting personal and intellectual property rights. However, it claims that Stability AI did not seek such a license, instead pursuing "their stand-alone commercial interests" in creating Stable Diffusion, a deep learning text-to-image model released in 2022. Getty's Terms of Use explicitly prohibit "using any data mining, robots, or similar data gathering or extraction methods." Additionally, Getty raised new allegations about Stability's potential to produce child sexual abuse material (CSAM) and apply Getty's watermark on AI-generated images. Justice Smith advised Getty to plead these allegations separately and clarified that Getty's reference to "pornography and violent images" does not fall under criminal acts, unlike the creation of CSAM. Oliver Fairhurst, a copyright lawyer specialising in AI, called this case "an absolute behemoth." He outlined the legal issues involved. Trademark and copyright infringement are common in such lawsuits: a trademark is any graphical representation distinguishing a person's goods or services from others in the marketplace, while copyright grants exclusive rights to the creator of an original work. Database rights infringement is slightly different but falls under the broader umbrella of copyright law. A specific provision under the UK's Copyright and Rights in Databases Regulations 1997 protects data defined as a "collection of independent works, data, or other materials arranged in a systematic or methodical way." Similarly, "passing off" refers to the unauthorised use of another person's goods, services, or business goodwill, potentially causing confusion or deception in the market and leading to unfair competition. Getty claims that Stability must now prove it did not violate any copyrights, as the AI-generated output closely resembles the original images. Lindsay Lane KC, arguing for Getty, stated that holding users responsible for AI-generated images would assume an unrealistic level of technical literacy. She also highlighted that users might assume the watermarked images have legitimate permission, thereby reinforcing the trademark violation argument.
Stable Diffusion was trained on a dataset called LAION-5B, a large-scale open dataset released by the LAION research collective under the Creative Commons (CC-BY 4.0) license. The dataset consists of 5.85 billion links to images hosted online, paired with short descriptions. The model itself grew out of work by the Machine Vision & Learning Group led by the Germany-based computer scientist Björn Ommer. When a user enters a prompt, the system encodes the text and uses it to guide image generation. The AI then generates images from random noise using a process called diffusion, which involves corrupting an image by adding noise; in the reverse diffusion (or denoising) process, the AI restores the corrupted image into a recognisable form -- the final AI-generated content (see the illustrative code sketch at the end of this source entry). It is important to note that Stability AI trained its model on the LAION-5B dataset but does not store the original images. Instead, the model learns patterns and structural similarities to generate "new" images based on user prompts. The legal complexity arises from the fact that Stability AI's current model may not contain exact replicas of Getty images -- if it used them at all. Instead, it generates images with structural similarities. Even so, if copyrighted material was used, even temporarily, it could amount to infringement. Under Section 17(6) of the UK's Copyright, Designs and Patents Act 1988, making transient copies "incidental to some other use of the work" can still be a copyright violation. UK courts have held, for instance, that making transient copies of copyright material into random access memory (RAM) is a restricted act, in a case examining the legality of "effective technological measures" used by video game console manufacturers to prevent piracy. In 2023, Emad Mostaque, founder of Stability AI, claimed on X that the company's training data were "ethically, morally, and legally sourced and used." However, during the ongoing trial, Stability did acknowledge that there were some instances where the trademarks appeared on AI-generated images. Hugo Cuddigan KC, arguing for Stability, said the images submitted by Getty were "contrived," implying they were artificially generated solely for litigation purposes. Before the trial began on June 9, a spokesperson for Stability stated that the dispute concerns "technological innovation and freedom of ideas." They argued that enabling artists to use these tools falls under fair use and freedom of expression. Stability AI may defend itself by comparing its denoising and data-scraping processes to how humans browse the internet using search engines, which is not considered copyright infringement. Fairhurst noted that Stability AI could argue it used copyrighted content under fair use, having applied creativity to build something new. It might also claim protection under legal exceptions and data mining provisions such as Article 4 of the EU's Digital Single Market Copyright Directive and Section 29A of the UK's Copyright, Designs and Patents Act 1988. Since Stable Diffusion was developed in partnership with a German university, Stability AI could argue the training was for non-commercial academic research. However, post-Brexit, the UK no longer falls under EU jurisdiction, so Article 4 does not apply; at most, Section 29A may provide Stability with limited respite.
Stability may also invoke Sections 28A and 29A of the Copyright, Designs and Patents Act 1988, which permit limited copying and data mining, but only where the activity has no "independent economic significance" and is carried out for "non-commercial research". While Stability AI released Stable Diffusion under a permissive open-source license, it also monetises tools like DreamStudio, blending open-source development with commercial services. Getty argues that Stability AI built commercial products on the back of content acquired, allegedly, without proper licensing.
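To make the diffusion and denoising process described in this source more concrete, here is a minimal Python sketch of the two steps. It is purely illustrative: the toy 8x8 "image", the linear noise schedule and the placeholder predict_noise function are assumptions standing in for what, in Stable Diffusion, is a large neural network guided by the user's text prompt; this is not Stability AI's code or the actual Stable Diffusion implementation.

import numpy as np

rng = np.random.default_rng(0)
T = 50                                 # number of diffusion steps (assumed for illustration)
betas = np.linspace(1e-4, 0.02, T)     # simple linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def add_noise(x0, t):
    # Forward diffusion: corrupt a clean image x0 with Gaussian noise at step t.
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

def predict_noise(xt, t):
    # Placeholder for the trained network that predicts the added noise.
    # In a real system this is a large learned model conditioned on the text prompt.
    return np.zeros_like(xt)

def generate(shape):
    # Reverse diffusion: start from pure random noise and iteratively denoise it.
    x = rng.standard_normal(shape)
    for t in reversed(range(T)):
        eps = predict_noise(x, t)
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)  # keep a little noise until the final step
    return x

clean = rng.random((8, 8))             # toy 8x8 "image"
noisy = add_noise(clean, T - 1)        # heavily corrupted copy, as used during training
generated = generate(clean.shape)      # a new "image" produced from random noise alone
print(noisy.shape, generated.shape)

Even in this toy version, the point made above is visible: training images are used only while fitting the noise predictor, and generation afterwards starts from random noise rather than from stored copies of the training pictures.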
[8]
Getty Images and Stability AI face off in British copyright trial that will test AI industry
LONDON -- Getty Images is facing off against artificial intelligence company Stability AI in a London courtroom for the first major copyright trial of the generative AI industry. Opening arguments before a judge at the British High Court began on Monday. The trial could last for three weeks. Stability, based in London, owns a widely used AI image-making tool that sparked enthusiasm for the instant creation of AI artwork and photorealistic images upon its release in August 2022. OpenAI introduced its surprise hit chatbot ChatGPT three months later. Seattle-based Getty has argued that the development of the AI image maker, called Stable Diffusion, involved "brazen infringement" of Getty's photography collection "on a staggering scale." Tech companies have long argued that "fair use" or "fair dealing" legal doctrines in the United States and United Kingdom allow them to train their AI systems on large troves of writings or images. Getty was among the first to challenge those practices when it filed copyright infringement lawsuits in the United States and the United Kingdom in early 2023. "What Stability did was inappropriate," Getty CEO Craig Peters told The Associated Press in 2023. He said creators of intellectual property should be asked for permission before their works are fed into AI systems rather than having to participate in an "opt-out regime." Getty's legal team told the court Monday that its position is that the case isn't a battle between the creative and technology industries and that the two can still work together in "synergistic harmony" because licensing creative works is critical to AI's success. "The problem is when AI companies such as Stability AI want to use those works without payment," Getty's trial lawyer, Lindsay Lane, said. She said the case was about "straightforward enforcement of intellectual property rights," including copyright, trademark and database rights. Getty Images "recognizes that the AI industry is a force for good but that doesn't justify those developing AI models to ride roughshod over intellectual property rights," Lane said. Stability AI had a "voracious appetite" for images to train its AI model, but the company was "completely indifferent to the nature of those works," Lane said. Stability didn't care if images were protected by copyright, had watermarks, were not safe for work or were pornographic and just wanted to get its model to the market as soon as possible, Lane said. "This trial is the day of reckoning for that approach," she said. Stability has argued that the case doesn't belong in the United Kingdom because the training of the AI model technically happened elsewhere, on computers run by U.S. tech giant Amazon. Similar cases in the U.S. have not yet gone to trial. In the years after introducing its open-source technology, Stability struggled to capitalize on the popularity of the tool, battling lawsuits, misuse and other business problems. Stable Diffusion's roots trace to Germany, where computer scientists at Ludwig Maximilian University of Munich worked with the New York-based tech company Runway to develop the original algorithms. The university researchers credited Stability AI for providing the servers that trained the models, which require large amounts of computing power. Stability later blamed Runway for releasing an early version of Stable Diffusion that was used to produce abusive sexual images, but also said it would have exclusive control of more recent versions of the AI model. 
Stability last year announced what it described as a "significant" infusion of money from new investors including Facebook's former president Sean Parker, who is now chair of Stability's board. Parker also has experience in copyright disputes as the co-founder of online music company Napster, which temporarily shuttered in the early 2000s after the record industry and popular rock band Metallica sued over copyright violations. The new investments came after Stability's founding CEO Emad Mostaque quit and several top researchers left to form a new German startup, Black Forest Labs, which makes a competing AI image generator. Kelvin Chan and Matt O'Brien, The Associated Press
[9]
Getty argues its landmark UK copyright case does not threaten AI
LONDON (Reuters) - Getty Images' landmark copyright lawsuit against artificial intelligence company Stability AI began at London's High Court on Monday, with Getty rejecting Stability AI's contention the case posed a threat to the generative AI industry. Seattle-based Getty, which produces editorial content and creative stock images and video, accuses Stability AI of using its images to "train" its Stable Diffusion system, which can generate images from text inputs. Getty, which is bringing a parallel lawsuit against Stability AI in the United States, says Stability AI unlawfully scraped millions of images from its websites and used them to train and develop Stable Diffusion. Stability AI - which has raised hundreds of millions of dollars in funding and in March announced investment by the world's largest advertising company, WPP - is fighting the case and denies infringing any of Getty's rights. Before the trial began on Monday, Stability AI's spokesperson said "the wider dispute is about technological innovation and freedom of ideas". "Artists using our tools are producing works built upon collective human knowledge, which is at the core of fair use and freedom of expression," the spokesperson said. In court filings, Stability AI lawyer Hugo Cuddigan said Getty's lawsuit posed "an overt threat to Stability's whole business and the wider generative AI industry". Getty's lawyers said that argument was incorrect and their case was about upholding intellectual property rights. "It is not a battle between creatives and technology, where a win for Getty Images means the end of AI," Getty's lawyer Lindsay Lane told the court. She added: "The two industries can exist in synergistic harmony because copyright works and database rights are critical to the advancement and success of AI ... the problem is when AI companies such as Stability want to use those works without payment." WIDER IMPACT Getty's case is one of several lawsuits brought in Britain, the U.S. and elsewhere over the use of copyright-protected material to train AI models, after ChatGPT and other AI tools became widely available more than two years ago. Creative industries are grappling with the legal and ethical implications of AI models that can produce their own work after being trained on existing material. Prominent figures including Elton John have called for greater protections for artists. Lawyers say Getty's case will have a major impact on the law, as well as potentially informing government policy on copyright protections relating to AI. "Legally, we're in uncharted territory. This case will be pivotal in setting the boundaries of the monopoly granted by UK copyright in the age of AI," Rebecca Newman, a lawyer at Addleshaw Goddard, who is not involved in the case, said. Cerys Wyn Davies, from the law firm Pinsent Masons, said the High Court's ruling "could have a major bearing on market practice and the UK's attractiveness as a jurisdiction for AI development". (Reporting by Sam Tobin; Editing by Andrew Heavens)
Getty Images and Stability AI begin a high-stakes copyright trial in London, addressing the use of copyrighted images in AI training. The case could set a significant precedent for AI and copyright law.
In a landmark case that could shape the future of artificial intelligence (AI) and copyright law, Getty Images has initiated legal proceedings against Stability AI in London's High Court. The trial, which began on June 9, 2025, centers on allegations that Stability AI unlawfully used millions of Getty's copyrighted images to train its AI image generation model, Stable Diffusion [1].
Getty Images, a Seattle-based company that produces and syndicates editorial content and stock images, accuses Stability AI of "brazen infringement" on a "staggering scale" [2]. The photo agency claims that Stability AI scraped millions of images from its websites without permission, using them to train and develop the Stable Diffusion system [1].
Stability AI, which has raised hundreds of millions in funding and secured investment from WPP, the world's largest advertising company, denies infringing on Getty's rights [1]. The company argues that the case doesn't belong in the UK, as the AI model's training technically occurred on Amazon's computers in the United States [2](https://apnews.com/article/getty-images-stability-ai-copyright-trial-stable-diffusion-580ba200a3296c87207983f04cda4680).
This trial is seen as a pivotal moment for the AI industry, with potential ramifications extending far beyond the immediate parties involved. Stability AI has claimed that Getty's lawsuit poses an "overt threat" to the entire generative AI sector [3].
Getty Images maintains that the dispute is not about stifling AI innovation but about protecting intellectual property rights. Lindsay Lane KC, representing Getty, stated, "It is not a battle between creatives and technology, where a win for Getty Images means the end of AI" [4]. Getty argues that AI companies should seek proper licensing for the use of copyrighted materials in their training processes.
The case highlights the ongoing struggle between technological advancement and copyright protection in the digital age. Creative industries are grappling with the legal and ethical implications of AI models that can produce their own work after being trained on existing material [1]. Prominent figures, including Elton John, have called for greater protections for artists in the face of AI advancements [3].
Legal experts suggest that the outcome of this trial could have far-reaching consequences. Rebecca Newman, a lawyer at Addleshaw Goddard, noted, "Legally, we're in uncharted territory. This case will be pivotal in setting the boundaries of the monopoly granted by UK copyright in the age of AI" [1]. A victory for Getty could potentially open the floodgates for similar lawsuits against AI developers.
As the trial unfolds over the next three weeks, the tech and creative industries alike will be watching closely. The court's decision could significantly influence market practices and the UK's attractiveness as a jurisdiction for AI development [5].