2 Sources
[1]
Is there a clear picture of useful precedent set by Getty's copyright case against Stability AI? (Spoiler - not in focus)
Back in January 2023, Getty Images became the first media business to begin legal proceedings against an AI company in the UK over alleged unlawful use of its content. It wouldn't - and won't - be the last, of course, but late last month a ruling came down on this particular long-running saga that...well, doesn't exactly set much of a precedent to help deal with future incidents of this nature.

Getty had taken action against Stability AI, alleging that its generative model, Stable Diffusion, which produces AI-generated images based on user prompts, breached copyright. Getty contended that the training data included around 12.3 million visual assets from Getty Images as well as publicly accessible third-party websites.

The legal position in the UK - and geography is going to become very important in this case - is that the Copyright, Designs and Patents Act 1988 (CDPA) protects against the unauthorised reproduction, distribution and adaptation of works. What's rapidly become unclear is the legality of using copyrighted material to train general AI models - AI firms call it 'fair use' of materials; content creators/owners call it naked theft.

Among the various charges made against Stability AI by Getty were that the firm scraped millions of copyright-protected images without authorisation; that it used copyrighted works to train its AI model; that the appearance of Getty's trademarked watermarks within generated images constituted trademark infringement; and that it made model weights - at its most simplistic, the way in which an AI model 'learns' to improve during training - available for download, resulting in secondary copyright infringement.

Getty's claims fell into two broad categories. The first was primary infringement by Stability AI through allegedly downloading, storing, reproducing and modifying Getty's copyrighted works to train its model, then passing this material into the public domain via open-source download. This claim was dropped by Getty pre-trial on the basis that it was unable to prove that Stability AI had conducted its supposedly unauthorised copying in the UK itself, and thus within the reach of the CDPA.

So by the time things came to court, everything hinged on the second category: an allegation of secondary copyright infringement as a result of Stability AI bringing the Stable Diffusion model into the UK, which Getty alleged showed that the firm was knowingly handling and passing on unauthorised copies of copyrighted material.

For its part, Stability AI argued that the sourcing and training of Stable Diffusion and the storage of resulting materials took place geographically outside of the UK and therefore outside the scope of the CDPA; that any copyright infringement in the outputs stemmed from user actions, not Stability AI; that Stable Diffusion itself isn't an infringing copy or article; and that the Getty entities do not qualify for protection, with any extraction having taken place outside the UK.

So, who won? To which the answer might be that no-one did. Or both sides did, as they each sort of claim. In reality, the biggest loser was any hope of some kind of clarity of precedent that might prove helpful next time this sort of situation ends up in court. Or as the Judge in the case, Mrs Justice Joanna Smith, admitted:

The findings are both historic and extremely limited in scope.

At the end of November, Smith handed down a 205-page ruling that turned on a central question: whether an AI model produced by a training process in which model weights are exposed to copyright-infringing content is itself a deliberate infringement of that copyright.
On this point, she concluded it is not. The ruling found that to qualify as an infringing copy there has to be a copy, i.e. an explicit reproduction of the copyrighted work. Model weights at various iterations of the training process do not store the visual information in copyrighted works, Smith decided. Stability AI was found liable for limited trademark infringement related to the inclusion of Getty's watermarks in early versions of Stable Diffusion.

So, what are the precedents, if any, set here? The ruling does conclude that intangible AI models are as subject to copyright dispute as tangible goods. But it also makes clear that the use of copyrighted material from UK firms to train AI models outside of the UK is beyond the reach of current UK legislation. Both of these were pretty clear before this case anyway, but the ruling does reinforce the perceived inadequacy of UK law to deal with vendors from a largely US- and China-led industry pillaging content from UK sources with impunity if they choose to do so. As a useful blog posting from copyright expert Cerys Wyn Davies of law firm Pinsent Masons notes:

If an AI developer uses a training process which does not involve the tool itself storing or reproducing the data on which it was trained, then the developer will be able to circumvent copyright protection in the UK unless the copying for the purposes of the training takes place in the UK. This does not preclude copyright infringement actions in other jurisdictions where copying has taken place for the purposes of training, pursuant to international copyright conventions.

As to the two parties in this particular legal skirmish, both are publicly making the case for vindication. In a statement, Stability AI's general counsel, Christian Dowell, said:

Getty's decision to voluntarily dismiss most of its copyright claims at the conclusion of trial testimony left only a subset of claims before the court, and this final ruling ultimately resolves the copyright concerns that were the core issue. We are grateful for the time and effort the court has put forth to resolve the important questions in this case.

For its part, Getty said after the judgement:

Today's ruling confirms that Stable Diffusion's inclusion of Getty Images' trademarks in AI-generated outputs infringed those trademarks. Crucially, the Court rejected Stability AI's attempt to hold the user responsible for that infringement, confirming that responsibility for the presence of such trademarks lies with the model provider, who has control over the images used to train the model. This is a significant win for intellectual property owners. The ruling delivered another key finding: that, wherever the training and development did take place, Getty Images' copyright-protected works were used to train Stable Diffusion.

But it added:

Beyond the specifics of the decision, we remain deeply concerned that even well-resourced companies such as Getty Images face significant challenges in protecting their creative works given the lack of transparency requirements. We invested millions of pounds to reach this point with only one provider that we need to continue to pursue in another venue. We urge governments, including the UK, to establish stronger transparency rules which are essential to prevent costly legal battles and to allow creators to protect their rights.
The case's outcome was picked up by Getty Images CEO Craig Peters this week when he commented:

[The Judge] ruled in favor of Getty Images on our trademark infringement claim, confirming that inclusion of our trademarks in AI-generated outputs infringes those trademarks and that the responsibility for infringing output rests with Stability versus the end user. This is a win for rights holders everywhere. While we were unsuccessful on the secondary infringement claim and dropped the training claim ahead of trial due to lack of clarity on the location of such training, the ruling affirmed Getty Images' copyright-protected works were used to train Stable Diffusion.

He added that Getty will be taking forward these findings of fact into its ongoing US case against Stability AI, which has now been re-filed in California due to delays in Delaware, where it was originally launched. The court is now reviewing motions and Getty is also evaluating an appeal in the UK.

Peters made a point of emphasizing that Getty is open to working with partners around AI in ways that align with its more traditional content licensing practices. He noted that the firm has signed a number of deals to allow AI LLMs to use content, most recently a multi-year deal with Perplexity:

It is a licensing deal, very similar to other licensing deals that we've done traditionally with technology platforms that leverage our content within our product offering...Given the volume of these Large Language Models and the investments going in, we think that could be something that could develop into a material revenue stream for the company.

He concluded that the deals done to date set a useful practical precedent:

In each instance, Getty Images is doing what it has always done so well, providing high-quality content to customers to enhance their offerings at scale and on an economic basis. We see more opportunity here...we continue to do some level of data licensing for AI training to our third-party platforms, and that continues.

I'm not sure we're much further forward here in terms of the wider copyright debate, other than another reinforcement of the inadequacies of existing regulations, as Chris Middleton has ably documented passim. But, of course, this is a problem that is only likely to get worse. According to new research commissioned by CSC, a provider of business administration and compliance solutions, some 91% of senior legal professionals polled are concerned about online IP threats, with 85% reporting an increase in infringement activity over the past year and 90% expecting even more in the year ahead - and, yes, AI is the main driver here. The 2025 IP Frontiers Report surveyed 300 senior legal, compliance, and IP professionals across Europe, Asia Pacific, and North America, 88% of whom said AI-enabled systems are accelerating infringement, and 93% expressed concern that AI-generated fake assets will cause real harm to their businesses.

Right now, it seems to be left to content creators and owners to make appeals for good behavior by AI providers, or to tool up and get tough in the courts, a process that could take years and years to reach no useful conclusion at all.
Into the former camp this week we can add the Wikimedia Foundation, which publicly appealed to AI firms to recognize the importance of human intelligence alongside the artificial variety:

That's why Wikipedia is one of the highest-quality datasets in the world for training AI, and when AI developers try to omit it, the resulting answers are significantly less accurate, less diverse, and less verifiable. That's also why we are calling on AI developers and other content re-users who access our content to use it responsibly and sustain Wikipedia. They can accomplish this through two straightforward actions: attribution and financial support.

Attribution means that generative AI gives credit to the human contributions that it uses to create its outputs. This maintains a virtuous cycle that continues those human contributions that create the training data that these new technologies rely on. For people to trust information shared on the internet, platforms should make it clear where the information is sourced from and elevate opportunities to visit and participate in those sources. With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.

Financial support means that most AI developers should properly access Wikipedia's content through the Wikimedia Enterprise platform. Developed by the Wikimedia Foundation, this paid-for opt-in product allows companies to use Wikipedia content at scale and sustainably without severely taxing Wikipedia's servers, while also enabling them to support our nonprofit mission. Through proper attribution of information sources and better financial support for AI's technological impacts on Wikipedia, AI developers can secure both their own long-term futures and Wikipedia's.

That's the nice approach. Or you can lay down 'the law' as News Corp CEO Robert Thomson does when he declares:

Content crime does not and will not pay. One notable misconception is the value of IP in the age of AI. Information and sophisticated data are the essence of AI. And without these essential ingredients, AI is but empty, vacuous, ignorant infrastructure, electricity without alacrity, buildings without billings, chips without [ chops ].

Fresh off the back of a $1.5 billion award against Anthropic for its use of pirated books, Thomson adds:

It is fair to say this will not be the last case of its kind, given the proliferation of piracy and increased scrutiny of shameless scraping by these epigonic enterprises. We would obviously prefer to partner and to limit lawyers' fees, but let me be absolutely clear to every Large Language Model - however large, however small - if you have received stolen goods, we intend to pursue you relentlessly. You may not have done the actual stealing, but receiving stolen property is an offense in legal jurisdictions around the world.

At this point, I'd like to say onwards! But perhaps round and round in circles might be more appropriate for now...
[2]
What does the landmark Getty vs Stability AI court ruling really mean?
Legal expert reveals the impact on AI developers, creatives and regulators.

Last week, the English High Court handed down its landmark decision in Getty Images v Stability AI. This judgment sought to grapple with some of the growing tensions between established intellectual property laws in the UK and the rapid evolution of generative AI technologies. It drew close attention from the creative industry, AI developers and regulators, as it was expected to shape future approaches to AI governance across the creative and technology sectors. However, although the decision may appear to have provided some clarity on the interplay between AI and IP infringement, in reality there remains significant uncertainty.

Getty alleged that Stability used over 12 million Getty-owned or licensed photographs, without Getty's permission, to train and develop its image-generating AI model, Stable Diffusion. Getty argued that this undermined both the rights of its contributing photographers and the long-established licensing framework for the creative content industry and, in turn, constituted copyright infringement and trade mark infringement. Stability denied these allegations, arguing that the Stable Diffusion model only learned from the Getty images included in its training data, identifying patterns rather than reproducing the works themselves.

The judgment delivered mixed results. Getty's primary copyright claim (that the use of the Getty images to train Stable Diffusion infringed Getty's copyright in those images) was not addressed after the Court found that the alleged Stable Diffusion model training occurred on servers outside the UK and therefore fell outside the scope of UK copyright law. As such, the decision did not resolve the key question of whether training a generative AI model on copyrighted material, within the UK, without authorisation, would constitute copyright infringement. AI developers and rights-holders should therefore treat the ruling as only partial guidance, with significant uncertainty remaining around the boundary between innovation and infringement.

Getty also argued that Stability had imported (primarily via downloads) an "article" (the Stable Diffusion model), knowing, or having reason to believe, that it was an infringing copy of Getty's images. The Court found that although an "article" extends to an electronic copy stored in an intangible form, there had nevertheless been no infringement, as Stable Diffusion (or its model weights) never contained or stored a copy of any of the Getty images.

Getty did, however, succeed on limited trade mark grounds. The Court found that earlier versions of Stable Diffusion had generated a number of images bearing the 'Getty Images' and 'iStock' watermarks, which amounted to partial infringement. Although damages arising from this trade mark infringement finding are likely to be nominal, the Court's findings underscore the need for AI developers to implement appropriate safeguards to minimise the risk of reproducing protected trade marks in AI-generated content.

For now, the ruling offers some leeway for AI developers training generative AI systems on copyrighted materials outside of the UK. That said, the primary copyright infringement claim was largely constrained by jurisdictional limitations and did not provide a definitive ruling on the issue of training and developing AI models on copyrighted works within the UK.
Superficially, the judgment appears to present a win for the AI community, but arguably leaves the legal waters of copyright and AI training as murky as before. AI developers should continue to implement best practices to minimise intellectual property infringement and enforcement risk.

For creators and rights-holders, the decision is more frustrating. While the limited trade mark success was welcomed, the dismissal of the copyright claims underscored the constraints of the UK's existing intellectual property framework within the context of generative AI. Without statutory reform, enforcing rights over copyrighted data used to train AI models, particularly where training occurs outside the UK, will remain challenging.

For regulators, the judgment exposed gaps in existing intellectual property law in the UK, including questions of territorial reach. The UK Government may face mounting pressure to strengthen transparency obligations for large-scale AI and clarify jurisdictional boundaries for globally trained systems. In its press release, Getty called for stronger transparency measures to reduce costly disputes and better safeguard creators' rights.

The case underscores the ongoing tension between maintaining UK AI innovation and ensuring meaningful protection for creators. Looking ahead, the UK Government's ongoing consultations with expert groups from both the creative and technology sectors will be closely observed. These discussions will seek to balance the protection of human creativity against the promotion of AI innovation. Further developments through regulatory guidance or legislative reform are also to be expected.
A landmark UK court ruling in Getty Images' copyright case against Stability AI provides limited clarity on AI training rights, highlighting jurisdictional gaps in current intellectual property law while offering mixed results for both parties.
The English High Court has delivered its highly anticipated ruling in Getty Images v Stability AI, a case that many hoped would provide definitive guidance on the intersection of artificial intelligence and copyright law. However, the 205-page judgment handed down by Mrs Justice Joanna Smith in late November has left the legal landscape as murky as before, with both sides claiming partial victory in what has become a pyrrhic legal battle [1].

Getty Images initiated legal proceedings in January 2023, becoming the first major media company to challenge an AI firm in UK courts over alleged unauthorized use of copyrighted content. The case centered on allegations that Stability AI's Stable Diffusion model, which generates AI images from user prompts, had been trained on approximately 12.3 million visual assets from Getty Images and other publicly accessible websites without permission [1].
The most significant aspect of the ruling was what it failed to address rather than what it resolved. Getty's primary copyright infringement claim was dismissed not on its merits, but due to jurisdictional constraints. The court found that the alleged training of Stable Diffusion occurred on servers outside the UK, placing it beyond the reach of the Copyright, Designs and Patents Act 1988 (CDPA) [2].

This jurisdictional limitation meant that the court never addressed the fundamental question that the creative industry was hoping to see resolved: whether training a generative AI model on copyrighted material within the UK, without authorization, constitutes copyright infringement. As Justice Smith acknowledged, "The findings are both historic and extremely limited in scope" [1].

With the primary claim dismissed, the case hinged on allegations of secondary copyright infringement. Getty argued that Stability AI had imported an "article" - the Stable Diffusion model itself - knowing it contained infringing copies of Getty's images. However, the court ruled that while an "article" can include electronic copies stored in intangible form, Stable Diffusion's model weights never actually contained or stored copies of the Getty images [2].

The ruling established that for something to qualify as an infringing copy, there must be an explicit reproduction of the copyrighted work. Model weights, which represent the way an AI model learns during training, do not contain visual information from copyrighted works in a form that constitutes reproduction [1].
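To make that weights-versus-copies distinction concrete, here is a minimal, purely illustrative sketch - a hypothetical toy model, not anything drawn from the case or from Stable Diffusion itself. It fits a three-parameter linear model to 100 synthetic samples; the trained "model" that results is just three floating-point numbers summarizing patterns across all the samples, and the individual training rows are not stored anywhere in it.

# Illustrative toy example (assumed setup, unrelated to the actual litigation):
# "training" compresses many samples into a handful of numeric parameters.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))      # 100 synthetic training samples, 3 features each
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=100)

# Fit by least squares; the trained "model" is only the weight vector w.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

print(w.shape)  # (3,) - three floats learned from 100 samples
print(w)        # approx [2.0, -1.0, 0.5]; the rows of X cannot be read back from w

Whether that abstraction holds for multi-billion-parameter generative models trained on images was, of course, exactly what the court had to weigh; this sketch only illustrates the structural point that weights are learned parameters rather than an archive of the training inputs.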
Getty did achieve a partial victory on trademark infringement claims. The court found that earlier versions of Stable Diffusion had generated images bearing Getty Images and iStock watermarks, constituting trademark infringement. However, this success was limited in scope, with damages likely to be nominal [2].

The trademark finding does underscore the need for AI developers to implement appropriate safeguards to minimize the risk of reproducing protected trademarks in AI-generated content, but it falls far short of the comprehensive legal framework that content creators had hoped the case would establish.
For AI developers, the ruling provides some temporary relief, particularly for those training models outside the UK. However, legal experts warn that this apparent victory may be superficial. As copyright expert Cerys Wyn Davies noted, the decision highlights how AI developers can potentially circumvent UK copyright protection by ensuring that copying for training purposes takes place outside UK jurisdiction [1].

For content creators and rights holders, the decision represents a frustrating setback. The dismissal of copyright claims underscores the constraints of the UK's existing intellectual property framework when applied to generative AI technologies. Without statutory reform, enforcing rights over copyrighted data used to train AI models will remain challenging, particularly where training occurs outside the UK [2].

The judgment has exposed significant gaps in existing UK intellectual property law, particularly regarding territorial reach and jurisdictional boundaries for globally trained AI systems. In response to the ruling, Getty called for stronger transparency measures to reduce costly disputes and better safeguard creators' rights [2].

The UK Government now faces mounting pressure to strengthen transparency obligations for large-scale AI development and clarify jurisdictional boundaries. Ongoing consultations with expert groups from both creative and technology sectors will be closely watched as policymakers attempt to balance the protection of human creativity against the promotion of AI innovation [2].