The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On Tue, 17 Dec, 4:02 PM UTC
15 Sources
[1]
UK Photographers Reject Government Plan to Allow AI Companies to Train on Their Data
U.K. photographers have rejected a move by the British government that would allow tech companies to train AI tools with their work. The proposal would have allowed tech firms like OpenAI, Google, and Meta to train on the published works of creatives unless the copyright holder actively opted out. The U.K.-based Association of Photographers (AOP) joined the Creative Rights in AI Coalition (CRAC), which also includes Getty Images, the Motion Picture Association, the Professional Publishers Association, and a number of other bodies representing different creative industries. "Whilst members are still digesting the details of the consultation, rights holders do not support the new exception to copyright proposed. In fact, rights holders consider that the priority should be to ensure that current copyright laws are respected and enforceable," CRAC says in a statement. "The only way to guarantee creative control and spur a dynamic licensing and generative AI market is for the onus to be on generative AI developers to seek permission and engage with rights holders to agree licences. We welcome proposals for transparency measures which will allow rights holders to understand how their work has been used, but these should be implemented to make existing copyright law enforceable, rather than being offered as a 'trade off' for the degradation of copyright protections." The Guardian reports that the coalition of creatives formed after Britain's technology and culture minister Chris Bryant put forward a system that would "improve access to content by AI developers, whilst allowing rights holders to control how their content is used for AI training". Bryant argued in parliament that there is a danger international developers will train their models on U.K. content from overseas if the country adopts a "regime based on proactive, explicit permission".
"This could significantly disadvantage sectors across our economy, including the creative industries, and sweep the rug from underneath British AI developers," Bryant said. However, the creative industries disagree. During a debate in parliament's upper chamber, the House of Lords, one member said the government's plan is like asking shopkeepers to "opt out of shoplifters". "I struggle to think of another situation where someone protected by law must proactively wrap it around themselves on an individual basis," said House of Lords member Beeban Kidron.
[2]
UK coalition of creatives rejects government plan on AI copyright exemption - SiliconANGLE
UK coalition of creatives rejects government plan on AI copyright exemption

A coalition of publishers, news media, musicians, photographers, and film producers has rejected a proposal issued earlier this week by the U.K. government to allow artificial intelligence companies to train their large language models on their content unless they opt out. Since the rise of generative AI, tech companies have found themselves in legal tangles with the artists who create the work on which AI products are trained. While there's no easy fix, the U.K. has mulled over an opt-out model in which creatives must take the initiative to ask companies to keep their hands off their work. This has largely been dismissed by the creatives, who prefer an "opt-in" model. This week the Labour government introduced a set of proposals in an effort to placate the artists who feel their copyright is being infringed while still giving AI developers "wide access to material to train world-leading models." The proposal was based on an opt-out model that would create a loophole through which AI companies could skirt copyright laws. Today, the Creative Rights in AI Coalition said the onus should not be on the owner to opt out but on the AI developer to seek permission and ink a licensing deal. The coalition, a list of heavyweights, includes the Guardian, Financial Times, Telegraph, the Daily Mail Group, and Getty Images, as well as the Motion Picture Association, the Society of Authors, the British Phonographic Industry, and the Independent Society of Musicians. "Rights holders do not support the new exception to copyright proposed," the group said in a statement shared with The Guardian. "In fact, rights holders consider that the priority should be to ensure that current copyright laws are respected and enforceable. 
The only way to guarantee creative control and spur a dynamic licensing - and generative AI - market is for the onus to be on generative AI developers to seek permission and engage with rights holders to agree licenses." In a recent poll in the U.K., 72% of respondents said AI companies should have to pay royalties to the creators of content that has been used to train AI, while 80% said those same companies should make it clear just what content they've used.
[4]
UK arts and media reject plan to let AI firms use copyrighted material
Writers, publishers, musicians, photographers, movie producers and newspapers have rejected the Labour government's plan to create a copyright exemption to help artificial intelligence companies train their algorithms. In a joint statement, bodies representing thousands of creatives rejected the proposal made by ministers on Tuesday that would allow companies such as OpenAI, Google and Meta to train their AI systems on published works unless their owners actively opt out. The Creative Rights in AI Coalition (Crac) said existing copyright laws must be respected and enforced rather than degraded. The coalition includes the British Phonographic Industry, the Independent Society of Musicians, the Motion Picture Association and the Society of Authors, as well as Mumsnet, the Guardian, Financial Times, Telegraph, Getty Images, the Daily Mail Group and Newsquest. Their intervention comes a day after the technology and culture minister Chris Bryant told parliament the proposed system, subject to a 10-week consultation, would "improve access to content by AI developers, whilst allowing rights holders to control how their content is used for AI training". Tech UK, an industry lobby group, has called for a "more open" market to enable firms to use copyrighted data and make payments. The Conservative chair of the Commons culture, media and sport select committee, Caroline Dinenage, alleged the government had "fully drunk the Kool-Aid on AI". But Bryant told MPs: "If we were to adopt too tight a regime based on proactive, explicit permission, the danger is that international developers would continue to train their models using UK content accessed overseas, but may not be able to deploy them in the UK ... this could significantly disadvantage sectors across our economy, including the creative industries, and sweep the rug from underneath British AI developers." 
The creative industries want the onus to be on generative AI developers to seek permission, agree licences and pay rights holders if they want to train algorithms with the power to write, make moving images, pictures and music. The joint statement from the creative industries, shared with the Guardian, said: "Rights holders do not support the new exception to copyright proposed. In fact, rights holders consider that the priority should be to ensure that current copyright laws are respected and enforceable. The only way to guarantee creative control and spur a dynamic licensing - and generative AI - market is for the onus to be on generative AI developers to seek permission and engage with rights holders to agree licences." Last week, Paul McCartney and Kate Bush became the latest high-profile British creatives to call for curbs on AI companies engaging in copyright theft. They joined the actors Julianne Moore, Stephen Fry and Hugh Bonneville in signing a petition, now backed by more than 37,500 people, which states that the "unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted". The novelist Kate Mosse has backed a parallel campaign for amendments to the data bill that would allow the enforcement of the UK's existing copyright law, thereby allowing creators to negotiate fair payment when licensing their content. In a House of Lords debate on those amendments this week, their proposer, Beeban Kidron, compared the government's suggested system to asking shopkeepers to "opt out of shoplifters" and said: "I struggle to think of another situation where someone protected by law must proactively wrap it around themselves on an individual basis." 
Lord Clement-Jones, Lib Dem spokesperson on the digital economy, said the government's proposed copyright exemption was "based on the mistaken idea, promoted by tech lobbyists and echoed in the consultation, that there is a lack of clarity in existing copyright law." The science minister, Lord Vallance, said the government wanted to "support rights-holders to continue to exercise control over the use of their content and their ability to seek remuneration for this" and "support the development of world-leading AI models in the UK by ensuring that access can be appropriately wide".
[5]
UK's AI Copyright Proposal Draws Criticism -- Creatives Warn of Industry Risks
Critics argue that requiring content creators to opt out of AI training may fail to prevent the unapproved use of copyrighted works. Amid ongoing disputes over the use of copyrighted material to train AI, the U.K. government has proposed amending intellectual property law in the country to clarify the rules. Under the proposal, data mining for AI training would be exempt from copyright obligations unless rights holders specify otherwise. Despite the government's claim that such an arrangement is in the best interests of different stakeholders, publishers and creatives have voiced concerns about the implications of a default copyright exemption.

UK Explores Options for Copyright Law

Citing the current uncertainty surrounding AI copyright law, the government has opened a consultation to "find the right balance" between encouraging AI innovation and protecting the interests of creative industries. The government outlined three policy options. The first would require AI developers to obtain an express license from copyright owners to use their content. The second would provide a broad "data mining exemption" that permits AI training without restrictions. The third option, preferred by the government, would let AI developers train models on copyrighted materials unless rights holders opt out. The government stated that this "would mean that AI developers are able to train on large volumes of web-based material without risk of infringement." "Importantly, right holders are also able to control the use of their works using effective and accessible technologies and seek payment through licensing agreements," the notice read. In theory, the government's copyright proposal should empower creators to withhold their content from AI training libraries. However, critics such as the BBC oppose the government's plans. 
They argue that the opt-out model still favors AI developers and that inclusion in training datasets should be based on commercial licensing agreements.

Criticism of the Government's Approach

Owen Meredith, chief executive of the News Media Association, said the government's position "fails to address the real issue." "News publishers deserve control over when and how their content is used and, crucially, fair remuneration for its use," he stressed. "Instead of proposing unworkable systems such as the 'rights reservations' (or 'opt-out') regime, the government should focus on implementing transparency requirements within the existing copyright framework. Only this will ensure that creatives and the [generative AI] firms that rely on them for high-quality data can thrive together."

Opt Out vs. Opt In

Despite the concerns of some copyright advocates, platform operators increasingly require users to specifically opt out of having their data used to train AI. This creates challenges for intellectual property owners in a digital ecosystem where the same content may be hosted in multiple locations, often without rights holders' knowledge. Compared with a system where express consent is required for AI training, granting AI developers default permission would make it more difficult for creators to retain control over how their content is used.
[6]
UK proposes letting tech firms use copyrighted work to train AI
Consultation suggests opt-out scheme for creatives who don't want their work used by Google, OpenAI and others

Tech companies will be allowed to freely use copyrighted material to train artificial intelligence models unless creative professionals and companies opt out of the process, under UK government proposals. The proposed changes seek to resolve a standoff between AI firms and creatives. Sir Paul McCartney has warned the technology "could just take over" without new laws. A government consultation is proposing an exception to UK copyright law - which prevents use of someone's work without permission - that would allow companies such as Google and the ChatGPT developer OpenAI to train their models on copyrighted content. However, it would also allow writers, artists and composers to "reserve their rights", which involves declaring that they do not want their work to be used in an AI training process - or to demand a licence fee to do so. Chris Bryant MP, the data protection minister, said the proposal was a "win win" for two sides that have been at loggerheads over a new copyright regime. "We're absolutely clear that this is about giving greater control in a difficult and complex set of circumstances to creators and rights holders, and we intend it to lead to more licensing of content, which is potentially a new revenue stream for creators," he said. The British composer Ed Newton-Rex, a key figure in the campaign by creative professionals for a fair deal, told the Guardian in October that opt-out schemes were "totally unfair" for creators. Newton-Rex has organised a statement signed by more than 37,000 creative professionals, including the Radiohead singer Thom Yorke and the actor Julianne Moore, that says unlicensed use of creative work for AI model training is a "major, unjust" threat to creators' livelihoods. 
The consultation also raises the prospect of AI developers being required to outline what content they have used to train their models, which would give rights-holders a better grasp of when and how their content has been used. The government said the new measures would have to be accessible and effective before being adopted, or else they will not be introduced. "These measures would be fundamental to the effectiveness of any exception, and we would not introduce an exception without them," said the Department for Science, Innovation and Technology. Whether the new regime covers models already deployed in the market, such as those behind ChatGPT and Google's Gemini, is also an issue on which the government is seeking views. The consultation will also seek views on whether there is a need for a US-style "right of personality" that would protect celebrities from having their voice or likeness replicated by AI without permission. The Hollywood actor Scarlett Johansson clashed with OpenAI last year when it previewed a voice assistant that sounded very similar to her own distinctive speech. OpenAI paused the feature after users noted its similarity to Johansson's voice.
[7]
UK floats letting AI train on copyright works in industry consultation
The United Kingdom's government has launched a consultation with the artificial intelligence and creative industries on possible legal frameworks for AI models to train on copyrighted material. In the Dec. 17 proposals, the UK government floated a range of possible policies for both the AI and creative industries to provide feedback on until Feb. 25, 2025. Secretary of State for Science, Innovation and Technology Peter Kyle said in a statement that uncertainty about how copyright law applies to AI is holding back both sectors from reaching their full potential. "It's clear that our current AI and copyright framework does not support either our creative industries or our AI sectors to compete on the global stage," he said. The UK's consultation comes as many AI companies face backlash over accusations that they've stolen intellectual property to help train their AI models. One of the four policy options floated by the UK was to let AI companies use copyrighted material without the rights holders' permission and permit commercial use for any purpose subject to few or no restrictions. Another option would allow companies to freely use copyrighted material to train AI models unless creative professionals and companies opt out. Another involves strengthening copyright laws, requiring licensing in all cases so that companies can only train AI models on work for which they have a license and express permission. The consultation also asks if the government should keep the laws as they are, though the agencies behind the consultation acknowledge that this would perpetuate "the current lack of clarity" for copyright holders and AI developers. 
Creatives slam proposals

Ed Newton-Rex, a British composer and CEO of Fairly Trained, a nonprofit that certifies AI companies that license their training data, said he thinks the changes to copyright laws would only benefit AI companies and "cause huge, irreversible harm to creators." Newton-Rex also argued some of the changes are misleading because a copyright exception would make "it legal to train on copyrighted work without a license, where it's currently illegal." Meanwhile, Owen Meredith, the chief executive of the News Media Association, said in a Dec. 17 statement that the government's consultation fails to address the real issue, which is the need for robust enforcement and transparency requirements to protect the rights of creatives. "At present, there is no lack of clarity in the law, but these proposals will only muddy the water and allow GAI firms to shirk their responsibilities," he said. "Instead of proposing unworkable systems such as the 'rights reservations' (or 'opt-out') regime, the government should focus on implementing transparency requirements within the existing copyright framework," Meredith added.
[8]
UK tries to provide 'clarity' over copyrighted data usage by AI firms
The UK proposes introducing an exception to copyright law for AI training for commercial purposes. In a 10-week consultation launched yesterday (17 December), the UK government seeks to bring clarity over how copyright-protected material can be used to train AI models. The proposal was launched in a bid to "drive growth" in both the country's creative industries and its AI sector, said the government, by ensuring protection and payment for rights holders while supporting AI developers to "innovate responsibly". According to the UK Intellectual Property Office, the Department for Science, Innovation and Technology and the Department for Culture, Media and Sport, the consultation will focus on boosting trust and transparency between the sectors, exploring how copyrighted material can be licensed and how data owners can be remunerated, while strengthening access to "high-quality data" for AI developers. "Uncertainty about how copyright law applies to AI is holding back both sectors," said the government departments, creating difficulty for data owners who seek payment for use of their work while creating legal risks for AI developers. To address this, the consultation proposes an exception to copyright law for AI training for commercial purposes while allowing owners to reserve their rights. This, the departments said, would give data owners more certainty and control, and support them in striking licensing deals with AI companies. It would also give AI developers more certainty about what material they can and cannot use. Moreover, the consultation proposes new requirements for AI developers to be transparent, requiring them to provide more information about what content they use to train their models. However, this move has drawn criticism from book publishers and news organisations. Dan Conway, the CEO of the Publishers Association, said the proposed measures "are as yet untested and unevidenced. 
"There has been no objective case made for a new copyright exception, nor has a water-tight rights-reservation process been outlined anywhere around the world." Owen Meredith, the chief executive of the News Media Association, said that the government's consultation, along with its preferred policy, "fails to address the real issue. "News publishers deserve control over when and how their content is used and, crucially, fair remuneration for its use. Instead of proposing unworkable systems such as the 'rights reservations' (or 'opt-out') regime, the government should focus on implementing transparency requirements within the existing copyright framework." Meanwhile, Gill Dennis, a senior lawyer with Pinsent Masons, said: "Putting the onus of action on content creators to opt out is highly controversial and, as the government itself acknowledges, faces significant current technical barriers for giving effect to in practice." AI innovators like OpenAI have opaque policies around what data they use to train their models. Although this technology uses pre-existing copyrighted data, large language models often do not regurgitate the data they process, making it difficult for copyright holders to prove infringement. Last month, a number of Canadian news publishers launched a lawsuit against OpenAI for copyright infringement, demanding damages running into the billions. Meanwhile, the New York Times has been gathering evidence for its own legal battle against the company, claiming that ChatGPT is trained on millions of articles published by the outlet.
[9]
UK consults on opt-out model for training AIs on copyrighted content | TechCrunch
The U.K. government is consulting on an opt-out copyright regime for AI training that would require rights holders to take active steps if they don't want their intellectual property to become free AI training fodder. The rise of generative AI models that are trained on vast quantities of data has brought intellectual property concerns to the forefront, with many creatives up in arms that their work is being processed without permission (or compensation) to train AI technologies that can churn out competing output -- whether text, visuals or audio, or a combination of all three. The visual arts, music, film production, and video games all look to be viable targets for GenAI, which replaces traditional (skilled human) production processes with highly scalable AI tools that rely on a system of prompting to trigger models to instantly generate output based on statistical analysis of information patterns in their training data. With global attention fixed on large language models (LLMs) such as OpenAI's GPT, which underpins the popular ChatGPT chatbot, the days of AI startups quietly scraping the web to grab free training data for model development without anyone noticing or caring are over. Deals are being struck by AI companies to license certain types of content for use as training data. At the same time, a growing number of lawsuits are challenging unlicensed use of IP for AI training. The situation demands legal clarity, at the least, and that's what the U.K. government says it hopes this consultation will help deliver as lawmakers consider how they may shape policy in this (fraught) area. Future U.K. policymaking could include legislation "to provide legal certainty," although the government says it has yet to decide on that. For now, ministers are seeking to tread a line between claiming the government wants to support the U.K.'s creative sector and a stated desire to boost AI investment and uptake. 
But this framing looks like an attempt to fudge a position that favors the AI industry. "Both our creative industries and our AI sector are UK strengths. They are vital to our national mission to grow the economy. This consultation sets out our plan to deliver a copyright and AI framework that rewards human creativity, incentivises innovation and provides the legal certainty required for long-term growth in both sectors," the government wrote in a ministerial foreword to the consultation. There's no doubt that setting up an opt-out regime for use of IP for AI training would put the burden on creatives to act to protect their works -- a situation that could disproportionately disadvantage smaller creatives compared to larger rights holders. So the approach is unlikely to be universally, or even widely, popular with the creative sector. AI companies, by contrast, have been actively lobbying for such an opt-out regime. "The proposals include a mechanism for right holders to reserve their rights, enabling them to license and be paid for the use of their work in AI training. Alongside this, we propose an exception to support use at scale of a wide range of material by AI developers where rights have not been reserved," the government continued. "This approach would balance right holders' ability to seek remuneration while providing a clear legal basis for AI training with copyright material, so that developers can train leading models in the UK while respecting the rights of right holders." The government goes on to state that its "key objectives" for both the creative and AI industries include "promoting greater trust and transparency between the sectors". 
And its stated goals -- supporting rights holders' control of their content and ability to be remunerated for its use, and the development of "world-leading AI models in the UK by ensuring wide and lawful access to high-quality data" -- will clearly require some fancy footwork if the end result is not to downgrade the interests of one sector relative to the other. As it stands, the AI industry appears to be getting the better deal from the Labour government so far. That said, ministers stress that whatever "package of interventions" the government ends up presenting must tackle the AI industry's lack of transparency. So while it frames the proposed opt-out regime as "balanced", it also states explicitly that "greater transparency from AI developers is a prerequisite" for the approach to work. Specifically, the government says this means "transparency about the material they use to train models, how they acquire it, and about the content generated by their models", adding: "This is vital to strengthen trust, and we are seeking views on how best to deliver it." Another component it emphasizes as necessary for an opt-out regime to work is the development of "simple technical means for creators to exercise their rights, either individually or collectively." "This will require both the AI companies and creative industries to come together to create new technical systems to deliver the desired outcome of greater control and licensing of IP," it also suggested. "This approach aims to protect the interests of our creative industries and AI sectors. But successfully delivering it is not straightforward. It will require practical and technical solutions as well as good policy. We are open-eyed about this, but optimistic that we can succeed by working together -- across our departments and both sectors," the government added. The consultation runs for 10 weeks, closing on February 25, 2025. Web submissions can be made via an online survey. 
"As AI evolves rapidly, the UK's response must adapt," the government also wrote, couching the consultation as "an opportunity for anyone with an interest in these issues to share their views and provide evidence regarding the economic impact of these proposals," and committing to run a program of "wider engagement activity" over the consultation period to "ensure that the full range of views is heard".
[10]
UK looks at forcing greater transparency on AI training models
Tech companies in the UK face being forced to open up their artificial intelligence models to greater scrutiny in an attempt to help the creative industries stop work being ripped off or reproduced without compensation. In a consultation announced on Tuesday, the UK government will offer an exemption to copyright laws, letting tech companies use material ranging from music and books to media and photos to train their AI models unless the rights holder objects under a so-called "rights reservation" system. The plans to open up copyrighted material for training purposes are likely to anger many in the creative industries, with executives warning that the UK is at risk of undermining one of the country's largest and most successful drivers of economic growth. Having in effect to "opt out" of the use of their work in AI models could be costly, difficult to monitor and time-consuming for artists and creatives, they argue. However, the consultation will also alarm parts of the tech sector, given the plans include AI firms having to be more transparent on the data they use to train models and on how the content they then generate is labelled. The UK government said on Tuesday that tech companies could be "required to provide more information about what content they have used to train their models . . . to enable rights holders to understand when and how their content has been used in training AI". Copyright holders would then be able to use this information to more easily strike licensing deals under the plans. In an interview with the Financial Times, culture minister Sir Chris Bryant said the government would force through transparency over both AI input and output -- making clear what a model was trained on, and whether something was produced by AI. He argued that the system would need to be easy to use by the creative industries. "This can deliver for the creative industries if we get this right. All these parts are contingent on each other. 
We want to be able to deliver legal clarity and legal certainty because both sides say that doesn't exist at the moment," he said. Bryant added: "AI companies have said to us very, very clearly that they want to do more business in the UK, but they can't . . . they're just so nervous about the legal uncertainty. But it is a quid pro quo. They get that certainty, but only if they can create a system of rights reservation that genuinely works." Officials say the consultation will seek opinions on areas such as enforcement, which could include legislation or a regulator to oversee the sector, as well as on what technical systems are needed to make a rights reservation regime work. They argue that uncertainty about how copyright law functions can make it difficult for creators to control or seek payment for the use of their work, and creates legal risks for AI firms. Executives in creative industries have concerns about rights reservation given the risk that overseas AI companies will not disclose what material they are using, and not compensate copyright holders if they are discovered to have exploited work. A previous attempt to agree a voluntary AI copyright code of practice was unsuccessful this year, but Bryant is hopeful that the government can find a balance that benefits both sides. The government on Tuesday said "further work with both sectors would be needed to ensure any standards and requirements for rights reservation and transparency are effective, accessible, and widely adopted". It added: "These measures would be fundamental to the effectiveness of any exception, and we would not introduce an exception without them."
[11]
UK consults on rules for using copyrighted content to train AI models
The U.K. is drawing up measures to regulate the use of copyrighted content by tech companies to train their artificial intelligence models. The British government on Tuesday kicked off a consultation which aims to increase clarity for both the creative industries and AI developers when it comes to how intellectual property is obtained and then used by AI firms for training purposes. Some artists and publishers are unhappy with the way their content is being scraped freely by companies like OpenAI and Google to train their large language models -- AI models trained on huge quantities of data to generate humanlike responses. Large language models are the foundational technology behind today's generative AI systems, including the likes of OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude. Last year, The New York Times brought a lawsuit against Microsoft and OpenAI accusing the companies of infringing its copyright and abusing intellectual property to train large language models. In response, OpenAI disputed the NYT's allegations, stating that the use of open web data for training AI models should be considered "fair use" and that it provides an "opt-out" for rights holders "because it's the right thing to do." Separately, image distribution platform Getty Images sued another generative AI firm, Stability AI, in the U.K., accusing it of scraping millions of images from its websites without consent to train its Stable Diffusion AI model. Stability AI has disputed the suit, noting that the training and development of its model took place outside the U.K.
[12]
UK launches review on AI training using copyrighted content By Invezz
Invezz.com - The UK government has launched a consultation aimed at addressing the growing tension between AI developers and the creative industries over copyright use. The move seeks to provide clarity on how intellectual property (IP) is acquired and used by AI firms to train their models, ensuring a balance between technological innovation and creator rights. Amid growing legal battles, including lawsuits involving OpenAI and Stability AI, the UK is positioning itself as a leader in establishing clear copyright standards for artificial intelligence. The consultation explores multiple proposals to regulate the use of copyrighted material by AI firms. One key consideration is allowing exceptions to copyright law for AI model training, particularly for commercial purposes. The proposals ensure that rights holders retain control by reserving their rights. This provision would enable creators to dictate how their content is used, offering a path for fair licensing and remuneration. In addition to protecting content creators, the government aims to offer AI developers clarity on permissible materials for model training. The move addresses mounting concerns from artists, publishers, and content platforms whose work has been scraped to develop generative AI systems such as OpenAI's ChatGPT and Stability AI's Stable Diffusion. Another significant proposal focuses on increasing transparency regarding training datasets. AI firms could be required to disclose the origin and nature of datasets used to train their models, ensuring that creators understand when their work has been utilised. This measure directly addresses the concerns of intellectual property holders who argue that content is being used without consent or compensation. This proposal could ignite controversy. Tech companies remain guarded about revealing their dataset sources, citing commercial sensitivities and the risk of competitors replicating their models. 
Given the commercial value of generative AI, such transparency requirements could meet resistance from AI developers keen to protect their trade secrets. The consultation comes at a critical time when legal disputes over AI copyright usage are intensifying globally. Last year, The New York Times (NYSE:NYT) filed a lawsuit against Microsoft (NASDAQ:MSFT) and OpenAI for allegedly infringing copyright by using its content to train large language models. In the UK, Getty Images sued Stability AI for scraping millions of images without consent, a case that continues to raise questions about IP compliance for AI systems. The UK's efforts stand in contrast to the US, where tech lobbying often complicates legislative progress. Industry analysts believe the UK is better positioned to prioritise personal intellectual property rights. Under previous leadership, the government attempted a voluntary AI copyright code of practice but struggled to secure broad adoption. The proposed measures aim to balance the interests of creators and AI developers, fostering innovation while ensuring rights holders are compensated. To achieve this, the government is encouraging collaboration between the creative and tech industries to establish widely adopted standards for rights reservation and licensing. The growing shift towards "multimodal" AI, which integrates text, images, and video, adds urgency to the issue. Recent developments, such as OpenAI's AI video generation tool Sora, underscore the need for comprehensive regulations to address the evolving capabilities of AI systems. Sora's release highlights how AI models are expanding beyond text-based applications, amplifying the need for clarity in IP usage. The UK's consultation signals its intent to become a global leader in AI copyright governance. By prioritising transparency and creator compensation, the government aims to create a framework that supports technological growth while protecting intellectual property.
[13]
UK plans to favour AI firms over creators with a new copyright regime
One of the biggest uncertainties in the ongoing AI revolution is whether these systems can legally be trained on copyrighted data. Now, the UK says it plans to clarify the matter with a change to the law. The UK government has announced plans to allow artificial intelligence models to be trained on copyrighted content, settling one of the big uncertainties of the current AI revolution - but the proposal has been criticised by campaigners who worry about the way AI companies already allegedly flout copyright rules. "There's nothing balanced about it," says Ed Newton-Rex, a musician and former executive at AI company Stability AI. "It will hand most of the UK's creative work to AI companies, for free, letting them build highly scalable competitors to...
[14]
UK Aims to Regulate AI's Use of Copyrighted Materials | PYMNTS.com
The British government is preparing to regulate the use of copyrighted material in training AI. The U.K. on Tuesday (Dec. 17) began a consultation designed to provide more clarity for creatives and tech companies on how intellectual property can be used to train artificial intelligence (AI) models. "Currently, uncertainty about how copyright law applies to AI is holding back both sectors from reaching their full potential," the government said in its announcement. "It can make it difficult for creators to control or seek payment for the use of their work, and creates legal risks for AI firms, stifling AI investment, innovation, and adoption. After previous attempts to agree a voluntary AI copyright code of practice proved unsuccessful, this government is determined to take proactive steps with our creative and AI sectors to deliver a workable solution." The consultation proposes an exception to copyright law for AI training for commercial purposes, while still letting rights holders reserve their rights and control the use of their work. However, the government said additional work with both the AI and creative sectors is necessary to make sure standards are effective. The consultation also calls for new transparency requirements for AI developers. "For example, AI developers could be required to provide more information about what content they have used to train their models," the announcement said. "This would enable rights holders to understand when and how their content has been used in training AI." These efforts come amid a host of legal fights over the use of copyrighted material by companies developing AI models. As PYMNTS wrote earlier this year, experts say the issue spotlights the pressing need for clearer guidelines and protections in the AI field.
"AI presents unique copyright concerns for businesses, primarily because it can produce content that closely resembles or 'copies' human-generated content, such as articles, publications, images and music," Star Kashman, a cybersecurity and privacy lawyer, said in an interview with PYMNTS. "The use of AI-generated creations raises complex questions about ownership and copyright, as these creations often use datasets that include copyrighted works of art and may infringe upon these copyrights." Meanwhile, some companies have released tools designed to alleviate the problem, such as Adobe's new video creation offering. "This tool enables faster content creation and experimentation, all while ensuring that what is being produced is safe for commercial use," Robert Petrocelli, chief product and technology officer at video company Vimeo, told PYMNTS in October.
[15]
U.K. Launches Copyright Law Consultation to "Unlock Full Potential" of AI and Creative Sectors
The U.K. government of Prime Minister Keir Starmer on Tuesday launched a consultation on "plans to give certainty to the creative industries and AI developers on how copyright material can be used to train AI models." The consultation will run for 10 weeks, closing on Feb. 25, with the goal of helping to "drive growth across both sectors by ensuring protection and payment for rights holders and supporting AI developers to innovate responsibly." The governing Labour Party has described both the creative and AI sectors as "central to the government's Industrial Strategy, and these proposals aim to forge a new path forward which will allow both to flourish and drive growth." Key areas of the consultation include "boosting trust and transparency between the sectors, so rights holders have a better understanding of how AI developers are using their material and how it has been obtained." It also explores "how creators can license and be remunerated for the use of their material, and how wide access to high-quality data for AI developers can be strengthened to enable innovation across the U.K. AI sector," the government said. It argued that its proposals "will help unlock the full potential of the AI sector and creative industries to drive innovation, investment, and prosperity across the country, driving forward the U.K. government's mission to deliver the highest sustained growth in the G7." And it highlighted that a "combined approach" is "designed to strengthen trust between the two sectors, which are increasingly interlinked, clearing the way for developers to confidently build and deploy the next generation of AI applications in the U.K., in a way that ensures human creators and rights holders have a shared stake in AI's transformative potential." Currently, uncertainty about how copyright law applies to AI is holding back both sectors from reaching their full potential, the still-new Starmer government has said. 
After previous attempts to agree on a voluntary AI copyright code of practice proved unsuccessful, it emphasized on Tuesday that it "is determined to take proactive steps with our creative and AI sectors to deliver a workable solution." The consultation proposes the introduction of an exception to copyright law for AI training for commercial purposes while allowing rights holders to reserve their rights, so they can control the use of their content. Together with transparency requirements, the goal is to give them more certainty and control over how their content is used and support them in reaching licensing deals. In turn, AI developers could get greater certainty about what material they can and cannot use. The consultation also proposes new requirements for AI model developers to be more transparent about their model training datasets and how they are obtained. For example, AI developers could be required to provide more information about what content they have used to train their models. The goal is to help rights holders better understand when and how their content has been used in training AI. Plus, it looks to address issues "related to the protection of personality rights in the context of digital replicas, such as deepfake imitations of individuals," and will review whether the current legal frameworks are sufficient to tackle these issues. "This government firmly believes that our musicians, writers, artists and other creatives should have the ability to know and control how their content is used by AI firms and be able to seek licensing deals and fair payment," said U.K. Secretary of State for Culture, Media and Sport Lisa Nandy. "Achieving this, and ensuring legal certainty, will help our creative and AI sectors grow and innovate together in partnership."
Working with the creative and media industries and the AI sector will allow the government to "develop this clearer copyright system for the digital age and ensure that any system is workable and easy-to-use for businesses of all sizes," she concluded. Nandy recently also unveiled plans to broaden the scope of U.K. media merger laws, updating them "for the digital age to reflect modern news consumption habits and better protect media freedom and plurality." The current regulatory regime only covers television, radio and print publications. "The U.K. has an incredibly rich and diverse cultural sector and a groundbreaking tech sector which is pushing the boundaries of AI," added Secretary of State for Science, Innovation and Technology Peter Kyle on Tuesday. "It's clear that our current AI and copyright framework does not support either our creative industries or our AI sectors to compete on the global stage." He concluded: "This is all about partnership: balancing strong protections for creators while removing barriers to AI innovation; and working together across government and industry sectors to deliver this."
A coalition of UK creative industries, including publishers, musicians, and photographers, has strongly opposed the government's proposal to allow AI companies to train on copyrighted works without explicit permission. The debate centers on the balance between AI innovation and protecting creative rights.
The UK government has introduced a controversial proposal to allow artificial intelligence (AI) companies to train their models on copyrighted works without explicit permission from rights holders. This move, part of a broader effort to balance AI innovation with creative rights, has sparked significant backlash from the creative industries [1][2].
A coalition of creative industries, known as the Creative Rights in AI Coalition (CRAC), has firmly rejected the government's plan. The group includes the Association of Photographers, Getty Images, the Motion Picture Association, the Professional Publishers Association, and a number of other bodies representing different creative industries.
The coalition argues that the proposed "opt-out" system would unfairly burden rights holders and potentially undermine existing copyright protections [2].
Technology and Culture Minister Chris Bryant defended the proposal, stating it would "improve access to content by AI developers, whilst allowing rights holders to control how their content is used for AI training" [4]. The government fears that without such measures, international developers might train their models on UK content from overseas, potentially disadvantaging British AI developers and various economic sectors [1].
The creative sector contends that the onus should be on generative AI developers to seek permission and agree licences with rights holders, and that transparency measures should be used to make existing copyright law enforceable rather than offered as a trade-off for weakened protections.
A recent UK poll revealed that:
Some tech industry groups, like Tech UK, have called for a "more open" market to enable firms to use copyrighted data and make payments [4].
This controversy is part of a broader global discussion on AI and copyright. Notable figures like Paul McCartney and Kate Bush have joined over 37,500 others in signing a petition against the unlicensed use of creative works for AI training [4].
As the UK government opens a 10-week consultation on the proposal, the debate continues to highlight the complex challenges in balancing technological innovation with the protection of creative rights in the AI era [1][4].
The UK government's new AI action plan, aimed at making Britain an AI superpower, faces backlash from artists and writers over proposed copyright reforms that could allow AI companies to use creative works without permission.
2 Sources
The UK government's proposed changes to copyright law for AI have ignited a fierce debate between tech companies and creative industries, raising concerns about intellectual property rights and the future of human creativity.
12 Sources
The UK government is reevaluating its proposed AI copyright reforms after facing strong opposition from prominent artists and creative industry figures. The debate centers on balancing AI innovation with protecting creators' rights.
3 Sources
UK trade unions call for urgent action to protect creative industry workers from exploitation by AI companies, demanding changes to proposed copyright laws and AI framework.
2 Sources
UKAI, the UK's AI trade body, rejects proposed copyright law changes and advocates for transparency, collaboration, and fair solutions between AI and creative industries.
2 Sources