The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On Thu, 6 Mar, 12:06 AM UTC
2 Sources
[1]
UK AI - changing copyright won't work. So, what will?
The UK's creative communities have an unexpected ally in their fight against government plans to change copyright law: the UK's AI industry. As my previous report explained, UKAI - the national trade body for AI innovators - has rejected proposals to tear up copyright conventions and thus allow AI companies to train Large Language Models and generative AIs on proprietary data. Such a move would force copyright holders to opt out individually, a proposal that UKAI described as unworkable, divisive, damaging to Britain's creative sectors, and corrosive to public trust. Little discernible gain would be won for the UK's AI start-ups by doing so, the organization said in a coruscating report, 'Creative Industries & AI. Conflict or Co-operation: Shining a Light on Solutions', published this week. Members reached that conclusion in a roundtable discussion with representatives from the creative industries, with their reasons set out in detail in my previous report.

So, what will work? UKAI offers a range of ideas in its 41-page analysis, which I urge you to read. However, step one would involve passing the buck back to the AI industry at its highest level. That means asking CEOs to do the hardest thing in a market where mystic leaders can talk up their share prices to millions of followers - on their own platforms. Namely: Be Transparent. The report says:

Attendees agreed that transparency is of fundamental importance. And much of the current mistrust is because the processes by which Gen-AI companies train their models are opaque and secret, in order to protect their own Intellectual Property.

Irony is piled on irony, it seems; AI companies want everyone else's IP, but have no plans to disclose their own. The report continues:

Rights owners want to understand:
1. What materials have been used for training data so far?
2. How is data used within the training process?
Transparency is an important principle to establish to ensure that all future discussions and negotiations are informed by actual objective data, revealing both the scale and frequency of usage, and highlighting what types of materials (and which creators) are used most. Many in the creative industries believe that until AI developers disclose what data is being used, licensing discussions are meaningless. Speaking under Chatham House rules, one delegate explained:

There isn't really a discussion to have on attribution unless we have knowledge of what works are being ingested.

The roundtable urged a new policy of Transparency by Default. This was backed by broad agreement from all sides of the debate - including (one suspects) UK firms who are as frustrated with Big Techs' arrogance, social influence, and power as the creative sector is. UKAI suggests creating publicly accessible records of all the datasets used for training Gen-AI models. But won't Big Tech object to that? Of course. But the report notes:

Gen-AI companies are concerned that revealing which data sources they use would undermine their intellectual property. However, the intellectual property that the Gen-AI company creates should exist in the process between the input (the training data) and the output. The data sources that any Gen-AI company include should not form part of their own intellectual property (i.e. as a 'trade secret') and should not therefore require any unique protection.

That would be a useful hedge against the inevitable power-grab that will occur when the majority of online content becomes AI-generated slop - Europol estimates this will happen as early as next year. In that world, many websites will shut down, because LLMs will have rendered them useless, and AI vendors will thus become the default mediators between humanity and its own digitised content. This is the intended endgame for Big Tech's mega-wealthy CEOs. That scenario is not idle speculation.
According to Forbes, a report published this week by content licensing platform TollBit finds that AI search engines refer 96% less traffic to news sites and blogs than traditional Google searches. Viewed in that apocalyptic light, Big Tech's determination to disregard the creative sectors' IP seems like a play to own exclusive access to humanity's information - and, in effect, own the information itself. Copyright is thus the only possible defence; no wonder Big Tech is leaning on the British Prime Minister. The UKAI report adds:

UKAI believes that being open, responsible, and ethical will emerge as an important competitive advantage for Gen-AI companies. On the other hand, as non-Western competitors emerge, such as DeepSeek, it will make the established players more wary and protective of their core intellectual property.

This is an optimistic but, sadly, tenuous point. Trump v2.0 has seen the US tear up its commitment to safe AI development, alongside diversity, equity, and inclusion programs, and even civil rights. Some AI vendors, it seems, are also engaged in what appears to be a power grab for government itself. They want to own the future. However, transparency would also help achieve UKAI's second aim: creating attribution and enforcement at scale, an ambition that one delegate observed is "science fiction right now". So, the question then becomes whether it is possible to observe how generative AI models use copyrighted materials at scale, similar to the ways in which digital rights management systems track music streams, says UKAI. The report adds:

This context is more challenging because of the lack of transparency that has already been noted. The process of ingesting training data is not observable and relies upon the Gen-AI company being willing to share this information.
As we know, some of the biggest AI vendors are currently being dragged through US courts for copyright infringement - with the highest-profile case to date, involving Thomson Reuters, finding for the plaintiff. Unfortunately, this adds to vendors' reluctance to be transparent. From their perspective, they can only be more open about 'their' training data if the law is changed to say they did nothing wrong - which would be tantamount to an admission of historic guilt. So, copyright's defenders must hope that other US lawsuits finish soon and establish vendors' culpability. Again, no wonder Big Techs are leaning on the British government in the meantime! This is precisely why Starmer should not let them bounce him into changing the law: be bolder than that, Prime Minister. However, there are technical challenges too, notes the report:

A set of training data might contain 10,000 music tracks. The value of the training set is as an entire set, not the individual track. The model is looking for patterns and variations across the entire set. It is therefore very hard if not impossible to attribute a value to one music track within the data set.

This is a fair point. However, it ignores the fact that many users may wish to use AI to generate tracks in the style and 'voice' of a specific artist - and not just in music, but in any creative field - in which case their entire catalogue becomes the copyrighted asset. It also ignores the bigger picture: that every instance of someone using an AI to generate content - whether text, artwork, video, music, speech, performance, or analysis - becomes a low-cost, zero-effort digital competitor to a human creative; one that is based on their scraped wisdom and, in many cases, their entire life's work. But the report continues:

If the Gen-AI model that has been based on this training set generates a new piece of music, to what degree can we attribute this new music track to the original training set?
We could say that each of the 10,000 music tracks in the data set contributed 1/10,000th of the value, but this becomes unwieldy very quickly. Indeed, the report raises the possibility of creating vast databases of copyrighted material and metadata, then using AI itself to establish attribution. It adds:

[So] instead of trying to calculate micropayments based upon usage, it might be preferable to negotiate a licence based on 'fair use'.

In short, enforce existing laws and let copyright holders opt in to uses that are transparent and disclosed. Simply tearing up copyright conventions is a dumb, retrograde idea, one that solely benefits a handful of massive corporations - none of which are based in the UK, and all of which have already scraped copyrighted data to train their systems. In the real world, therefore, the sole purpose of changing copyright laws is to legitimize theft retrospectively. In itself, a good reason to leave the laws untouched.

In summary, it's fair to say that 2025 has seen Big Tech overplay its hand spectacularly. The geek shall inherit the Earth, say AI's multibillionaires, with unelected Elon Musk coming on more like Dark Timmy Mallett - a cap-wearing children's TV star from the 1990s - than the Hammer of Justice he believes himself to be. But he is not alone: a phalanx of his peers stood at President Trump's shoulder as he tore up safe AI programs. This has created an 'us and them' environment, a sense that we are living in deeply unequal times. And if nothing else, the British psyche resents seeing the underdog - our beloved creative sector - being kicked around. There is no doubt that, on the international stage, Prime Minister Starmer is doing a respectable job of being the fulcrum between the US and its (erstwhile?) allies in Europe, on issues such as Ukraine. But being the balancing block between powers surely benefits the powerful on either side, not the block.
On AI, however, Starmer seems to be favouring one side over the other, despite his claim to want balance. But even the side he is backing - the UK's AI industry - doesn't agree with him on copyright. So, will he climb down and scrap his plan to tear up the UK's convention? With UKAI, the AI industry's own body, saying his proposal is wrong - not to mention every broadsheet newspaper and national treasures like Sir Paul McCartney and Kate Bush - that choice ought to be easy. But there is a problem: beneath the competent, decent, 'Mr Nice Guy' image, Starmer is known to be aggressive and hyper-competitive. Climbing down is not in his lexicon - even if the British industry he claims to be supporting disagrees with him. So, what we can say is this: if Starmer ploughs on with his plan regardless, despite UKAI's opposition, we can infer that another actor is waiting in the wings - almost certainly a Big Tech promising inward investment on data centres and jobs. Put simply, if Starmer ignores UKAI's advice, then we will know that his stated purpose - to help AI start-ups - was just hot air. And so far, alas, the signs are not good for those of us who oppose the government's copyright proposals. Barely 24 hours after the Copyright Consultation finished last week, Labour figures were already tweeting their "So, this is where we stand on AI..." opinions. They can't possibly have read the Consultation's responses, which run to thousands of pages, so their comments were clearly pre-prepared. There is every chance the Consultation was a whitewash, therefore. We do know this, however: UKAI thinks your proposal is a bad idea, Prime Minister: destructive, divisive, and unworkable.
[2]
'Abandon proposed changes to copyright law!', urges UK AI industry
As messages to government go, this one is stark and to the point:

Existing copyright law is adequate and should not be changed to an opt-out model. The proposed opt-out would significantly harm the creative sectors to achieve a minimal gain for a small group of global tech companies.

Were these the words of some dyed-in-the-wool critic of the British government, or a lone campaigner for long-suffering artists? No, this is a statement from UKAI, the trade body of Britain's own AI sector. This is nothing less than the view of the very companies that Prime Minister Sir Keir Starmer claims he is backing in his bid to introduce a commercial copyright exception, thus allowing vendors to use proprietary data with impunity. UKAI is against that plan. It says:

Changing the copyright law will erode public trust in the nascent AI sector and create unnecessary conflict. In the long term, this will undermine economic growth more than any benefits the opt-out could deliver.

Strong words. It adds:

UKAI champions a responsible AI industry that partners with other sectors to drive UK economic growth.

The organization advocates co-operation with the creative industry to find fair solutions through constructive dialogue and collaboration, arguing that the proposed opt-out model leads to a zero-sum outcome. UKAI reached that view in a constructive dialogue with UK creatives. On Valentine's Day, 14 February, the organization hosted a love-in of sorts in the capital - a roundtable between AI vendors and senior representatives of Britain's creative industries. This week, it published a detailed, 41-page report based on those discussions: 'Creative Industries & AI. Conflict or Co-operation: Shining a Light on Solutions'. Publication came in the wake of the government's Consultation on its proposed change to UK copyright law, which UKAI notes:

[would] make it the responsibility of the copyright owner to opt-out and withhold their consent for their copyrighted materials to be used.
The default position [would therefore become] that all materials can be used by Gen-AI companies unless the copyright owner opts-out.

That Consultation closed on 25 February - a day that saw the front page of every national newspaper taken over by the 'Make it fAIr' campaign against the proposals. Not to mention the release of an extraordinary music-free album, 'Is This What We Want?', by over 1,000 musicians, including Kate Bush, Sir Paul McCartney, Damon Albarn, and Ed Sheeran. It contained just the sounds of empty recording studios. If nothing else, then, Prime Minister Sir Keir Starmer has achieved the extraordinary feat of uniting every broadsheet in the country against him - and even the likes of Jimmy Page and Sir Simon Rattle.

So, what else does the UKAI report say? Among other things, it calls for further collaboration between the creative and AI sectors; the investigation of new solutions for licensing and rights management; and a full assessment of the economic impact of changing UK copyright law (why the Prime Minister didn't do that himself is a mystery). Held under Chatham House rules - in which discussions can be quoted, but speakers not identified - one delegate said:

I totally understand why the government has proposed it, [but] I see it as a political compromise that will genuinely achieve nothing for this country, nothing for rights holders, and nothing for Gen-AI companies.

Another added:

Copyright has always been a commercial right, not a moral right.

A third noted:

The law is clear on this subject and the problem is just adherence with that copyright law.

That certainly chimes with the findings of the House of Lords Communications and Digital Committee's 2024 report on Large Language Models, which urged No 10 to uphold copyright conventions, not undermine them. Writing to the government last year, Committee Chair Baroness Stowell said:

The government's record on copyright is inadequate and deteriorating. [...]
The government's reticence to take meaningful action amounts to a de facto endorsement of tech firms' practices. [...] That reflects poorly on this government's commitment to British businesses, fair play, and the equal application of the law.

Indeed, one could argue that the government's policy of sitting on its hands is one reason for AI companies continuing to seize creators' work without consent, credit, or payment. However, UKAI acknowledged that there was still some disagreement among delegates at last month's roundtable. For example, one said:

Just because copyright has historically been the tool used doesn't mean it is the tool for the future.

Fair point. However, another identified a flaw in the government's proposal: If all creatives' work is used without consent, then licensing talks are meaningless. This should be obvious, and it is no minor concern. There are ongoing negotiations worldwide between large publishers, media platforms, news organizations, film studios, and AI vendors about licensing proprietary content for training data. So, changing copyright law - in the country that developed copyright - would make a nonsense of those discussions. On that point, with some Big Techs talking about spending up to half a trillion dollars just to create a US hardware infrastructure for OpenAI (the Stargate program), vendors' reluctance to spend even a dime on training data is, frankly, sickening. As one delegate put it:

With the large AI models, the engineers that developed them were paid, they paid their electricity bills, and they bought their GPUs, so why would they not pay for the content?

UKAI notes:

Attendees who supported the existing copyright system highlighted that this approach protects creators' rights by default. The burden of responsibility for obtaining consent falls upon the party who wants to access the copyrighted materials, rather than the rights holder.
This means that the creator starts from a strong negotiating position, which is important when there is a significant imbalance in power between individual creators and global tech companies. The 'opt-in' route aligns with existing copyright laws that use well-established commercial licensing agreements.

This is an excellent point: why should the burden of trillion-dollar corporations' actions be placed on writers, artists, and musicians? And it underlines another point: while media giants may be able to sit at the negotiating table with US giants to hammer out licensing deals, many artists are self-employed individuals for whom copyright is their only defence. One delegate explained:

[Under the government's proposals] the vast majority of rights holders won't be able to enforce their rights, even if they know they've been encroached. The economic model in this jurisdiction means an individual rights holder is going to get nothing that would justify taking action.

Indeed, this is why the proposals, at their core, are misconceived, naïve, and unworkable. To pursue them would demand a government - and a Labour one at that - batting for the super-powerful against the powerless. That's not a good look for an unpopular administration. To its credit, however, UKAI says the quiet part out loud about why some of its members still want the government to tear up the copyright rules:

Those who argued in favour of an opt-out system pointed to the fact that the current process is unwieldy for generative AI companies, who want to find the easiest and cheapest way to obtain the training data that they require.

There it is: it would be easy and cheap. So, it is primarily about cost and offloading the inconvenience onto artists. While this might make sense for smaller AI players, perhaps, the argument is being pushed hardest by US Big Techs that have the ear of the Prime Minister. And some of those are worth more than the entire British economy.
In that light, cost just doesn't play well as a driver. More, the reality is that, in the case of Large Language Models and generative AIs, training has already taken place on data scraped from the pre-2023 Web. Thus, the real effect of the government's proposals would be to retrospectively legitimize industrial-scale theft. Again, that looks bad. A cynic might also observe that, if a UK start-up can only flourish by scraping creatives' copyrighted work without consent, credit, or payment, then why on Earth should that company succeed? Most important, perhaps, the government's proposal is nonsensical in the real world, observed delegates. One said:

The TDM [Text & Data Mining] opt-out is a non-starter. Even if the 'opt-out' is respected at the website level, what happens to content used downstream?

This is a critical point. As I have noted in previous reports, copyright is creatives' only defence against others placing proprietary content into the public domain. So, while an artist may opt out of AI training themselves, persons unknown may opt them back in without their knowledge - in jurisdictions worldwide. The government's proposals remove that defence, therefore, leaving artists and other content providers powerless against AI companies and their automated, generative competitors. An opt-out would be like trying to stop a charging elephant with a feather.

If the government goes ahead with its proposals, therefore, it will be against the advice of the sector it claims to be supporting. Meanwhile, copyright itself would cease to have any meaning or utility in an AI world, thus handing the balance of power to tech titans - for no obvious benefit to the UK. So, it is welcome to hear - from the UK AI industry's own trade body - that tearing up copyright conventions is a non-starter.
UKAI, the UK's AI trade body, rejects proposed copyright law changes and advocates for transparency, collaboration, and fair solutions between AI and creative industries.
In a surprising turn of events, the UK's AI industry has emerged as an unexpected ally for the creative communities in their fight against proposed changes to copyright law. UKAI, the national trade body for AI innovators, has rejected the government's plans to alter copyright conventions, which would allow AI companies to train Large Language Models and generative AIs on proprietary data without explicit permission [1].

UKAI's report, titled "Creative Industries & AI. Conflict or Co-operation: Shining a Light on Solutions," strongly opposes the proposed opt-out model for copyright. The organization argues that changing the copyright law would "significantly harm the creative sectors to achieve a minimal gain for a small group of global tech companies" [2]. UKAI believes that such a move would erode public trust in the nascent AI sector and create unnecessary conflict.

One of the key recommendations from UKAI is the need for transparency in AI training processes. The report emphasizes that much of the current mistrust stems from the opaque nature of how generative AI companies train their models. UKAI suggests creating publicly accessible records of all datasets used for training Gen-AI models, arguing that data sources should not form part of AI companies' intellectual property [1].

UKAI advocates for cooperation between the AI industry and creative sectors to find fair solutions through constructive dialogue. The organization hosted a roundtable discussion between AI vendors and senior representatives of Britain's creative industries, which formed the basis of their report [2].

The report calls for a full assessment of the economic impact of changing UK copyright law. This comes in the wake of significant public opposition to the proposed changes, including a unified front-page campaign by national newspapers and a protest album released by over 1,000 musicians [2].

The report also highlights concerns about the potential dominance of AI-generated content. A study by TollBit found that AI search engines refer 96% less traffic to news sites and blogs than traditional Google searches, raising questions about the future of online content creation and distribution [1].

While some debate exists about whether copyright is the right tool for the future, UKAI and many in the creative industries argue that it remains a crucial protection for creators' rights. The organization emphasizes the need for fair compensation, noting that while AI companies pay for other resources, they seem reluctant to pay for content used in training their models [2].
As the debate continues, UKAI's stance represents a significant voice in shaping the future relationship between AI development and creative industries in the UK.