5 Sources
[1]
Ex-Meta exec: Copyright consent obligation = end of AI biz
Nick Clegg, former politico and Zuckcorp policy Prez, seems confused, can Reg readers help him? Former British deputy PM and Meta apologist Sir Nick Clegg says that forcing AI companies to ask for the permission of copyright holders before using their content would destroy the AI industry overnight.

Clegg, who served as deputy to David Cameron in the Conservative / Lib Dem coalition that governed the UK between 2010 and 2015 before moving to Zuckcorp as president of global affairs, told the audience at a literary festival that demands to make tech firms seek consent from creators before using copyrighted material to train AI models were unworkable. Any such laws would "basically kill the AI industry in this country overnight," Clegg claimed, according to The Times.

This month, members of the House of Lords, the UK's upper chamber of Parliament, voted in favor of amendments to the proposed Data (Use and Access) Bill that would have protected copyrighted work from simply being copied by AI companies. However, government ministers used an arcane parliamentary procedure to block the amendment, which would have required tech firms to reveal what copyright material had been used to train their models.

Clegg stepped down from his role as president of global affairs at Facebook parent company Meta at the start of this year. He was speaking at the Charleston Festival in East Sussex to plug his forthcoming book, How to Save the Internet: The Threat to Global Connection in the Age of AI and Political Conflict.

The former politician seemed confused over the issue of AI and copyright, agreeing when questioned that people ought to be able to opt out of having their work copied and used for model training. But he then reportedly said that "quite a lot of voices say 'you can only train on my content, [if you] first ask.' And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data."
He added: "I just don't know how you go around, asking everyone first. I just don't see how that would work. And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight."

So which is it, Mr Clegg? Do creators have the right to opt out or not? Because asking their permission after the fact is self-defeating. And admitting that the AI business model is dead unless LLM trainers are allowed to break the law doesn't sound like much of an argument.

That Clegg should side with the UK government and big business interests is probably not surprising considering his background. The Tony Blair Institute, founded by the former Prime Minister, also came out in favor of exceptions to copyright rules for developers training AI models. This is despite many of the UK's leading media and arts professionals speaking out against the data access bill, including playwright Tom Stoppard, Doctor Who producer Russell T Davies, and a slew of musicians such as Elton John, Paul McCartney, Kate Bush, and Robbie Williams.

The big AI companies haven't been waiting for permission so far, with a recent study finding that OpenAI mined copyright-protected content in order to train its GPT family of models, for example.

Baroness Kidron, who proposed the Lords amendments, said: "How AI is developed and who it benefits are two of the most important questions of our time." She warned the UK creative industries "must not be sacrificed to the interests of a handful of US tech companies."

The UK government, for one, has made AI a central plank of its plans for the economic revival of the country, as detailed in the AI Opportunities Action Plan published earlier this year. This includes setting up "AI Growth Zones" with streamlined planning processes that allow developers to override both local authorities and the concerns of local communities when siting massive new AI datacenters.
There are alternatives: last month, we reported on a new licensing model that aims to let developers of large language models (LLMs) use copyrighted training data while paying the publishers for the privilege.

It isn't only the UK where formerly sacrosanct copyright protection is being shredded in favor of AI developers: every nation fears being left behind in some kind of tech arms race. Just recently, it was reported that the head of the US Copyright Office was sacked, just after the agency concluded that AI developers' use of copyrighted material went beyond existing doctrines of fair use. ®
[2]
AI will die "overnight" if copyright permission is...
Former UK Deputy Prime Minister Nick Clegg says artificial intelligence companies shouldn't need to seek permission every time they use copyright-protected data. Speaking at a recent event to promote his book, "How to Save the Internet," Clegg - who previously served as a Meta executive - sided with the AI industry on the issue.

Forcing technology firms to comply with copyright laws - and notify rights holders when they use protected content to train artificial intelligence models - would kill the UK's AI industry "overnight," Clegg warned. The content is already publicly available, he argued, and AI systems need vast amounts of data to improve their reasoning.

Clegg argues that current copyright laws are incompatible with artificial intelligence, as requiring companies to obtain permission every time they train a model would render the entire technology unworkable. He believes artists and rights holders should be able to opt out of data scraping for AI training, but seeking individual confirmations isn't a viable solution.

"I think people should have clear, easy to use ways of saying, no, I don't. I want out of this," the former Meta VP said. "But I think expecting the industry, technologically or otherwise, to preemptively ask before they even start training - I just don't see. I'm afraid that just collides with the physics of the technology itself."

Clegg is focusing on the UK AI industry as politicians debate the new Data (Use and Access) Bill, which aims to regulate access to customer and company data. A coalition of artists and authors, led by film director Beeban Kidron, pushed to amend the law, requiring AI companies to disclose the data they use to train their models. However, parliament rejected the proposal. In a recent op-ed in The Guardian, Kidron accused the government of essentially approving a plan to facilitate mass cultural theft.
She said UK authorities are allowing AI companies to use copyrighted works freely, while opting out of such practices would be impossible without proper transparency. The government can certainly "bully its way to victory" and pass the bill by majority vote, but doing so would deal a catastrophic blow to Britain's creative industry. However, the fight isn't over. The draft will return to the House of Lords for a new vote on June 2.
[3]
The AI copyright standoff continues - with no solution in sight
This is "unchartered territory", one source in the peers' camp told me. The argument is over how best to balance the demands of two huge industries: the tech and creative sectors. More specifically, it's about the fairest way to allow AI developers access to creative content in order to make better AI tools - without undermining the livelihoods of the people who make that content in the first place.

What's sparked it is the uninspiringly-titled Data (Use and Access) Bill. This proposed legislation was broadly expected to finish its long journey through parliament this week and sail off into the law books. Instead, it is currently stuck in limbo, ping-ponging between the House of Lords and the House of Commons.

The bill states that AI developers should have access to all content unless its individual owners choose to opt out. Nearly 300 members of the House of Lords disagree. They think AI firms should be forced to disclose which copyrighted material they use to train their tools, with a view to licensing it.

Sir Nick Clegg, former president of global affairs at Meta, is among those broadly supportive of the bill, arguing that asking permission from all copyright holders would "kill the AI industry in this country".

Those against include Baroness Beeban Kidron, a crossbench peer and former film director, best known for making films such as Bridget Jones: The Edge of Reason. She says ministers would be "knowingly throwing UK designers, artists, authors, musicians, media and nascent AI companies under the bus" if they don't move to protect their output from what she describes as "state sanctioned theft" from a UK industry worth £124bn.

If the bill doesn't change, she's asking for an amendment that would require Technology Secretary Peter Kyle to report to the House of Commons on the new law's impact on the creative industries three months after it comes into force.
[4]
Former Meta executive claims forcing AI companies to ask for copyright permissions will kill UK's AI industry "overnight"
Editor's take: The UK Parliament is debating the Data (Use and Access) Bill, a law set to regulate access to user and customer data. The bill could have a dramatic impact on the IT sector, particularly AI companies that aggressively collect vast amounts of human-generated data online to train their often unpredictable chatbots.

Former UK Deputy Prime Minister Nick Clegg says artificial intelligence companies shouldn't need to seek permission every time they use copyright-protected data. Speaking at a recent event to promote his book, "How to Save the Internet," Clegg - who previously served as a Meta executive - sided with the AI industry on the issue.

Forcing technology firms to comply with copyright laws - and notify rights holders when they use protected content to train artificial intelligence models - would kill the UK's AI industry "overnight," Clegg warned. The content is already publicly available, he argued, and AI systems need vast amounts of data to improve their reasoning.

Clegg argues that current copyright laws are incompatible with artificial intelligence, as requiring companies to obtain permission every time they train a model would render the entire technology unworkable. He believes artists and rights holders should be able to opt out of data scraping for AI training, but seeking individual confirmations isn't a viable solution.

"I think people should have clear, easy to use ways of saying, no, I don't. I want out of this," the former Meta VP said. "But I think expecting the industry, technologically or otherwise, to preemptively ask before they even start training - I just don't see. I'm afraid that just collides with the physics of the technology itself."

Clegg is focusing on the UK AI industry as politicians debate the new Data (Use and Access) Bill, which aims to regulate access to customer and company data.
A coalition of artists and authors, led by film director Beeban Kidron, pushed to amend the law, requiring AI companies to disclose the data they use to train their models. However, parliament rejected the proposal.

In a recent op-ed in The Guardian, Kidron accused the government of essentially approving a plan to facilitate mass cultural theft. She said UK authorities are allowing AI companies to use copyrighted works freely, while opting out of such practices would be impossible without proper transparency. The government can certainly "bully its way to victory" and pass the bill by majority vote, but doing so would deal a catastrophic blow to Britain's creative industry.

However, the fight isn't over. The draft will return to the House of Lords for a new vote on June 2.
[5]
Legendary Facebook Exec Scoffs, Says AI Could Never Be Profitable If Tech Companies Had to Ask for Artists' Consent to Ingest Their Work
Fresh on the heels of his exit from Meta, former Facebook executive Nick Clegg is defending artificial intelligence against copyright holders who want to hold the industry accountable. As the Times of London reports, Clegg insisted during an arts festival last weekend that it's "implausible" to ask tech companies to seek consent from creators before using their work to train their AI models.

During a speech at the Charleston Festival in East Sussex -- which was, ironically enough, meant to promote his new book titled "How To Save The Internet" -- Meta's former president of global affairs initially said that it was "not unreasonable" that artists may want to "opt out of having their creativity, their products, what they've worked on indefinitely modeled." But he then went on to suggest that those same artists are getting greedy.

"I think the creative community wants to go a step further," Clegg charged. "Quite a lot of voices say 'you can only train on my content, [if you] first ask.' And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data."

"I just don't know how you go around, asking everyone first," Clegg said. "I just don't see how that would work." The former deputy prime minister then added that if AI companies were required only in Britain to gain permission to use copyright holders' works, "you would basically kill the AI industry in this country overnight."

Clegg's comments came amid a fiery debate in England about AI and copyright, spurred on by a recent Parliament vote on an amendment to the UK government's data bill, which would have required companies to tell copyright holders when their work was used had it not been struck down in the House of Commons last week.
His stance also put him in opposition to Paul McCartney, Elton John, Dua Lipa, and hundreds of other artists who called on the British government to "protect copyright in the age of AI," as Sir Elton put it in an Instagram post. Unfortunately, it seems that Parliament's lower house agreed with Clegg's sentiments and not the artists' -- but history will show who was on which side of the AI wars.
Nick Clegg, ex-Meta executive, argues that requiring AI companies to seek copyright permission would be detrimental to the industry, sparking debate on balancing AI development with creative rights.
Former Meta executive and British politician Nick Clegg has ignited a fierce debate over artificial intelligence (AI) and copyright laws. Speaking at the Charleston Festival in East Sussex, Clegg argued that forcing AI companies to seek permission from copyright holders before using their content for training would be detrimental to the industry [1].

Clegg, who was promoting his book "How to Save the Internet," stated that such requirements would "basically kill the AI industry in this country overnight" [2]. He emphasized the impracticality of obtaining individual permissions, given the vast amounts of data used in AI training.

The debate is centered on the UK's proposed Data (Use and Access) Bill, which aims to regulate access to customer and company data. The bill has become a battleground between the tech and creative sectors [3].

The creative community has rallied against the bill's current form. Baroness Beeban Kidron, a crossbench peer and former film director, accused the government of facilitating "state-sanctioned theft" from a UK industry worth £124 billion [3]. Artists including Paul McCartney, Elton John, and Dua Lipa have called on the British government to "protect copyright in the age of AI" [5].

Clegg and supporters of the bill argue that current copyright laws are incompatible with AI technology. They contend that seeking individual permissions for every piece of training data is unfeasible and would hinder AI development [4].

The controversy extends beyond the UK, reflecting a global struggle to balance AI innovation with copyright protection. In the US, similar tensions have arisen, with the recent dismissal of the head of the US Copyright Office following the agency's conclusion that AI developers' use of copyrighted material exceeded existing fair use doctrines [1].

As the debate continues, alternative solutions are being explored. One such proposal is a new licensing model that would allow LLM developers to use copyrighted training data while compensating publishers [1].

The Data (Use and Access) Bill is set to return to the House of Lords for another vote on June 2, keeping the AI copyright standoff in the spotlight and highlighting the complex challenges of regulating emerging technologies in the digital age [4].