Curated by THEOUTPOST
On Fri, 14 Mar, 4:02 PM UTC
5 Sources
[1]
OpenAI requests US government legalize theft in order to reach AI promised land
OpenAI proposes "freedom of intelligence" in the United States. Image source: ChatGPT

OpenAI may have already crawled the internet for all the world's data to train ChatGPT, but it seems that isn't enough, as it now wants protection from copyright holders to allow it to continue stealing everything that both is and isn't nailed down. The latest manufactured goalpost OpenAI has set, dubbed "AGI," can't be reached unless the company is free to do whatever it takes to steal everything good and turn it into AI slop. At least, that's what the latest OpenAI submission to the United States Government suggests.

An OpenAI proposal submitted regarding President Trump's executive order on sustaining America's global dominance explains how the administration's boogeyman might overtake the US in the AI race. It seems that the alleged theft of copyrighted material by Chinese-based LLMs puts them at an advantage because OpenAI has to follow the law. The proposal seems to be that OpenAI and the federal government enter into a kind of partnership that would enable OpenAI to avoid state-level laws. Otherwise, the proposal alleges, the United States will lose the AI race. And to prove its point, OpenAI says the US will lose that race to the government's favorite boogeyman -- China:

Its ability to benefit from copyright arbitrage being created by democratic nations that do not clearly protect AI training by statute, like the US, or that reduce the amount of training data through an opt-out regime for copyright holders, like the EU. The PRC is unlikely to respect the IP regimes of any of such nations for the training of its AI systems, but already likely has access to all the same data, putting American AI labs at a comparative disadvantage while gaining little in the way of protections for the original IP creators.

Such a partnership would protect OpenAI from the 781 and counting AI-related bills proposed at the state level. AI companies could volunteer for the partnership to seek exemption from those state laws.

The proposal also suggests that China, and countries like it that don't align with democratic values, be cut off from AI built in the United States. This would mean Apple's work with Alibaba to bring Apple Intelligence to China would be halted if required. It also calls for a total ban on using equipment produced in China in goods that would be sold to Americans or used by American AI companies.

The section on copyright more or less calls for the total abandonment of any restriction on access to information. According to the proposal, copyright owners shouldn't be allowed to opt out of having their content stolen to train AI. It even suggests that the US should step in and address restrictions placed by other jurisdictions like the EU.

The next proposal centers on infrastructure. OpenAI wants to create government incentives (that would benefit it) to build in the US. Plus, OpenAI wants to digitize all the government information that is currently still in analog form. Otherwise, it couldn't crawl it to train ChatGPT.

Finally, the proposal suggests the US government needs to implement AI across the board. This includes national security tasks and classified nuclear tasks. One of the US Navy nuclear-trained staffers here at AppleInsider pointed this out to me, cackling as he did so. As the other half of the nuclear-trained personnel on staff, I had to join him in laughing. It's just not possible, for so many reasons.
Admiral Hyman G. Rickover, the father of the nuclear Navy, helped build not just the mechanical and electrical systems we still use today, but the policies and procedures as well. One of his most important mantras, besides continuous training, was that everything needed to be done by humans. Automation of engineering tasks is one of the reasons the Soviets lost about a submarine a year to accidents during the height of the Cold War. When you remove humans from a system, you start to remove the chain of accountability. And the government must function with accountability, especially when dealing with nuclear power or arms.

That aside, there are an incredible number of inconsistencies in the proposals laid out by OpenAI. Using China as a boogeyman only to propose building United States AI policy around China's is hypocritical and dangerous.

OpenAI has yet to actually explain how AI will shape our future beyond sharing outlandish concepts from science fiction about possible outcomes. The company isn't building a sentient, thinking computer, it won't replace the workforce, and it isn't going to fundamentally transform society. It's a really nice hammer, but that's about it. Humans need tools -- they make things easier, but we can't pretend these tools are replacements for humans. Yes, of course, the innovations created around AI, the increased efficiency of some systems, and the inevitable advancement of technology will render some jobs obsolete. However, that's not the same as the dystopian promise OpenAI keeps espousing of ending the need for work.

Read between the lines of this proposal, and it says something more like this: OpenAI got caught by surprise when DeepSeek released a model that was much more efficient and undermined its previous claims. So, with a new America-first administration, OpenAI is hoping it can convince regulators to ignore laws in the name of American exceptionalism. The document suggests that authoritarian regimes, by allowing the likes of DeepSeek to ignore laws, will enable them to get ahead. So, OpenAI needs the United States to act like an authoritarian regime and ensure it can compete without laws getting in the way.

Of course, the proposal is filled with the usual self-importance evoked by OpenAI. It seems to believe its own nonsense about where this so-called "artificial intelligence" technology will take us. It had to move the goalposts to suggest that the term "AI" never meant the sentient computer it promised us. No, now we've got two other industry terms to target: Artificial General Intelligence and Artificial Superintelligence.

To be clear, none of this is actual intelligence. Your devices aren't "thinking" any more than a calculator is. It is just a much better evolution of what we had before. Computers used to be more binary: a given input would produce a predetermined output. Then, branching allowed more outputs to occur for a given input depending on conditions. That expanded until we got to the modern definition of machine learning. That technology is still fairly deterministic, meaning you expect to get the same output for the given inputs.

The next step past that was generative technology. It uses even bigger data sets than ML, and its outputs are not deterministic. Algorithms attempt to predict what the output should be based on patterns in the data. That's why we still sarcastically refer to AI as fancy autocomplete. Generating text just predicts what the next letter is most likely to be. Generating images or video does the same, but with pixels or frames.
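To make the "fancy autocomplete" jibe concrete, here is a minimal, hypothetical sketch in Python. It is a toy character-level model, emphatically not how ChatGPT actually works -- production LLMs use neural networks over tokens, not lookup tables -- but it shows the same autoregressive loop: count which character tends to follow each short context in some training text, then extend a prompt one predicted character at a time.

import random
from collections import Counter, defaultdict

ORDER = 3  # how many characters of context to condition on

def train(text):
    # Count how often each character follows each ORDER-length context.
    model = defaultdict(Counter)
    for i in range(len(text) - ORDER):
        context = text[i:i + ORDER]
        model[context][text[i + ORDER]] += 1
    return model

def generate(model, seed, length=80):
    # Autoregressive loop: predict a next character, append it, repeat.
    out = seed
    for _ in range(length):
        counts = model.get(out[-ORDER:])
        if not counts:
            break  # unseen context, so nothing to predict
        chars, weights = zip(*counts.items())
        out += random.choices(chars, weights=weights)[0]
    return out

corpus = "the cat sat on the mat. the dog sat on the log. the cat and the dog sat on the mat."
print(generate(train(corpus), "the"))

Swap the lookup table for a neural network and the two sentences for a scrape of the internet, and it is the same loop at planetary scale: fluent-sounding output with no understanding anywhere in it.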
The "reasoning" models don't reason. In fact, they can't reason. They're just more finely tuned to cover a specific case, which makes them better at that task.

OpenAI expects AGI to be the next frontier: a model that surpasses human cognitive capabilities. It's this model that OpenAI threatens will cause global economic upheaval as people are replaced with AI. Realistically, as the technology is being developed today, it's not possible. Sure, OpenAI might release something and call it AGI, but it won't be what they promised. And you can forget about building a sentient machine with the current technology. That's not going to stop OpenAI from burning the foundation of the internet down in the pursuit of the almighty dollar.

Meanwhile, everyone says Apple is woefully behind in the AI race -- that since Apple isn't talking about sentient computers and the downfall of democracies, it's a big loser in the space. Even if OpenAI succeeds in getting some kind of government partnership for AI companies, it is doubtful Apple would participate. The company isn't suffering from a lack of data access and has likely scraped all it needed from the public web. Apple worked with copyright holders and paid for content where it couldn't get it from the open web. So, it is an example of OpenAI's arguments falling flat.

While OpenAI tries and fails to create something beyond the slop generators we have today, Apple will continue to refine Apple Intelligence. Private, secure, on-device models are likely the future of a space where most users don't even know what AI is for.
[2]
Something for the weekend - generative AI is data laundering. J'accuse!
While the AI Spring ought to be blossoming into AI Summer, all is not rosy in the sector's garden. While bold pronouncements of incoming Artificial General Intelligence (AGI) or even Artificial Super Intelligence (ASI) abound - and OpenAI notches up 400 million users of ChatGPT - the chorus of critics and naysayers grows louder.

Generative AI is just data laundering, is the allegation - an accusation I can frankly find no fault with, because it is a simple statement of fact rather than some Luddite expression of fear. I will explain why in a moment.

So, while certain industry CEOs may stand shoulder to shoulder with the US President as he rolls back regulations, diversity, and safe AI schemes, not everything is going Big Tech's way. Now, every mystic pronouncement from the likes of OpenAI CEO Sam Altman and Anthropic chief Dario Amodei is counter-balanced by claims - including from within the industry - that their words are hot air to keep the AI bubble inflating. And perhaps to keep some politicians, desperate for growth, in their thrall.

That task was made more difficult earlier this year by the arrival of China's DeepSeek, which revealed the absurd extent of US gold-digging for infrastructure investment. Half a trillion dollars for OpenAI's Stargate program, anyone? But with some US vendors worth more than the GDPs of most nations on Earth, balance is a hard thing for politicians to strike.

But at the heart of this growing disquiet is the vexed issue of copyright. Emboldened by Thomson Reuters' win in the US courts last month (see diginomica, passim), lawsuits against AI companies are rising, even as artists and traditional media join the #makeitfAIr campaign. While the Thomson Reuters case did not concern generative AI specifically, it established an important principle: a third party, Ross Intelligence (now defunct), breached copyright by scraping proprietary data from Thomson Reuters' Westlaw service to train an AI model. That precedent will now be cited by others.

In France, the National Publishing Union (SNE), the National Union of Authors and Composers (SNAC), and the Society of Men of Letters (SGDL) are suing Meta for copyright infringement, alleging economic "parasitism" in the unlicensed use of proprietary work in training the Llama LLM family. In the US, a similar suit against the Facebook, Instagram, and WhatsApp owner was given the go-ahead by a Federal judge last week, though he cautioned the plaintiffs against deploying such OTT rhetoric: their case would stand or fall on its own merits, he said. Authors Richard Kadrey, Sarah Silverman, and Ta-Nehisi Coates allege that not only did the social media giant scrape unlicensed work, but it also removed copyright statements from it, in an implicit admission of wrongdoing.

Meanwhile, a combined lawsuit against OpenAI and Microsoft - three cases merged into one - by the New York Times, the New York Daily News, and the Center for Investigative Reporting has been ongoing since January. Over a dozen other lawsuits are also in progress. Vendors in the crosshairs include NVIDIA, Cohere, Stability AI (sued by Getty Images), Anthropic, Google, and Perplexity, with the latter accused of copyright theft by Dow Jones and the New York Post.

The sense that the real battle is between old media and new is hard to ignore: a fight for the future of real-time information as well as access to historic data. But that is not the whole story.
One case, Doe vs GitHub, alleges that, along with OpenAI, the Microsoft-owned developer platform breached open-source software licenses and violated the Digital Millennium Copyright Act (DMCA) to create the Codex and Copilot products. This under-reported suit could have far-reaching implications for software development, given Anthropic CEO Amodei's claim this month that AI will be able to do 100% of coding by the end of the year. No doubt it will be able to do 100% of other jobs too, but how it was trained to do so is the question, with a critical factor being the unknown human cost.

On that note, I took part in a webinar last month in which an academic alleged that companies in many sectors have stopped employing junior staff and are now filling their ranks with experienced middle managers instead: AI as generational time-bomb. An early sign of this came a year ago in my interview with OpenText CEO Mark Barrenechea, who said:

I don't need to hire junior programmers anymore. [...] I'm stepping up the skill that's required to get into our company. And I think AI is going to force that across a lot of industries.

So, what is a machine's job? That question needs urgent strategic answers. After all, debt-laden Millennials, Gen-Z, and Generation Alpha already face a world in which property is unaffordable; energy, food, and travel are at a premium; job security is non-existent; real-world security is in doubt; there are 342 tech billionaires (according to Forbes); UK food banks really do outnumber McDonald's branches (see my 2024 report on digital poverty [LINK]); and human creativity is being scraped and commoditized by trillion-dollar corporations.

Put simply, are experienced professionals now pulling up the career ladder behind them, and using AI as a low-cost generalist workforce? And who trained those workers, and with what data? In that context, the battle over copyright may be far more important and wide-ranging than commentators realise; it is not just about artists being disintermediated by automated competitors trained on their work without consent, credit, or payment - important though that is.

So, just how widespread is copyright infringement by the AI sector? According to RettighedsAlliancen - aka Denmark's Rights Alliance, which fights for creators' fair treatment online - it is so widespread as to be normal behaviour. The Alliance has taken the unusual step of producing a report that focuses on pirated content. As it explains, relying on pirated data allows AI companies to indulge in copyright theft at arm's length - in the apparent belief that stealing from socially conscious pirates is tantamount to good behaviour and fair use. In reality, of course, it is the opposite: scraping known pirated data undermines any claim a vendor might have that such content was copied accidentally.

The 17-page document lists several vendors - Apple, Anthropic, DeepSeek, Meta, Microsoft, NVIDIA, OpenAI, Runway AI, and music platform Suno - and the pirated data sets that, according to the Alliance, they are known to have scraped. According to the Alliance, AI companies have also scraped (among others): the Pile dataset; academic platform ArXiv; Stack Exchange; Project Gutenberg; YouTube - content that is a mix of licensed and unlicensed data placed in a public domain; movie and TV streaming giant Netflix; and, ironically, Elon Musk's bugbear, Wikipedia.
The report adds:

In many cases, [AI companies] obtain data sets from user-generated platforms, such as HuggingFace, or via downloading of torrent files without any interaction with the company providing the data set.

The truly enraging element here is that some of these resources have been relied on by students and the economically disadvantaged for years as a means of accessing the world's learning and expertise for free. But AI companies - which include some of the richest and most valuable organizations on the planet - are now scraping that data, which costs them nothing, in order to sell it back to us for massive profit. Witness OpenAI's plan to charge $200 a month for Deep Research, with a reported strategy of charging up to $20,000 a month for specialized AI agents trained on that data.

Now the rising number of lawsuits worldwide will establish whether such actions break the letter of the law, as well as betraying the community spirit behind many pirate and file-sharing sites.

All that being so, it's clear to me that this data laundering is a reality, part of a clear strategy to own access to all the world's content and expertise for a minimal investment in data on AI companies' part. For now, the only thing standing in the vendors' way may be copyright lawsuits. But can enough legal pressure be piled onto them in the hope that the bubble bursts? Watch this space.
[3]
Monday Morning Moan - beware...well, pretty much everyone! Parsing OpenAI's Trump 2.0 appeal to MAAGA. (That's Make American AI Great Again)
If someone shows you who they are, you should believe them. That was a doctrine my late grandmother used to hold dear, and as I've gotten older, it's one that's come to make more and more sense to me. Not that I'm suggesting that people shouldn't be given the benefit of the doubt, just that I recognise that sometimes some of them don't deserve it. It's the same rule of thumb with companies as with individuals. And my grandmother's maxim sprang unbidden to mind when I contemplated the latest outpourings from OpenAI.

Now, I've openly questioned the direction of this firm and its management before now, but you can't fault it for sheer corporate brass neck - and in this case, that neck has been well and truly polished up to the max.

Here at diginomica a lot has been written of late by Chris Middleton on the subject of AI and copyright, not least due to fears around what look all too much like moves by the UK Government to cosy up, in an inward investment-friendly sort of a way, to certain tech vendors in their bid to water down current regulations, thus making it easier for them to train their own models on other people's IP and hard work. But the copyright abuse problem extends far beyond the UK's shores, as OpenAI's naked pitch to the Trump 2.0 administration last week makes all too clear - MAAGA (Make American AI Great Again).

That this can be achieved by letting us 'copy other people's homework' is my reading of the main thrust of a missive sent by Chris Lehane, OpenAI's VP of Global Affairs, to Faisal D'Souza of the Office of Science and Technology Policy, but whose intended recipient is more likely indicated by the opening lines, which approvingly quote Donald Trump:

It is the policy of the United States to sustain and enhance America's global dominance in order to promote human flourishing, economic competitiveness, and national security.

In case anyone was in any doubt as to whose ego is being assiduously stroked, the point is hammered home that OpenAI is on the side of America First a la Trump:

OpenAI agrees with the Trump Administration that AI creates prosperity and freedom worth fighting for - especially for younger generations whose future will be shaped by how this Administration approaches AI.

There is a threat to America at play here, argues OpenAI, couching its thesis in MAGA-friendly wording:

As America's world-leading AI sector approaches Artificial General Intelligence (AGI), with a Chinese Communist Party (CCP) determined to overtake us by 2030, the Trump Administration's new AI Action Plan can ensure that American-led AI built on democratic principles continues to prevail over CCP-built autocratic, authoritarian AI...In advancing democratic AI, America is competing with a CCP determined to become the global leader by 2030.

With "CCP-controlled China" generically lined up in its sights, OpenAI homes in on the recent DeepSeek mainstream breakthrough for particular opprobrium:

As with Huawei, there is significant risk in building on top of DeepSeek models in critical infrastructure and other high-risk use cases given the potential that DeepSeek could be compelled by the CCP to manipulate its models to cause harm. And because DeepSeek is simultaneously state-subsidized, state-controlled, and freely available, the cost to its users is their privacy and security, as DeepSeek faces requirements under Chinese law to comply with demands for user data and uses it to train more capable systems for the CCP's use.
Their models also more willingly generate how-to's for illicit and harmful activities such as identity fraud and intellectual property theft, a reflection of how the CCP views violations of American IP rights as a feature, not a flaw.

As I recall, Sam Altman regards AI hallucinations as a feature, not a flaw, so there is at least a consistency of language here. But in case the message hasn't hit home in the Oval Office yet, the pudding gets well-and-truly over-egged:

While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and is narrowing. The [planned US] AI Action Plan should ensure that American-led AI prevails over CCP-led AI, securing both American leadership on AI and a brighter future for all Americans.

To be fair, OpenAI isn't being exclusively Sinophobic in its contentions - it's none too happy about other non-US political regimes having the nerve to put in place rigid copyright rules that would repress its innovation. It complains:

The European Union, for one, has created "text and data mining exceptions" with broadly applicable "opt-outs" for any rights holder - meaning access to important AI inputs is less predictable and likely to become more difficult as the EU's regulations take shape. Unpredictable availability of inputs hinders AI innovation, particularly for smaller, newer entrants with limited budgets. The UK government is currently considering changes to its copyright regime. It has indicated that it prefers creating a data mining exception that allows rights holders to "reserve their rights," creating the same regulatory barriers to AI development that we see in the EU.

Fancy that - rules that allow rights holders to reserve their rights - really, no quote marks should be needed here! - and protect them from IP ravagers. What is the world coming to?

But this is where the real demands are made clear - if firms like OpenAI have to follow such rules and can't train models on copyrighted material, America won't be first. The People's Republic of China (PRC) won't respect IP rules, so why should OpenAI have to, huh? It's. Not. Fair! Because:

If the PRC's developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over. America loses, as does the success of democratic AI...The Federal Government can both secure Americans' freedom to learn from AI, and avoid forfeiting our AI lead to the PRC by preserving American AI models' ability to learn from copyrighted material.

Ah yes, note the mention of fair use there. OpenAI pleads for application of the fair use doctrine to AI. But the version being used here is an abuse of the term. Fair use permits limited use of copyrighted material without permission, such as quotations, extracts, or citations in articles. It does not cover wholesale, copy-and-paste harvesting of other people's IP, although you wouldn't know that from OpenAI's spin on it.

This is not just about legal niceties, it states, it's far, far more important than that:

Applying the fair use doctrine to AI is not only a matter of American competitiveness -- it's a matter of national security.

With that in mind, OpenAI also appeals to Trump 2.0 to 'protect' it from attempts by individual US states to introduce legislation that would be "burdensome" - and, I imagine, might frankly be considered by a certain mindset as un-American, since "some of [them] are modelled on the European Union's regulation of AI."
Bearing in mind the current MAGA party line that the European Union was set up specifically to undermine the US, that can't be good, can it, Mr President?

Lehane's letter concludes:

America always succeeds when it bets on American ingenuity.

That may well be the case. But at which point 'ingenuity' morphs into carte blanche for intellectual grand larceny is unclear to me.

Look, I will at least give OpenAI credit of sorts for having the cojones to say out loud what doubtless a good many others are thinking, even if I find the supposed supporting argument to be specious and nakedly self-serving in the extreme. Then again, whether that's deliberate courage or just out-of-touch-ness - like Altman's hallucinations comment - I'm not sure.

You can read the whole OpenAI proposal document here and perhaps come to your own conclusion. Just bear in mind my grandmother's words. You should always listen to your granny.
[4]
AIUK - Turing Institute event hails UK AI innovation, but chooses back-slapping and other BS instead of scrutiny
Things started so well. Jean Innes, CEO of the Alan Turing Institute - the UK's organization for AI and data science - opened the two-day AIUK conference in Westminster this week in upbeat style as she declared:

We are experiencing an extraordinary pace of innovation, with recent breakthroughs in reasoning, on-device diffusion models opening up faster and lower-cost Large Language Models, and robots learning sophisticated tasks at dramatically increased speeds.

Then she added a brief note of caution for any policymakers nearby who might be expecting rapid payback and automatic growth:

History teaches us that when a technological revolution shakes the economy and generates huge potential, opportunities and risks, a disruptive and difficult process of learning and adaptation must take place - over decades - to realise the full potential of the new technology. We are at an early stage of this process and the choices we make now will define our social and economic prosperity for years to come.

A timescale measured in decades, then, not an instant uplift out of a fiscal black hole. Even so, the UK is well positioned to capitalize on the technology's potential, she said, hailing a "unique combination of strengths", such as "world-leading researchers and universities", "world-leading AI startups and scale-ups", and "innovative large companies".

Doubtless that is true. However, the US and China can make the same claims, and at much greater scale economically. Meanwhile, Brexit Britain faces some unique challenges - both politically, in terms of being a bridge between a protectionist America and its erstwhile allies in the EU, and domestically in terms of some entrenched problems. Not only that, but the UK's previous policy focus was all about AI safety, inclusion, and ethics: concepts that Trump v2.0 scrapped on day one of the Presidency, supported by certain Big Tech CEOs and a chainsaw-wielding Fratbuddy-in-Chief.

But back to those entrenched problems. As seen previously on diginomica, passim, the UK has a thriving start-up community, which is well capitalised in terms of early-stage funding. But the obstacles to scaling those companies are legion. Among them are impatient, risk-averse growth capital; a small domestic market; datasets disconnected from the EU's 27 nations and their 450 million citizens; poor AI and energy infrastructures (unlike France, which has invested big in nuclear power); soaring across-the-board costs; ageing communications networks; bigger English-speaking markets overseas; and more. Some of those problems stem from a decade of austerity.

These are just some of the reasons why the UK's ambitious start-ups often jump ship to California, then list and grow in the US - problems that don't just affect early-stage AI companies, but also those in quantum, Fintech, and more. It is also why some UK start-ups' preferred exit is into the welcoming arms of US Big Techs. As for why British homegrown talent is often snapped up too, well, UK AI wages are generally one-third of those in California. From America's perspective, therefore, the UK is primarily a source of good ideas and cheap expertise - a mixed blessing for the domestic big picture.

Even so, Innes cited the UK as Europe's leading venture capital market, while playing up its civil society traditions, its history of "thoughtful and proportionate regulation of emerging technologies", and its "supportive government on the front foot". But all that is an over-simplification.
Indeed, many think that the current Labour administration under Prime Minister Keir Starmer is being too blindly supportive of the AI industry, and dismissive of the concerns of others - especially those of Britain's $160 billion creative sector - as the UK Government seems determined to plough ahead with its proposals to tear up the UK's copyright conventions, opting artists into their work being used to train AIs. That is despite overwhelming media, creative-sector, and public opposition to the idea.

What looks increasingly odd about those plans is that even the UK's AI industry opposes them. Trade organization UKAI published a report this month which described the proposals as unworkable, misguided, damaging, and divisive. A resounding 'no' to No 10's idea, then, arrived at in partnership with the creative sector.

All of which raises the question: who has bent the Prime Minister's ear so far that he is prepared to ignore his own Copyright Consultation, the House of Lords' Communications and Digital Committee, the entire British media, the whole creative sector, and - bizarrely - the very industry he claims to be supporting by bulldozing the plans through? It's a good question. So, is the answer as simple as some mega-wealthy US Big Tech wanting to avoid being sued for historic IP theft? In the absence of a better explanation, that would appear to be the case. Can we perhaps infer some promise about inward investment if the UK does what it is told? If that is true, Starmer is being staggeringly naïve, and the Copyright Consultation is beginning to look like a whitewash.

So, over to Feryal Clark, Parliamentary Under-Secretary of State at the Department for Science, Innovation and Technology (DSIT). Presenting the morning's keynote, she gave every impression of having supped a gallon of vendor Kool-Aid for breakfast, as she ventured:

If we get this right, if academia and public and private sector all play their roles, we do best. So, what could that future look like? Here's what we could say about this country. Like most new technologies before it, AI has created a raft of new and exciting jobs, adding more jobs than it replaces.

To clarify, Clark was not claiming that scenario is the reality today: UK unemployment rose to 4.4% in November, higher than predictions. Instead, she was setting out a vision of a nation transformed, one day, by artificial intelligence. In that imagined future, she continued:

Our children's children are doing jobs we don't even know the names of yet, no longer weighed down by admin! And businesses are infinitely more productive [sic]. [Infinitely, no less!] People can focus on the parts of their jobs that impact the bottom line, but also genuinely bring them joy. The strain on the Health Service has eased [the government last week announced plans to scrap NHS England], as AI saves us months on each new drug discovery, and earlier diagnosis gives patients back years with their families. And with access to the world's knowledge at ordinary people's fingertips, life in the UK becomes more equal.

Frankly, that sounds like a huge pile of utopian vaporware, not a strategic vision. And while no-one would claim that such a future is impossible, it chafes against the reality of the past 30 years of relentless technology innovation, which has seen UK productivity growth slow to almost zero. If simply buying new technology created instant growth and productivity at a national scale, surely it would have happened by now?
And that's not all - Clark's utopian dream sits uneasily alongside AI automating more and more creative tasks, rather than the tedious grunt work. It also ignores the many industries that are now handing junior jobs to AIs and filling the ranks with middle managers, who pull up the career ladder behind them. (AIs might do all our jobs cheaper and faster one day, but they won't pay taxes or spend money in shops.)

She concluded:

We know this future doesn't just happen if we press play and let time pass! It needs a supply of power and talent, careful handling on safety and ethics, and a deliberate effort to make AI work for all in this country, and not just the lucky few. Progress is only possible with partnership! So, I hope the UK's AI community continues to tell the government what it needs, and to work with us to make all our AI futures - a future as storied as the past has been. This is a chapter we can only write together!

Heady, moving stuff. But as I noted above, the UK's AI industry is telling the government what it does and doesn't want, but No 10 isn't listening. Meanwhile, the 'lucky few' are currently the likes of Microsoft, OpenAI, Meta, Anthropic, Amazon, and Alphabet, which seem able to force policymakers to do their bidding. But those are American giants, not British start-ups.

So, at a later panel on open-source development, open data, and AI - separate report to come - I put a question about this to Laura Gilbert, Head of the AI for Government program at the Ellison Institute of Technology at Oxford University. She is also a senior government advisor on AI policy and worked in Downing Street until January this year. If anyone has the Prime Minister's ear on AI, it is Gilbert. So, why is Starmer ignoring the Copyright Consultation and the views of the UK's AI industry itself on his proposals to change the law? How can the government claim to be supporting Britain's AI industry when its own trade body, UKAI, says the plans are unworkable?

Incredibly, panel Chair Amanda Brock, CEO of open-source non-profit OpenUK, not only changed the wording of my question - making it into a general, soft-soap one about copyright - but also alerted the panel that it came from a journalist. Then - piling Pelion on Ossa - she diverted it to Sonia Cooper, Assistant General Counsel at one of the world's most litigious and IP-protective vendors, Microsoft, leaving government AI spokeswoman Gilbert sitting in silence.

Given this unasked-for opportunity to set out her own views on copyright - instead of a government advisor explaining why No 10 is ignoring UKAI, the media, the public, and the creative communities, as I asked - Microsoft's lawyer said:

As we think about reforms to copyright law [who is we?], it's important that we consider what copyright is intended to protect and ensure that there are robust protections to support copyright in relation to the outputs that are created by AI. Because you don't want AI to be outputting anything that is an infringement, or that is used in an infringing way. And you don't want to weaken any existing protection that is there for rights holders in relation to the use of what would be a copy of their work.

But that is not what the proposed changes to copyright law are about. They are about opting creatives into their work training AIs even if they don't want it to. I have previously explained [LINK] that an opt-out would be impossible to implement, a view that UKAI echoes in its report. It would mean the death of copyright in an AI age.
But Cooper continued:

I think what's actually important to think about [rather than, say, the views of artists, media, public, and the UK's AI trade body] is to take a step back and think about the technical analysis that's involved when you're analysing data. And again, what copyright is intended to protect. If you look at the scope of copyright protection as it's set out, it is intended to protect the expression of an idea, but it's not intended to lock up the ideas, the facts, the information that is within a copyright work.

In other words, those things that wealthy AI companies want access to for free, so they can stitch together automated digital competitors trained on the work of brilliant human minds, against creators' will? Microsoft's brief continued:

That's really important, because it enables knowledge to be developed from being able to read from something, and then build on it. Ideas are something that you can protect through patent protection, but not through copyright protection.

If you are a software company, yes! And that's precisely why Microsoft has a stockpile of 107,170 patents worldwide and will exercise its right to sue in the event of infringements. But you cannot patent a magazine, a novel, a song, a musical composition, an artwork, an academic report, an investigative news report, a photograph, a movie, a TV script, or a video: you can only protect those works with copyright!

She concluded:

If you are technically analysing something, it's necessary to take a technical copy of something in order to extract the unprotected elements. And I think we need to be very careful not to say that copyright prevents us from making those technical copies. Because essentially what you will end up doing is locking up the ideas, the knowledge in that information, that was never intended to be protected by copyright.

Well now... First, it is not for Microsoft - a company built on IP, some of it homegrown, some of it acquired - to say what the intention is or was behind someone copyrighting a work. Frankly, it is none of that company's business. Secondly, Cooper is just plain wrong. Most creators would say that, yes, they very much want to protect their life's work and ideas, thank you, and not just hand them to a trillion-dollar corporation to monetize for its own revenue, profit, and market cap, while being absolved of historic theft in the process. Generative AI is then just industrialized data laundering.

In conclusion, then, what a horrifying moment in the history of British AI. Here was an organization with a noble history, the Alan Turing Institute, hosting a UK Government minister saying she wanted to listen to the industry, while simultaneously ignoring the views of that industry as expressed in the UKAI report. If that was not bad enough, we then had a panel discussion - about openness, of all things - in which the CEO of OpenUK deliberately steered a fair and reasonable question away from a government AI advisor to a Microsoft lawyer, so she could explain why copyright is an outmoded concept when Big Tech wants Big Money.

What a cynical and depressing spectacle this was: organizations that claim to want openness and discussion seem determined to evade scrutiny at every stage. So, what does this tell us about the future? Nothing good, alas. Above all, it signals that the UK Government just ain't listening, folks, and neither are its arms-length bodies and NGOs.
Except, that is, to the demands of US Big Techs to change the law in their favor, to absolve them of responsibility for theft and of ever having to pay for data. Frankly, everything about this stinks. Instead, Starmer's UK is in utopian marcomms mode, masquerading as decency, competence, and consultation. And in that ridiculous world, the only things being held at arm's length are scrutiny and common sense.
[5]
Hurrah for Hollywood! Tinsel Town takes the AI copyright fight to Washington
LLMs and generative AIs would just be a blank page and a blinking cursor without the mass of human data that has been scraped, sometimes illegally, to create the illusion of machine intelligence. As reported previously on diginomica, some critics now believe that generative AI is little more than industrialised data laundering.

These realisations are at the heart of a fightback this week by Hollywood against the US Government's AI Action Plan, which was proposed by President Trump in January. The plan is expected to be delivered by July, with the request for input ongoing until then.

In the US, over 400 studios, actors, production companies, directors, composers, sound designers, costume designers, film editors, and more - a snapshot of the jobs threatened by AI's advance - have signed an open letter to the President, urging him to protect US copyright law. America's arts and entertainment sectors support over 2.3 million jobs and $229 billion in annual wages, notes the letter, while playing a significant role in US soft power and cultural influence overseas. But the challenge is that Trump v2.0 - supported by AI vendor Elon Musk's DOGE project - is tearing up billions of dollars of soft-power programs in favour of hard power enforced by edict and tariff.

Hollywood's letter explains:

AI companies are asking to undermine this economic and cultural strength by weakening copyright protections for the films, television series, artworks, writing, music and voices used to train AI models at the core of multibillion-dollar corporate valuations. [AI companies] are arguing for a special government exemption so they can freely exploit America's creative and knowledge industries, despite their substantial revenues and available funds. There is no reason to weaken or eliminate the copyright protections that have helped America flourish.

Can't pay, won't pay - or vice versa? It is hard to believe that vendors lack the funds to pay for training data: between them, Microsoft, Amazon, Google, Apple, and NVIDIA alone are worth nearly $13 trillion - a figure equivalent to roughly half of US GDP. Put in that context, maybe they just don't want to pay? Hence the claim that innovation moves too fast to sit at a table with angry humans.

But proposals to tear up copyright conventions don't just affect the film business, the signatories write:

Make no mistake: this issue goes well beyond the entertainment industry, as the right to train AI on all copyright-protected content impacts all of America's knowledge industries. When tech and AI companies demand unfettered access to all data and information, they're not just threatening movies, books, and music, but the work of all writers, publishers, photographers, scientists, architects, engineers, designers, doctors, software developers, and all other professionals who work with computers and generate intellectual property. These professions are the core of how we discover, learn, and share knowledge as a society and as a nation. This issue is not just about AI leadership or about economics and individual rights, but about America's continued leadership in creating and owning valuable intellectual property in every field.

Powerful words. If nothing else, America is a set of ideas, but the US Government seems determined to tear them up.
Vendors such as OpenAI are openly urging policymakers to allow the scraping of copyrighted data to train their models - not so much to enable future innovation, as they claim, but more to retrospectively legalize the theft of unlicensed pre-2023 data scraped from the internet, sometimes from known pirate sources.

As I previously reported, over a dozen copyright lawsuits are currently ongoing against AI vendors in the US. To date, the mood music suggests that plaintiffs are likely to win, as Thomson Reuters did against Ross Intelligence. In this light, AI companies' determination to change the law looks more like heading off payback at the pass: an old Hollywood scenario.

OpenAI is among the vendors demanding freedom from regulatory restraints - while at the same time urging regulatory restraints on overseas competitors, like China's DeepSeek. DeepSeek is advancing on "commandeered resources", claims OpenAI. That's a bold statement from a vendor that is itself accused of advancing on commandeered resources in the shape of copyright holders' data.

This month the company released what it suggested is an "economic blueprint" for America - surely evidence that this $340 billion company has ideas above its station: do as we say, not as we do. In its own letter to the White House, it says:

We are at the doorstep of the next leap in prosperity: the Intelligence Age. [...] But we must ensure that people have freedom of intelligence [...] freedom to access and benefit from AI as it advances, protected from both autocratic powers that would take people's freedoms away, and layers of laws and bureaucracy that would prevent our realizing them.

OpenAI certainly knows how to push the President's buttons - copyright theft as a fightback against "autocratic powers" has a MAGA-esque ring to it.

Meanwhile Google, hardly the world's least litigious company when it comes to IP violations, has also written to the Government, demanding "balanced copyright rules", which it interprets as meaning allowing text and data mining [TDM] exceptions. It says:

[These] have been critical to enabling AI systems to learn from prior knowledge and publicly available data, unlocking scientific and social advances.

Well, maybe in some places, but not in countries where TDM is forbidden for commercial products. Nevertheless, Google claims that the use of "copyrighted, publicly available material" for AI training would allow innovation, without:

significantly impacting rightsholders and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation.

Ah, pity the poor megacorp: imagine having to spend valuable time talking to someone!

Public, private, pirate

But the problem there is that content that is in a public domain - i.e. somewhere on the internet, including in the pirate datasets accessed by some AI vendors - is not the same in law as public-domain content. It's hard to believe that AI firms did not implicitly know that when training their models, or they would not have used pirated datasets in the first place - resources that hold millions of books, academic papers, reports, and other copyrighted content. More important, Google's claim that TDM exceptions will not harm rights holders doesn't stand up to much scrutiny.
Unlike academic researchers, who are free to use copyrighted data to develop their own theses and research programs - and unlike any human who is free to read a book - AI companies are building commercial products directly from their data scraping. If an AI is prompted to produce a work in the style of a named composer, scriptwriter, or illustrator, for example, it becomes an instant, low-cost, effort-free, automated competitor to them. Clearly that is a harm: it devalues original work at source and turns it into a revenue stream for a software company.

While the average American struggling to pay for groceries and healthcare cares nothing about the IP of wealthy professionals like Sir Paul McCartney, Ben Stiller, Mark Ruffalo, Guillermo del Toro, Cate Blanchett, Phoebe Waller-Bridge, Bette Midler, Paul Simon, Ron Howard, Taika Waititi, Ayo Edebiri, Sam Mendes, and Chris Rock - all among the Hollywood fightback's signatories - countless creatives live on the margins, are self-employed, and are only as successful as their last commission.

The Los Angeles Times has long been covering the AI stories that impact Hollywood. On 12 February it shared the case of local screenwriter John Rogers, who discovered that 77 entire episodes of his TV drama 'Leverage' - equivalent to five years of his work - had been scraped to train ChatGPT. The data set that included those scripts - plus others he had worked on - had been lifted from OpenSubtitles.org, a website that publishes the subtitles to movies and TV shows in multiple languages. As I explained last month, this proves that creator opt-outs - the preferred solution suggested by the British Government and others - are useless in a networked world. Rogers told the LA Times:

I'm angry at the absolute arrogance of these companies. [They] have gotten hundreds of billions of dollars of value that would not exist if not for our work.

It is impossible to argue with that perspective. However, the LA Times then raised an interesting point: no lawsuits have been issued on writers' behalf by the film industry that is campaigning for fairness today. A cynic might suggest that film studios know that - in a world of streaming and falling box office - AI could allow them to stop paying for original work. Just get a chatbot to remix an old script, then prompt a generative AI to design the costumes, build virtual sets, shoot action set-pieces, score the movie, and create a virtual star - all based on creatives' unacknowledged, uncredited prior work.

My take

So, what is behind the breathtaking cynicism of some vendors? Beyond an ugly grab for power, wealth, and de facto gatekeeping of the world's digitized data, the answer may be found in a response by Sonia Cooper, Assistant General Counsel at Microsoft, to my question at the AIUK conference in London on Monday. It is to do with vendors' partial and skewed definition of what other people's copyright means. As she explained, a TDM exception would:

Enable knowledge to be developed from being able to read from something, and then build on it. Ideas are something that you can protect through patent protection, but not through copyright protection.

As I explained in my last report, you can patent a piece of software, but you can't patent a book. And for an AI vendor, a patent is the only IP that counts. Then she continued:

If you are technically analyzing something, it's necessary to take a technical copy of something in order to extract the unprotected elements.
And I think we need to be very careful not to say that copyright prevents us from making those technical copies. Because essentially what you will end up doing is locking up the ideas, the knowledge in that information, that was never intended to be protected by copyright.

Except that is precisely what creators do want to protect: they do want to protect their own idea, and not just express their delight in having one while trillion-dollar corporations earn all the money from it. And the only way they can do that is through copyright. That is why we must protect it. Even UKAI, Britain's trade body for AI companies, agrees. But the voices of the super-powerful US sector shout louder - and carry the biggest stick.

So, will there be a Hollywood happy ending for the world's human creatives? It's unlikely, alas, when politicians worldwide are in Big Tech's pocket. They are little more than trophy handkerchiefs.
OpenAI proposes relaxing copyright laws to train AI models, sparking debate over intellectual property rights and AI development in the US.
OpenAI, a leading artificial intelligence company, has submitted a controversial proposal to the US government, urging the relaxation of copyright laws to facilitate AI training. This move has ignited a heated debate about intellectual property rights and the future of AI development in the United States [1].
OpenAI's proposal, submitted in response to President Trump's executive order on sustaining America's global dominance in AI, argues for "freedom of intelligence" [1]. The company suggests that to reach the next milestone in AI development, dubbed "AGI" (Artificial General Intelligence), it needs protection from copyright holders to continue using vast amounts of data for training its models [1].
The proposal frames the issue as a matter of national competitiveness, particularly against China. OpenAI argues that Chinese AI companies have an advantage due to their alleged disregard for copyright laws, putting American AI labs at a "comparative disadvantage" [1][2]. Critics, however, view this as an attempt to legalize what some call "data laundering" - the use of copyrighted material without proper attribution or compensation [2].
The AI industry is facing increasing scrutiny over copyright infringement. Several lawsuits have been filed against AI companies, including OpenAI, Microsoft, and others, by content creators and media organizations [3]. These legal challenges highlight the growing tension between AI development and intellectual property rights.
In response to the US Government's AI Action Plan, over 400 Hollywood studios, actors, and other creative professionals have signed an open letter urging the protection of US copyright law [5]. They argue that weakening copyright protections would undermine America's economic and cultural strength in the arts and entertainment sectors, which support over 2.3 million jobs and $229 billion in annual wages [5].
The debate extends beyond the US, with the European Union and the UK also grappling with how to regulate AI development while protecting intellectual property rights [3]. The outcome of this debate could have far-reaching implications for the global AI industry and the future of content creation.
While AI companies argue that access to vast amounts of data is crucial for innovation, critics point out that these companies, often worth billions, have the means to pay for the content they use [5]. The situation raises important questions about the balance between technological advancement and fair compensation for content creators.
As the US government considers its approach to AI regulation, the outcome of this debate will likely shape the future of AI development, intellectual property rights, and the relationship between tech companies and content creators. The decision will have significant implications for America's position in the global AI race and the protection of its creative industries [1][5].