Curated by THEOUTPOST
On Tue, 14 Jan, 4:06 PM UTC
2 Sources
[1]
Biden's final act - imposing KYC burdens on the AI industry (maybe, sort of)
On his final day in office, US President Joe Biden authorized an interim final rule to limit "Artificial Intelligence Diffusion" to 'bad' countries. The big deal here is the imposition of new Know-Your-Customer (KYC) requirements on the transfer of advanced chips AND model weights -- essentially more paperwork, bureaucracy, and burdens for companies selling chips or AI models.

We've previously reported that companies that sell access to proprietary AI models, like OpenAI and the various policy think tanks it has been funding, seem to like these new controls, since they narrow the competition for their proprietary AI. Unfortunately for their interests, they were unable to get "open-weighted" models added to the list, which would probably not have been practical anyway.

At the other end of the spectrum, chip design vendors like NVIDIA are unhappy with the new rules, which mean more paperwork and fewer sales in the short run. The companies that manufacture these designs, like TSMC in Taiwan, or that make the advanced lithography equipment used to fabricate the chips, like ASML in the Netherlands, probably feel the same way, but they have not blogged about it yet. Besides, they are not American and probably spend less on lobbying, so their opinions might not carry as much weight.

The devil is in the details. The new rules represent a significant step towards creating the KYC framework for AI proposed in the policy paper championed by OpenAI and its allies. They specifically enforce new controls on the export of hardware and closed-weight AI models trained above a particular threshold. However, no new limits are imposed on exporting data, algorithms, or open-weighted models, an essential ingredient in open source AI. The new rules also propose enabling global access to AI capabilities through APIs, promising to unlock beneficial uses of AI while mitigating national security risks like cyber attacks and the development of new physical and biological weapons. This will also line the pockets of AI vendors selling access to proprietary models.

Some of the essential elements of the ruling include:

However, don't hold your breath that these new rules will go into effect in four months. A blog post by Ned Finkle, vice president of government affairs at NVIDIA, praises Donald Trump's record on strengthening America's AI edge:

Although the rule is not enforceable for 120 days, it is already undercutting US interests. As the first Trump Administration demonstrated, America wins through innovation, competition and by sharing our technologies with the world -- not by retreating behind a wall of government overreach. We look forward to a return to policies that strengthen American leadership, bolster our economy and preserve our competitive edge in AI and beyond.

And as Finkle argues:

While cloaked in the guise of an "anti-China" measure, these rules would do nothing to enhance U.S. security. The new rules would control technology worldwide, including technology that is already widely available in mainstream gaming PCs and consumer hardware. Rather than mitigate any threat, the new Biden rules would only weaken America's global competitiveness, undermining the innovation that has kept the US ahead.
More importantly, Finkle argues, Trump has an opportunity to undo Biden's mistakes in undermining American AI leadership:

In its last days in office, the Biden Administration seeks to undermine America's leadership with a 200+ page regulatory morass, drafted in secret and without proper legislative review. This sweeping overreach would impose bureaucratic control over how America's leading semiconductors, computers, systems and even software are designed and marketed globally. And by attempting to rig market outcomes and stifle competition -- the lifeblood of innovation -- the Biden Administration's new rule threatens to squander America's hard-won technological advantage.

A masterful appeal sure to flatter Trump's ego. However, we have yet to hear from Elon Musk, Trump's incoming Tech Bro in Chief, who might agree with OpenAI for once, now that their interests are aligned around selling closed-weight AI models.

A century and a half ago, the leading European nations were imposing various controls designed to enhance their dominion over the vast peoples and resources of Africa and Asia. That all ended with World War II and the subsequent Bretton Woods agreement, which enabled the US to impose a new global order enforced by its Navy and gave rise to the almighty dollar.

(The EU is not happy with the latest Biden announcement. A joint statement by Executive Vice-President Henna Virkkunen and Commissioner Maroš Šefčovič says:

We are concerned about the US measures adopted today restricting access to advanced AI chip exports for selected EU Member States and their companies. We believe it is also in the US economic and security interest that the EU buys advanced AI chips from the US without limitations: we cooperate closely, in particular in the field of security, and represent an economic opportunity for the US, not a security risk. We have already shared our concerns with the current US administration and we are looking forward to engaging constructively with the next US administration.)

These days, the US is attempting the same trick with AI resources, enforced by export controls imposed on its various AI protectorates. These countries have been encouraged to enact policies allowing American oligarchs to hoover up data, technology, and, well, money at an unprecedented and accelerating scale. In the book Vassal State, Angus Hanton writes that 25% of British GDP arises from sales to US multinationals operating in the UK. And the big AI companies are figuring out how to establish policies to extract even more of that.

So, besides the UK, what other countries are the new rules inviting to join the US's expanding protectorate? They currently include Australia, Belgium, Canada, Denmark, Finland, France, Germany, Ireland, Italy, Japan, the Netherlands, New Zealand, Norway, South Korea, Spain, Sweden, and Taiwan. Others might be invited if they commit to the new AI protectorate KYC requirements.

The UK Government recently made a big splash about its new AI Opportunities Action Plan, which all sounds good on paper. If it is successful, it may - one day, perhaps - result in lots of new data centers and AI research hubs. But if recent history is any guide, it will also result in exporting a lot of tax obligations, energy, and cutting-edge innovations to US-controlled multinationals.
And those new data centers will probably require burning more of that North Sea oil and siphoning off lots more water from an already strained water system to keep the lights on.

If the new KYC rules do go into effect, they are likely to drive interest in open source AI approaches, fuel the growth of a new KYC accounting industry, fertilize a black-market AI smuggling industry, and accelerate more efficient first-principles approaches from China. For example, the Chinese startup DeepSeek is estimated to have trained its latest foundation model for $6 million in computing, rivaling the performance of current AI leaders trained at more than ten times that cost -- and it's open source. Some of these knock-on effects may be good things in the long run, even if they add more paperwork and slow chip sales in the short run.
[2]
AI regulations - can the UK really play both sides of the US/EU divide or just slip down the gap in the middle?
As reported yesterday, the UK is setting out its stall to become an AI-enabled economy that can keep up with the likes of the US and China when it comes to the new technological revolution, as Prime Minister Keir Starmer grabbed mainstream media headlines with a 50-point AI Opportunities Action Plan. Or at least he did in the UK. In other parts of the world - the US, China? - somewhat less so. And that matters.

Brexit Britain is currently adapting to its new place in the world, outside of the European Union (EU), and trying to forge trade deals with governments all around the world, not least the US and China. So things like a national AI strategy do matter in an international context because, despite its post-EU independent status, the UK remains highly dependent on what other regimes are doing in areas such as technology policy and practice. In the case of AI, we have the EU's AI Act on the table and President Biden's AI Executive Order, for example - and they're not entirely compatible.

Yesterday's announcement is not the first time the UK has been here, with Prime Ministerial declarations of AI world dominance to come. As far back as 2018, Theresa May was announcing plans to "supercharge" the UK sector to become a world leader, while in 2019 her successor, Boris Johnson, pledged £250 million to make the UK National Health Service (NHS) a global AI healthcare champion. And Starmer's immediate predecessor, Rishi Sunak, made similar noises at the 2023 AI Safety Summit in the UK, when he wasn't beating Donald Trump to the mark in nauseatingly fawning over Elon Musk.

What did differentiate Sunak from Starmer's pitch this week was his focus on AI safety and regulation as the primary issue on which he hoped to stake UK leadership and influence global standards and consensus. That was as likely in reality as a bag of cats ceasing to claw one another to pieces, but the principle was a welcome one.

In contrast, the new AI Opportunities Action Plan from Starmer's Government, while making the politically correct noises about safety and security, appears to be advocating more risk-taking. So the text of the Plan says:

The UK's current pro-innovation approach to regulation is a source of strength relative to other more regulated jurisdictions and we should be careful to preserve this...Government must protect UK citizens from the most significant risks presented by AI and foster public trust in the technology, particularly considering the interests of marginalised groups.

But it goes on:

That said, we must do this without blocking the path towards AI's transformative potential.

This includes keeping regulators in check, it seems:

Individual regulators may still lack the incentives to promote innovation at the scale of the government's ambition. If evidence demonstrates that is the case, government should consider more radical changes to our regulatory model for AI, for example by empowering a central body with a mandate and higher risk tolerance to promote innovation across the economy.

It argues:

Government should also have the self-confidence and ambition to set an example for the rest of the economy. This will require a novel approach involving close collaboration with industry to ensure the whole of society can benefit from the opportunities offered by AI. Business-as-usual is not an option. Instead, government will need to be prepared to absorb some risk in the context of uncertainty.
OK, so that's an interesting balancing act to pull off, particularly when you need to sell the Great British Public on the idea that it's going to be OK to sell off more and more personal healthcare data from the NHS to private (often US) AI model builders - a move that will be met with skepticism in many quarters as another step towards the UK political taboo of privatising the free-at-the-point-of-care service set up in post-WWII Britain.

Starmer was asked about his regulatory intentions at the formal announcement of the Plan, framed as a question about whether being outside of the EU provided more wiggle-room, a so-called 'Brexit Benefit' (for which there is an ongoing hunt akin to that of the Holy Grail in UK political circles). His response was telling:

I think it is important to recognise that we've got the freedom now in relation to regulation to do it in the way that we think is best for the UK - and that's what we intend to do. There are different models around the world - there's a sort of EU approach and a US approach, but we have the ability to choose the one that we think is in our best interests, and we intend to do so.

That sounds terribly appealing in one respect and ludicrously idealistic in another. The idea of a regulatory smorgasbord where the UK gets to pick and choose which regime, or aspects of which regime, it follows is a path fraught with danger. According to World Bank and International Monetary Fund (IMF) data as of late 2024, the US and the EU make up 44.13% and 29.4% of global GDP respectively. Picking and choosing who to snuggle up to on an opportunistic basis might sound like a good idea, but is it remotely practical?

The UK's biggest market for exports remains the EU, although the post-Brexit consequence of becoming a 'third nation' outside of the Single Market has reduced the size of that number. While Starmer's Government has talked about securing closer alignment with the EU, it has ruled out re-joining the bloc. As such, the EU will have its own price tag for dealing with the UK, and this can take the form of adherence to standards and regulations in order to do business.

For evidence of the kind of hardball the EU is willing to play, look no further than data protection regulation in general and GDPR in particular. Post-Brexit vote, the then Conservative Government was openly talking about watering down the requirements of GDPR, to be replaced by a similar 'pick and choose' regime to the one Starmer now looks to be talking about for AI. Not if you want to do business with us, was the blunt warning shot from Brussels - you're all in, or you're out. On the table was the threat of the removal of the existing data adequacy arrangements between the UK and Europe needed to ease digital international trading. The New Economics Foundation and UCL European Institute warned back in 2020 that British companies would face a bill of up to £1.6 billion without that deal in place. Not surprisingly, the much-vaunted "truly bespoke, British form of data protection" regime never emerged, despite ongoing attempts to tinker with the current situation.

Data protection also gives us a useful idea of what the US stance might be on AI regulation. As we've noted passim, the US has no Federal data protection law and little sign of one coming over the Hill any time soon. Consensus remains impossible to reach and it's left to individual states to determine what they do.
Big Government has no part to play here yet - and events are such that that's not about to change for at least four years. That has AI regulatory implications for the UK and the rest of the world.

In a few days' time, Trump 2.0 takes power. The returning President has made no secret of the importance he attaches to personal loyalty and his attitude towards those who don't deliver it. Trying to appease both the EU and the US and come up with its own 'third way' looks very much like a challenge too far for the UK.

Trump is also hugely anti-EU, pitching the idea to the MAGA faithful that it was set up in order to do down the US. While there is considerable evidence that there are those in Europe who are overly keen on 'taming' American market dominance through regulation, the idea is a fantasy. (EU regulatory zealotry, it might be argued, does as much, if not more, harm to Europe than it does to the US tech sector.) Nonetheless, based on his previous turn in the Oval Office, we have a pretty clear idea of just how bad relations between the EU and the US might get over the course of the next four years.

Against such a backdrop, Trump isn't going to entertain a 'play both sides' approach to AI regulation, certainly not from a Labour - socialist, for which read communist in MAGA-speak - government, one that his musky Fratbuddy-in-Chief is openly calling for the overthrow of. So it's going to be a case of pick a side for the UK - or drop down through the middle and consign itself to mediocrity as the big blocs battle it out.

So, what's it to be? Tough regulation like the EU AI Act, or the woollier restrictions of the US AI Executive Order, which may itself be rapidly unpicked if the promised 'bonfire of the Bidens' regulatory purge takes place in Trump's first 100 days? It's an unenviable choice, but one that the UK will have to face up to sooner rather than later if the grand ambition of its AI Opportunities Action Plan is to be realised. As Starmer puts it:

This is the nation of Babbage, Turing and Lovelace - driving change is in our DNA. Already, Britain is the third largest AI market in the world.

Maybe so, Prime Minister Starmer, but the immediate position on regulation being taken (this week) by the current UK Government is essentially a bland holding position, totally lacking in detail or commitment:

Ensuring we have the right regulatory regime that addresses risks and actively supports innovation will drive AI trust and adoption across the economy. The government will set out its approach on AI regulation and will act to ensure that we have a competitive copyright regime that supports both our AI sector and the creative industries.

That's politically pragmatic perhaps, but it just kicks matters down the track for another day. It also erodes the UK's claim to leadership in this area even as it stakes another claim to becoming an AI superpower. There's much criticism that can be laid at the door of previous administrations, but it was less than 18 months ago that Rishi Sunak was telling the world, as he announced the opening of the world's first AI Safety Institute in the UK:

Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies...we will make the work of our Safety Institute available to the world. That's the right thing to do morally, in keeping with the UK's historic role on the international stage.

The trouble with history is, it's all in the past. The future lies this way...
The US and UK are navigating complex AI regulatory landscapes, with the US imposing new export controls and the UK seeking a middle ground between US and EU approaches.
In a significant move on his final day in office, US President Joe Biden authorized an interim final rule aimed at limiting "Artificial Intelligence Diffusion" to certain countries 1. The new regulations introduce Know-Your-Customer (KYC) requirements for the transfer of advanced chips and AI model weights, potentially creating more bureaucratic hurdles for companies in the AI industry.
The new rules have elicited mixed responses from various sectors of the tech industry. Companies like OpenAI, which sell access to proprietary AI models, appear to support these controls as they may limit competition 1. However, chip design vendors such as NVIDIA have expressed dissatisfaction, anticipating increased paperwork and potential sales reductions in the short term.
The regulations focus on controlling the export of hardware and closed-weight AI models trained above specific thresholds. Notably, they do not impose new limits on exporting data, algorithms, or open-weighted models, which are crucial for open-source AI development 1. The rules also propose enabling global access to AI capabilities through APIs, aiming to balance beneficial AI uses with national security concerns.
With the upcoming change in administration, the future of these regulations remains uncertain. NVIDIA's vice president of government affairs, Ned Finkle, has already appealed to the incoming administration, praising Donald Trump's previous approach to AI and arguing that the new rules could undermine America's global competitiveness 1.
Meanwhile, the UK is charting its own course in AI regulation, attempting to position itself as a leader in the field while navigating the complex landscape between US and EU approaches 2. Prime Minister Keir Starmer recently announced a 50-point AI Opportunities Action Plan, emphasizing a pro-innovation stance while acknowledging the need for safety and security measures.
The UK government's plan advocates for a more risk-tolerant approach, suggesting that regulators may need to be more permissive to promote innovation 2. This stance contrasts with the previous administration's focus on AI safety and regulations as primary issues for UK leadership.
The UK faces a significant challenge in aligning its AI regulations with those of major economic powers like the US and EU. Starmer's government has suggested the possibility of choosing regulatory approaches that best suit UK interests 2. However, this strategy may prove difficult given the economic influence of the US and EU, which together account for over 70% of global GDP.
The UK's position outside the EU Single Market complicates its regulatory choices, particularly in areas like data protection. The EU's stance on GDPR compliance for third countries serves as a reminder of the potential challenges the UK may face in maintaining market access while pursuing an independent regulatory path 2.