2 Sources
[1]
Opinion | Trump's Plans for A.I. Might Hit a Wall. Thank Europe.
Ms. Bradford is an expert on the European Union, global trade and digital regulation.

President Trump wants to unleash American A.I. companies on the world. For the United States to win the unfolding A.I. arms race, his logic goes, tech companies should be unfettered by regulations and free to develop artificial intelligence technology as they generally see fit. He is convinced that the benefits of American supremacy in this technology outweigh the risks of ungoverned A.I., which experts warn could include heightened surveillance, disinformation or even an existential threat to humanity. This conviction is at the heart of the administration's recently unveiled A.I. Action Plan, which looks to roll back red tape and onerous regulations that it says paralyze A.I. development.

But Mr. Trump can't single-handedly protect American A.I. companies from regulation. Washington may be able to eliminate the rules of the road at home, but it can't do so for the rest of the world. If American companies want to operate in international markets, they must follow the rules of those markets. That means that the European Union, an enormous market that is committed to regulating A.I., could well thwart Mr. Trump's techno-optimist vision of a world dominated by self-regulated, free-market U.S. companies.

In the past, the E.U.'s digital regulations have resonated well beyond the continent, with technology companies extending those rules across their global operations in a phenomenon I have termed the Brussels Effect. Companies like Apple and Microsoft now broadly use the E.U.'s General Data Protection Regulation, which gives users more control over their data, as their global privacy standard in part because it is too costly and cumbersome for them to follow different privacy policies in each market. Other governments also often look to E.U. rules when drafting their own laws regulating the tech sector.

The same phenomenon could at least partly hold for A.I. technology. Over the past decade, the E.U. has put in place a number of regulations aimed at balancing A.I. innovation, transparency and accountability. Most important is the A.I. Act, the world's first comprehensive and binding artificial intelligence law, which entered into force in August 2024. The act establishes guardrails against the possible risks of artificial intelligence, such as the loss of privacy, discrimination, disinformation and A.I. systems that could endanger human life if left unchecked. This law, for instance, restricts the use of facial recognition technology for surveillance and limits the use of potentially biased artificial intelligence for hiring or credit decisions. American developers looking to get access to the European market will have to comply with these rules and others.

Some companies are already pushing back. Meta has accused the E.U. of overreach and even sought the Trump administration's help in opposing Europe's regulatory ambitions. But other companies, such as OpenAI, Google and Microsoft, are signing on to Europe's A.I. code of practice. These tech giants see an opportunity: Playing nice with the European Union could help build trust among users, pre-empt other regulatory challenges and streamline their policies around the world. Individual American states looking to govern A.I., too, could use E.U. rules as a template when writing their own bills, as California did when developing its privacy laws.

By holding its ground, Europe can steer global A.I. development toward models that protect fundamental rights, ensure fairness and don't undermine democracy. Standing firm would also boost Europe's tech sector by creating fairer competition between foreign and European A.I. firms, which have to abide by E.U. laws.

For this to happen, Europe must withstand mounting pressure to abandon its regulatory role. Mr. Trump has frequently accused Europe of maintaining trade and digital policies that unfairly target American companies. This year, Vice President JD Vance called the A.I. Act "excessive" and warned that overregulation deters potential innovators, and the Republican-led House Judiciary Committee accused Europe of using content-moderation rules as instruments of censorship. And European policymakers worry that Washington could impose further tariffs or withdraw security guarantees if Europe doesn't leave tech companies alone.

Europe has been adamant that the A.I. Act and other digital rules are not up for negotiation. As part of the recent U.S.-E.U. trade deal, Brussels agreed to buy more American energy and military equipment, but did not make concessions on tech regulation. Europe's lawmakers know that discarding these rules would be politically costly, given the widespread support for the continent's digital laws. Capitulating to Mr. Trump would make the E.U. look weak, both externally and internally. And any deal that would scrap A.I. governance would be subject to the whims of the Trump administration.

Europe must also stand up to threats from within. Some European policymakers have been wringing their hands about regulation since the publication of a landmark review of European competitiveness, known as the Draghi report -- which, among other things, criticizes Europe's slow A.I. development and identifies onerous regulation as an impediment to technological innovation. Driven by a legitimate desire to help Europe rebuild its technological sovereignty, a growing number of European companies and lawmakers are pushing to relax the European Union's A.I. rules.

A.I. regulation and innovation are not mutually exclusive goals. Europe has lagged behind the United States and China in the A.I. race because of foundational weaknesses in the European technological ecosystem -- fragmented digital and capital markets, punitive bankruptcy laws and an inability to attract global talent, among other problems -- not because of digital regulations.

Even China subjects its A.I. developers to binding rules. Some conditions, like a mandate that generative A.I. tools not undermine China's censorship regime, reflect Beijing's authoritarian agenda. But other guardrails aimed at safety, fairness and transparency, such as a policy that training data not infringe on intellectual property rights, suggest that Beijing does not view A.I. governance as an obstacle to innovation.

Indeed, Mr. Trump's deregulatory agenda looks increasingly like the exception among the world's biggest democracies, not the rule. South Korea recently passed a version of the A.I. Act, and other countries, including Australia, Brazil, Canada and India, are working on artificial intelligence laws aimed at mitigating the risks of the technology.

The American retreat from A.I. governance is a blow for those who are concerned about the individual and societal risks of artificial intelligence. That retreat undermines the E.U.'s previous collaboration on digital policies with the United States and gives an opening for China and other autocracies to promote their authoritarian digital norms.
But it is also an opportunity for Europe to take a leading role in shaping the technology of the future -- one it should embrace, not abandon out of appeasement or misplaced fear.

Anu Bradford is a professor at Columbia Law School. Her books include "The Brussels Effect: How the European Union Rules the World" and "Digital Empires: The Global Battle to Regulate Technology."
[2]
Trump wants to let AI run wild. This might stop him
Some US companies resist, others comply. Europe can steer AI development towards rights, fairness, and democracy. Europe must resist pressure to abandon its regulatory role.
President Trump's plan to deregulate AI development in the US faces a significant challenge from the European Union's comprehensive AI regulations, which could influence global standards and affect American tech companies' operations worldwide.
President Trump's ambitious plan to unleash American AI companies on the world stage by rolling back regulations is facing an unexpected hurdle: the European Union's comprehensive AI regulations. The Trump administration's recently unveiled AI Action Plan aims to eliminate what it perceives as paralyzing red tape, believing that American supremacy in AI technology outweighs the risks of ungoverned AI development [1][2].
While Washington may be able to deregulate AI at home, it cannot dictate the rules for international markets. The EU, with its enormous market and commitment to regulating AI, could potentially thwart Trump's vision of a world dominated by self-regulated, free-market US companies. This phenomenon, termed the "Brussels Effect," has previously seen EU digital regulations resonating beyond the continent, with tech giants like Apple and Microsoft adopting EU standards globally because maintaining separate policies for each market would be too costly and cumbersome [1][2].
The EU's AI Act, which came into force in August 2024, is the world's first comprehensive and binding artificial intelligence law. It establishes guardrails against potential AI risks, including privacy loss, discrimination, and disinformation. The act restricts facial recognition technology for surveillance and limits potentially biased AI in hiring and credit decisions [1][2].
Some US tech companies are pushing back against EU regulations. Meta has accused the EU of overreach and sought the Trump administration's help in opposing Europe's regulatory ambitions. However, other tech giants like OpenAI, Google, and Microsoft are signing on to Europe's AI code of practice, seeing an opportunity to build trust, pre-empt regulatory challenges, and streamline their global policies [1][2].
Europe remains adamant that its AI Act and other digital rules are not up for negotiation, despite pressure from the Trump administration. The EU has withstood threats of tariffs and security guarantee withdrawals, recognizing the political cost of discarding these widely supported digital laws [1][2].
However, Europe also faces internal challenges. The Draghi report, a landmark review of European competitiveness, has criticized Europe's slow AI development and identified regulation as an impediment to innovation. This has led to some European policymakers and companies pushing to relax the EU's AI rules [1][2].
Proponents of EU regulations argue that AI governance and innovation are not mutually exclusive goals. They contend that Europe's lag in the AI race is due to foundational weaknesses in its technological ecosystem, such as fragmented digital and capital markets, rather than digital regulations [1].
As the global AI landscape continues to evolve, the clash between Trump's deregulation vision and Europe's regulatory approach will likely shape the future of AI development and governance worldwide. The outcome of this regulatory tug-of-war could have far-reaching implications for AI innovation, user privacy, and the balance of power in the global tech industry.