13 Sources
[1]
EU weighs pausing parts of landmark AI act in face of US and Big Tech pressure
The European Commission is proposing a pause to parts of its landmark artificial intelligence laws amid intense pressure from Big Tech companies and the US government. Brussels is set to water down part of its digital rule book, including its AI act that entered into force last year, in a decision on a so-called simplification package on November 19. The move reflects EU efforts to make the bloc more competitive against the US and China. The draft proposal comes amid a broader debate over how aggressively the bloc should enforce its digital rules in the face of a fierce backlash from Big Tech companies supported by US President Donald Trump. The bloc has faced fierce pressure from the US government and Big Tech as well as European groups over its AI act, considered the world's strictest regime regulating the development of the fast-developing technology. Fears of provoking Trump into cutting off intelligence or weapon supplies to Ukraine or starting a transatlantic trade war with the bloc saw Brussels agree a provisional trade deal in August, but EU officials are wary of any moves that could provoke the White House into retaliatory measures. The EU had been "engaging" with the Trump administration on adjustments to the AI act and other digital regulations as part of its wider simplification process, a senior EU official told the Financial Times. While the legislation entered into force in August 2024, many of its provisions only come into effect in upcoming years. The bulk of the provisions for high-risk AI systems, which can pose "serious risks" to health, safety or citizens' fundamental rights, are set to come into effect in August 2026. In the draft proposal, seen by the FT, the commission is considering giving companies breaching the rules on the highest-risk AI use a "grace period" of one year. 
The draft proposal was still subject to informal discussions within the commission and with European capitals and could still change ahead of its adoption on November 19, officials said. Once the commission puts forward its proposal, it will still have to be approved by a majority of EU countries and the European parliament. Providers of generative AI systems that already placed their systems on the market before the implementation date could thus earn a one-year pause from the laws "to provide sufficient time . . . to adapt their practices within a reasonable time without disrupting the market". Brussels is also suggesting that it delay imposing fines for violations of its new AI transparency rules until August 2027 to "provide sufficient time for adaptation of providers and deployers of AI systems" to implement the obligations. The draft also looks to make the compliance burden for companies easier and centralise enforcement through its own AI office. A number of companies, including Facebook and Instagram owner Meta, have warned that the EU's approach to regulating AI risks cutting the continent off from accessing cutting-edge services. A spokesperson said talks were still ongoing within the commission regarding potential delays to "the implementation of targeted parts of the AI act" and that "various options are being considered". The spokesperson added the bloc remained "fully behind the AI Act and its objectives".
[2]
European Commission mulls AI Act delays in face of Trump and business pressure
Spokesperson says 'reflection is still ongoing' over whether to push back 'targeted parts' of landmark legislation The European Commission is considering plans to delay parts of the EU's landmark Artificial Intelligence Act, after intense pressure from businesses and Donald Trump's administration. The commission confirmed that "a reflection is still ongoing" on delaying aspects of the act, after media reports that it was weighing changes to the law with the aim of easing demands on companies. The EU's act, the first comprehensive legislation in the world regulating artificial intelligence, came into force in 2024, but many of its provisions do not yet apply. Most obligations on companies developing high-risk AI systems that "pose serious risks to health, safety or fundamental rights" are not due to come into effect until August 2026 or 2027. According to the Financial Times, the commission is considering giving a one-year "grace period" to companies breaching the rules on the highest-risk AI. Providers of generative AI - systems that can produce content, such as text or images - who have already placed products on the market before the implementation date could be granted a one-year pause from the laws "to provide sufficient time ... to adapt their practices within a reasonable time without disrupting the market", stated an internal document cited by the FT. The commission is also considering delays to imposing fines for violations of its new AI transparency rules until August 2027 to "provide sufficient time for adaptation of providers and deployers of AI systems" to implement the obligations, the paper reported. Also being studied is greater flexibility for AI developers of high-risk systems over monitoring performance of products on the market, by allowing them to follow guidance that would be less prescriptive than the system originally envisaged, according to MLex, which first reported on the planned amendments to the act. 
The proposals could change before their expected release on 19 November. Once published, they would then have to be agreed by EU member states and the European parliament. The EU has come under repeated pressure from the Trump administration to weaken regulation of tech companies. The US president recently threatened to impose tariffs on countries with tech regulations or digital taxes that he deemed to be "designed to harm or discriminate against American technology". Earlier this year Meta announced it would not be signing the commission's code of practice for general-purpose AI models. "Europe is heading down the wrong path on AI," wrote the company's chief global affairs officer, Joel Kaplan, who contended that the code introduced "legal uncertainties" for model developers, as well as measures that went "far beyond the scope of the AI Act". But it is not just US companies that have complained about Europe's regulation of the fast-evolving technology. Dozens of European companies have urged a two-year pause on the act to allow time for "reasonable implementation" and "further simplification of the new rules". An open letter signed by the heads of 46 companies, including Airbus, Lufthansa and Mercedes-Benz, said such a delay "would send innovators and investors around the world a strong signal that Europe is serious about its simplification and competitiveness agenda". The European Commission spokesperson Thomas Regnier said "a reflection is still ongoing within the commission" on potential delays to implementation of "targeted parts of the AI Act". No decision had been taken, he said, and the commission would "always remain fully behind the AI Act and its objectives". The commission, he added, had "constant contacts with our partners around the globe" but it was "not for a third country to decide how we legislate. This is our sovereign right."
[3]
EU eyes tweaks to AI law to heed tech industry 'concerns'
The European Union said Friday it is considering adjustments to its landmark artificial intelligence law, the AI Act, after tech firms and several member states raised concerns. The flagship AI Act entered into force last year but its obligations will kick in over several years -- with pressure mounting on Brussels to delay or review some to ease the burden on the industry. "We are hearing concerns from the industry and from our member states," European Commission digital spokesman Thomas Regnier told reporters. The EU executive is expected on November 19 to unveil a broader package of measures to simplify digital legislation and cut red tape, which Regnier said "would be the appropriate framework to address some of these concerns." "No decision has been taken at this stage, but a reflection is of course ongoing," he said. Asked about pressure from the United States, where industry giants and President Donald Trump's administration have pushed for softer touch EU tech regulations, Regnier said the bloc remained committed to enforcing its rules. "We are fully behind the AI Act, and no pressure from anywhere will impact us," he said. Possible adjustments could include extending implementation deadlines to give companies more time to comply. Henna Virkkunen, the commission's digital chief, had already floated the idea of easing compliance earlier this year, suggesting that some flexibility might be needed to help businesses adapt.
[4]
EU moves to weaken landmark AI Act amid pressure from Trump and U.S. tech giants, according to news report | Fortune
The European Union is considering watering down its flagship AI Act following backlash from Big Tech companies and the US government, according to a report in the Financial Times which cited a draft document outlining the proposed changes that it had seen and an interview with an unnamed senior EU official. The proposed changes are part of the European Commission's recently announced "simplification agenda" and "efforts to create a more favorable business environment" within the bloc. In September, the European Commission opened a call for evidence in an effort to collect research on how to simplify its legislation around data, cybersecurity, and artificial intelligence (AI). The unnamed senior EU official told the Financial Times that Brussels has been "engaging" with the Trump administration on potential adjustments to the AI Act and other digital regulations as part of a broader effort to simplify the legislative framework. Representatives for the European Commission told Fortune the commission "will always remain fully behind the AI Act and its objectives." "When it comes to potentially delaying the implementation of targeted parts of the AI Act, a reflection is still ongoing within the Commission," Thomas Regnier, a Commission spokesperson, said in a statement. "Various options are being considered, but no formal decision has been taken at this stage." Some of the proposed changes are set to affect the EU's landmark AI Act, one of the strictest pieces of AI regulation in the world. Passed in 2024, the act bans certain uses of AI, such as social scoring and real-time facial recognition, and imposes strict rules on the use of AI in areas deemed "high-risk" such as healthcare, policing, and employment. It applies not only to companies within the EU but also to any firm offering AI products or services to Europeans. It also imposes strict transparency requirements on global firms and punishes violations of the law with heavy fines. 
Under a draft proposal reviewed by the Financial Times, companies that have deployed so-called high-risk AI systems could receive a one-year "grace period" before enforcement begins. The delay would allow firms in these high-risk domains that are already deploying AI to make adjustments "without disrupting the market," according to the draft document. The proposal, which remains under internal discussion within the Commission and with EU member states, could still be amended before its expected adoption on November 19. Even once finalized, it would need approval from a majority of EU countries and the European Parliament before being put into practice. The Commission is also considering postponing the start date for penalties related to transparency violations under the new AI Act. If approved, fines for non-compliance would not take effect until August 2027, giving companies and AI developers "sufficient time" to adjust to the new obligations. The Act has been criticized by tech companies and startups, which argue that its rules are overly complex and risk stifling innovation in Europe by creating high compliance costs and bureaucratic hurdles. Global tech firms, including Meta and Alphabet, have warned that the Act's broad definitions of "high-risk" AI could discourage experimentation and make it harder for smaller developers to compete. The Trump administration has also been critical of Europe's regulatory approach to AI. At the Paris AI Summit earlier this year, U.S. Vice President J.D. Vance publicly warned that "excessive regulation" of AI in Europe could cripple the emerging industry, in a rebuke to European efforts, including the AI Act. In contrast, the Trump administration has taken a relatively light-touch approach to AI regulation, arguing instead that innovation should be prioritized amid a global AI arms race with China. Most U.S. AI regulation is being passed at the state level, with California adopting some of the strictest rules for the emerging tech.
[5]
EU Tech Chief eyes AI Act amendments to create legal certainty
A digital simplification package to be presented next week will likely ease the burden on AI companies, Tech Commissioner Henna Virkkunen tells Euronews. The European Commission is planning "targeted amendments" to the bloc's artificial intelligence rulebook next week, Henna Virkkunen, the European Commissioner for Tech Sovereignty, Security and Democracy, told Euronews at the Web Summit tech conference in Lisbon. The AI Act - rules that regulate artificial intelligence tools based on the risks they pose to society - began applying gradually last year. However, the law has faced ongoing criticism from Big Tech companies, as well as the US administration led by Donald Trump, which claim that it stifles innovation. "The next important part [of the AI Act entering into force] will be next August. And there we are really facing challenges because we don't have the [technical] standards yet - and they need to be ready one year before the next phase," Virkkunen said on Tuesday. "Now, we have to look at how we can create legal certainty for our industries, and that's something that we are now considering: how we can support our industries when we don't have the standards in place." Virkkunen added that the amendments to the AI Act - to be presented on 19 November - still need formal approval by the College of Commissioners as a whole. She stopped short of saying how far-reaching those changes will be and whether they will include a formal pause of some of the law's provisions. Virkkunen said that the Commission remains "very committed to the main principles [of the law]". The so-called digital omnibus package, which is an effort by the Commission to cut red tape and make the lives of companies easier by reducing their administrative burden, will also include changes to the EU's data policy and cybersecurity rules.
Pressure on changes to the AI Act

According to drafts of the plans that have been circulating, the simplification package could introduce a one-year grace period, meaning that national authorities can fine misuse only as of August 2027. Earlier this year, CEOs of more than 40 European companies, including ASML, Philips, Siemens and Mistral AI, asked for a "two-year clock-stop" on the AI Act before key obligations enter into force. The Commission has repeatedly said that it is not giving in to any external pressure concerning the possible delay of certain provisions. Michael O'Flaherty, Human Rights Commissioner at the Council of Europe - Europe's leading human rights organisation - warned about the consequences of the simplification plans while speaking to Euronews at Web Summit. "Let's be very careful not to discard the [laws'] core protective elements," O'Flaherty said. "If there's a way to join up multiple regulations in a more efficient manner, fine, but let's not throw out the baby with the bathwater. Let's not give in to the very heavy tech lobby to make life less onerous for tech and, as a result, more risky for us," he said.
[6]
EU considers delaying AI Act rollout amid US and Big Tech pressure
After adopting the AI Act last year, the European Union is considering easing certain provisions amid mounting pressure from the US and Big Tech. The European Union is considering a partial halt to its landmark artificial intelligence laws in response to pressure from the US government and Big Tech companies. The European Commission plans to ease part of its digital rulebook, including the AI Act that took effect last year, as part of a "simplification package" that is to be decided on Nov. 19, the Financial Times reported on Friday. If approved, the proposed halt could allow generative AI providers currently operating in the market a one-year compliance grace period and delay enforcement of fines for violations of AI transparency rules until August 2027. "When it comes to potentially delaying the implementation of targeted parts of the AI Act, a reflection is still ongoing," the commission's Thomas Regnier told Cointelegraph, adding that the EC is working on the digital omnibus to present it on Nov. 19. The commission proposed the first EU AI law in April 2021, with the mission of establishing a risk-based AI classification system. Passed by the European Parliament and the European Council in 2023, the European AI Act entered into force in August 2024, with provisions expected to be implemented gradually over the next six to 36 months. According to the FT, a bulk of the provisions for high-risk AI systems, which can pose "serious risks" to health, safety or citizens' fundamental rights, are set to come into effect in August 2026. With the draft "simplification" proposal, companies breaching the rules on the highest-risk AI use could reportedly receive a "grace period" of one year. Related: EU mulls SEC-like oversight for stock, crypto exchanges to bolster startup landscape The proposal is still subject to informal discussions within the commission and with EU states and could still change ahead of its adoption on Nov. 19, the report noted. 
"Various options are being considered, but no formal decision has been taken at this stage," the EC's Regnier told Cointelegraph, adding: "The commission will always remain fully behind the AI Act and its objectives." The EU's potential suspension of parts of the AI Act underscores Brussels' evolving approach to digital regulation amid intensifying global competition from the US and China. After the US explicitly banned central bank digital currency (CBDC) development in early 2025, the European Central Bank accelerated work on the digital euro but later said that digital cash would not launch before 2029.
[7]
Big Tech may win reprieve as EU mulls easing AI rules, document shows
The move by the European Commission came amid intense lobbying by big tech companies and criticism from the U.S. administration against the AI Act adopted last year, which applies risk-based rules to artificial intelligence. Apple, Meta Platforms and other tech giants may win a reprieve from the EU's landmark artificial intelligence rules as regulators consider easing sections of the legislation as part of a drive to simplify a slew of regulations adopted in the last two years. The move by the European Commission came amid intense lobbying by big tech companies and criticism from the U.S. administration against the AI Act adopted last year, which applies risk-based rules to artificial intelligence. EU tech chief Henna Virkkunen will present the so-called Digital Omnibus on November 19 according to a Commission agenda. The document could still be changed before then. "The Commission is proposing targeted simplification measures aimed at ensuring timely, smooth and proportionate implementation," the draft Digital Omnibus document seen by Reuters said. The changes include exempting companies from registering their AI systems in an EU database for high-risk systems if these are only used for narrow or procedural tasks, and the introduction of a one-year grace period where authorities can only levy penalties from August 2, 2027. A requirement for AI system providers to mark their output as AI-generated content to address concerns such as deepfakes and misinformation will be subject to a transitional grace period, the document said. The EU executive has in recent weeks watered down landmark environmental rules after blowback from companies and the U.S. government.
[8]
EU Slows AI Act Rollout While Industry Calls for Clarity Over Leniency | PYMNTS.com
According to Reuters, the European Commission is "proposing to pause parts of its landmark artificial intelligence legislation amid pressure from big tech companies and the U.S. government." The Financial Times reported that Brussels is considering a "one-year grace period for high-risk AI compliance" and may delay transparency fines until 2027. PYMNTS noted that policymakers are weighing a "yearlong delay in full implementation" as part of a broader effort to give European industries more time to adapt to the rapid rise of generative and autonomous AI systems. The AI Act, which took effect in August 2024, was designed as the world's first comprehensive legal architecture for artificial intelligence. As PYMNTS reported, the law "brings antitrust scrutiny to the heart of artificial intelligence governance" and seeks to regulate both the safety risks and the competitive dynamics surrounding advanced models and data access. But the speed of innovation has complicated that ambition. PYMNTS earlier noted that OpenAI CEO Sam Altman warned EU lawmakers that the regulation "could limit access to AI" if implemented too rigidly, emphasizing that policymakers must avoid creating barriers at a time when foundational models are advancing rapidly. Investors see the same tension playing out across Europe's early-stage ecosystem. Kid Parchariyanon, founder and managing partner of SeaX Ventures, told PYMNTS that the commission's reconsideration underscores challenges observed inside the startup market. "AI is moving faster than any regulatory framework can and what we consistently see is that startups need clarity more than leniency. 
When regulations shift or timelines get pushed, it slows down hiring, partnerships and large-scale deployments, not because companies fear compliance, but because uncertainty is expensive." Parchariyanon said shifting deadlines come with risks of their own, especially for young companies trying to plan around uncertain rules. He noted that "longer compliance windows can be helpful if they give early-stage teams room to test and validate their models responsibly," but warned that when those timelines keep changing, "they risk widening the gap between fast evolving AI systems and the rules meant to keep them accountable." As he told PYMNTS, innovation will continue regardless of regulatory timing. "It just moves to parts of the world where founders feel they can build with confidence." For regulatory technology platforms, the EU's adjustment highlights a growing need for modern oversight tools. Stuart Lacey, founder and CEO of Labrynth, spoke with PYMNTS and described the shift as "a strategic recalibration." He added that this move recognizes that AI innovation, particularly in frontier models and enterprise deployment, is outpacing traditional regulatory cadence. Lacey said the EU's shift reflects an important recalibration but added that timing changes on their own are not enough. "Delay alone does not ensure alignment," he said, emphasizing that oversight must "evolve in parallel, with frameworks that are flexible yet enforceable." He explained that a risk-based model supported by human review can dramatically reduce regulatory friction, noting that it can "eliminate the 80% burden of time, delay and cost, leaving the 20% outcome." Ignoring that approach, he cautioned, only increases the distance "between algorithmic power and public accountability." Enterprise governance leaders see similar pressures inside organizations. 
Anthony Habayeb, co-founder and CEO of Monitaur AI, told PYMNTS, "Extending the AI Act timeline is an acknowledgment that AI is evolving faster than most organizations' ability to govern it. The technology is maturing at an exponential rate, while internal governance programs, documentation practices and enterprise readiness often move at a linear one." Habayeb said the extra time will only be useful if companies treat it as a chance to strengthen their internal guardrails rather than postpone them. He noted that "longer timelines can support more responsible and better tested systems," but only when organizations use the window to build the governance and evidence structures they will need "regardless of regulatory deadlines." He also emphasized that effective oversight must be built into the entire development cycle. The goal, he said, is to "shift governance left," making oversight continuous, transparent and collaborative so it enables better business outcomes instead of slowing innovation.
[9]
EU eyes tweaks to AI law to heed tech industry 'concerns'
The European Union said Friday it is considering adjustments to its landmark artificial intelligence law, the AI Act, after tech firms and several member states raised concerns. The flagship AI Act entered into force last year but its obligations will kick in over several years -- with pressure mounting on Brussels to delay or review some to ease the burden on the industry. "We are hearing concerns from the industry and from our member states," European Commission digital spokesman Thomas Regnier told reporters. The EU executive is expected on November 19 to unveil a broader package of measures to simplify digital legislation and cut red tape, which Regnier said "would be the appropriate framework to address some of these concerns". "No decision has been taken at this stage, but a reflection is of course ongoing," he said. Asked about pressure from the United States, where industry giants and President Donald Trump's administration have pushed for softer touch EU tech regulations, Regnier said the bloc remained committed to enforcing its rules. "We are fully behind the AI Act, and no pressure from anywhere will impact us," he said. Possible adjustments could include extending implementation deadlines to give companies more time to comply. Henna Virkkunen, the commission's digital chief, had already floated the idea of easing compliance earlier this year, suggesting that some flexibility might be needed to help businesses adapt.
[10]
European Commission mulls pausing parts of AI Act
The European Commission is proposing a pause to parts of its AI laws amid intense pressure from big tech companies and the U.S. government, the Financial Times reported. The regulator is set to soften part of its digital rules, including the AI Act that took effect last year. A pause could ease compliance burdens, give companies more time to adapt, and potentially reduce immediate disruption for providers of AI systems. Delays may increase regulatory uncertainty, but might also limit penalties and allow businesses longer adaptation time, affecting competitive dynamics and market access in Europe. Softening rules and engaging with the U.S. could make the EU more competitive globally, aligning digital policy to avoid trade tensions and to attract leading tech innovation.
[11]
EU considers pausing parts of AI Act amid tech industry pressure - report By Investing.com
Investing.com -- The European Commission is considering a pause on certain portions of its artificial intelligence legislation following pressure from major technology companies and the U.S. government, according to a Financial Times report on Friday. Tech giants including Meta (NASDAQ:META) and Alphabet (NASDAQ:GOOGL) have been lobbying for months against aspects of the EU's AI Act. The Trump administration has also expressed concerns, warning that some measures could lead to trade tensions between the U.S. and Europe. A senior EU official told the Financial Times that European authorities have been "engaging" with the Trump administration regarding potential adjustments to the AI Act and other digital regulations. These changes are part of a broader simplification process scheduled for adoption on November 19. The proposed pause represents a significant development for the EU's regulatory approach to artificial intelligence, which has been positioned as a comprehensive framework for governing AI technologies in Europe.
[12]
EU Floats Tweaks, New Grace Periods in AI Act
The European Union could implement new changes to its Artificial Intelligence Act in a bid to make it easier for companies to obey the law, according to a draft proposal seen by The Wall Street Journal. The package is set to be formally presented by the European Commission on Nov. 19, based on a timetable on the European Commission's website, and could still change before then. The text is part of a broader package of measures that Brussels is proposing to simplify the bloc's digital regulations. Under the planned tweaks, some groups whose generative AI systems are already on the market would be offered a grace period to help them comply with obligations on having their systems watermarked to increase transparency. The text also proposes adding clarity as to how the law fits in with the rest of the EU's digital rulebooks, including the Digital Services Act. "The Commission has committed to a clear, simple, and innovation-friendly implementation of the AI Act," the document said. The EU faces significant pressure to water down or delay the implementation of the AI Act from tech companies and lobby groups, which say the rules add red tape and stifle innovation. The law--along with several of the EU's stack of tech regulation--has also stoked tensions with Washington as U.S. officials say the rules are a burden for American tech groups. The AI Act entered into force in August 2024, but enforcement for some of its rules for different types of AI tools has been staggered. Rules on data governance, transparency, documentation and human oversight for what the EU deems high-risk AI systems are due to come into effect on Aug. 2 next year and 2027. A spokesperson for the commission--the bloc's executive arm tasked with drafting EU legislation--said that officials are still considering whether they should seek to delay implementing some parts of the AI Act to help companies comply with it. 
"Various options are being considered but no formal decision has been taken at this stage," commission spokesperson Thomas Regnier said. "The commission will always remain fully behind the AI Act and its objectives." A tweak in the draft proposal is offering a one-year grace period to companies that need to retroactively make their generative AI systems readable as AI-generated or manipulated, meaning authorities wouldn't be able to penalize groups for falling short of the rules until August 2027. The draft also includes delaying fines for companies that don't obey certain transparency requirements by one year to August 2027 "to provide sufficient time for adaptation of providers and deployers of AI systems" to comply with the rules. Speaking to reporters on Friday, Regnier said that EU officials have been in continuous contact with counterparts in the U.S., but that Brussels should ultimately have the sovereignty to write its own rules. "It's not for a third country to decide how we legislate," he said.
[13]
EU weighs pausing parts of landmark AI act in face of US and big tech pressure, FT reports
(Reuters) - The European Commission is proposing to pause parts of its landmark artificial intelligence (AI) legislation amid intense pressure from big tech companies and the U.S. government, the Financial Times reported on Friday. The move follows months of urging by tech giants like Meta and Alphabet, and pressure from the Trump administration, which has warned against measures that could provoke trade tensions. The EU has been "engaging" with the Trump administration on adjustments to the AI act and other digital regulations as part of a wider simplification process, which is due to be adopted on November 19, a senior EU official told the FT. Reuters could not immediately verify the report. The EU did not immediately respond to Reuters requests for comment. In July, a spokesperson for the European Commission dismissed calls from some companies and countries for a pause, saying the AI rules would be rolled out according to the legal timeline in the legislation. Talks were continuing within the commission regarding potential delays to "the implementation of targeted parts of the AI act," a spokesperson for the EU told the FT. They added that while various options were being considered, the EU remained "fully behind the AI act and its objectives." The legislation came into force in August 2024 but many of the provisions are staggered to come into effect in coming years. (Reporting by Abu Sultan in Bengaluru; Editing by Kim Coghill and Kate Mayberry)
The European Commission is weighing amendments to its landmark AI Act, potentially introducing grace periods and delayed enforcement in response to intense lobbying from US tech giants and the Trump administration. The proposed changes aim to balance regulatory compliance with industry competitiveness concerns.
The European Commission is preparing to announce significant amendments to its landmark Artificial Intelligence Act, following sustained pressure from the Trump administration and major technology companies. According to draft proposals reviewed by the Financial Times, the Commission will present a "simplification package" on November 19 that could substantially alter the implementation timeline of the world's most comprehensive AI regulation [1].
The proposed changes include introducing a one-year "grace period" for companies operating high-risk AI systems that breach the rules, allowing them to adapt their practices "without disrupting the market" [2]. Additionally, the Commission is considering delaying the imposition of fines for AI transparency rule violations until August 2027, providing companies with additional time to implement compliance obligations.

The amendments come amid fierce lobbying from both American tech giants and European companies. Meta, the parent company of Facebook and Instagram, has warned that the EU's regulatory approach risks cutting Europe off from accessing cutting-edge AI services [1]. The company's chief global affairs officer, Joel Kaplan, argued that Europe's code of practice for AI models introduces "legal uncertainties" and measures that exceed the scope of the AI Act [2].

Domestic pressure has also intensified, with 46 European companies, including Airbus, Lufthansa, and Mercedes-Benz, signing an open letter calling for a two-year pause on the Act's implementation. These companies argue that such a delay would signal Europe's commitment to its "simplification and competitiveness agenda" [2].
The Trump administration has explicitly threatened to impose tariffs on countries with tech regulations deemed harmful to American technology companies [2]. EU officials acknowledge "engaging" with the Trump administration on potential adjustments to the AI Act as part of broader simplification efforts, though they maintain that external pressure will not dictate European legislation [1].

Fears of provoking retaliatory measures from the White House, including potential disruptions to intelligence sharing or weapon supplies for Ukraine, have influenced Brussels' approach to digital regulation enforcement. The geopolitical dimension adds complexity to what began as a primarily regulatory and economic debate [1].
Tech Commissioner Henna Virkkunen highlighted practical challenges facing the AI Act's implementation, particularly the absence of technical standards required one year before the next phase takes effect in August 2026. "We don't have the [technical] standards yet - and they need to be ready one year before the next phase," Virkkunen explained at the Web Summit conference [5].
The Commission is exploring ways to create "legal certainty" for industries operating without finalized technical standards, suggesting that practical implementation concerns complement the political and economic pressures driving the proposed amendments [5].

The AI Act, which entered into force in August 2024, represents the first comprehensive global legislation regulating artificial intelligence. Its provisions apply gradually, with most obligations for high-risk AI systems scheduled to take effect in August 2026. The legislation bans certain AI applications, including social scoring and real-time facial recognition, while imposing strict requirements on AI systems deemed high-risk in sectors such as healthcare, policing, and employment [4].

Human rights advocates have expressed concern about the proposed simplifications. Michael O'Flaherty of the Council of Europe warned against discarding "core protective elements" and cautioned against yielding to "heavy tech lobby" pressure that could make technology "more risky for us" [5].

The proposed amendments require approval from both EU member states and the European Parliament before implementation, ensuring continued debate over balancing innovation incentives with regulatory protection.