[1]
Europe's Ready to Dilute Its Tough Rules on Privacy. You Can Blame AI for That
Europe has long been a global leader in regulating Big Tech, but it is now considering changes that would weaken its landmark privacy legislation, the General Data Protection Regulation, or GDPR. In a move designed to unlock access to data essential for AI across the region, the European Commission on Wednesday published proposals for a "digital simplification strategy." These include rolling back some GDPR protections, simplifying cookie permission pop-ups and delaying the introduction of AI regulation.

Europe introduced the GDPR in 2018. It was designed to give European citizens more knowledge, power and control over who was able to access and use their personal data. The regulation went on to inform the development of similar laws elsewhere in the world, including California's privacy legislation.

The EU was ahead of the curve when it came to regulating tech, but at the same time no serious competitors have emerged from within Europe to rival the AI companies coming out of the US and China. The bloc has also been under pressure from US tech companies and the Trump administration to lessen the regulatory burdens they face in the region. In the US, the White House has been pushing hard for unfettered development of artificial intelligence technologies. Over the summer it unveiled America's AI Action Plan, which among other things called for the removal of red tape and "onerous regulation."

In a press release, the executive vice president of the Commission, Henna Virkkunen, called the proposed changes to the GDPR "a face-lift with targeted amendments ... that reflect how technology has evolved." The aim of the measures, she added, is to encourage AI development.

An "attack" on European rights?

As the Commission noted in its proposal on Wednesday, member states consider the GDPR to be an effective and balanced piece of legislation. It is framing the proposed changes as a way to "harmonise, clarify and simplify" the application of the regulation. European privacy campaigners see it differently. "This is the biggest attack on Europeans' digital rights in years," said Austrian privacy campaigner Max Schrems, best known for taking legal action against Meta (formerly Facebook) over privacy violations. "When the Commission states that it 'maintains the highest standards', it clearly is incorrect. It proposes to undermine these standards."

Some campaigners worry that the proposed changes to the GDPR are a sign that the EU is kowtowing to Big Tech. It is unlikely that the changes would allow Europe to begin challenging the dominance of the US and China in AI, said Johnny Ryan, director of the Enforce unit at the Irish Council for Civil Liberties. "Today's proposal from European Commission to revise the GDPR will entrench the dominance of US and Chinese digital giants, and harm European startups and [small to medium businesses]," he said.
"Europe's problem is not that it has too many rules for data and AI, but that it hypes those rules and then neglects to enforce them." According to Schrems, the proposed reform of the GDPR seems primarily designed to remove obstacles that could prevent AI companies from using personal data for AI. "Artificial intelligence may be one of the most impactful and dangerous technologies for our democracy and society," he said. "Nevertheless, the narrative of an 'AI race' has led politicians to even throw protections out of the window that should have exactly protected us from having all our data go into a big opaque algorithm."
[2]
Europe is scaling back its landmark privacy and AI laws
After years of staring down the world's biggest tech companies and setting the bar for tough regulation worldwide, Europe has blinked. Under intense pressure from industry and the US government, Brussels is stripping protections from its flagship General Data Protection Regulation (GDPR) -- including simplifying its infamous cookie permission pop-ups -- and relaxing or delaying landmark AI rules in an effort to cut red tape and revive sluggish economic growth.

The proposed overhaul won't land quietly in Brussels, and if the development of the GDPR and AI Act is anything to go by, a political and lobbying firestorm is on its way. The GDPR is a cornerstone of Europe's tech strategy and as close to sacred as a policy can be. Leaked drafts have already provoked outrage among civil rights groups and politicians, who have accused the Commission of weakening fundamental safeguards and bowing to pressure from Big Tech. The decision follows months of intense pressure from Big Tech and Donald Trump -- as well as high-profile internal figures like ex-Italian prime minister and former head of the European Central Bank Mario Draghi -- urging the bloc to weaken burdensome tech regulation. With very few exceptions, Europe doesn't have any credible competitors in the global AI race, which is dominated by US and Chinese companies like DeepSeek, Google, and OpenAI.
[3]
How the EU botched its attempt to regulate AI
The birth of the Artificial Intelligence Act was a drawn-out, exasperating affair. In December 2023, European officials laboured for 36 hours to agree on the legislation considered to be a world first. "I saw my colleagues at their wits' end," says Laura Caroli, who took part as an assistant to one of the European parliament's key negotiators, and is a former senior fellow at think-tank CSIS. The stakes were high. Success would mean Brussels leading the way in crafting comprehensive rules for a technology forecast to transform the global economy. "There was massive international pressure," says Caroli. Finally, at about 1am on December 9, the negotiators had a deal. Then commissioner Thierry Breton swapped his gilet vest for a suit jacket, the exhausted lawmakers popped open bottles of sparkling wine and then headed to face the press. Nearly two years on, and the mood in Brussels is decidedly less triumphant. The AI Act was designed to use Europe's economic heft to force companies to create "trustworthy AI" for its 450mn consumers through a risk-based approach: banning the most harmful uses, controlling high-risk systems and lightly regulating low-risk ones. But the law's complexity, its rushed inclusion of AI models such as ChatGPT and its chaotic implementation have turned the AI Act from a symbol of European leadership into a case study for those who say the continent puts regulation ahead of innovation. On Wednesday, the European Commission postponed a key part of its landmark AI rules -- the first formal acknowledgment that Brussels is struggling with its own legislation. Now, officials and companies are grappling with two questions: did the EU go too far, too fast? And how can it fix the regulation to avoid the European economy missing the AI boat just as it struggles to compete with its geopolitical rivals? Staying in the race for global dominance of AI is key for Europe. Its failure to create or scale tech giants has widened the productivity gap with the US. After failing to lead on other technologies, Brussels wants the EU to be an "AI continent" but the bloc is struggling to develop Europe's AI ecosystem and to accelerate investment in the technology on a par with global superpowers such as the US and China. Whether the EU succeeds in fixing its rules matters far beyond Europe's borders. The AI Act is the world's first attempt to regulate a technology with the potential not only to disrupt every sector of the global economy but also to spiral out of control, with unpredictable consequences. If Brussels waters down its legislation beyond relevance, the question becomes who else -- if anyone -- may lay down guardrails. Ironically, Europe's push to regulate AI was originally intended to help the new technology flourish. Regulating the market into existence became one of the priorities of the then new commission of Ursula von der Leyen in 2019. She announced she would introduce the world's first law for AI within 100 days of her new commission. At the time, there were concerns that public mistrust in AI products would lead to a slowdown in the development of the technology in Europe. While negotiating the act, European lawmakers were heavily influenced by AI news coming out of the US, where facial recognition tools had led to false arrests, and various credit-scoring algorithms had led to biased outcomes. To control those risks, Brussels wanted to combine its traditional product safety law with fundamental rights protection, such as preventing mass surveillance. 
At the time, there was a global consensus that AI needed rules, says Caroli. That was the case even for big technology companies; in 2020, Mark Zuckerberg visited Brussels to meet top European commissioners, calling for governments to come up with new rules for online content. There were also several AI governance initiatives that influenced each other, including AI principles from the OECD and the G7, and an executive order from the first Trump administration offering "regulatory principles" to develop AI according to American values. "The more we create trustworthy AI, the more Europe will be well positioned to adopt AI: that was essentially the goal," says Gabriele Mazzini, who was the lead author of the AI Act at the European Commission. But when OpenAI's ChatGPT burst on to the scene in late 2022, the AI Act -- with negotiations already well under way -- was hastily rewritten to include rules for general-purpose AI models, systems that can generate text, images or code and can be deployed for many different purposes. The original draft created by the EU's executive arm had no reference at all to large language models, which until then had been viewed as largely experimental. The AI Act "is not a ChatGPT regulation", says Daniel Leufer, senior policy analyst at digital rights group Access Now. "It was never designed to be. And there was a sort of complex process of shoehorning that started happening after ChatGPT was released." On top of that, an open letter from the Future of Life Institute in March 2023, signed by the likes of Elon Musk, Apple co-founder Steve Wozniak, prominent computer scientists Yoshua Bengio and Stuart Russell and writer Yuval Noah Harari, called for a six-month pause in the development of powerful AI systems until proper safeguards were in place. Mazzini says it "skewed completely the political conversation". Instead of just regulating how companies and the public sector use AI tools, lawmakers now felt like they had to also regulate how the AI systems themselves were built -- and ensure that legislation covered the biggest models. "The parliament was completely unanimous in wanting to regulate at least ChatGPT," says Caroli. "What are we even doing here if we exit this [negotiating] room without regulating ChatGPT? Basically we will be completely ineffective." Mazzini also pointed to a "regulatory hype in Brussels that was detrimental to the good outcome of the negotiations and pushed for even more urgency to close the file". The European parliamentary elections of June 2024 were quickly approaching and negotiators did not want to miss the political window of opportunity. That political deadline, external "interference" and the discussion around existential threats by artificial intelligence "created a mix where common sense seemed to have been lost", Mazzini says. For Patrick Van Eecke, co-chair of law firm Cooley's global cyber, data and privacy practice, the commission made a "fundamental error" at the start of the regulatory process by treating AI as a product instead of as a process. Under regulations governing static products, such as an elevator, companies have to meet safety requirements to put it on the market. But while an elevator will still be doing the exact same thing in 20 years, Van Eecke says, AI is a dynamic process that is constantly evolving. "This makes it impossible to apply hard-coded requirements," he says. When the AI Act officially entered into force in August 2024, many considered it half-baked. 
The act required a wide range of additional legislation to set up codes of practice, guidelines and standards so companies knew how to implement the law. Many provisions only came into effect gradually, sometimes with delays and uncertainty about the exact timelines. It has muddied the waters for companies, says Elisabetta Righini, a lawyer at Sidley Austin who advises companies on the AI Act. "It takes time even for large digital companies to prepare for compliance with new rules," she says. "The act left a number of details to be determined through implementing regulations and guidelines. Uncertainty is never great for businesses."

Even if they can figure out how to comply, companies say the rules present a huge logistical burden as they assess the risks of their AI systems and put mechanisms in place to meet the transparency and accountability standards. The resulting costs create the opposite of a level playing field in the sector, points out Van Eecke, the lawyer. "Larger, more mature companies will find it easier to be compliant and have even more competitive advantage than start-ups." AI start-ups warn that the present situation creates an environment where large companies thrive while scale-ups and start-ups can only survive. Alexandru Voica, head of corporate affairs at London-based AI start-up Synthesia, says: "A lot of these scale-ups will never be able to reach the same sort of size and impact as American or Chinese companies, because they are buried under all of this regulation."

But large companies are equally frustrated by the act. Big Tech companies such as Meta have argued that the bloc is cutting itself off from cutting-edge services, because companies delay or avoid rolling out certain AI features for fear of non-compliance with the AI Act. In July, large companies including Airbus, BNP Paribas, Mercedes-Benz and TotalEnergies urged the European Commission to halt the AI Act's timeline for two years to simplify the rules and allow time for companies to implement them. Even Mazzini, an architect of the act, now concedes that it is too broad, complex and "doesn't provide the legal certainty that is needed".

The public outcry made the European Commission change its tune. The backlash against the AI Act coincided with a change in priorities in Brussels. Ever since von der Leyen started her second mandate last year, boosting competitiveness has been the thread that runs through her key policy proposals. At the same time, the global conversation about AI shifted dramatically. The fear-mongering about the existential risks of AI turned into a race for global dominance in the fast-developing technology -- a race now defined by the power tussle between Washington and Beijing. For Henna Virkkunen, who took over as European commissioner for digital and frontier technologies last December, the focus has shifted from regulation to innovation and attracting investment, ensuring Europe does not fall behind on AI.

In February, Brussels withdrew planned rules to ensure that people harmed by AI systems would enjoy more protection, the so-called AI liability directive, as part of a broader deregulatory push. This week, that shift in rhetoric culminated in a proposed delay to the implementation of the high-risk AI rules -- a key part of the AI Act -- by at least a year. The delay is controversial and still needs sign-off from the European parliament and EU countries, which are divided about the implementation of the flagship AI law.
The one-year delay also falls short of the open-ended pause that former European Central Bank president Mario Draghi has suggested. Still, proponents of the AI Act see the move as a setback. Kim Van Sparrentak, a Green European lawmaker, says the bloc should still be proud it was the first with rules for safe AI and that this continues to be a competitive advantage. Jurisdictions such as Japan, Brazil and California have imposed similar transparency rules on AI models, for example. "The commission should stand by it and work day and night to make it a success," says Van Sparrentak. "This means following through and leading the way for businesses and people in Europe, not going back on our promises as soon as Trump and Big Tech complain."

For others, such as Bengio, a Turing Award winner who is considered one of the "godfathers" of AI, criticism around the AI Act is simply "propaganda" by companies that dislike the legislation. He argues that the AI Act merely formalises safety protocols that AI labs are already following and brings more transparency. "What it's going to do is level the playing field so that all the companies will rise to the level of the best players," he tells the FT.

But it is clear Brussels feels it still has work to do to find the right balance between regulation and innovation on AI, so that it can fulfil its dual goals of making European companies global AI frontrunners while setting the rules for the rest of the world. Virkkunen, the EU's tech chief, said on Wednesday that the bloc continued to stand behind its high standards "because EU regulation is a trust mark for businesses. We are the one place on the planet that has framed the rules of the game in this way, to protect our values and fundamental rights." At the same time, she acknowledged regulation alone was not enough. "We must also move on from rulemaking to innovation building. Our rules should not be a burden, but an added value."

But critics say that the delay to the act's rollout announced on Wednesday will only complicate things further. Mazzini has been urging Brussels towards a complete rethink of the act. "There is time to be courageous and it's time to say, 'OK, we need to rethink this completely.' I think this will be the wisest decision to make." For others, it is too soon to go back to the drawing board when the law is still in its infancy. "Lots of people are jumping to conclusions," says Risto Uuk, head of EU policy and research at the Future of Life Institute, which has lobbied for stricter protections for high-risk AI systems. "And that's probably premature because we haven't really had it. We're [only] just implementing it."

Either way, Big Tech has successfully injected a narrative in Brussels that the two goals -- regulating while innovating and adopting AI -- are contradictory. But that does not necessarily have to be the case, says Anu Bradford, a Columbia University law professor who coined the term "Brussels effect" back in 2012. The discussion about the AI Act is a "sideshow" to the main challenges facing European competitiveness, she says, which include its fragmented single market, attracting the right talent and finding the necessary funding for innovation. "The solution for the European competitiveness deficit in innovation is not simplification. We are not going to be an AI power just by scrapping the AI Act," Bradford says.
"There are more fundamental battles that we need to be fighting for, pushing those other reforms forward instead of just arguing over how much we're scaling back legislation." If nothing else, the law has at least put Europe at the centre of the debate over how to regulate the technology, says Dan Nechita, who helped negotiate the act as an assistant to one of the AI Act's key negotiators in the European parliament and is now at Vanguard Europe, a consultancy. "It gave us a seat at the table. It gave us the leverage and the visibility and the weight that we unfortunately back then didn't have in artificial intelligence," Nechita says.
[4]
EU to delay 'high risk' AI rules until 2027 after Big Tech pushback
BRUSSELS/STOCKHOLM, Nov 19 (Reuters) - The European Commission proposed on Wednesday streamlining and easing a slew of tech regulations, including delaying some provisions of its AI Act, in an attempt to cut red tape, head off criticism from Big Tech and boost Europe's competitiveness. The move by the EU comes after it watered down some environmental laws after blowback from business and the U.S. government. Europe's tech rules have faced similar opposition, though the Commission has said the rules will remain robust.

"Simplification is not deregulation. Simplification means that we are taking a critical look at our regulatory landscape," a Commission official said during a briefing.

In a 'Digital Omnibus', which will still face debate and votes from European countries, the Commission proposed to delay the EU's stricter rules on the use of AI in a range of areas seen as more high risk, to December 2027 from August 2026. That includes AI use in biometric identification, road traffic applications, utilities supply, job applications and exams, health services, creditworthiness and law enforcement. Consent for pop-up 'cookies' would also be simplified. The Digital Omnibus or simplification package covers the AI Act, which became law last year, the landmark privacy legislation known as the General Data Protection Regulation (GDPR), the e-Privacy Directive and the Data Act, among others. Proposed changes to the GDPR would also allow Alphabet's Google, Meta, OpenAI and other tech companies to use Europeans' personal data to train their AI models.

Reporting by Supantha Mukherjee in Stockholm and Jan Strupczewski and Foo Yun Chee in Brussels
[5]
European policymakers want to ease AI and privacy laws
European policymakers have proposed sweeping changes to the way the EU regulates the tech industry. In just the last few months, the likes of Meta and Google have questioned strict EU policies relating to privacy and AI expansion, but if the European Commission's new package of proposals is passed, a number of big tech roadblocks will be removed. Or at least lifted up a bit. Changes to rules around AI, cybersecurity and data will, according to policymakers, generate growth for European businesses, while "promoting Europe's highest standards of fundamental rights, data protection, safety and fairness."

Among the proposals are amendments to the AI Act -- which Google has recently expressed concerns about -- that would allow AI companies to access shared personal data for training models. The Commission also wants to simplify paperwork for smaller companies and to make AI literacy a requirement for member states. AI oversight would also be centralized in the AI Office where general-purpose AI models are being used, a move intended to "reduce governance fragmentation." In addition, strict rules around the use of AI in areas deemed to be high-risk, which were expected to come in next summer, could be delayed until the Commission confirms that "the needed standards and support tools" are available to affected companies.

The infamous (and admittedly very annoying) cookie banners that are foundational to the EU's General Data Protection Regulation (GDPR) would also be rethought under the Commission's proposals. If approved, people would see these banners pop up less often, give their consent with one click, and save their cookie preferences so they could presumably be applied automatically within a browser.

The European Commission's "digital omnibus" now goes to the European Parliament for approval, where it could face serious opposition. While the proposals are likely to be welcomed by the rapidly growing AI industry, sceptics could argue that watered-down privacy and AI legislation is evidence of Europe bowing to pressure from big tech and Donald Trump, who has publicly criticized the EU's digital regulation. This would represent a marked turnaround from the EU's long-standing reputation as the tech industry's most stubborn adversary. Back in September, it rejected calls from Apple to repeal its Digital Markets Act (DMA), a legal framework that the EU has repeatedly accused Apple of violating. In the summer, Meta refused to sign the EU's AI Code of Practice, with its global affairs officer, Joel Kaplan, calling the code an "over-reach."
[6]
The EU wants to make some major changes to GDPR - could big tech be getting its way at last?
Other changes aim to simplify processes and remove outdated or unnecessary steps. The European Union has revealed plans to simplify all of its digital legislation in response to concerns that regulatory complexity might be hampering innovation and competitiveness, particularly within the realm of AI. Portions of the GDPR, ePrivacy, Data Governance Act, Data Act and Free Flow of Non-Personal Data Regulation will be simplified, as will the AI Act. Ultimately, the aim is to make it easier for companies to process anonymized or pseudonymized data by allowing the use of certain personal data for AI training. The GDPR will introduce clearer definitions and mechanisms to distinguish anonymized data that is suitable for training.

Separately from AI, the EU also wants to address so-called cookie banner fatigue. Non-risk cookies will no longer require consent pop-ups, and users will be able to manage consent centrally at the browser level (which websites will then need to respect). Furthering simplification, several separate data-related laws will be merged into the Data Act.

"The accumulation of rules has sometimes had an adverse effect on competitiveness," the European Commission explained, hence the sizeable changes set out by Commission President von der Leyen for her 2024-2029 term. The shift also aims to deal with outdated laws that no longer fit an evolving digital landscape, which looks very different from the one in which many of these laws were first introduced. At the moment, though, the changes are only in the consultation phase and are not confirmed. The Commission also stressed the importance of consulting with SMEs, which make up a considerable portion of the European business landscape. "Provided the proposal enters into force by early 2027, the Digital Omnibus could amount to at least EUR 5 billion in administrative cost savings for businesses by the end of the Commission mandate in 2029, as well as a further EUR 1 billion for public authorities," the Commission wrote.
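For readers wondering what "processing pseudonymised rather than raw personal data" for AI training can look like in practice, a minimal Python sketch follows. It is not drawn from the proposal or any Commission guidance; the record fields, the keyed-hash approach and the choice of which fields count as direct identifiers are assumptions made purely for illustration.

```python
import hashlib
import hmac

# Illustrative only: a minimal pseudonymisation pass over user records before
# they enter a training pipeline. The GDPR changes described above do not
# prescribe any particular technique; the field names and the keyed-hash
# approach here are assumptions for the sake of the example.
SECRET_KEY = b"rotate-and-store-this-key-outside-the-dataset"

def pseudonymise(record: dict) -> dict:
    """Replace assumed direct identifiers with keyed hashes and drop contact data."""
    out = dict(record)
    for field in ("user_id", "email"):  # assumed direct identifiers
        if field in out:
            digest = hmac.new(SECRET_KEY, str(out[field]).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]
    out.pop("phone_number", None)  # drop fields with no training value
    return out

records = [{"user_id": 42, "email": "a@example.com", "phone_number": "+3531234567",
            "prompt": "recommend a laptop", "clicked": True}]
training_rows = [pseudonymise(r) for r in records]
print(training_rows)
```

Whether keyed hashing alone counts as adequate pseudonymisation for a given dataset is exactly the kind of question the clearer definitions mentioned above would have to settle; the sketch only shows the general shape of the step.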
[7]
European Commission accused of 'massive rollback' of digital protections
Proposed changes to the AI Act would make it easier for tech firms to use personal data to train models without consent. The European Commission has been accused of "a massive rollback" of the EU's digital rules after announcing proposals to delay central parts of the Artificial Intelligence Act and water down its landmark data protection regulation. If agreed, the changes would make it easier for tech firms to use personal data to train AI models without asking for consent, and would try to end "cookie banner fatigue" by reducing the number of times internet users have to give their permission to be tracked on the internet.

The commission also confirmed its intention to delay the introduction of central parts of the AI Act, which came into force in August 2024 and does not yet fully apply to companies. Companies making high-risk AI systems, namely those posing risks to health, safety or fundamental rights, such as those used in exam scoring or surgery, would get up to 18 months longer to comply with the rules. The plans were part of the commission's "digital omnibus", which tries to streamline tech rules including the GDPR, the AI Act, the ePrivacy directive and the Data Act.

After a long period of rule-making, the EU agenda has shifted since the former Italian prime minister Mario Draghi warned in a report last autumn that Europe had fallen behind the US and China in innovation and was weak in the emerging technologies that would drive future growth, such as AI. The EU has also come under heavy pressure from the Trump administration to rein in digital laws. The EU's economy commissioner, Valdis Dombrovskis, said: "Europe has not so far reaped the full benefits of the digital revolution and we cannot afford to continue to pay the price for failing to keep up with a changing world." He added that the measures would save business and consumers €5bn in administrative costs by 2029. They are part of the bloc's wider drive for "simplification", with plans under way to scale back regulation on the environment, company reporting on supply chains and agriculture. Like these other proposals, the digital omnibus will need to be approved by EU ministers and the European parliament.

European Digital Rights (EDRi), a pan-European network of NGOs, described the plans as "a major rollback of EU digital protections" that risked dismantling "the very foundations of human rights and tech policy in the EU". In particular, it said that changes to the GDPR would allow "the unchecked use of people's most intimate data for training AI systems" and that a wide range of exemptions proposed to online privacy rules would mean businesses would be able to read data on phones and browsers without asking.

European business groups welcomed the proposals but said they did not go far enough. A representative from the Computer and Communications Industry Association, whose members include Amazon, Apple, Google and Meta, said: "Efforts to simplify digital and tech rules cannot stop here." The CCIA urged "a more ambitious, all-encompassing review of the EU's entire digital rulebook". Critics of the shake-up included the EU's former commissioner for enterprise, Thierry Breton, who wrote in the Guardian that Europe should resist attempts to unravel its digital rulebook "under the pretext of simplification or remedying an alleged 'anti-innovation' bias. No one is fooled over the transatlantic origin of these attempts."
The commission's vice-president in charge of tech policy, Henna Virkkunen, pushed back against suggestions that Brussels was responding to US pressure. "We want to support our start ups, our SMEs to scale up their businesses to innovate in the EU," she said. "We are not so much here looking at big industries or the very big tech companies ... They have also the resources to comply with different rules." She also rejected claims that the AI Act was being watered down, saying that action was needed to prevent European start-ups from moving to other jurisdictions. Michael McGrath, the EU commissioner responsible for the GDPR, said most of the feedback on the proposals had come from companies in the EU. He said the commission was introducing "targeted amendments to GDPR" that clarified existing concepts and principles while "ensuring a high level of data protection across the EU". EU officials said users would remain in control of their data on the internet, but new rules on cookies - the internet files that are stored on a user's device so a website can remember them - would make life simpler by ensuring one-click consent. "I think we can all agree we have spent too much of our time accepting or rejecting cookies," Virkkunen said.
[8]
EU moves to delay 'high-risk' AI rules amid pressure to boost innovation
The European Commission on Wednesday proposed easing key AI and data privacy rules to help Europe's tech sector compete globally, despite criticism that the bloc is retreating from its role as a digital watchdog. The EU executive proposed rolling back key AI and data privacy rules on Wednesday as part of a push to slash red tape and help Europe's high-tech sector catch up with global rivals. The landmark EU tech rules have faced powerful pushback from the US administration under President Donald Trump - but also from businesses and governments at home complaining they risk hampering growth.

Brussels denies bowing to outside pressure, but it has vowed to make businesses' lives easier in the 27-nation bloc - and on Wednesday it unveiled proposals to loosen both its rules on artificial intelligence and data privacy. Those include:

* giving companies more leeway to use datasets, including personal data, to train AI models when it is "for legitimate interests"
* giving companies extra time - up to 16 months - to apply 'high-risk' rules on AI
* in a plan many Europeans will welcome, reducing the number of cookie banner pop-ups users see, which Brussels says can be done without putting privacy at risk.

"We have talent, infrastructure, a large internal single market. But our companies, especially our start-ups and small businesses, are often held back by layers of rigid rules," EU tech chief Henna Virkkunen said in a statement. After cheering the so-called "Brussels effect", whereby EU laws were seen as influencing jurisdictions around the world, European lawmakers and rights defenders increasingly fear the EU is withdrawing from its role as Big Tech's watchdog. Campaigners from different groups, including People vs Big Tech, drove across Brussels on Wednesday with large billboards calling on EU chief Ursula von der Leyen to stand up to Trump and the tech sector and defend the bloc's digital rules.

Striking a 'balance'

The commission says the plans will help European businesses catch up with American and Chinese rivals - and reduce dependence on foreign tech giants. For many EU states, the concern is that the focus on regulation has come at the expense of innovation - although Brussels insists it remains committed to protecting European citizens' rights. But experts say the EU lags behind the bigger economies for several reasons, including its fragmented market and limited access to the financing needed to scale up. The EU raced to pass its sweeping AI law, which entered into force last year, but dozens of Europe's biggest companies - including Airbus, Lufthansa and Mercedes-Benz - called for a pause on the parts they said risked stifling innovation. Brussels met them part of the way by agreeing to delay applying provisions on "high-risk" AI - such as models that could endanger safety, health or citizens' fundamental rights.

With the proposed change on cookie banners, an EU official said the bloc wanted to address "fatigue" at the pop-ups seeking users' consent for tracking on websites, and "reduce the number of times" the windows appear. The commission wants users to be able to indicate their consent with one click, and to save cookie preferences through settings in browsers and operating systems. Brussels has insisted European users' data privacy will be protected.
"It is essential that the European Union acts to deliver on simplification and competitiveness while also maintaining a high level of protection for the fundamental rights of individuals -- and this is precisely the balance this package strikes," EU justice commissioner Michael McGrath said.
[9]
France, Germany support simplification push for digital rules
The Commission, with French and German backing, plans to ease AI and data rules to reduce the burdens on European companies. Members of Parliament and NGOs fear it will open a "Pandora's box" of negative ramifications.

As the European Commission prepared to simplify digital rules with a new omnibus plan due to be presented on Wednesday, Berlin rolled out the red carpet at a glitzy summit dedicated to digital sovereignty. "I'm very curious about what tomorrow will bring. Hopefully it's a big bold step in the right direction," said Karsten Wildberger, the German Minister for Digital Transformation, on a panel at the Berlin summit.

The European Commission has been working for months on a new proposal to "simplify" rules and reduce administrative burdens for companies, in particular SMEs, which struggle to comply with complex EU rules, in order to keep talent in Europe and stay competitive in a global race. The Commission, supported by France and Germany, hopes that the digital simplification plan to be announced on Wednesday after months of negotiations will "save billions of euros and boost innovation". Still, the text has been met with skepticism among the progressive forces of the European Parliament and civil society, who cite a dismantling of protections.

The text proposes to modify the rules on data protection and the recently adopted AI Act. According to a draft version, the rules for "high-risk AI systems" - AI technologies used for sensitive purposes such as analysing CVs, evaluating school exams or assessing loan applications - which were originally scheduled to take effect in August 2026, are now expected to be delayed until December 2027. The European Commission cites difficulties in establishing the necessary standards as the reason for the postponement. Under the original text, the classification of a system as "high-risk" would have been evaluated by a national authority. The leaked draft, which is still to be officially approved, suggests that this provision would be replaced by a simple self-assessment, potentially weakening the safeguards intended to ensure compliance with the rules.

Anne Le Hénanff, French Minister for AI and Digital Affairs, said during the Berlin summit that she supports the postponement. "The AI Act now comes with too many uncertainties. These uncertainties are slowing our own ability to innovate," she said, before adding: "The United States and China are leading the way in the AI race. We simply cannot afford to hinder our companies' ability to innovate." It is a position shared by the German minister, Karsten Wildberger, who said that his country also supports a delay, adding that "it's important to continue this conversation because the world is moving so fast that we have to continuously rework the rules." He prefers a "learn-by-mistakes" approach: "We do not rule out from ex ante all the risks. Let's first build the products, and then take very seriously how these products work - that they are safe, that we have the right processes in place."

Resistance from Parliament to opening a damaging Pandora's box

Still, members of the European Parliament fear that the Commission's proposal will open a "Pandora's box", increase risks for consumers and ultimately benefit US Big Tech, according to MEPs consulted by Euronews, who did not wish to be named because the Commission's plan is not yet official and talks are ongoing. They suggested big tech companies have been dragging their feet to avoid complying with the current rules and have spent more than ever on lobbying.
Members of the European Parliament from political groups ranging from the traditional majority to The Left and the centrist-liberal Renew have already signalled their intention to vote against the proposal. Other provisions include exemptions from reporting obligations for smaller companies, and a delay to the labelling of AI-generated content until 2027. Recently, deepfakes created with AI disrupted the Irish presidential election, with a viral AI video depicting a fake version of presidential candidate Catherine Connolly saying she was withdrawing from the race.

Another part of the omnibus focuses on simplifying the General Data Protection Regulation (GDPR). It aims to make it easier to access data for training AI models, reduce the number of cookie prompts displayed to users, and harmonise GDPR implementation across all member states. At present, national authorities interpret data protection obligations differently, which can lead to inconsistencies. Online rights advocates believe that the omnibus overreaches its mandate to the point of undermining fundamental rights. A letter signed by three major NGOs and addressed to Commissioner Virkkunen reads: "The legislative changes now contemplated go far beyond mere simplification. They would deregulate core elements of the GDPR, the e-Privacy framework and AI Act, significantly reducing established protections."

On Wednesday, the Commission will also launch a "digital fitness check" to examine how effective existing digital rules, such as the Digital Services Act and the Digital Markets Act, are, and to identify areas where they may overlap. This could prompt another wave of simplification on the part of the Commission. "We are going to have a deeper dive into our regulation also, and after that we will also propose the next simplification effort," said Commissioner Henna Virkkunen.
[10]
EU to revise GDPR, AI Act as part of regulatory simplification push
The European Union has proposed a set of regulatory changes designed to make it easier for companies to comply with the bloc's AI Act and GDPR privacy law. The European Commission, the EU's executive arm, announced the initiative today. The legislative updates are expected to save up to €5 billion in administrative costs for businesses by 2029. The move follows calls by U.S. tech giants to simplify the EU's data regulations. "Our companies, especially our start-ups and small businesses, are often held back by layers of rigid rules," said European Commission executive vice president Henna Virkkunen. "By cutting red tape, simplifying EU laws, opening access to data and introducing a common European Business Wallet we are giving space for innovation to happen and to be marketed in Europe."

The first batch of proposals focuses on the AI Act, a law the EU implemented last year to regulate artificial intelligence systems. Some of the rules set forth in the legislation apply only to AI systems that regulators deem to be high-risk. Under the proposed changes, the EU would delay the implementation of those rules from August 2026 to December 2027. The AI Act includes certain regulatory exceptions for small and medium-sized businesses, as well as small-cap publicly traded companies. The EU is proposing to expand those exceptions. The change would encompass, among others, clauses that reduce the amount of technical documentation eligible companies must create about AI systems. The European Commission estimates that the regulatory update could save at least €225 million annually. Officials are also pushing for certain other modifications to the AI Act. Notably, the EU hopes to expand companies' access to regulatory sandboxes, controlled environments in which a tech firm can test a new AI system before broadly releasing it.

The second set of changes the EU proposed today focuses on the GDPR, the bloc's privacy law. Officials are proposing updates that would reduce the frequency at which cookie consent banners show up on websites. Under the revised version of the GDPR, consumers would gain the ability to accept cookies with one click. Furthermore, the changes would make it possible to centrally save cookie preferences in a browser or an operating system. That arrangement would reduce the need to individually configure privacy settings for each website.

The EU is also seeking to modify a piece of legislation called the Data Act. The law, which went into effect last year, requires cloud providers to offer a simple way for consumers to move their account data to competing services. The Data Act also sets forth certain other requirements, including that industrial equipment manufacturers enable customers to access data generated by their hardware. According to the European Commission, the proposed changes would expand AI developers' access to training datasets. Additionally, officials plan to release resources that will make it easier to comply with the Data Act. Those resources will include standardized contractual terms meant to make it easier for cloud providers to draft customer agreements.
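As a rough sketch of how a website backend might act on a centrally stored, browser-level consent signal of the kind described above, rather than rendering its own banner, consider the following. The proposal does not define a wire format, so the header names used here (the existing Global Privacy Control "Sec-GPC" header, borrowed purely as a stand-in, plus a hypothetical "X-Consent-Preferences" header) and the cookie categories are assumptions made only for illustration.

```python
# Sketch of a server-side check against a browser-level consent signal,
# instead of showing a cookie banner on every visit. Header names and the
# essential/analytics split are illustrative assumptions, not the EU mechanism.

ESSENTIAL_COOKIES = {"session_id"}             # always allowed: needed to run the site
OPTIONAL_COOKIES = {"analytics_id", "ad_id"}   # only set when consent is signalled

def allowed_cookies(request_headers: dict[str, str]) -> set[str]:
    """Return the cookie names this response may set, based on browser signals."""
    # "Sec-GPC: 1" means the user has asked not to be tracked; no stored
    # preference means we fall back to essentials only rather than prompting.
    opted_out = request_headers.get("Sec-GPC", "").strip() == "1"
    consent = request_headers.get("X-Consent-Preferences", "")  # hypothetical header
    if opted_out or "analytics=allow" not in consent:
        return set(ESSENTIAL_COOKIES)
    return ESSENTIAL_COOKIES | OPTIONAL_COOKIES

print(allowed_cookies({"Sec-GPC": "1"}))                               # essentials only
print(allowed_cookies({"X-Consent-Preferences": "analytics=allow"}))   # full set
```

The point of the sketch is the design shift the proposal describes: the preference lives with the browser or operating system and sites read it, rather than each site collecting consent through its own pop-up.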
[11]
EU claims cuts to GDPR, AI rules will help business growth
From left: Henna Virkkunen, Valdis Dombrovskis and Michael McGrath. Image: European Union, 2025

The EU wants to consolidate all rules around data into two major laws, the Data Act and the GDPR. The EU wants to cut down and streamline rules around AI, cybersecurity and data in a bid to better compete against the US tech industry and promote scale-up growth at home. Alongside simplifying rules, reporting mechanisms and paperwork requirements, the Commission is also loosening restrictions by making more data available for AI use cases. The new package, the EU said, is estimated to save up to €5bn in administrative costs by 2029. Businesses would save a further €150bn annually with the newly proposed single digital identity for simplifying paperwork, it added.

What's being cut down?

The digital omnibus proposes consolidating all rules around data into two major laws - the Data Act and the General Data Protection Regulation (GDPR) - while the AI Act and the various laws around cybersecurity are seeing amendments aimed at reducing administrative burden. Among the changes, the Commission is proposing to start applying the rules that govern high-risk AI systems only once the necessary standards and tools are made available to businesses, adjusting the timeline for applying these rules by a maximum of 16 months. Meanwhile, the new Data Union Strategy expands access to high-quality data for AI. In addition, the EU wants to broaden compliance measures so more innovators can use regulatory sandboxes from 2028, as well as allow more real-world testing so companies can fine-tune their AI systems. The proposal also argues for centralising the EU AI Office's power over AI systems built on general-purpose AI models, which the Commission says would reduce governance fragmentation. Targeted amendments to the AI Act will save SMEs and other smaller companies at least €225m a year, the EU said.

The landmark AI Act entered into force last August, with specific sections of the law entering into application through a staggered approach so businesses have enough time to comply. However, the EU says that stakeholder consultations throughout the year have revealed challenges around the rules' roll-out that need addressing. Meanwhile, the EU is also targeting companies' reporting obligations around cybersecurity incidents. Currently, businesses must report cyber incidents under several laws, including the NIS2 Directive, the GDPR and the Digital Operational Resilience Act. The EU wants to change this in order to create a single interface where companies can meet all incident-reporting obligations. Similarly, the EU also wants to cut down on the GDPR to "harmonise, clarify and simplify certain rules", which, it says, will boost innovation and support compliance. Another of its targets is the cookie banner pop-ups on websites; this was proposed a few months ago and is aimed at reducing both the regulatory burden for businesses and consent fatigue for users.

The digital omnibus' legislative proposals will be submitted to the European Parliament and the Council for adoption. There is also a stress-test consultation on how the digital rulebook delivers on its competitiveness objective and on the coherence and cumulative impact of the EU's digital rules. "We have all the ingredients in the EU to succeed. We have talent, infrastructure, a large internal single market.
But our companies, especially our start-ups and small businesses, are often held back by layers of rigid rules," said EU executive vice-president for tech sovereignty, security and democracy Henna Virkkunen. "By cutting red tape, simplifying EU laws, opening access to data and introducing a common 'European Business Wallet' we are giving space for innovation to happen and to be marketed in Europe." Late last month, the Commission announced a new multibillion-euro 'Scaleup Europe Fund', which is expected to boost investment in scale-ups and close the gap with global deep-tech leaders.
[12]
Why Europe's craven capitulation to Big Tech confirms that the wheels have come off the Omnibus of supposed digital ethical leadership
After decades of macho posturing and pious hectoring of the Americans on how they ought to be running their tech industry, the Eurocrats caved to pressure from the silicon lobby today as the European Commission (EC) proposes watering down tough legislation that was previously pitched by Brussels as being essential to a future based on personal rights and democratic freedoms. The launch of the Digital Omnibus is being sold as a tidying-up of existing legislation, a sorting out of overlapping areas of laws such as GDPR, the e-Privacy Directive, the Data Act and the AI Act, and hey, look folks, less red tape! Isn't that nice for everyone?

In reality, as EU tech chief Henna Virkkunen wipes a little speck of self-righteousness from her eye and parrots the party line at the launch, it's actually a humiliating climbdown from Europe's self-appointed high moral ground, and it removes any legitimacy the next time some MEP clambers to their feet to declaim the bloc's latest set of commandments and empty threats to Silicon Valley. Among the most startling reverse ferrets is a decision to allow tech firms to use personal data to train AI models without asking for consent from the owner of that data, if there is a 'legitimate interest' - a woolly catch-all term that will keep armies of lawyers very happy for a long, long time. Companies will also be exempt from having to register their AI systems in an EU database for high-risk systems if these are only used for 'narrow' or 'procedural' tasks.

In the past, diginomica has been highly critical of the more hardline and pompous proclamations coming out of the European Union with regard to the US tech sector, many of which have been nakedly anti-American in tone and futilely protectionist in intent. And we've not been alone in that assessment - successive administrations in Washington have made similar accusations, including the more outwardly-facing ones, such as those of Barack Obama or Joe Biden. Today, with the febrile MAGA movement still in full flow, the Trump 2.0 White House has made no secret of its contempt for the whole concept of the EU, which the man himself argues (if that's not too strong a word) was literally set up to specifically undermine the USA. It's utter rot as a thesis, of course, but when the MAGA faithful is ready to trump (sorry!) the White Queen's commitment to believe "six impossible things before breakfast" if they read them on Truth Social, what's a little thing like historical fact got to do with anything?

Europe's climbdown began late in 2024 when EC President Ursula von der Leyen first announced the proposed Omnibus. She remains an enthusiastic backer of the idea, using her State of the European Union address in September to talk about cutting €8 billion a year in bureaucratic costs for European companies as a result of the 'simplification' of rules. The French Government has been particularly keen on pushing the revisions through, with President Macron's administration going further and calling for an indefinite delay to rolling out further regulations around sustainability and corporate compliance, with the President insisting: We need to make a massive regulatory break, but we also need to review regulations, including recent ones, which are hampering our ability to innovate. Just this week, Macron called for a specific 12-month delay to the AI Act provisions regulating high-risk artificial intelligence systems, a move that was backed by Germany and Denmark. So is that it done?
Is the capitulation complete, or will the goalposts keep on shifting a little bit more? On the face of it, things are far from settled. Every EU member state is going to have to buy into this revisionism, and already we've had the likes of Dutch MEP Kim van Sparrentak, herself the co-author of a number of pieces of legislation designed to curb Big Tech, setting out her stall and declaring: It is disappointing to see the European Commission cave under the pressure of the Trump administration and Big Tech lobbies.

Elsewhere, a powerful coalition of MEPs under the banner of the S&D - the Group of the Progressive Alliance of Socialists & Democrats in the European Parliament - issued a statement insisting: Our European digital framework is more than a collection of individual acts. It is a regulatory model that has inspired international partners and positioned Europe as a normative power in global tech governance. Backtracking or deregulating would weaken the EU's influence in ongoing global dialogues on data protection, AI, and cybersecurity. Maintaining regulatory coherence is thus not only a matter of internal governance, but also of strategic sovereignty and credibility abroad. And it warns: The S&D will firmly oppose any attempt to reduce the level of protection owed to our citizens. Signatories to this declaration include powerful voices, such as those of Alex Agius Saliba MEP, Vice-President for Digital Agenda, and Brando Benifei MEP, AI Act Rapporteur.

Meanwhile, in an open letter to the Commission, 127 civil rights organizations dubbed the Digital Omnibus "the biggest rollback of digital fundamental rights in EU history", while three other groups - European Digital Rights (EDRi), the Irish Council for Civil Liberties (ICCL), and noyb, the European Center for Digital Rights - also issued a public condemnation, bluntly warning of worse to come: The Digital Omnibus does not stand in isolation. It forms part of a broader de-regulation trend that risks hollowing out hard-won protections across social, environmental and digital policy areas under the guise of simplification. Similar approaches have already weakened or delayed essential safeguards in areas such as due diligence, environmental standards and consumer protection. This erosion of the EU's rights-based model undermines the Union's credibility as a democratic and evidence-based regulator. It also fuels public mistrust at a time when adherence to the rule of law and the protection of fundamental rights should be strengthened, not weakened.

And Renew Europe, the pro-European and centrist political group in the European Parliament, warned that the Omnibus moves would weaken data protection, legitimize intrusive profiling and expand non-consensual tracking: We stand ready to work with the Commission to streamline our acquis and reduce burdensome legislation, particularly for smaller actors and for our innovative technology and AI sectors. However, this must never come at the expense of our European values. We will stand firmly against those measures that purport to simplify the acquis but will undermine our privacy standards or weaken the protection of fundamental rights...We shouldn't undermine individuals' fundamental rights. We must champion a path that achieves both a Europe that is an economic leader and, at the same time, the global standard-bearer for fundamental rights.

Well, who could have predicted that? All that smug self-congratulatory talk about Europe setting the global agenda around tech standards and regulation.
Sound and fury signifying nothing! Or not very much, at the very least, as soon as Big Tech sends in the lobbyist dogs and there's someone sitting in the Oval Office who makes no secret of how little respect he has for you on the international stage. I'm all for simplification of regulations and the slashing back of red tape wherever possible, but this capitulation to the interests of Big Tech is a dangerous precedent and one that will have a long-term impact on the lives of millions. Shame on the spineless Eurocrats and inward-investment-chasing political opportunists who've pushed us to this point. And all power to those who will now have to take a stand to stop it in its current form from doing the damage it threatens to do. German Minister for Digital Transformation Karsten Wildberger, who supports kicking AI Act restrictions down the track, said at this week's European Digital Sovereignty summit in Berlin that the reason for doing this is: The world is moving so fast that we have to continuously re-work the rules. While that will be music to the ears of some in Washington and their paymasters in Silicon Valley, it's just an excuse for never doing anything.
[13]
Under pressure, EU to scale back digital rules
Brussels (Belgium) (AFP) - The EU will unveil plans on Wednesday to overhaul its AI and data privacy rules after coming under pressure from European and US companies. The proposals are part of the bloc's push to cut red tape to drive greater economic growth and help European businesses catch up with American and Chinese rivals -- and reduce dependence on foreign tech giants. EU tech chief Henna Virkkunen will present the plans on Wednesday alongside the justice commissioner in charge of data protection, Michael McGrath. Brussels has dismissed claims that its push to "simplify" its digital rules -- deeply unpopular in the United States -- is the result of pressure from US President Donald Trump's administration. For many EU states, the concern is that the focus on regulation has come at the expense of innovation -- although Brussels insists it remains committed to protecting European citizens' rights.

Berlin hosted a Franco-German summit on Tuesday focused on propelling the bloc to lead in the AI race, during which France's Emmanuel Macron said Europe does not want to be a "vassal" dependent on US and Chinese tech companies. Once proud of the "Brussels effect" -- referring to the influence many EU laws had on other jurisdictions around the world -- European lawmakers and rights defenders fear the EU appears to be withdrawing from being Big Tech's watchdog. The EU executive has its eye on changes to its landmark data protection rules and the AI law that only entered into force last year. There could be one proposal in Wednesday's package that would bring joy to nearly all Europeans: Brussels wants to tackle the annoying cookie banners that demand users' consent for tracking on websites.

Crumbling cookie banners

Based on draft documents that could still change, and on accounts from EU officials, Brussels plans to:

- redefine personal data and how companies can use it, for example allowing firms to process such data to train AI models "for purposes of a legitimate interest" -- though rights defenders have warned this could downgrade users' privacy
- pause for one year the implementation of many provisions on high-risk AI, for example models that can pose dangers to safety, health or citizens' fundamental rights -- a move that will please American and European firms. Instead of taking effect next year, the provisions would apply from 2027.

Dozens of Europe's biggest companies, including France's Airbus and Germany's Lufthansa and Mercedes-Benz, had called in July for a pause on the AI law, which they warn risks stifling innovation. Brussels has insisted European users' data privacy will be protected.

'Complex' rules

One lawmaker from EU chief Ursula von der Leyen's conservative EPP grouping supported the push to simplify the digital rules. "Europe's problem is in the excessive complexity and inconsistency of the rules we already have. Laws built in silos, overlapping obligations, and uneven enforcement create uncertainty for businesses and fracture the single market," MEP Eva Maydell told AFP. But von der Leyen could face a difficult road ahead, as the changes will need the approval of both the EU parliament and member states. Her camp's main coalition partners have already expressed concern. In letters sent to the European Commission last week, socialist EU lawmakers said they oppose any delay to the AI law, while the centrists warned they would stand firm against any changes that put privacy at risk.
[14]
EU to delay 'high risk' AI rules until 2027 after Big Tech pushback
The European Commission proposed easing and delaying parts of major tech laws, including shifting high-risk AI rules to 2027. The "Digital Omnibus" aims to cut red tape while maintaining strong regulation, simplifying cookie consent and allowing limited personal-data use for AI training under revised GDPR rules.

The European Commission proposed on Wednesday streamlining and easing a slew of tech regulations, including delaying some provisions of its AI Act, in an attempt to cut red tape, head off criticism from Big Tech and boost Europe's competitiveness. The move by the EU comes after it watered down some environmental laws after blowback from business and the U.S. government. Europe's tech rules have faced similar opposition, though the Commission has said the rules will remain robust. "Simplification is not deregulation. Simplification means that we are taking a critical look at our regulatory landscape," a Commission official said during a briefing.

'High risk' AI use in job applications, biometrics

In a 'Digital Omnibus', which will still face debate and votes from European countries, the Commission proposed to delay the EU's stricter rules on the use of AI in a range of areas seen as more high risk, to December 2027 from August 2026. That includes AI use in biometric identification, road traffic applications, utilities supply, job applications and exams, health services, creditworthiness and law enforcement. Consent for pop-up 'cookies' would also be simplified. The Digital Omnibus or simplification package covers the AI Act which became law last year, the landmark privacy legislation known as the General Data Protection Regulation (GDPR), the e-Privacy Directive and the Data Act, among others. Proposed changes to the GDPR would also allow Alphabet's Google, Meta, OpenAI and other tech companies to use Europeans' personal data to train their AI models.
[15]
EU Calls for 'Cutting Red Tape' for Tech Companies | PYMNTS.com
The project, announced Wednesday (Nov. 19), centers around a "digital omnibus" that streamlines rules on artificial intelligence (AI), cybersecurity and data. According to the European Commission -- the EU's regulatory arm -- the plan would also be coupled with what it calls a Data Union Strategy to unlock high-quality data for AI, and European Business Wallets that will provide companies with a single digital identity to simplify paperwork and make it easier to do business in EU countries. Henna Virkkunen, the EU's executive vice president for tech sovereignty, security and democracy, said the plan will help startups and small businesses hindered by strict regulations. "By cutting red tape, simplifying EU laws, opening access to data and introducing a common European Business Wallet we are giving space for innovation to happen and to be marketed in Europe," Virkkunen said in a news release. "This is being done in the European way: by making sure that fundamental rights of users remain fully protected."

The plan follows efforts by American tech companies, along with the White House, to get the EU to reform its regulations. A report by Bloomberg News noted that even within Europe, there are critics who warn the region could fall behind the US and China in the AI race. With the European Business Wallet, companies will be able to digitalize operations and interactions so they won't need to carry them out in person. It will let businesses digitally sign, timestamp and seal documents; create, store and exchange verified documents; and enjoy secure communications with other businesses or public administrations throughout the EU. According to the commission's announcement, the plan also calls for simplified cybersecurity reporting, arguing that the current system requires companies to "report cybersecurity incidents under several different laws." This package creates "a single-entry point where companies can meet all incident-reporting obligations," the announcement added. To boost innovation and compliance, the commission is also proposing "targeted amendments" to the General Data Protection Regulation (GDPR) that "will harmonize, clarify and simplify certain rules" without reducing data protection standards. The ultimate goal, the commission said, is to reduce administrative burdens by at least 25% -- and 35% for small businesses -- by the end of 2029.
[16]
EU to delay 'high risk' AI rules until 2027 after Big Tech pushback
BRUSSELS/STOCKHOLM (Reuters) - The European Commission proposed on Wednesday streamlining and easing a slew of tech regulations, including delaying some provisions of its AI Act, in an attempt to cut red tape, head off criticism from Big Tech and boost Europe's competitiveness. The move by the EU comes after it watered down some environmental laws after blowback from business and the U.S. government. Europe's tech rules have faced similar opposition, though the Commission has said the rules will remain robust. "Simplification is not deregulation. Simplification means that we are taking a critical look at our regulatory landscape," a Commission official said during a briefing.

'HIGH RISK' AI USE IN JOB APPLICATIONS, BIOMETRICS

In a 'Digital Omnibus', which will still face debate and votes from European countries, the Commission proposed to delay the EU's stricter rules on the use of AI in a range of areas seen as more high risk, to December 2027 from August 2026. That includes AI use in biometric identification, road traffic applications, utilities supply, job applications and exams, health services, creditworthiness and law enforcement. Consent for pop-up 'cookies' would also be simplified. The Digital Omnibus or simplification package covers the AI Act which became law last year, the landmark privacy legislation known as the General Data Protection Regulation (GDPR), the e-Privacy Directive and the Data Act, among others. Proposed changes to the GDPR would also allow Alphabet's Google, Meta, OpenAI and other tech companies to use Europeans' personal data to train their AI models. (Reporting by Supantha Mukherjee in Stockholm and Jan Strupczewski and Foo Yun Chee in Brussels)
The European Commission proposes significant changes to its landmark GDPR privacy law and AI Act, including simplified cookie banners and delayed AI regulations, marking a major shift from Europe's traditionally tough stance on tech regulation.
The European Commission announced sweeping changes to its tech regulatory framework on Wednesday, proposing significant rollbacks to both the General Data Protection Regulation (GDPR) and the AI Act. The moves represent a dramatic shift from Europe's traditionally aggressive stance toward Big Tech regulation, coming after months of intense lobbying from major technology companies and pressure from the Trump administration [1].
The proposed "Digital Omnibus" package includes simplifying the notorious cookie permission pop-ups that have become synonymous with European web browsing, allowing users to give consent with one click and save preferences across browsers. More significantly, the changes would permit tech giants like Google, Meta, and OpenAI to use Europeans' personal data to train their AI models
4
.Perhaps most notably, the Commission proposed delaying stricter AI regulations for high-risk applications until December 2027, pushing back the original August 2026 timeline by over a year. These delayed rules cover AI use in biometric identification, road traffic applications, utilities supply, job applications, health services, creditworthiness, and law enforcement
4
.
The AI Act itself has faced significant implementation challenges since EU institutions reached political agreement on it in December 2023. Originally designed as a risk-based approach to AI regulation, the legislation was hastily rewritten to include rules for general-purpose AI models like ChatGPT after the technology's explosive popularity. This rushed inclusion has created what critics describe as a "chaotic implementation" that has turned the Act from a symbol of European leadership into a cautionary tale [3].

The proposed changes have sparked fierce opposition from privacy campaigners across Europe. Austrian privacy advocate Max Schrems, known for his legal battles against Meta over privacy violations, called the proposals "the biggest attack on Europeans' digital rights in years." Schrems argued that the reforms appear primarily designed to remove obstacles preventing AI companies from using personal data, warning that politicians are "throwing protections out of the window" in pursuit of an "AI race" [1].

Johnny Ryan, director of the Enforce unit at the Irish Council for Civil Liberties, criticized the changes as likely to "entrench the dominance of US and Chinese digital giants" rather than helping European companies compete. Ryan argued that Europe's problem isn't excessive regulation but rather the failure to enforce existing rules effectively [1].
The Commission's reversal comes amid mounting pressure from multiple sources. The Trump administration has been pushing for reduced regulatory burdens on US tech companies operating in Europe, while high-profile figures like former Italian Prime Minister Mario Draghi have urged the bloc to weaken tech regulation to boost economic competitiveness [2].

Europe's struggle to develop credible competitors to US and Chinese AI giants has intensified these pressures. With very few exceptions, European companies have failed to emerge as serious contenders in the global AI race dominated by companies like DeepSeek, Google, and OpenAI [2].

Executive Vice President Henna Virkkunen framed the changes as "a face-lift with targeted amendments that reflect how technology has evolved," emphasizing the goal of encouraging AI development while maintaining high standards [1].

The proposals now face debate and votes from European countries, with significant political opposition expected given the GDPR's status as a cornerstone of European tech policy [5].

Summarized by Navi