Curated by THEOUTPOST
On Mon, 3 Feb, 8:00 AM UTC
12 Sources
[1]
EU details which systems fall within AI Act's scope | TechCrunch
The European Union has published guidance on what constitutes an AI system under its new AI Act. The risk-based framework for regulating applications of artificial intelligence came into force last summer -- with the first compliance deadline (on banned use cases) kicking in last weekend. Determining whether a particular software system falls within the act's scope will be a key consideration for AI developers -- with the risk of fines of up to 7% of global annual turnover for breaches. The Commission's 13-page guidelines are likely to be closely parsed by companies. That said, as with EU guidance on prohibited uses put out earlier this week, the advice is non-binding. The Commission also adds that the guidance is "designed to evolve over time and will be updated as necessary, in particular in light of practical experiences, new questions, and use cases that arise". Given how fast-paced the AI field can be, the task of understanding how the law applies is likely to remain a work in progress. "No automatic determination or exhaustive lists of systems that either fall within or outside the definition of an AI system are possible," the EU warns in its advice document. Set your expectations accordingly.
[2]
EU publishes guidelines on banned practices under AI Act
While not legally binding, the guidelines offer the EU's interpretations of the banned practices. The first set of obligations under the European Union Artificial Intelligence (AI) Act kicked in on 2 February. Now, the Commission has published guidelines on prohibited AI practices. Under the guidelines, companies may not use an AI system to infer their employees' emotions, or use AI to "assess" a person's "risk" of committing a criminal offence. The guidelines also prohibit the use of AI-enabled "dark patterns" to coerce individuals into actions they would not otherwise take, and of chatbots that use subliminal messaging to manipulate people into making harmful financial decisions. The guidelines, published yesterday (4 February), seek to increase legal clarity around which AI applications the Commission prohibits; they have not yet been adopted, nor are they legally binding, with the EU stating that the ultimate authority to interpret the AI Act rests with the courts. "The guidelines are designed to ensure the consistent, effective, and uniform application of the AI Act across the European Union. This initiative underscores the EU's commitment to fostering a safe and ethical AI landscape," the Commission said in a press release yesterday. The AI Act, which entered into force last August, is a landmark regulation meant to bring the growing power of AI under legal control. The European Law Institute's president Pascal Pichonnaz, in an interview last year, told SiliconRepublic.com that the Act's flexibility would allow it to adapt to new risks. The Act lays down rules around the use and deployment of AI, dividing AI systems into risk categories; some uses, such as social security benefits providers using AI to evaluate people, are categorised as posing "unacceptable risks" to fundamental rights and prohibited outright.
Penalties under the AI Act are hefty, with developers engaging in prohibited practices liable for fines of up to €35m or 7pc of their total global annual turnover, whichever is higher. The next set of obligations, centred around general-purpose AI models, will be applicable in August, according to the Commission's timeline, while most of the AI Act will be fully applicable by August next year.
[3]
EU puts out guidance on uses of AI that are banned under its AI Act | TechCrunch
The first compliance deadline kicked in a couple of days ago for the European Union's AI Act, a risk-based framework for regulating uses of artificial intelligence -- banning a narrow selection of so-called "unacceptable risk" use-cases of AI, such as social scoring that could lead to detrimental or unfavourable treatment, or harmful manipulation using "subliminal techniques". On Tuesday the EU's executive body, the Commission, followed up that deadline by putting out guidance for developers on how to comply with this aspect of the bloc's AI rulebook. Developers seeking help with staying on the right side of the law when it comes to applying artificial intelligence in the region are likely to be keen to parse the advice. Breaches of the law's rules on prohibited use-cases can attract the stiffest penalties: up to 7% of global turnover (or €35 million, whichever is greater). "The guidelines are designed to ensure the consistent, effective, and uniform application of the AI Act across the European Union," the Commission wrote in a press release. However, it acknowledged that the guidance it has produced is not legally binding -- it will, ultimately, be up to regulators and courts to enforce and adjudicate the AI Act. "The guidelines provide legal explanations and practical examples to help stakeholders understand and comply with the AI Act's requirements," the Commission added, saying the initiative "underscores [its] commitment to fostering a safe and ethical AI landscape". The guidelines have been published today in draft form: formal adoption and application remains pending as the EU still needs to produce translations in the bloc's myriad official languages. While the AI Act became law across the region last year, its implementation continues, with additional compliance deadlines set to kick in over the coming months and years.
Enforcement is likely to be further staggered -- even in the case of the prohibited use-cases -- since EU Member States have until August 2 to designate the bodies responsible for overseeing the rulebook.
[4]
European Union issues guidance on how to not violate the AI Act's 'prohibited use' section
Companies worldwide are now officially required to comply with the European Union's expansive AI Act, which seeks to mitigate many of the potential harms posed by the new technology. The EU Commission on Tuesday issued additional guidance on how firms can ensure their generative models measure up to the Union's requirements and remain clear of the Act's "unacceptable risk" category for AI use cases, which are now banned within the economic territory. The AI Act was voted into law in March 2024; however, the first compliance deadline came and passed just a few days ago, on February 2, 2025. The EU has banned eight uses of AI specifically:

- Harmful AI-based manipulation and deception
- Harmful AI-based exploitation of vulnerabilities
- Social scoring
- Individual criminal offence risk assessment or prediction
- Untargeted scraping of the internet or CCTV material to create or expand facial recognition databases
- Emotion recognition in workplaces and education institutions
- Biometric categorisation to deduce certain protected characteristics
- Real-time remote biometric identification for law enforcement purposes in publicly accessible spaces

Companies found in violation of the prohibited use cases could face fines totaling 7% of their global turnover (or €35 million, whichever is greater). This is only the first of many similar compliance deadlines that will be enforced in the coming months and years as the technology evolves. While the Commission does concede that these guidelines are, in and of themselves, not legally binding, it does note in its announcement that "the guidelines are designed to ensure the consistent, effective, and uniform application of the AI Act across the European Union." "The guidelines provide legal explanations and practical examples to help stakeholders understand and comply with the AI Act's requirements," the Commission added. Don't expect violators to be dragged into court in the immediate future, however.
The AI Act's rules are being implemented gradually over the next two years, with the final phase occurring on August 2, 2026.
[5]
AI systems with 'unacceptable risk' are now banned in the EU | TechCrunch
As of Sunday in the European Union, the bloc's regulators can ban the use of AI systems they deem to pose "unacceptable risk" or harm. February 2 is the first compliance deadline for the EU's AI Act, the comprehensive AI regulatory framework that the European Parliament finally approved last March after years of development. The act officially went into force August 1; what's now following is the first of the compliance deadlines. The specifics are set out in Article 5, but broadly, the Act is designed to cover a myriad of use cases where AI might appear and interact with individuals, from consumer applications through to physical environments. Under the bloc's approach, there are four broad risk levels: (1) Minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have light-touch regulatory oversight; (3) high risk -- AI for healthcare recommendations is one example -- will face heavy regulatory oversight; and (4) unacceptable risk applications -- the focus of this month's compliance requirements -- will be prohibited entirely. Companies that are found to be using any of the prohibited AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to €35 million (~$36 million), or 7% of their annual revenue from the prior fiscal year, whichever is greater. The fines won't kick in for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch. "Organizations are expected to be fully compliant by February 2, but ... the next big deadline that companies need to be aware of is in August," Sumroy said. "By then, we'll know who the competent authorities are, and the fines and enforcement provisions will take effect." The February 2 deadline is in some ways a formality.
Last September, over 100 companies signed the EU AI Pact, a voluntary pledge to start applying the principles of the AI Act ahead of its entry into application. As part of the Pact, signatories -- which included Amazon, Google, and OpenAI -- committed to identifying AI systems likely to be categorized as high risk under the AI Act. Some tech giants, notably Meta and Apple, skipped the Pact. French AI startup Mistral, one of the AI Act's harshest critics, also opted not to sign. That isn't to suggest that Apple, Meta, Mistral, or others who didn't agree to the Pact won't meet their obligations -- including the ban on unacceptably risky systems. Sumroy points out that, given the nature of the prohibited use cases laid out, most companies won't be engaging in those practices anyway. "For organizations, a key concern around the EU AI Act is whether clear guidelines, standards, and codes of conduct will arrive in time -- and crucially, whether they will provide organizations with clarity on compliance," Sumroy said. "However, the working groups are, so far, meeting their deadlines on the code of conduct for ... developers." There are exceptions to several of the AI Act's prohibitions. For example, the Act permits law enforcement to use certain systems that collect biometrics in public places if those systems help perform a "targeted search" for, say, an abduction victim, or to help prevent a "specific, substantial, and imminent" threat to life. This exemption requires authorization from the appropriate governing body, and the Act stresses that law enforcement can't make a decision that "produces an adverse legal effect" on a person solely based on these systems' outputs. The Act also carves out exceptions for systems that infer emotions in workplaces and schools where there's a "medical or safety" justification, like systems designed for therapeutic use. 
The European Commission, the executive branch of the EU, said that it would release additional guidelines in "early 2025," following a consultation with stakeholders in November. However, those guidelines have yet to be published. Sumroy said it's also unclear how other laws on the books might interact with the AI Act's prohibitions and related provisions. Clarity may not arrive until later in the year, as the enforcement window approaches. "It's important for organizations to remember that AI regulation doesn't exist in isolation," Sumroy said. "Other legal frameworks, such as GDPR, NIS2, and DORA, will interact with the AI Act, creating potential challenges -- particularly around overlapping incident notification requirements. Understanding how these laws fit together will be just as crucial as understanding the AI Act itself."
[6]
The EU is now enforcing the AI Act, banning high-risk AI systems - SiliconANGLE
The first requirements of the European Union's AI Act have now come into force, slamming the door shut on artificial intelligence systems deemed "unacceptably risky". The first compliance deadline for the AI Act came into force on February 2, giving regulators the power to erase entire products if they decide it's necessary to do so. Moreover, the EU is warning AI makers that if they decide to try to break its rules, they'll potentially be hit with a fine of up to €35 million (around $36 million), or 7% of their global revenue - whichever costs more. Lawmakers in the European Parliament approved the AI Act in March last year after years of wrangling over the fine points. The Act came into force on August 1, but it's only now that regulators have the power to clamp down on those who don't comply. The Act specifies a number of AI systems that it deems unacceptable, including those similar to China's dystopian social credit system, which adjusts people's credit ratings based on their behavior or reputation. In addition, the Act also bans AI systems that try to subvert people's choices through sneaky tricks like subliminal messaging. AI systems that attempt to profile vulnerable people, such as those affected by disabilities or underage people, are also off the table. In addition, law enforcement agencies and other organizations are not allowed to use systems that attempt to predict whether someone will commit a criminal offense based on their facial features. Real-time monitoring systems for law enforcement are also tightly regulated, and can only be used in some very specific situations. What that means is the police are unable to use facial recognition tools at public events or subway stations to try to identify terrorist suspects, for example.
Other systems, such as those that mine biometric data and make generalizations about people's political beliefs, gender and sexual orientation, have also been banned. Also culled are "emotion-tracking" AI systems, except for a few instances where they're tied to medical treatment or safety. The EU notes that its bans apply to any company that operates within the borders of its member states, so even U.S. firms based outside the EU cannot offer such systems to EU citizens. The Act is the most concrete effort to regulate the use of AI by any government organization so far, and the majority of U.S. technology companies have indicated that they're willing to comply with it. In September, more than 100 organizations, including Google LLC, Amazon.com Inc., Microsoft Corp. and OpenAI, signed a voluntary pledge known as the "EU AI Pact", in which they agreed to start complying with the regulations before they came into force. A number of high-profile companies refused to join the initiative, however. Meta Platforms Inc., Apple Inc. and the French AI startup Mistral refused to go along with the pact, saying that the EU's regulations are too rigid and will stifle innovation in the AI industry. Their refusal to sign the pact doesn't mean they're exempted from the law, though, so if they operate any AI systems that contravene the EU's rules they will be slapped with some heavy fines all the same.
[7]
EU Lays Out Guidelines on Misuse of AI by Employers, Websites and Police
BRUSSELS (Reuters) - Employers will be banned from using artificial intelligence to track their staff's emotions and websites will not be allowed to use it to trick users into spending money under EU AI guidelines announced on Tuesday. The guidelines from the European Commission come as companies grapple with the complexity and cost of complying with the world's first legislation on the use of the technology. The Artificial Intelligence Act, binding since last year, will be fully applicable on Aug. 2, 2026, with certain provisions kicking in earlier, such as the ban on certain practices from Feb. 2 this year. "The ambition is to provide legal certainty for those who provide or deploy the artificial intelligence systems on the European market, also for the market surveillance authorities. The guidelines are not legally binding," a Commission official told reporters. Prohibited practices include AI-enabled dark patterns embedded in services designed to manipulate users into making substantial financial commitments, and AI-enabled applications which exploit users based on their age, disability or socio-economic situation. AI-enabled social scoring using unrelated personal data such as origin and race by social welfare agencies and other public and private bodies is banned, while police are not allowed to predict individuals' criminal behaviour solely based on their biometric data if this has not been verified. Employers cannot use webcams and voice recognition systems to track employees' emotions, while mobile CCTV cameras equipped with AI-based facial recognition technologies for law enforcement purposes are prohibited, with limited exceptions and stringent safeguards. EU countries have until Aug. 2 to designate market surveillance authorities to enforce the AI rules. AI breaches can cost companies fines ranging from 1.5% to 7% of their total global revenue. 
The EU AI Act is more comprehensive than the United States' light-touch voluntary compliance approach while China's approach aims to maintain social stability and state control. (Reporting by Foo Yun Chee; Editing by Alison Williams)
[8]
AI Act Goes Live: EU Bans High-Risk AI, Fines up to €35M
The news comes as a newly published international AI safety report warned of general-purpose AI's rising threat. Regulators in the European Union can now ban AI systems they deem an "unacceptable risk" to society and impose hefty fines for continued use. The new rule, implemented on Sunday, Feb. 2, marks the first compliance deadline for the EU's AI Act -- which came into force in August 2024. Companies and developers across the EU are racing to comply with a series of staggered deadlines for the bloc's sweeping AI framework, most of which will be applicable by mid-2026.
AI Act Begins
The rule that came into force on Sunday concerns AI systems found to be at the EU's highest level of risk. The AI Act classifies AI systems into four levels of threat: minimal risk, limited risk, high risk, and unacceptable risk. Developers or businesses found to be using AI applications that fall under the highest threat level could be fined up to €35 million; the EU AI Act states that companies could alternatively be fined up to 7% of their annual revenue from the previous year. As companies and developers continue to ensure compliance with the rules, much remains to be determined about how the AI Act will be enforced. EU lawmakers have promised to release additional guidelines on the rules.
General-Purpose AI Is Growing
In January, ahead of the AI Action Summit in France, an International AI Safety Report was released. The report aimed to establish the "first comprehensive, shared scientific understanding of advanced AI systems and their risks." The report, which brought together insights from 100 independent international experts, warned against the growing threat of general-purpose AI. General-purpose AI refers to a form of technology that is more akin to human intelligence. Unlike narrow AI, which is designed for specific tasks, general-purpose AI can more adaptably understand, learn, and apply knowledge.
"A few years ago, the best large language models could rarely produce a coherent paragraph of text," the report stated. "As general-purpose AI becomes more capable, evidence of additional risks is gradually emerging," it added. "These include risks such as large-scale labor market impacts, AI-enabled hacking or biological attacks, and society losing control over general-purpose AI." The report notes that some experts believe these risks are decades away, while some think they could lead to societal harm within the next few years. Henna Virkkunen, executive vice president of the European Commission for Technological Sovereignty, said the bill would "protect our citizens." Just days into his inauguration, President Donald Trump revoked Executive Order 14110, which former President Biden had signed in October 2023 to address the risks associated with AI. OpenUK CEO Amanda Brock told CCN that Trump has effectively eliminated the need for AI models to undergo checking before they are released. "Supporters will argue that this move will help speed up the innovation process and keep the U.S. at the forefront of the AI market," Brock said. "For those against this move, it is one that puts technology innovation and potential profit ahead of personal privacy or security of data." Brock, who is set to host the State of Open Con in London on Feb. 4, said the new U.S. Government wants to move faster around AI, but it doesn't mean the AI community has to sacrifice safety and privacy requirements. "Software communities can take the lead around keeping that mindset in place around safety, security, and privacy through collaborating with each other," Brock said. "This makes it easier for everyone to benefit."
[9]
EU kicks off landmark AI law enforcement as first batch of restrictions enter into force
The European Union is so far the only jurisdiction globally to drive forward comprehensive rules for artificial intelligence with its AI Act. The European Union formally kicked off enforcement of its landmark artificial intelligence law Sunday, paving the way for tough restrictions and potential large fines for violations. The EU AI Act, a first-of-its-kind regulatory framework for the technology, formally entered into force in August 2024. On Sunday, the deadline for prohibitions on certain artificial intelligence systems and requirements to ensure sufficient technology literacy among staff officially lapsed. That means companies must now comply with the restrictions and can face penalties if they fail to do so. The AI Act bans certain applications of AI which it deems as posing "unacceptable risk" to citizens. Those include social scoring systems, real-time facial recognition and other forms of biometric identification that categorize people by race, sex life, sexual orientation and other attributes, and "manipulative" AI tools. Companies face fines of as much as 35 million euros ($35.8 million) or 7% of their global annual revenues -- whichever amount is higher -- for breaches of the EU AI Act. The size of the penalties will depend on the infringement and size of the company fined. That's higher than the fines possible under the GDPR, Europe's strict digital privacy law. Companies face fines of up to 20 million euros or 4% of annual global turnover for GDPR breaches.
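The penalty comparison above follows a simple "greater of" rule in both regimes. As a quick illustrative sketch (the turnover figure below is hypothetical, not drawn from any of these articles; the caps and percentages are those the reporting cites):

```python
def fine_cap_eur(global_turnover_eur: float, flat_cap: float, pct: float) -> float:
    # The maximum fine is the greater of a flat amount
    # or a percentage of global annual turnover.
    return max(flat_cap, pct * global_turnover_eur)

# Hypothetical company with €2bn in global annual turnover:
turnover = 2_000_000_000
ai_act_cap = fine_cap_eur(turnover, 35_000_000, 0.07)  # AI Act: €35m or 7%
gdpr_cap = fine_cap_eur(turnover, 20_000_000, 0.04)    # GDPR: €20m or 4%

print(f"AI Act cap: €{ai_act_cap:,.0f}")  # €140,000,000 -- 7% exceeds the flat cap
print(f"GDPR cap:   €{gdpr_cap:,.0f}")    # €80,000,000
```

For small firms the flat cap dominates (7% of €100m is only €7m, so the €35m floor applies), which is why the "whichever is greater" wording matters.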
[10]
EU pushes forward with enforcing AI Act despite Donald Trump's warnings
Brussels is readying new guidance on banned uses of artificial intelligence under its landmark legislation regulating the technology, pushing ahead with enforcement of its AI Act even as Donald Trump warns of retribution for the EU's targeting of US tech companies. The law, passed in 2023, is considered the world's most comprehensive regulatory framework for AI. Provisions banning certain applications, such as scraping the internet to create facial recognition databases, came into force on Sunday. The European Commission is set to publish crucial guidance on how these rules should be applied by companies on Tuesday, officials said. Further provisions targeting large AI models and AI-powered products that pose a high risk to users, such as those used in healthcare, will be rolled out between now and 2027. The continued push to enforce the rules comes amid broader European debate over how aggressively the bloc should enforce its digital rules in the face of fierce backlash from Big Tech companies supported by the new US president. Trump has threatened to target Brussels in response to fines imposed on US companies. The EU has already moved to "reassess" probes into companies such as Apple, Meta and Google under other legislation aimed at protecting the continent's digital markets. "There is definitely a worry in Brussels that the new US president will raise pressure on the EU around the AI Act to ensure that US companies don't have to deal with too much red tape, or potentially even fines," said Patrick Van Eecke, co-chair of law firm Cooley's global cyber, data and privacy practice. The act requires companies building "high-risk" AI systems to be more transparent about how they build and use AI models. Those behind the most powerful models face additional requirements, such as conducting risk assessments. Companies that fail to comply with the law face huge fines and could be banned from the EU.
Brussels' ambition to position itself as "the global hub for trustworthy AI" has long been challenged by Big Tech groups. Companies such as Facebook owner Meta have explicitly warned that Europe's stringent regulation could stifle AI investment and innovation. Big Tech companies oppose the AI Act's "onerous" provisions on providing more transparency for data. This includes rules allowing third parties access to the code of AI models to assess risks, as well as the AI Act's exceptions to some safety rules for open source companies and smaller start-ups, a person close to the process said. Earlier this month, Trump warned that he regarded any moves by Brussels against US companies as "a form of taxation ... We have some very big complaints with the EU," he said in remarks at the World Economic Forum in Davos. During his first week in office, Trump touted a $500bn AI infrastructure project dubbed Stargate led by Japan's SoftBank and San Francisco-based OpenAI. He has criticised efforts to regulate AI, signing executive orders that eliminate many guardrails around the development of the technology. One senior EU official involved in the AI Act's implementation told the Financial Times that the commission acknowledged Trump's veiled threat and the US pressure but insisted that the law as passed would not be altered. "What we can do is ensure that it is as innovation-friendly as possible, and that's what we're doing right now," the official said. "There's flexibility in the rules and we're looking at how we use that." Since Trump's inauguration, the narrative around tech regulation has also shifted in Brussels, said Caterina Rodelli, an EU policy analyst at digital rights group Access Now. The group has been lobbying for the AI Act's bans to be stronger. "What we thought was closed business is actually not," she said.
"We see room for regulators to loosen their approach to the implementation of the AI Act, and the prohibition implementation will be the first testing ground," Rodelli said, adding there was a risk that a new deregulatory approach would water the rules down until they were essentially meaningless. The EU prohibitions announced on Sunday were clear-cut, said a person close to the process, with many Big Tech companies already in compliance. Causing additional tension in Brussels are negotiations around the Code of Practice for general-purpose AI, affecting powerful AI models such as Google's Gemini and OpenAI's GPT-4, the person said. The code will detail how companies can implement the rules of the AI Act in practice. The negotiations, which involve hundreds of participants and are co-ordinated by the commission's AI Office, will end in April.
[11]
The EU is enforcing its first wave of AI restrictions.
The AI Act's February 2nd deadline has passed, making artificial intelligence systems that carry "unacceptable risk" illegal -- including some predictive policing tools, social scoring systems, and biometric identification that categorizes people by race, sexual orientation, and religion. Deadlines restricting additional AI systems will come into force every August until 2027. By contrast, President Donald Trump tossed out an executive action aiming to establish AI safeguards in the US on his first day back in office.
[12]
EU Readies New AI Restrictions Despite Trump Pushback | PYMNTS.com
European regulators are reportedly preparing new guidance covering banned uses of artificial intelligence. The efforts to enforce Europe's landmark AI Act come even as President Donald Trump has warned the European Union against targeting American tech giants, the Financial Times (FT) reported Tuesday (Feb. 4). The EU adopted the AI Act in May 2024. It is considered the most comprehensive set of rules governing AI globally. Provisions restricting moves like scraping the internet to create facial recognition databases went into effect Sunday (Feb. 2), per the report. Now, the European Commission is ready to issue key guidance on how these rules should be applied, with further regulations governing large AI models and AI-powered products that pose a high risk to users -- such as healthcare offerings -- set to debut between now and 2027, the report said. It's part of a piece of legislation requiring companies building "high-risk" AI systems to provide greater transparency into their AI models. Companies creating the most powerful models face added requirements, like risk assessments, and firms that fail to comply with the law face fines and could be banned from the EU, according to the report. Trump has threatened to target Europe in reaction to fines levied against American companies, the report said. The EU is already reassessing investigations into tech giants like Apple and Google under other digital markets legislation. "There is definitely a worry in Brussels that the new U.S. president will raise pressure on the EU around the AI Act to ensure that U.S. companies don't have to deal with too much red tape or potentially even fines," said Patrick Van Eecke, co-chair of law firm Cooley's global cyber, data and privacy practice, per the report. Meanwhile, Trump is deviating from President Joe Biden when it comes to regulating AI in the United States. On his first day in office, Trump signed an executive order reversing Biden's order governing AI.
Biden's order required the government to vet AI models from the likes of Google and OpenAI, while also establishing chief AI officers in federal agencies and creating frameworks that addressed ethical and security risks.
The European Union has begun enforcing the first phase of its AI Act, prohibiting AI systems deemed to pose "unacceptable risk." The EU has also issued guidelines to help companies comply with the new regulations.
The European Union has taken a significant step in regulating artificial intelligence with the implementation of the first phase of its AI Act. As of February 2, 2025, the EU prohibits AI systems deemed to pose "unacceptable risk" to individuals and society [1][5]. This landmark legislation aims to create a safe and ethical AI landscape across the EU member states.
The AI Act outlines several AI applications that are now banned within the EU [4]:

- Harmful AI-based manipulation and deception
- Harmful AI-based exploitation of vulnerabilities
- Social scoring
- Individual criminal offence risk assessment or prediction
- Untargeted scraping of the internet or CCTV material to create or expand facial recognition databases
- Emotion recognition in workplaces and education institutions
- Biometric categorisation to deduce certain protected characteristics
- Real-time remote biometric identification for law enforcement purposes in publicly accessible spaces
These prohibitions are designed to protect fundamental rights and prevent potential misuse of AI technologies [2].
To assist companies in navigating the new regulations, the European Commission has published guidelines on prohibited AI practices [2][3]. While these guidelines are not legally binding, they offer interpretations and practical examples to help stakeholders understand and comply with the AI Act's requirements [3].
Enforcement of the AI Act will be gradual, with member states having until August 2, 2025, to designate bodies responsible for overseeing the rulebook [3]. However, companies found in violation of the prohibited use cases could face substantial penalties, including fines of up to 7% of global annual turnover or €35 million, whichever is greater [1][4].
The EU has also released guidance on what constitutes an AI system under the Act, acknowledging that the fast-paced nature of the AI field may require ongoing updates to the guidelines [1]. The Commission emphasizes that no exhaustive list of systems falling within or outside the AI definition is possible, highlighting the need for case-by-case assessment [1].
Prior to the official implementation, over 100 companies, including tech giants like Amazon, Google, and OpenAI, signed the EU AI Pact, voluntarily pledging to apply the principles of the AI Act ahead of its enforcement [5]. However, some notable companies, such as Meta, Apple, and French AI startup Mistral, did not join the pact [5].
Legal experts note that the AI Act does not exist in isolation and will interact with other regulatory frameworks such as GDPR, NIS2, and DORA [5]. Understanding these interactions will be crucial for organizations seeking to comply with the new AI regulations.
As the AI Act continues to roll out, with additional compliance deadlines set for the coming months and years, it represents a significant shift in the global approach to AI regulation. The EU's proactive stance on AI governance is likely to influence policies and practices well beyond its borders, shaping the future of AI development and deployment worldwide.
© 2025 TheOutpost.AI All rights reserved