13 Sources
[1]
Meta refuses to sign EU's AI code of practice | TechCrunch
Meta has refused to sign the European Union's code of practice for its AI Act, weeks before the bloc's rules for providers of general-purpose AI models take effect. "Europe is heading down the wrong path on AI," wrote Meta's chief global affairs officer Joel Kaplan in a post on LinkedIn. "We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." The EU's code of practice, a voluntary framework published earlier this month, aims to help companies implement processes and systems to comply with the bloc's legislation for regulating AI. Among other things, the code requires companies to provide and regularly update documentation about their AI tools and services; bans developers from training AI on pirated content; and requires them to comply with content owners' requests not to use their works in their data sets. Calling the EU's implementation of the legislation "over-reach," Kaplan claimed that the law will "throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them." A risk-based regulation for applications of artificial intelligence, the AI Act bans some "unacceptable risk" use cases outright, such as cognitive behavioral manipulation or social scoring. The rules also define a set of "high-risk" uses, such as biometrics and facial recognition, and AI used in domains like education and employment. The act also requires developers to register AI systems and meet risk- and quality-management obligations. Tech companies from across the world, including those at the forefront of the AI race like Alphabet, Meta, Microsoft and Mistral AI, have been fighting the rules, even urging the European Commission to delay their rollout.
But the Commission held firm, saying it will not change its timeline. Also on Friday, the EU published guidelines for providers of AI models ahead of rules that will go into effect on August 2. These rules would affect providers of "general-purpose AI models with systemic risk," like OpenAI, Anthropic, Google, and Meta. Companies that have such models on the market before August 2 will have to comply with the legislation by that date.
[2]
Microsoft likely to sign EU AI code of practice, Meta rebuffs guidelines
BRUSSELS, July 18 (Reuters) - Microsoft (MSFT.O) will likely sign the European Union's code of practice to help companies comply with the bloc's landmark artificial intelligence rules, its president told Reuters on Friday, while Meta Platforms (META.O) rebuffed the guidelines. Drawn up by 13 independent experts, the voluntary code of practice aims to provide legal certainty to signatories. They will have to publish summaries of the content used to train their general-purpose AI models and put in place a policy to comply with EU copyright law. The code is part of the AI Act which came into force in June 2024 and will apply to Google owner Alphabet (GOOGL.O), Facebook owner Meta (META.O), OpenAI, Anthropic, Mistral and thousands of companies. "I think it's likely we will sign. We need to read the documents," Microsoft President Brad Smith told Reuters. "Our goal is to find a way to be supportive and at the same time one of the things we really welcome is the direct engagement by the AI Office with industry," he said, referring to the EU's regulatory body for AI. Meta reiterated its criticism of the code. "Meta won't be signing it. This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act," Meta's chief global affairs officer Joel Kaplan said in a blog post on LinkedIn on Friday. The U.S. social media giant has the same concerns as a group of 45 European companies, he said. "We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them," Kaplan said. OpenAI and Mistral have signed the code.
Reporting by Foo Yun Chee; Editing by Cynthia Osterman
[3]
Meta says it won't sign the EU's AI code of practice
Its global affairs officer called the guidelines "over-reach." Meta said on Friday that it won't sign the European Union's new AI code of practice. The guidelines provide a framework for the EU's AI Act, which regulates companies operating in the European Union. The EU's code of practice is voluntary, so Meta was under no legal obligation to sign it. Yet Meta's Chief Global Affairs Officer, Joel Kaplan, made a point to publicly knock the guidelines on Friday. He described the code as "over-reach." "Europe is heading down the wrong path on AI," Kaplan posted in a statement. "We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." So, why kick up a (public) fuss about not signing something Meta was under no obligation to sign? Well, this isn't the first time the company has waged a PR battle against Europe's AI regulations. It previously called the AI Act "unpredictable," claiming "it goes too far" and is "hampering innovation and holding back developers." In February, Meta's public policy director said, "The net result of all of that is that products get delayed or get watered down and European citizens and consumers suffer." Outmuscling the EU may seem like a more attainable goal to Meta, given that it has an anti-regulation ally in the White House. In April, President Trump pressured the EU to abandon the AI Act. He described the rules as "a form of taxation." The EU published its code of practice on July 10. It includes tangible guidelines to help companies follow the AI Act. Among other things, the code bans companies from training AI on pirated materials and requires them to respect requests from writers and artists to omit their work from training data. 
It also requires developers to provide regularly updated documentation describing their AI features. Although signing the code of practice is voluntary, doing so has its perks. Agreeing to it can give companies more legal protection against future accusations of breaching the AI Act. Thomas Regnier, the European Commission's spokesperson for digital matters, added more color in a statement to Bloomberg. He said that AI providers who don't sign it "will have to demonstrate other means of compliance." As a consequence, they "may be exposed to more regulatory scrutiny." Companies that violate the AI Act can face hefty penalties. The European Commission can impose fines of up to seven percent of a company's annual sales. The penalties are a lower three percent for those developing advanced AI models.
[4]
Meta says it won't sign Europe AI agreement, calling it an overreach that will stunt growth
Meta Platforms declined to sign the European Union's artificial intelligence code of practice because it is an overreach that will "stunt" companies, according to global affairs chief Joel Kaplan. "Europe is heading down the wrong path on AI," Kaplan wrote in a post on LinkedIn Friday. "This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." Last week, the European Commission published a final iteration of its code for general-purpose AI models, leaving it up to companies to decide if they want to sign. The rules, which go into effect next month, create a framework for complying with the AI Act passed by European lawmakers last year. It aims to improve transparency and safety surrounding the technology.
[5]
The EU just issued guidelines for AI safety, and Meta is already opting out
Meta is refusing to sign the European Union's new code of practice for artificial intelligence, a company leader said on Friday. The AI code is a part of the AI Act, which became law last year. The set of laws will apply to the largest AI models, and companies have until Aug. 2 next year to comply. However, the new AI code of practice is voluntary. Joel Kaplan, Meta's Chief Global Affairs Officer, issued a statement on LinkedIn announcing the company's decision to opt out of the code. "Europe is heading down the wrong path on AI," he wrote. "The European Commission's Code of Practice for GPAI will stifle the development & deployment of frontier AI models and hinder Europe's economic growth," he continued. The AI Act applies to AI models that the commission deems to carry systemic risks that could significantly affect "public health, safety, fundamental rights, or society," reports Reuters. It also includes the most well-known "foundational models" from companies like Meta, OpenAI, Google, and Anthropic. The European Commission, the primary executive arm of the EU, issued new guidelines earlier today to help AI companies comply with the AI Act. That follows the voluntary code of practice issued earlier this July, according to Bloomberg. AI model providers can choose whether or not to sign the code of practice, which will include copyright protections, safety guidelines, and transparency requirements. If a company chooses to sign the code, it could receive more legal protections if accused of violating the act. Companies found to breach the law could be fined up to 7 percent of their annual global revenue. Meta is the latest U.S. company to push back against the EU's widespread efforts to regulate AI. Others include Mistral AI and Airbus, companies that signed a letter earlier this year asking the EU Commission to delay enforcement of the law.
"We shared concerns raised by these businesses," said Kaplan in his statement, "that this over-reach will throttle the development and deployment of frontier AI models in Europe and stunt European companies looking to build businesses on top of them." Meanwhile, OpenAI has agreed to sign the EU's Code of Practice for General Purpose AI, saying in a statement, "Signing the Code reflects our commitment to providing capable, accessible, and secure AI models for Europeans to fully participate in the economic and societal benefits of the Intelligence Age."
[6]
Meta rebuffs EU's AI Code of Practice
The company has previously been critical of the Commission's work on GPAI, claiming it stifles innovation. US social media company Meta will not sign the EU's AI Code of Practice on General Purpose AI (GPAI), the company's Chief Global Affairs Officer Joel Kaplan said in a statement on Friday. "Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission's Code of Practice for GPAI models and Meta won't be signing it," he said, adding that the Code "introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." The Commission last week released the Code, a voluntary set of rules that touches on transparency, copyright, and safety and security issues, aiming to help providers of AI models such as ChatGPT and Gemini comply with the AI Act. Companies that sign up are expected to be compliant with the Act and can anticipate more legal certainty; others will face more inspections. The AI Act's provisions affecting GPAI systems enter into force on 2 August. It will take another two years before the AI Act, which regulates AI systems according to the risk they pose to society, becomes fully applicable. OpenAI, the maker of ChatGPT, has said it will sign up to the Code once it's ready. The drafting process of the Code was criticised by Big Tech companies as well as CEOs of European companies, who claimed they needed more time to comply with the rules. "We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them," Kaplan said. The Code requires sign-off by EU member states, which are represented in a subgroup of the AI Board, as well as by the Commission's own AI Office. The member states are expected to give a green light as early as 22 July.
The EU executive said it will publish the list of signatories on 1 August.
[7]
Meta won't implement voluntary AI guidelines proposed by EU - SiliconANGLE
Meta Platforms Inc. today declined to implement a set of guidelines that the European Union has proposed for the artificial intelligence sector. The guidelines are outlined in a document called the General-Purpose AI Code of Practice, or GPAI, that was released on July 10. It covers topics such as model safety and training data collection. Implementing the GPAI is voluntary for AI developers such as Meta. The framework is designed to help companies comply with the AI Act, a piece of legislation that the EU implemented last year. The law sets forth an extensive set of rules for AI developers. Algorithms deemed to be low-risk will only have to comply with a subset of the AI Act's provisions, while high-risk models will be more heavily regulated. "We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it," Joel Kaplan, Meta's Chief Global Affairs Officer, said in a statement today. "This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." The GPAI comprises 55 pages organized into three chapters. Those chapters focus on three different topics: copyright, transparency and safety. Companies that sign the GPAI must bring their AI projects into compliance with the EU's copyright laws. Additionally, they are required to disclose various technical details about their AI models. The list includes a model's architecture and the manner in which its training data was obtained. The GPAI's third chapter, which focuses on AI model safety, is longer than the two other chapters combined. It outlines a series of measures that AI providers must implement to mitigate the risks associated with their algorithms. Those measures only have to be applied to the "most advanced AI models." Signatories' compliance with the GPAI will be overseen by the AI Office, a regulatory body that is also tasked with enforcing the AI Act. 
It's part of the European Commission, the EU's executive arm. Around the same time Meta announced that it won't sign the GPAI, the European Commission introduced a new set of guidelines for AI companies. The guidelines are designed to complement the GPAI. Their primary goal is to clarify a number of regulatory terms introduced in the AI Act. Under the legislation, an AI designated as a "general-purpose AI model" by regulators is subject to different rules than other algorithms. The guidelines released today specify that this designation is given to algorithms if they can generate text, audio, images or videos and were trained using more than 100,000 exaflops of processing power. One exaflop corresponds to the computing capacity of about 20 Blackwell B200 chips.
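As a rough, purely illustrative sketch (not part of any source or of the EU guidelines), the compute threshold described above can be checked with the common 6 × parameters × tokens rule of thumb for estimating training compute. The function names and the 6ND heuristic are assumptions introduced here for illustration only.

```python
# The guidelines reportedly designate a model "general-purpose AI" if its
# training used more than 100,000 exaFLOP (= 1e23 floating-point operations).
THRESHOLD_FLOP = 100_000 * 1e18

def estimated_training_flop(params: float, tokens: float) -> float:
    """Rule-of-thumb training compute estimate: ~6 FLOP per parameter per token."""
    return 6.0 * params * tokens

def exceeds_gpai_compute_threshold(params: float, tokens: float) -> bool:
    """True if the estimated training compute crosses the reported threshold."""
    return estimated_training_flop(params, tokens) > THRESHOLD_FLOP

# Example: a 7B-parameter model trained on 2 trillion tokens lands at
# roughly 8.4e22 FLOP, just under the threshold; a 70B model trained on
# the same data would be about 8.4e23 FLOP, well over it.
print(exceeds_gpai_compute_threshold(7e9, 2e12))
print(exceeds_gpai_compute_threshold(70e9, 2e12))
```

The real designation also involves capability criteria, not compute alone; this sketch only shows how the reported numeric threshold compares against a standard compute estimate.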
[8]
Meta calls EU's AI code overreach, will not sign it
Meta Platforms has declined to sign the European Union's code of practice for the AI Act, citing concerns about overreach and legal uncertainties for model developers. The voluntary framework aims to help companies comply with the AI Act, including copyright protections and transparency requirements. Meta Platforms said it won't sign the code of practice for Europe's new set of laws governing artificial intelligence, calling the guidelines to help companies follow the AI Act overreach. "Europe is heading down the wrong path on AI," Meta's head of global affairs Joel Kaplan said in a post on LinkedIn. "This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." The European Union published the code of practice earlier this month. It's a voluntary framework meant to help companies put processes in place to stay in line with the bloc's sprawling AI Act, and includes copyright protections for creators and transparency requirements for advanced AI models. It also requires developers to provide documentation to describe their AI models' features. Agreeing to the code can give companies more legal protection if they're accused of falling foul of the act. It's the latest flashpoint between US tech companies and European regulators seeking to rein in their market power. US President Donald Trump's administration, which has lambasted the bloc's tech regulations and fines as unfairly targeting US firms, had reached out to the EU to argue against the code of practice in April, before it was finalised. Dozens of European companies including ASML Holding, Airbus SE and Mistral AI also asked the European Commission to suspend the AI Act's implementation for two years.
[9]
Microsoft likely to sign EU AI code of practice, Meta rebuffs guidelines
BRUSSELS (Reuters) -Microsoft will likely sign the European Union's code of practice to help companies comply with the bloc's landmark artificial intelligence rules, its president told Reuters on Friday, while Meta Platforms rebuffed the guidelines. Drawn up by 13 independent experts, the voluntary code of practice aims to provide legal certainty to signatories. They will have to publish summaries of the content used to train their general-purpose AI models and put in place a policy to comply with EU copyright law. The code is part of the AI Act which came into force in June 2024 and will apply to Google owner Alphabet, Facebook owner Meta, OpenAI, Anthropic, Mistral and thousands of companies. "I think it's likely we will sign. We need to read the documents," Microsoft President Brad Smith told Reuters. "Our goal is to find a way to be supportive and at the same time one of the things we really welcome is the direct engagement by the AI Office with industry," he said, referring to the EU's regulatory body for AI. Meta reiterated its criticism of the code. "Meta won't be signing it. This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act," Meta's chief global affairs officer Joel Kaplan said in a blog post on LinkedIn on Friday. The U.S. social media giant has the same concerns as a group of 45 European companies, he said. "We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them," Kaplan said. (Reporting by Foo Yun Chee; Editing by Cynthia Osterman)
[10]
Meta Won't Sign EU's AI Code of Practice, Chief Global Affairs Officer Says -- 2nd Update
Meta Platforms' Chief Global Affairs Officer said the Facebook and Instagram owner wouldn't sign the European Union's code of practice for general-purpose artificial intelligence because it adds uncertainty and goes beyond the scope of AI legislation in the bloc. The European Commission, the EU's executive arm, last week published the final version of a code of practice for general-purpose AI that model providers can choose whether or not to sign. EU officials said the code included guidance on safety and security, transparency and copyright to help signatories comply with the bloc's wide-ranging legislation on AI. EU lawmakers last year approved the AI Act, a law that bans certain uses of the technology, rolls out new transparency guidelines and requires risk assessments for AI systems that are deemed high-risk. Rules on general-purpose AI will be effective for companies as of Aug. 2. The commission's AI Office, a body that oversees implementation of the law, will enforce rules on new AI models after a year, and two years later for existing models. Companies that breach the law risk fines of up to 7% of their annual global revenue. "Europe is heading down the wrong path on AI," Meta's Chief Global Affairs Officer Joel Kaplan wrote in a LinkedIn post. "This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." Thomas Regnier, spokesman for the European Commission, said AI model providers that opt not to sign the code would still have to comply with the AI Act and might be exposed to more regulatory scrutiny. Meta is the latest tech giant to criticize the EU's push to regulate AI. Earlier this month, chief executives of European companies, including Mistral AI, ASML Holding and Airbus, signed a letter asking the commission to delay enforcement of the law, saying overlapping and complex regulations were preventing the EU from becoming an AI leader. 
"We share concerns raised by these businesses that this overreach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them," Kaplan said. The announcement from Meta comes days after OpenAI said it would sign the code, subject to the current version being formally approved by the AI Board, a body that includes representatives from each of the EU's 27 member states. "Signing the code reflects our commitment to providing capable, accessible, and secure AI models for Europeans to fully participate in the economic and societal benefits of the Intelligence Age," the ChatGPT maker said last week. The EU is seeking to catch up with the U.S. and China on AI and wants to develop a network of so-called AI gigafactories to help companies train the most complex models. Those facilities will be equipped with roughly 100,000 of the latest AI chips, around four times more than the number installed in AI factories being set up currently. OpenAI said it had submitted expressions of interest to take part in the process for the rollout of gigafactories in Europe. News Corp, owner of Dow Jones Newswires and The Wall Street Journal, has a content-licensing partnership with OpenAI.
[11]
Meta Won't Sign EU's AI Code of Practice, Chief Global Affairs Officer Says -- Update
Meta Platforms won't sign the European Union's code of practice for general-purpose artificial intelligence because it adds legal uncertainty and brings in measures that go beyond the scope of AI legislation in the bloc, Chief Global Affairs Officer Joel Kaplan said. The European Commission, the EU's executive arm, last week published the final version of a code of practice for general-purpose AI that model providers can choose whether or not to sign. EU officials said the code included guidance on safety and security, transparency and copyright to help signatories comply with the bloc's wide-ranging legislation on AI. EU lawmakers approved the AI Act last year, a law that bans certain uses of the technology, rolls out new transparency guidelines and requires risk assessments for AI systems that are deemed high-risk. Rules on general-purpose AI will be effective for companies as of Aug. 2. The commission's AI Office, a body that oversees implementation of the law, will enforce rules on new AI models after a year, and two years later for existing models. Companies that breach the law risk fines of up to 7% of their annual global revenue. "Europe is heading down the wrong path on AI," Kaplan wrote in a LinkedIn post. "This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act." Meta is the latest tech giant to criticize the EU's push to regulate AI. Earlier this month, chief executives of European companies, including Mistral AI, ASML Holding and Airbus, signed a letter asking the commission to delay enforcement of the law, saying overlapping and complex regulations were preventing the EU from becoming an AI leader. "We share concerns raised by these businesses that this overreach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them," Kaplan said. 
The announcement from Meta comes days after OpenAI said it would sign the code, subject to the current version being formally approved by the AI Board, a body that includes representatives from each of the EU's 27 member states. "Signing the code reflects our commitment to providing capable, accessible, and secure AI models for Europeans to fully participate in the economic and societal benefits of the Intelligence Age," the ChatGPT maker said last week. The EU is seeking to catch up with the U.S. and China on AI and wants to develop a network of so-called AI gigafactories to help companies train the most complex models. Those facilities will be equipped with roughly 100,000 of the latest AI chips, around four times more than the number installed in AI factories being set up currently. OpenAI said it had submitted expressions of interest to take part in the process for the rollout of gigafactories in Europe. News Corp, owner of Dow Jones Newswires and The Wall Street Journal, has a content-licensing partnership with OpenAI.
[12]
Meta Won't Sign EU's AI Code of Practice, Chief Global Affairs Officer Says
Meta Platforms won't sign the European Union's code of practice for general-purpose artificial intelligence because it adds legal uncertainty and brings in measures that go beyond the scope of AI legislation in the bloc, Chief Global Affairs Officer Joel Kaplan said. The European Commission, the EU's executive arm, last week published the final version of a voluntary code of practice. Model providers can choose whether to sign the code, which EU officials said would help companies to comply with the bloc's wide-ranging legislation on AI. EU lawmakers approved the AI Act last year, a law that bans certain uses of AI, rolls out new transparency guidelines and requires risk assessments for AI systems that are deemed high-risk. "Europe is heading down the wrong path on AI," Kaplan said in a LinkedIn post. "This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act."
[13]
Meta rejects EU's voluntary AI rules: Here's why
The EU says GPAI developers like Meta must comply by 2027; those that do not sign the code must demonstrate alternative means of compliance and may face increased regulatory scrutiny. Meta has refused to sign the European Union's newly introduced voluntary Code of Practice for general-purpose AI (GPAI) models. The European Commission, the EU's executive arm, introduced the voluntary code of practice last week, and it was set to come into force on August 2. Meta, along with Microsoft, Alphabet, and Mistral AI, has pushed back on the legislation, urging the EU to delay its implementation. However, the EU refuses to change its timeline. Now, Meta's Chief Global Affairs Officer, Joel Kaplan, has criticised the code, calling it legally uncertain and overly restrictive in a LinkedIn post. He wrote, "Europe is heading down the wrong path on AI." Kaplan added that Meta has carefully reviewed the European Commission's Code of Practice for general-purpose AI models and won't be signing it, as the code introduces legal uncertainties for model developers and includes measures that go far beyond the scope of the AI Act. Introduced earlier this month, the voluntary framework is designed to help AI companies prepare for the bloc's legislation for regulating AI. The code requires companies to maintain and update documentation on their AI systems, refrain from using pirated content for training, and comply with content owners' requests to opt out of AI training datasets. Kaplan called the EU's approach an "over-reach," warning that it could throttle the development and deployment of frontier AI models in Europe and stunt European companies looking to build businesses on top of them. He also highlighted that as many as 44 businesses and policymakers across Europe, including Bosch, Siemens, SAP, Airbus and BNP, have signed a letter urging the Commission to delay the implementation of the new AI regulation.
However, the European Commission released updated guidelines on Friday stating that companies that develop GPAI models with "systemic risk," including Meta, must fully comply with the rules by August 2027. Businesses failing to do so "will have to demonstrate other means of compliance" and may face "more regulatory scrutiny," EU spokesperson Thomas Regnier said in a statement.
Meta declines to sign the European Union's voluntary AI code of practice, calling it an overreach that could stifle innovation and economic growth in Europe. The decision highlights growing tensions between tech giants and EU regulators over AI governance.
Meta, the parent company of Facebook, has taken a firm stance against the European Union's newly introduced AI code of practice. Joel Kaplan, Meta's Chief Global Affairs Officer, announced the company's decision not to sign the voluntary framework, describing it as an "overreach" that could potentially hinder innovation and economic growth in Europe [1].
Kaplan stated, "Europe is heading down the wrong path on AI," and expressed concerns about the legal uncertainties the code introduces for model developers [4]. He argued that the measures proposed in the code go beyond the scope of the AI Act, potentially throttling the development and deployment of frontier AI models in Europe [2].
The European Union's code of practice is a voluntary framework designed to help companies implement processes and systems to comply with the bloc's AI Act. Key aspects of the code include:

- Providing and regularly updating documentation describing AI models and their features
- A ban on training AI on pirated content, and compliance with content owners' requests to exclude their works from training data sets
- Safety and security measures for the most advanced AI models
The code is part of the broader AI Act, which came into force in June 2024 and applies to major tech companies like Google, Meta, OpenAI, and Anthropic [2].
Meta's decision highlights a growing divide within the tech industry regarding AI regulation. While Meta has chosen not to sign the code, other major players have taken different stances:

- OpenAI has agreed to sign the code, subject to its formal approval by the AI Board
- Mistral AI has signed the code
- Microsoft has indicated it will likely sign, pending a review of the documents
While signing the code is voluntary, it offers certain advantages to companies:

- Greater legal certainty and more legal protection against future accusations of breaching the AI Act
- Less regulatory scrutiny than non-signatories, who must demonstrate other means of compliance
The AI Act includes significant penalties for violations, with fines of up to 7% of a company's annual sales for general violations, and 3% for those developing advanced AI models [3].
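As a small illustrative sketch of the percentages reported above (an assumption-laden simplification: the function name is hypothetical, "annual sales" is modeled as a single revenue figure, and the AI Act's fixed-euro fine floors are not modeled):

```python
def max_fine_eur(annual_revenue_eur: float, advanced_model_provider: bool) -> float:
    """Upper bound on the fine as a share of annual sales, per the
    reported caps: 7% for general violations, 3% for developers of
    advanced AI models."""
    rate = 0.03 if advanced_model_provider else 0.07
    return annual_revenue_eur * rate

# For a company with 100 billion euros in annual sales:
print(max_fine_eur(100e9, advanced_model_provider=False))  # ~7 billion euros
print(max_fine_eur(100e9, advanced_model_provider=True))   # ~3 billion euros
```

This only shows the scale of exposure implied by the reported percentages, not how the Commission would actually calculate a penalty.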
Meta's decision comes amidst ongoing debates about AI regulation globally. The company has previously criticized the AI Act, calling it "unpredictable" and claiming it goes too far in regulating the industry [3]. This stance aligns with that of other tech companies and political figures who have expressed concerns about the potential impact of strict regulations on innovation and economic growth in the AI sector.