Curated by THEOUTPOST
On Tue, 1 Oct, 12:03 AM UTC
3 Sources
[1]
Europe gathers global experts to draft 'Code of Practice' for AI
The European Union is making strides toward shaping the future of artificial intelligence with the development of the first "General-Purpose AI Code of Practice" for AI models under its AI Act. According to a Sept. 30 announcement, the initiative is spearheaded by the European AI Office and brings together hundreds of global experts from academia, industry and civil society to collaboratively draft a framework addressing key issues such as transparency, copyright, risk assessment and internal governance.

Nearly 1,000 participants shaping the EU's AI future

The kick-off plenary, held online with nearly 1,000 participants, marked the beginning of a months-long process that will conclude with the final draft in April 2025. The Code of Practice is set to become a cornerstone for applying the AI Act to general-purpose AI models such as large language models (LLMs) and AI systems integrated across various sectors.

The session also introduced four working groups, led by distinguished industry chairs and vice-chairs, that will drive the development of the Code of Practice. Chairs include notable experts such as Nuria Oliver, an artificial intelligence researcher, and Alexander Peukert, a German copyright law specialist. The groups will focus on transparency and copyright, risk identification, technical risk mitigation, and internal risk management. According to the European AI Office, the working groups will meet between October 2024 and April 2025 to draft provisions, gather stakeholder input and refine the Code of Practice through ongoing consultation.

The EU's AI Act, passed by the European Parliament in March 2024, is a landmark piece of legislation that seeks to regulate the technology across the bloc. It establishes a risk-based approach to AI governance, categorizing systems into risk levels ranging from minimal to unacceptable and mandating specific compliance measures for each. The act is especially relevant to general-purpose AI models because their broad applications and potential for significant societal impact often place them in the higher-risk categories outlined by the legislation.

However, some major AI companies, including Meta, have criticized the regulations as too restrictive, arguing that they could stifle innovation. In response, the EU's collaborative approach to drafting the Code of Practice aims to balance safety and ethics with fostering innovation. The multi-stakeholder consultation has already garnered more than 430 submissions, which will inform the writing of the code.

The EU's goal is that by April 2025 these efforts will set a precedent for how general-purpose AI models can be responsibly developed, deployed and managed, with a strong emphasis on minimizing risks and maximizing societal benefits. As the global AI landscape evolves rapidly, this effort is likely to influence AI policies worldwide, especially as more countries look to the EU for guidance on regulating emerging technologies.
[2]
European Commission appoints 13 experts to draft AI Code
The Code of Practice, aimed at giving clarity to providers of general-purpose AI systems, should be ready by April 2025.

The European Commission has today announced the list of independent experts from the EU, US and Canada tasked with leading work on drafting a Code of Practice on general-purpose artificial intelligence, which includes language models such as ChatGPT and Google Gemini. The 13 experts set to lead the four workstreams that should produce a Code of Practice under the AI Act by April 2025 were named in a statement from the executive.

The EU's AI Act, which entered into force last month, provides stringent rules for providers of GPAI models that will become effective in August 2025. Under the rules, the AI Office, a unit within the Commission, is encouraged to draw up a Code designed to ease application of the AI Act's rules for companies, including on transparency and copyright-related rules, systemic risk taxonomy, risk assessment, and mitigation measures.

Experts had until 25 August to apply for the role. Those selected include Rishi Bommasani (US), the Society Lead at the Stanford Center for Research on Foundation Models; Marietje Schaake (Netherlands), a former MEP and now a fellow at Stanford's Cyber Policy Center and at the Institute for Human-Centered AI; and Yoshua Bengio (Canada), known for his work in deep learning, for which he received the 2018 A.M. Turing Award.

Today, some 1,000 attendees, including general-purpose AI model providers, downstream providers, industry, civil society, academia, and independent experts, will take part in the first online plenary to help develop the Code, the Commission said.

Last week, three EU lawmakers - Axel Voss (Germany/EPP), Svenja Hahn (Germany/Renew) and Kim van Sparrentak (The Netherlands/Greens-EFA) - sent a question for written answer to the Commission asking for clarity about the appointment process. They wanted to know how the EU executive selected the chairs, and how the chairs can deliver an adequate final Code given the short timeline. The Commission has yet to answer those questions.
[3]
EU picks experts to steer AI compliance rules
LONDON (Reuters) - The European Union has picked a handful of artificial intelligence experts to decide how strictly businesses will have to comply with a raft of incoming regulations governing the technology.

WHY IT'S IMPORTANT

On Monday, the European Commission will convene the first plenary meeting of the working groups - made up of external experts - tasked with drawing up the AI Act's "code of practice", which will spell out how exactly companies can comply with the wide-ranging set of laws.

There are four working groups, focused on issues such as copyright and risk mitigation. Experts selected to oversee the groups include Canadian scientist and "AI godfather" Yoshua Bengio, former UK government policy adviser Nitarshan Rajkumar, and Marietje Schaake, a fellow at Stanford University's Cyber Policy Center. Big tech companies such as Google and Microsoft will be represented at the working groups, as will a number of nonprofit organisations and academic experts.

While the code of practice will not be legally binding when it takes effect, it will provide firms with a checklist they can use to demonstrate their compliance. Any company claiming to follow the law while ignoring the code could face a legal challenge.

CONTEXT

AI companies are highly resistant to revealing the content their models have been trained on, describing the information as a trade secret that could give competitors an unfair advantage were it made public. While the AI Act's text says some companies will be obliged to provide detailed summaries of the data used to train their AI models, the code of practice is expected to make clearer just how detailed these summaries will need to be.

One of the EU's four working groups will focus specifically on issues around transparency and copyright, and its work could result in companies effectively being forced to publish comprehensive datasets, leaving them vulnerable to untested legal challenges. In recent months, a number of prominent tech companies, including Google and OpenAI, have faced lawsuits from creators claiming their content was improperly used to train their models.

WHAT'S NEXT

After Monday, the working groups will convene three more times before a final meeting in April, when they are expected to present the code of practice to the Commission. If accepted, companies' compliance efforts will be measured against the code of practice from August 2025.
The European Commission has selected a panel of 13 international experts to develop a code of practice for general-purpose AI. The initiative aims to guide AI companies in complying with the EU's AI Act.
The European Commission has taken a significant step toward regulating artificial intelligence by appointing a panel of 13 global experts to draft a code of practice for general-purpose AI [1]. This move comes as part of the EU's efforts to implement the AI Act, which entered into force in August 2024 and is widely described as the world's first comprehensive law on artificial intelligence [2].
The selected experts represent a diverse group of professionals from academia, civil society, and industry. Notable members include Yoshua Bengio, a pioneer in deep learning, and Alexander Peukert, a German copyright law specialist [1]. While the chairs themselves are independent, major tech companies such as Google and Microsoft will be represented in the working groups, alongside nonprofit organisations and academic experts [3].
The primary goal of this initiative is to create guidelines that will help AI companies comply with the EU's AI Act. The code of practice will focus on several aspects of AI development and deployment: transparency and copyright, risk identification and assessment, technical risk mitigation, and internal risk management and governance [1].
The expert panel is expected to produce a first draft of the code in the coming months, with the final version due in April 2025 [1]. That timeline aligns with the AI Act's rules for general-purpose AI models, which become effective in August 2025 [2]. The code of practice will not be legally binding, but it will give companies a concrete way to demonstrate their commitment to responsible AI development and use [3].
The EU's proactive approach to AI regulation is expected to have far-reaching effects beyond Europe. Many industry observers believe that the guidelines developed by this expert panel could become a global standard for AI governance. Tech companies have shown a willingness to cooperate, recognizing the importance of addressing public concerns about AI safety and ethics.
As the panel begins its work, several challenges lie ahead: a tight drafting timeline that EU lawmakers have already questioned [2], industry resistance to disclosing details of training data [3], and the broader tension between fostering innovation and mitigating risk [1].
The European Commission's initiative represents a significant milestone in the global effort to create a framework for responsible AI development and use. As the expert panel embarks on this crucial task, the world will be watching to see how their work shapes the future of AI governance.
Reference
[1] Europe gathers global experts to draft 'Code of Practice' for AI
[2] European Commission appoints 13 experts to draft AI Code
[3] EU picks experts to steer AI compliance rules
The European Union's AI Act, a risk-based rulebook for artificial intelligence, is nearing implementation with the release of draft guidelines for general-purpose AI models. This landmark legislation aims to foster innovation while ensuring AI remains human-centered and trustworthy.
3 Sources
Major technology companies are pushing for changes to the European Union's AI Act, aiming to reduce regulations on foundation models. This effort has sparked debate about balancing innovation with potential risks of AI technology.
9 Sources
Antitrust watchdogs from the US, UK, and EU have joined forces to address potential monopolistic practices in the rapidly evolving AI industry. This collaborative effort aims to ensure fair competition and prevent market dominance by tech giants.
6 Sources
LatticeFlow, in collaboration with ETH Zurich and INSAIT, has developed the first comprehensive technical interpretation of the EU AI Act for evaluating Large Language Models (LLMs), revealing compliance gaps in popular AI models.
12 Sources
Meta, Spotify, and other tech companies have voiced concerns over the European Union's proposed AI regulations, arguing that they could stifle innovation and hinder the AI boom. The debate highlights the tension between fostering technological advancement and ensuring ethical AI development.
9 Sources