Curated by THEOUTPOST
On Fri, 20 Sept, 4:04 PM UTC
9 Sources
[1]
EU AI Act Under Fire: Tech Giants Lobby for Changes
Questions have also arisen about data usage, particularly where copyrighted works are concerned. The AI Act will oblige companies to explain in greater detail what data they use for training, a requirement that has opened a battle between protecting trade secrets and upholding the rights of copyright holders. OpenAI has been criticised for a lack of transparency about its data sources, while Google and Amazon have openly declared their willingness to take part in the working groups writing the code. Industry leaders, meanwhile, argue that the regulation's obligations need to be more manageable, particularly for start-ups. Amid calls to balance regulation with competitiveness in the EU, smaller firms are pressing for exemptions tailored to their distinct needs.
[2]
Analysis-Tech Giants Push to Dilute Europe's AI Act
LONDON (Reuters) - The world's biggest technology companies have embarked on a final push to persuade the European Union to take a light-touch approach to regulating artificial intelligence as they seek to fend off the risk of billions of dollars in fines.

EU lawmakers in May agreed the AI Act, the world's first comprehensive set of rules governing the technology, following months of intense negotiations between different political groups. But until the law's accompanying codes of practice have been finalised, it remains unclear how strictly rules around "general purpose" AI (GPAI) systems, such as OpenAI's ChatGPT, will be enforced and how many copyright lawsuits and multi-billion-dollar fines companies may face.

The EU has invited companies, academics, and others to help draft the code of practice, receiving nearly 1,000 applications, an unusually high number, according to a source familiar with the matter who requested anonymity because they were not authorised to speak publicly.

The AI code of practice will not be legally binding when it takes effect late next year, but it will provide firms with a checklist they can use to demonstrate their compliance. A company claiming to follow the law while ignoring the code could face a legal challenge.

"The code of practice is crucial. If we get it right, we will be able to continue innovating," said Boniface de Champris, a senior policy manager at trade organisation CCIA Europe, whose members include Amazon, Google, and Meta. "If it's too narrow or too specific, that will become very difficult," he added.

DATA SCRAPING

Companies such as Stability AI and OpenAI have faced questions over whether using bestselling books or photo archives to train their models without their creators' permission is a breach of copyright. Under the AI Act, companies will be obliged to provide "detailed summaries" of the data used to train their models.
In theory, a content creator who discovered their work had been used to train an AI model may be able to seek compensation, although this is being tested in the courts. Some business leaders have said the required summaries should contain only scant detail in order to protect trade secrets, while others say copyright holders have a right to know if their content has been used without permission.

OpenAI, which has drawn criticism for refusing to answer questions about the data used to train its models, has also applied to join the working groups, according to a person familiar with the matter, who declined to be named. Google has also submitted an application, a spokesman told Reuters. Meanwhile, Amazon said it hopes to "contribute our expertise and ensure the code of practice succeeds".

Maximilian Gahntz, AI policy lead at the Mozilla Foundation, the non-profit organisation behind the Firefox web browser, expressed concern that companies are "going out of their way to avoid transparency". "The AI Act presents the best chance to shine a light on this crucial aspect and illuminate at least part of the black box," he said.

BIG BUSINESS AND PRIORITIES

Some in business have criticised the EU for prioritising tech regulation over innovation, and those tasked with drafting the text of the code of practice will strive for a compromise. Last week, former European Central Bank chief Mario Draghi told the bloc it needed a better coordinated industrial policy, faster decision-making and massive investment to keep pace with China and the United States.

Thierry Breton - a vocal champion of EU regulation and critic of non-compliant tech companies - this week quit his role as European Commissioner for the Internal Market after clashing with Ursula von der Leyen, the president of the bloc's executive arm. Against a backdrop of growing protectionism within the EU, homegrown tech companies are hoping carve-outs will be introduced in the AI Act to benefit up-and-coming European firms.
"We've insisted these obligations need to be manageable and, if possible, adapted to startups," said Maxime Ricard, policy manager at Allied for Startups, a network of trade organisations representing smaller tech companies.

Once the code is published in the first part of next year, tech companies will have until August 2025 before their compliance efforts start being measured against it.

Non-profit organisations, including Access Now, the Future of Life Institute, and Mozilla, have also applied to help draft the code. Gahntz said: "As we enter the stage where many of the AI Act's obligations are spelled out in more detail, we have to be careful not to allow the big AI players to water down important transparency mandates."

(Reporting by Martin Coulter; Editing by Matt Scuffham and Barbara Lewis)
[3]
Analysis-Tech giants push to dilute Europe's AI Act
The EU has invited companies, academics, and others to help draft the code of practice, receiving nearly 1,000 applications, an unusually high number according to a source familiar with the matter who requested anonymity because they were not authorised to speak publicly.The world's biggest technology companies have embarked on a final push to persuade the European Union to take a light-touch approach to regulating artificial intelligence as they seek to fend off the risk of billions of dollars in fines. EU lawmakers in May agreed the AI Act, the world's first comprehensive set of rules governing the technology, following months of intense negotiations between different political groups. But until the law's accompanying codes of practice have been finalised, it remains unclear how strictly rules around "general purpose" AI (GPAI) systems, such as OpenAI's ChatGPT will be enforced and how many copyright lawsuits and multi-billion dollar fines companies may face. The EU has invited companies, academics, and others to help draft the code of practice, receiving nearly 1,000 applications, an unusually high number according to a source familiar with the matter who requested anonymity because they were not authorised to speak publicly. The AI code of practice will not be legally binding when it takes effect late next year, but it will provide firms with a checklist they can use to demonstrate their compliance. A company claiming to follow the law while ignoring the code could face a legal challenge. "The code of practice is crucial. If we get it right, we will be able to continue innovating," said Boniface de Champris, a senior policy manager at trade organisation CCIA Europe, whose members include Amazon, Google, and Meta. "If it's too narrow or too specific, that will become very difficult," he added. 
Data scraping Companies such as Stability AI and OpenAI have faced questions over whether using bestselling books or photo archives to train their models without their creators' permission is a breach of copyright. Under the AI Act, companies will be obliged to provide "detailed summaries" of the data used to train their models. In theory, a content creator who discovered their work had been used to train an AI model may be able to seek compensation, although this is being tested in the courts. Some business leaders have said the required summaries need to contain scant details in order to protect trade secrets, while others say copyright-holders have a right to know if their content has been used without permission. OpenAI, which has drawn criticism for refusing to answer questions about the data used to train its models, has also applied to join the working groups, according to a person familiar with the matter, who declined to be named. Google has also submitted an application, a spokesman told Reuters. Meanwhile, Amazon said it hopes to "contribute our expertise and ensure the code of practice succeeds". Maximilian Gahntz, AI policy lead at the Mozilla Foundation, the non-profit organisation behind the Firefox web browser, expressed concern that companies are "going out of their way to avoid transparency". "The AI Act presents the best chance to shine a light on this crucial aspect and illuminate at least part of the black box," he said. Big business and priorities Some in business have criticised the EU for prioritising tech regulation over innovation, and those tasked with drafting the text of the code of practice will strive for a compromise. Last week, former European Central Bank chief Mario Draghi told the bloc it needed a better coordinated industrial policy, faster decision-making and massive investment to keep pace with China and the United States. 
Thierry Breton - a vocal champion of EU regulation and critic of non-compliant tech companies - this week quit his role as European Commissioner for the Internal Market, after clashing with Ursula von der Leyen, the president of the bloc's executive arm. Against a backdrop of growing protectionism within the EU, homegrown tech companies are hoping carve-outs will be introduced in the AI Act to benefit up and coming European firms. "We've insisted these obligations need to be manageable and, if possible, adapted to startups," said Maxime Ricard, policy manager at Allied for Startups, a network of trade organisations representing smaller tech companies. Once the code is published in the first part of next year, tech companies will have until August 2025 before their compliance efforts start being measured against it. Non-profit organisations, including Access Now, the Future of Life Institute, and Mozilla have also applied to help draft the code. Gahntz said: "As we enter the stage where many of the AI Act's obligations are spelled out in more detail, we have to be careful not to allow the big AI players to water down important transparency mandates."
[4]
Tech giants push to dilute Europe's AI Act
LONDON (Reuters) - The world's biggest technology companies have embarked on a final push to persuade the European Union to take a light-touch approach to regulating artificial intelligence as they seek to fend off the risk of billions of dollars in fines. EU lawmakers in May agreed the AI Act, the world's first comprehensive set of rules governing the technology, following months of intense negotiations between different political groups. But until the law's accompanying codes of practice have been finalised, it remains unclear how strictly rules around "general purpose" AI (GPAI) systems, such as OpenAI's ChatGPT will be enforced and how many copyright lawsuits and multi-billion dollar fines companies may face. The EU has invited companies, academics, and others to help draft the code of practice, receiving nearly 1,000 applications, an unusually high number according to a source familiar with the matter who requested anonymity because they were not authorised to speak publicly. The AI code of practice will not be legally binding when it takes effect late next year, but it will provide firms with a checklist they can use to demonstrate their compliance. A company claiming to follow the law while ignoring the code could face a legal challenge. "The code of practice is crucial. If we get it right, we will be able to continue innovating," said Boniface de Champris, a senior policy manager at trade organisation CCIA Europe, whose members include Amazon, Google, and Meta. "If it's too narrow or too specific, that will become very difficult," he added. DATA SCRAPING Companies such as Stability AI and OpenAI have faced questions over whether using bestselling books or photo archives to train their models without their creators' permission is a breach of copyright. Under the AI Act, companies will be obliged to provide "detailed summaries" of the data used to train their models. 
In theory, a content creator who discovered their work had been used to train an AI model may be able to seek compensation, although this is being tested in the courts. Some business leaders have said the required summaries need to contain scant details in order to protect trade secrets, while others say copyright-holders have a right to know if their content has been used without permission. OpenAI, which has drawn criticism for refusing to answer questions about the data used to train its models, has also applied to join the working groups, according to a person familiar with the matter, who declined to be named. Google has also submitted an application, a spokesman told Reuters. Meanwhile, Amazon said it hopes to "contribute our expertise and ensure the code of practice succeeds". Maximilian Gahntz, AI policy lead at the Mozilla Foundation, the non-profit organisation behind the Firefox web browser, expressed concern that companies are "going out of their way to avoid transparency". "The AI Act presents the best chance to shine a light on this crucial aspect and illuminate at least part of the black box," he said. BIG BUSINESS AND PRIORITIES Some in business have criticised the EU for prioritising tech regulation over innovation, and those tasked with drafting the text of the code of practice will strive for a compromise. Last week, former European Central Bank chief Mario Draghi told the bloc it needed a better coordinated industrial policy, faster decision-making and massive investment to keep pace with China and the United States. Thierry Breton - a vocal champion of EU regulation and critic of non-compliant tech companies - this week quit his role as European Commissioner for the Internal Market, after clashing with Ursula von der Leyen, the president of the bloc's executive arm. Against a backdrop of growing protectionism within the EU, homegrown tech companies are hoping carve-outs will be introduced in the AI Act to benefit up and coming European firms. 
"We've insisted these obligations need to be manageable and, if possible, adapted to startups," said Maxime Ricard, policy manager at Allied for Startups, a network of trade organisations representing smaller tech companies. Once the code is published in the first part of next year, tech companies will have until August 2025 before their compliance efforts start being measured against it. Non-profit organisations, including Access Now, the Future of Life Institute, and Mozilla have also applied to help draft the code. Gahntz said: "As we enter the stage where many of the AI Act's obligations are spelled out in more detail, we have to be careful not to allow the big AI players to water down important transparency mandates." (Reporting by Martin Coulter; Editing by Matt Scuffham and Barbara Lewis)
[5]
Analysis-Tech giants push to dilute Europe's AI Act
But until the law's accompanying codes of practice have been finalised, it remains unclear how strictly rules around "general purpose" AI (GPAI) systems, such as OpenAI's ChatGPT will be enforced and how many copyright lawsuits and multi-billion dollar fines companies may face. The EU has invited companies, academics, and others to help draft the code of practice, receiving nearly 1,000 applications, an unusually high number according to a source familiar with the matter who requested anonymity because they were not authorised to speak publicly. The AI code of practice will not be legally binding when it takes effect late next year, but it will provide firms with a checklist they can use to demonstrate their compliance. A company claiming to follow the law while ignoring the code could face a legal challenge. "The code of practice is crucial. If we get it right, we will be able to continue innovating," said Boniface de Champris, a senior policy manager at trade organisation CCIA Europe, whose members include Amazon, Google, and Meta. "If it's too narrow or too specific, that will become very difficult," he added. Companies such as Stability AI and OpenAI have faced questions over whether using bestselling books or photo archives to train their models without their creators' permission is a breach of copyright. Under the AI Act, companies will be obliged to provide "detailed summaries" of the data used to train their models. In theory, a content creator who discovered their work had been used to train an AI model may be able to seek compensation, although this is being tested in the courts. Some business leaders have said the required summaries need to contain scant details in order to protect trade secrets, while others say copyright-holders have a right to know if their content has been used without permission. 
OpenAI, which has drawn criticism for refusing to answer questions about the data used to train its models, has also applied to join the working groups, according to a person familiar with the matter, who declined to be named. Google has also submitted an application, a spokesman told Reuters. Meanwhile, Amazon said it hopes to "contribute our expertise and ensure the code of practice succeeds". Maximilian Gahntz, AI policy lead at the Mozilla Foundation, the non-profit organisation behind the Firefox web browser, expressed concern that companies are "going out of their way to avoid transparency". "The AI Act presents the best chance to shine a light on this crucial aspect and illuminate at least part of the black box," he said. Some in business have criticised the EU for prioritising tech regulation over innovation, and those tasked with drafting the text of the code of practice will strive for a compromise. Last week, former European Central Bank chief Mario Draghi told the bloc it needed a better coordinated industrial policy, faster decision-making and massive investment to keep pace with China and the United States. Thierry Breton - a vocal champion of EU regulation and critic of non-compliant tech companies - this week quit his role as European Commissioner for the Internal Market, after clashing with Ursula von der Leyen, the president of the bloc's executive arm. Against a backdrop of growing protectionism within the EU, homegrown tech companies are hoping carve-outs will be introduced in the AI Act to benefit up and coming European firms. "We've insisted these obligations need to be manageable and, if possible, adapted to startups," said Maxime Ricard, policy manager at Allied for Startups, a network of trade organisations representing smaller tech companies. Once the code is published in the first part of next year, tech companies will have until August 2025 before their compliance efforts start being measured against it. 
Non-profit organisations, including Access Now, the Future of Life Institute, and Mozilla have also applied to help draft the code. Gahntz said: "As we enter the stage where many of the AI Act's obligations are spelled out in more detail, we have to be careful not to allow the big AI players to water down important transparency mandates." (Reporting by Martin Coulter; Editing by Matt Scuffham and Barbara Lewis)
[6]
Tech giants push to dilute Europe's AI Act
LONDON - The world's biggest technology companies have embarked on a final push to persuade the European Union to take a light-touch approach to regulating artificial intelligence as they seek to fend off the risk of billions of dollars in fines. EU lawmakers in May agreed the AI Act, the world's first comprehensive set of rules governing the technology, following months of intense negotiations between different political groups. But until the law's accompanying codes of practice have been finalised, it remains unclear how strictly rules around "general purpose" AI (GPAI) systems, such as OpenAI's ChatGPT will be enforced and how many copyright lawsuits and multi-billion dollar fines companies may face. The EU has invited companies, academics, and others to help draft the code of practice, receiving nearly 1,000 applications, an unusually high number according to a source familiar with the matter who requested anonymity because they were not authorised to speak publicly. The AI code of practice will not be legally binding when it takes effect late next year, but it will provide firms with a checklist they can use to demonstrate their compliance. A company claiming to follow the law while ignoring the code could face a legal challenge. "The code of practice is crucial. If we get it right, we will be able to continue innovating," said Boniface de Champris, a senior policy manager at trade organisation CCIA Europe, whose members include Amazon, Google, and Meta . "If it's too narrow or too specific, that will become very difficult," he added. DATA SCRAPING Companies such as Stability AI and OpenAI have faced questions over whether using bestselling books or photo archives to train their models without their creators' permission is a breach of copyright. Under the AI Act, companies will be obliged to provide "detailed summaries" of the data used to train their models. 
In theory, a content creator who discovered their work had been used to train an AI model may be able to seek compensation, although this is being tested in the courts. Some business leaders have said the required summaries need to contain scant details in order to protect trade secrets, while others say copyright-holders have a right to know if their content has been used without permission. OpenAI, which has drawn criticism for refusing to answer questions about the data used to train its models, has also applied to join the working groups, according to a person familiar with the matter, who declined to be named. Google has also submitted an application, a spokesman told Reuters. Meanwhile, Amazon said it hopes to "contribute our expertise and ensure the code of practice succeeds". Maximilian Gahntz, AI policy lead at the Mozilla Foundation, the non-profit organisation behind the Firefox web browser, expressed concern that companies are "going out of their way to avoid transparency". "The AI Act presents the best chance to shine a light on this crucial aspect and illuminate at least part of the black box," he said. BIG BUSINESS AND PRIORITIES Some in business have criticised the EU for prioritising tech regulation over innovation, and those tasked with drafting the text of the code of practice will strive for a compromise. Last week, former European Central Bank chief Mario Draghi told the bloc it needed a better coordinated industrial policy, faster decision-making and massive investment to keep pace with China and the United States. Thierry Breton - a vocal champion of EU regulation and critic of non-compliant tech companies - this week quit his role as European Commissioner for the Internal Market, after clashing with Ursula von der Leyen, the president of the bloc's executive arm. Against a backdrop of growing protectionism within the EU, homegrown tech companies are hoping carve-outs will be introduced in the AI Act to benefit up and coming European firms. 
"We've insisted these obligations need to be manageable and, if possible, adapted to startups," said Maxime Ricard, policy manager at Allied for Startups, a network of trade organisations representing smaller tech companies. Once the code is published in the first part of next year, tech companies will have until August 2025 before their compliance efforts start being measured against it. Non-profit organisations, including Access Now, the Future of Life Institute, and Mozilla have also applied to help draft the code. Gahntz said: "As we enter the stage where many of the AI Act's obligations are spelled out in more detail, we have to be careful not to allow the big AI players to water down important transparency mandates." (Reporting by Martin Coulter; Editing by Matt Scuffham and Barbara Lewis)
[7]
Tech giants push to dilute Europe's AI Act
LONDON, Sept 20 (Reuters) - The world's biggest technology companies have embarked on a final push to persuade the European Union to take a light-touch approach to regulating artificial intelligence as they seek to fend off the risk of billions of dollars in fines. EU lawmakers in May agreed the AI Act, the world's first comprehensive set of rules governing the technology, following months of intense negotiations between different political groups. Advertisement · Scroll to continue But until the law's accompanying codes of practice have been finalised, it remains unclear how strictly rules around "general purpose" AI (GPAI) systems, such as OpenAI's ChatGPT will be enforced and how many copyright lawsuits and multi-billion dollar fines companies may face. The EU has invited companies, academics, and others to help draft the code of practice, receiving nearly 1,000 applications, an unusually high number according to a source familiar with the matter who requested anonymity because they were not authorised to speak publicly. Advertisement · Scroll to continue The AI code of practice will not be legally binding when it takes effect late next year, but it will provide firms with a checklist they can use to demonstrate their compliance. A company claiming to follow the law while ignoring the code could face a legal challenge. "The code of practice is crucial. If we get it right, we will be able to continue innovating," said Boniface de Champris, a senior policy manager at trade organisation CCIA Europe, whose members include Amazon (AMZN.O), opens new tab, Google (GOOGL.O), opens new tab, and Meta (META.O), opens new tab. "If it's too narrow or too specific, that will become very difficult," he added. DATA SCRAPING Companies such as Stability AI and OpenAI have faced questions over whether using bestselling books or photo archives to train their models without their creators' permission is a breach of copyright. 
Under the AI Act, companies will be obliged to provide "detailed summaries" of the data used to train their models. In theory, a content creator who discovered their work had been used to train an AI model may be able to seek compensation, although this is being tested in the courts. Some business leaders have said the required summaries need to contain scant details in order to protect trade secrets, while others say copyright-holders have a right to know if their content has been used without permission. OpenAI, which has drawn criticism for refusing to answer questions about the data used to train its models, has also applied to join the working groups, according to a person familiar with the matter, who declined to be named. Google has also submitted an application, a spokesman told Reuters. Meanwhile, Amazon said it hopes to "contribute our expertise and ensure the code of practice succeeds". Maximilian Gahntz, AI policy lead at the Mozilla Foundation, the non-profit organisation behind the Firefox web browser, expressed concern that companies are "going out of their way to avoid transparency". "The AI Act presents the best chance to shine a light on this crucial aspect and illuminate at least part of the black box," he said. BIG BUSINESS AND PRIORITIES Some in business have criticised the EU for prioritising tech regulation over innovation, and those tasked with drafting the text of the code of practice will strive for a compromise. Last week, former European Central Bank chief Mario Draghi told the bloc it needed a better coordinated industrial policy, faster decision-making and massive investment to keep pace with China and the United States. Thierry Breton - a vocal champion of EU regulation and critic of non-compliant tech companies - this week quit his role as European Commissioner for the Internal Market, after clashing with Ursula von der Leyen, the president of the bloc's executive arm. 
Against a backdrop of growing protectionism within the EU, homegrown tech companies are hoping carve-outs will be introduced in the AI Act to benefit up and coming European firms. "We've insisted these obligations need to be manageable and, if possible, adapted to startups," said Maxime Ricard, policy manager at Allied for Startups, a network of trade organisations representing smaller tech companies. Once the code is published in the first part of next year, tech companies will have until August 2025 before their compliance efforts start being measured against it. Non-profit organisations, including Access Now, the Future of Life Institute, and Mozilla have also applied to help draft the code. Gahntz said: "As we enter the stage where many of the AI Act's obligations are spelled out in more detail, we have to be careful not to allow the big AI players to water down important transparency mandates." Reporting by Martin Coulter; Editing by Matt Scuffham and Barbara Lewis Our Standards: The Thomson Reuters Trust Principles., opens new tab
[8]
Tech giants push to dilute Europe's AI Act
The world's biggest technology companies have embarked on a final push to persuade the European Union to take a light-touch approach to regulating artificial intelligence as they seek to fend off the risk of billions of dollars in fines.

EU lawmakers in May agreed the AI Act, the world's first comprehensive set of rules governing the technology, following months of intense negotiations between different political groups.

But until the law's accompanying codes of practice have been finalised, it remains unclear how strictly rules around "general purpose" AI (GPAI) systems, such as OpenAI's ChatGPT, will be enforced and how many copyright lawsuits and multi-billion-dollar fines companies may face.

The EU has invited companies, academics and others to help draft the code of practice, receiving nearly 1,000 applications, an unusually high number according to a source familiar with the matter who requested anonymity because they were not authorised to speak publicly.

The AI code of practice will not be legally binding when it takes effect late next year, but it will provide firms with a checklist they can use to demonstrate their compliance. A company claiming to follow the law while ignoring the code could face a legal challenge.

"The code of practice is crucial. If we get it right, we will be able to continue innovating," said Boniface de Champris, a senior policy manager at trade organisation CCIA Europe, whose members include Amazon, Google and Meta. "If it's too narrow or too specific, that will become very difficult," he added.

Data scraping

Companies such as Stability AI and OpenAI have faced questions over whether using bestselling books or photo archives to train their models without their creators' permission is a breach of copyright. Under the AI Act, companies will be obliged to provide "detailed summaries" of the data used to train their models. In theory, a content creator who discovered their work had been used to train an AI model may be able to seek compensation, although this is being tested in the courts.

Some business leaders have said the required summaries should contain only scant detail in order to protect trade secrets, while others say copyright holders have a right to know if their content has been used without permission.

OpenAI, which has drawn criticism for refusing to answer questions about the data used to train its models, has also applied to join the working groups, according to a person familiar with the matter, who declined to be named. Google has also submitted an application, a spokesman told Reuters. Meanwhile, Amazon said it hopes to "contribute our expertise and ensure the code of practice succeeds".

Maximilian Gahntz, AI policy lead at the Mozilla Foundation, the non-profit organisation behind the Firefox web browser, expressed concern that companies are "going out of their way to avoid transparency". "The AI Act presents the best chance to shine a light on this crucial aspect and illuminate at least part of the black box," he said.

Big business and priorities

Some in business have criticised the EU for prioritising tech regulation over innovation, and those tasked with drafting the text of the code of practice will strive for a compromise. Last week, former European Central Bank chief Mario Draghi told the bloc it needed a better coordinated industrial policy, faster decision-making and massive investment to keep pace with China and the United States.

Thierry Breton, a vocal champion of EU regulation and critic of non-compliant tech companies, this week quit his role as European Commissioner for the Internal Market after clashing with Ursula von der Leyen, the president of the bloc's executive arm.

Against a backdrop of growing protectionism within the EU, homegrown tech companies are hoping carve-outs will be introduced in the AI Act to benefit up-and-coming European firms. "We've insisted these obligations need to be manageable and, if possible, adapted to startups," said Maxime Ricard, policy manager at Allied for Startups, a network of trade organisations representing smaller tech companies.

Once the code is published in the first part of next year, tech companies will have until August 2025 before their compliance efforts start being measured against it.

Non-profit organisations, including Access Now, the Future of Life Institute and Mozilla, have also applied to help draft the code. "As we enter the stage where many of the AI Act's obligations are spelled out in more detail, we have to be careful not to allow the big AI players to water down important transparency mandates," Gahntz said.

Published - September 20, 2024 05:27 pm IST
Major technology companies are pushing for changes to the European Union's AI Act, aiming to reduce regulations on foundation models. This effort has sparked debate about balancing innovation with potential risks of AI technology.
As the European Union (EU) prepares to implement its groundbreaking AI Act, major technology companies are intensifying their efforts to influence the legislation. The AI Act, the world's first comprehensive law on artificial intelligence, has become a focal point for lobbying by tech giants who seek to dilute its stringent regulations.

At the heart of the debate are foundation models, which form the basis for various AI applications. Companies like Google, Microsoft, and OpenAI are advocating for less restrictive rules on these models, arguing that over-regulation could stifle innovation and competitiveness. They propose a tiered approach to regulation, with stricter rules applied only to the riskiest AI applications.

Critics, including some EU officials and civil society groups, worry that loosening regulations on foundation models could undermine the Act's core principles of transparency and accountability. They argue that these models, being the building blocks of AI systems, should be subject to rigorous oversight to prevent potential misuse and ensure public safety.

The tech industry's lobbying efforts are not just about regulatory compliance; they also reflect significant economic interests. With the global AI market projected to reach $190 billion by 2025, companies are keen to secure their positions in this lucrative field. The EU's regulations could set a global precedent, potentially influencing AI governance worldwide.

The push for changes has created a complex political landscape within the EU. While some member states are receptive to the tech industry's arguments, others remain committed to stringent regulations. With the Act itself agreed in May, attention has shifted to finalising its accompanying code of practice, which will determine how strictly its obligations are applied.
As discussions continue, the central challenge remains finding a balance between fostering innovation and ensuring adequate safeguards. The outcome of these negotiations will likely shape the future of AI development and deployment not only in Europe but potentially across the globe, as other jurisdictions look to the EU's approach as a possible model for their own AI regulations.