Curated by THEOUTPOST
On Fri, 20 Dec, 4:03 PM UTC
18 Sources
[1]
OpenAI Faces €15 Million Fine From Italy's Privacy Authority Over ChatGPT Data Collection Practices
OpenAI is growing its technology and extending into more spaces in an attempt to reach a wider user base. It is also working towards transitioning into a fully for-profit organization, which is inviting legal trouble from Elon Musk. With AI advancement, there is a growing need to keep an eye on the ethical aspects of the technology and to ensure that users' privacy remains intact and that no data violations occur. As the company continues its rapid expansion, it faces mounting pressure from authorities to adhere to laws and regulations. Recently, it has landed in hot water with Italy's authorities and is facing a hefty fine over its data collection practices. Italy's data protection authority, the Garante, has levied a penalty of 15 million euros (approximately $15.58 million) on OpenAI after carrying out an investigation into the training process for ChatGPT. According to the authority, OpenAI failed to be transparent with users about using personal data to train ChatGPT and lacked a legal basis for doing so. The investigation also determined that the age verification process meant to protect young children from inappropriate AI content was weak. The investigators said OpenAI also failed to notify the authority of a security breach that occurred in March 2023. Its training process thus stood in violation of the European Union's General Data Protection Regulation (GDPR). In addition to the penalty, OpenAI has been ordered to use different communication outlets to explain to the public how ChatGPT works, with the campaign to run for six months. The communication must cover the company's data collection methods, its training model, users' rights, and data access. This would leave both users and non-users of ChatGPT better informed about how the models work and better positioned to exercise their rights regarding the use of personal data. OpenAI has called the decision disproportionate, arguing that the fine far exceeds the revenue it made in Italy during the relevant period, and said it intends to appeal. OpenAI further expressed its commitment to offering AI solutions that respect user privacy rights. While OpenAI pledges to adhere to data protection laws, the fine highlights the ongoing tensions between companies advancing AI technology and regulatory authorities.
[2]
Italy slaps OpenAI with a €15M fine over GDPR breach in ChatGPT
Italy fined OpenAI €15 million ($15.66 million) for violations of personal data privacy in its ChatGPT application, according to Reuters. The Italian data protection authority, Garante, concluded that OpenAI processed user data unlawfully and failed to ensure adequate age verification. The fine, stemming from a 2023 investigation, emphasizes the seriousness of data privacy compliance under EU regulations. The penalty follows the Garante's investigation, which revealed OpenAI's processing of user personal data without a sufficient legal basis. Additionally, the company did not uphold transparency principles as required by the EU's General Data Protection Regulation (GDPR). OpenAI's previous failure to report a security breach in March 2023 also contributed to this decision. "Inadequate mechanisms for age verification" heighten the risk of exposing children under 13 to inappropriate AI-generated content, the Garante noted. In response to the fine, OpenAI criticized the ruling as "disproportionate" and announced plans to appeal. The company argues that the penalty is nearly 20 times its revenue in Italy during the investigation period. As part of the ruling, OpenAI is mandated to conduct a six-month awareness campaign across various media outlets to explain how ChatGPT functions. This campaign will specifically address data collection practices, including how both user and non-user data is utilized for training algorithms, and outline users' rights to object, rectify, or delete their personal information. Italy's proactive regulatory stance marks it as one of the EU's leading authorities in enforcing compliance with data privacy rules. This fine is not the first action taken against OpenAI; the Garante temporarily banned ChatGPT in March 2023 over similar concerns before access was restored when OpenAI addressed the issues related to user consent for data usage. OpenAI defended its practices, emphasizing its commitment to privacy and asserting that the ruling undermines Italy's ambitions in artificial intelligence. "When the Garante ordered us to stop offering ChatGPT in Italy in 2023, we worked with them to reinstate it a month later," an OpenAI spokesperson told the Associated Press. "They've since recognized our industry-leading approach to protecting privacy in AI, yet this fine is nearly 20 times the revenue we made in Italy during the relevant period." The Garante assessed the size of OpenAI's fine considering the company's cooperative approach during the investigation, suggesting that the penalty could have been significantly higher. In conjunction with the fine, a recent opinion from the European Data Protection Board (EDPB) clarifies the implications of unauthorized personal data processing in AI models. It states that if anonymization occurs prior to any operational phase of the AI model, then GDPR violations may not apply to that model's subsequent operations. However, if personal data is reprocessed after anonymization, the GDPR does apply.
[3]
Italy fines OpenAI €15 million after ChatGPT data privacy probe
The Italian Data Protection Authority found OpenAI used personal data to train their AI without "an adequate legal basis" for doing so. Italy's data protection watchdog has fined OpenAI €15 million after completing a personal data collection probe into the company's artificial intelligence chatbot, ChatGPT. The Italian Data Protection Authority (Garante) said OpenAI used personal data to train ChatGPT "without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users". OpenAI also didn't provide an "adequate age verification system" to prevent users under 13 years old from being exposed to inappropriate AI-generated content, the investigation continued. The Italian authority is asking OpenAI to launch a six-month campaign in local media to raise awareness of how the company collects personal data. "ChatGPT users and non-users should be made aware of how to oppose the training of generative artificial intelligence with their personal data and, therefore, be effectively placed in the position to exercise their rights under the General Data Protection Regulations (GDPR)," the report read.
OpenAI brands decision 'disproportionate'
Garante had previously put a temporary block on ChatGPT due to privacy concerns while the authority investigated a possible data breach in 2023. In an emailed statement, OpenAI dubbed the decision "disproportionate" and said it will appeal. An OpenAI spokesperson said the fine was "nearly 20 times" the revenue it made in Italy during the same year. The company added that it remained "committed to working with privacy authorities worldwide to offer beneficial AI that respects privacy rights". Regulators in the US and Europe have been examining OpenAI and other companies that have played a key part in the AI boom, while governments around the world have been drawing up rules to protect against risks posed by AI systems, led by the European Union's AI Act, a comprehensive rulebook for artificial intelligence.
[4]
Italy Fines OpenAI €15 Million for ChatGPT GDPR Data Privacy Violations
Italy's data protection authority has fined ChatGPT maker OpenAI €15 million ($15.66 million) over how the generative artificial intelligence application handles personal data. The fine comes nearly a year after the Garante found that ChatGPT processed users' information to train its service in violation of the European Union's General Data Protection Regulation (GDPR). The authority said OpenAI did not notify it of a security breach that took place in March 2023, and that it processed the personal information of users to train ChatGPT without having an adequate legal basis to do so. It also accused the company of going against the principle of transparency and related information obligations toward users. "Furthermore, OpenAI has not provided for mechanisms for age verification, which could lead to the risk of exposing children under 13 to inappropriate responses with respect to their degree of development and self-awareness," the Garante said. Besides levying a €15 million fine, the company has been ordered to carry out a six-month-long communication campaign on radio, television, newspapers, and the internet to promote public understanding of how ChatGPT works. This specifically includes the nature of the data collected, from both users and non-users, for the purpose of training its models, and the rights that users can exercise to object to, rectify, or delete that data. "Through this communication campaign, users and non-users of ChatGPT will have to be made aware of how to oppose generative artificial intelligence being trained with their personal data and thus be effectively enabled to exercise their rights under the GDPR," the Garante added. Italy was the first country to impose a temporary ban on ChatGPT in late March 2023, citing data protection concerns. Nearly a month later, access to ChatGPT was reinstated after the company addressed the issues raised by the Garante. In a statement shared with the Associated Press, OpenAI called the decision disproportionate and said it intends to appeal, stating the fine is nearly 20 times the revenue it made in Italy during the period in question. It further said it is committed to offering beneficial artificial intelligence that abides by users' privacy rights. The ruling also follows an opinion from the European Data Protection Board (EDPB) that an AI model that unlawfully processes personal data but is subsequently anonymized prior to deployment does not constitute a violation of the GDPR. "If it can be demonstrated that the subsequent operation of the AI model does not entail the processing of personal data, the EDPB considers that the GDPR would not apply," the Board said. "Hence, the unlawfulness of the initial processing should not impact the subsequent operation of the model." "Further, the EDPB considers that, when controllers subsequently process personal data collected during the deployment phase, after the model has been anonymised, the GDPR would apply in relation to these processing operations." Earlier this month, the Board also published guidelines on handling requests for data transfers to authorities in non-European countries in a manner that complies with the GDPR. The guidelines are subject to public consultation until January 27, 2025. "Judgements or decisions from third countries authorities cannot automatically be recognised or enforced in Europe," it said. "If an organisation replies to a request for personal data from a third country authority, this data flow constitutes a transfer and the GDPR applies."
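To make the conditional logic of the EDPB opinion quoted above easier to follow, here is a minimal Python sketch of one possible reading of it; the function name and boolean inputs are illustrative assumptions, and this is a simplification of the opinion, not legal guidance.

    # A simplified, illustrative reading of the EDPB opinion quoted above.
    # Function and argument names are hypothetical; this is not legal advice.
    def gdpr_applies_to_model_operation(model_fully_anonymised: bool,
                                        deployment_processes_personal_data: bool) -> bool:
        """Return True if, under this simplified reading, the GDPR applies."""
        if not model_fully_anonymised:
            # Personal data unlawfully used in training is still present: GDPR applies.
            return True
        # Anonymised model: the GDPR does not apply to the model's operation as such,
        # but it does apply to any personal data processed during deployment.
        return deployment_processes_personal_data

    # Example: anonymised model, but user prompts containing personal data are stored.
    print(gdpr_applies_to_model_operation(True, True))   # True
    print(gdpr_applies_to_model_operation(True, False))  # False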
[5]
Italy's privacy watchdog fines OpenAI for ChatGPT violations over personal data | BreakingNews.ie
Italy's data protection watchdog said it has fined OpenAI 15 million euros (£12.4 million) after completing an investigation into the collection of personal data by the US artificial intelligence (AI) company's popular chatbot ChatGPT. The country's privacy watchdog, known as Garante, said its probe showed OpenAI processed users' personal data to train ChatGPT "without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users". OpenAI called the decision "disproportionate" and said it will appeal. "When the Garante ordered us to stop offering ChatGPT in Italy in 2023, we worked with them to reinstate it a month later," an OpenAI spokesperson said in a statement. "They've since recognised our industry-leading approach to protecting privacy in AI, yet this fine is nearly 20 times the revenue we made in Italy during the relevant period." OpenAI added, however, it remained "committed to working with privacy authorities worldwide to offer beneficial AI that respects privacy rights". The investigation, launched last year, also found that OpenAI did not provide an "adequate age verification system" to prevent users under 13 years of age from being exposed to inappropriate AI-generated content, the watchdog said. The Italian authority also ordered OpenAI to launch a six-month campaign on different Italian media to raise public awareness about ChatGPT, specifically in regard to data collection. The booming popularity of generative AI systems such as ChatGPT has drawn scrutiny from regulators on both sides of the Atlantic. Regulators in the US and Europe have been examining OpenAI and other companies that have played a key part in the AI boom, while governments around the world have been drawing up rules to protect against risks posed by AI systems, led by the European Union's AI Act, a comprehensive rulebook for artificial intelligence.
[6]
Italy's privacy watchdog fines OpenAI for ChatGPT's violations in collecting users' personal data
Italy's data protection watchdog said Friday it has fined OpenAI 15 million euros ($15.6 million) after wrapping up a probe into the collection of personal data by the U.S. artificial intelligence company's popular chatbot ChatGPT. The country's privacy watchdog, known as Garante, said its investigation showed that OpenAI processed users' personal data to train ChatGPT "without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users". OpenAI dubbed the decision "disproportionate" and said it will appeal. "When the Garante ordered us to stop offering ChatGPT in Italy in 2023, we worked with them to reinstate it a month later," an OpenAI spokesperson said Friday in an emailed statement. "They've since recognized our industry-leading approach to protecting privacy in AI, yet this fine is nearly 20 times the revenue we made in Italy during the relevant period." OpenAI added, however, it remained "committed to working with privacy authorities worldwide to offer beneficial AI that respects privacy rights." The investigation, launched last year, also found that OpenAI didn't provide an "adequate age verification system" to prevent users under 13 years of age from being exposed to inappropriate AI-generated content, the watchdog said. The Italian authority also ordered OpenAI to launch a six-month campaign on different Italian media to raise public awareness about ChatGPT, specifically in regard to data collection. The booming popularity of generative artificial intelligence systems like ChatGPT has drawn scrutiny from regulators on both sides of the Atlantic. Regulators in the U.S. and Europe have been examining OpenAI and other companies that have played a key part in the AI boom, while governments around the world have been drawing up rules to protect against risks posed by AI systems, led by the European Union's AI Act, a comprehensive rulebook for artificial intelligence.
[7]
Italy's privacy watchdog fines OpenAI for ChatGPT's violations in collecting users' personal data
ROME -- Italy's data protection watchdog said Friday it has fined OpenAI 15 million euros ($15.6 million) after wrapping up a probe into the collection of personal data by the U.S. artificial intelligence company's popular chatbot ChatGPT. The country's privacy watchdog, known as Garante, said its investigation showed that OpenAI processed users' personal data to train ChatGPT "without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users". OpenAI dubbed the decision "disproportionate" and said it will appeal. "When the Garante ordered us to stop offering ChatGPT in Italy in 2023, we worked with them to reinstate it a month later," an OpenAI spokesperson said Friday in an emailed statement. "They've since recognized our industry-leading approach to protecting privacy in AI, yet this fine is nearly 20 times the revenue we made in Italy during the relevant period." OpenAI added, however, it remained "committed to working with privacy authorities worldwide to offer beneficial AI that respects privacy rights." The investigation, launched last year, also found that OpenAI didn't provide an "adequate age verification system" to prevent users under 13 years of age from being exposed to inappropriate AI-generated content, the watchdog said. The Italian authority also ordered OpenAI to launch a six-month campaign on different Italian media to raise public awareness about ChatGPT, specifically in regard to data collection. The booming popularity of generative artificial intelligence systems like ChatGPT has drawn scrutiny from regulators on both sides of the Atlantic. Regulators in the U.S. and Europe have been examining OpenAI and other companies that have played a key part in the AI boom, while governments around the world have been drawing up rules to protect against risks posed by AI systems, led by the European Union's AI Act, a comprehensive rulebook for artificial intelligence. ___ AP Business Writer Kelvin Chan in London contributed to this report
[8]
Italy's privacy watchdog fines OpenAI for ChatGPT's violations in collecting users' personal data
ROME (AP) -- Italy's data protection watchdog said Friday it has fined OpenAI 15 million euros ($15.6 million) after wrapping up a probe into the collection of personal data by the U.S. artificial intelligence company's popular chatbot ChatGPT. The country's privacy watchdog, known as Garante, said its investigation showed that OpenAI processed users' personal data to train ChatGPT "without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users". OpenAI dubbed the decision "disproportionate" and said it will appeal. "When the Garante ordered us to stop offering ChatGPT in Italy in 2023, we worked with them to reinstate it a month later," an OpenAI spokesperson said Friday in an emailed statement. "They've since recognized our industry-leading approach to protecting privacy in AI, yet this fine is nearly 20 times the revenue we made in Italy during the relevant period." OpenAI added, however, it remained "committed to working with privacy authorities worldwide to offer beneficial AI that respects privacy rights." The investigation, launched last year, also found that OpenAI didn't provide an "adequate age verification system" to prevent users under 13 years of age from being exposed to inappropriate AI-generated content, the watchdog said. The Italian authority also ordered OpenAI to launch a six-month campaign on different Italian media to raise public awareness about ChatGPT, specifically in regard to data collection. The booming popularity of generative artificial intelligence systems like ChatGPT has drawn scrutiny from regulators on both sides of the Atlantic. Regulators in the U.S. and Europe have been examining OpenAI and other companies that have played a key part in the AI boom, while governments around the world have been drawing up rules to protect against risks posed by AI systems, led by the European Union's AI Act, a comprehensive rulebook for artificial intelligence. AP Business Writer Kelvin Chan in London contributed to this report
[9]
Italy's privacy watchdog fines OpenAI for ChatGPT's violations in collecting users' personal data
ROME (AP) -- Italy's data protection watchdog said Friday it has fined OpenAI EUR15 million (USD15.6 million) after wrapping up a probe into the collection of personal data by the US artificial intelligence company's popular chatbot ChatGPT. The country's privacy watchdog, known as Garante, said its investigation showed that OpenAI processed users' personal data to train ChatGPT "without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users". OpenAI dubbed the decision "disproportionate" and said it will appeal. "When the Garante ordered us to stop offering ChatGPT in Italy in 2023, we worked with them to reinstate it a month later," an OpenAI spokesperson said Friday in an emailed statement. "They've since recognized our industry-leading approach to protecting privacy in AI, yet this fine is nearly 20 times the revenue we made in Italy during the relevant period." OpenAI added, however, it remained "committed to working with privacy authorities worldwide to offer beneficial AI that respects privacy rights." The investigation, launched last year, also found that OpenAI didn't provide an "adequate age verification system" to prevent users under 13 years of age from being exposed to inappropriate AI-generated content, the watchdog said. The Italian authority also ordered OpenAI to launch a six-month campaign on different Italian media to raise public awareness about ChatGPT, specifically in regard to data collection. The booming popularity of generative artificial intelligence systems like ChatGPT has drawn scrutiny from regulators on both sides of the Atlantic.
[10]
Italy's privacy watchdog fines OpenAI for ChatGPT's violations in collecting users' personal data
ROME (AP) -- Italy's data protection watchdog said Friday it has fined OpenAI 15 million euros ($15.6 million) after wrapping up a probe into the collection of personal data by the U.S. artificial intelligence company's popular chatbot ChatGPT. The country's privacy watchdog, known as Garante, said its investigation showed that OpenAI processed users' personal data to train ChatGPT "without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users". OpenAI dubbed the decision "disproportionate" and said it will appeal. "When the Garante ordered us to stop offering ChatGPT in Italy in 2023, we worked with them to reinstate it a month later," an OpenAI spokesperson said Friday in an emailed statement. "They've since recognized our industry-leading approach to protecting privacy in AI, yet this fine is nearly 20 times the revenue we made in Italy during the relevant period." OpenAI added, however, it remained "committed to working with privacy authorities worldwide to offer beneficial AI that respects privacy rights." The investigation, launched last year, also found that OpenAI didn't provide an "adequate age verification system" to prevent users under 13 years of age from being exposed to inappropriate AI-generated content, the watchdog said. The Italian authority also ordered OpenAI to launch a six-month campaign on different Italian media to raise public awareness about ChatGPT, specifically in regard to data collection. The booming popularity of generative artificial intelligence systems like ChatGPT has drawn scrutiny from regulators on both sides of the Atlantic. Regulators in the U.S. and Europe have been examining OpenAI and other companies that have played a key part in the AI boom, while governments around the world have been drawing up rules to protect against risks posed by AI systems, led by the European Union's AI Act, a comprehensive rulebook for artificial intelligence. ___ AP Business Writer Kelvin Chan in London contributed to this report
[11]
Italy fines OpenAI 15 million euros over privacy rules breach
MILAN (Reuters) - Italy's privacy watchdog said on Friday it fined ChatGPT maker OpenAI 15 million euros ($15.58 million) after closing an investigation into use of personal data by the generative artificial intelligence application. The authority, known as Garante, is one of the European Union's most proactive regulators in assessing AI platform compliance with the bloc's data privacy regime. The Garante said it found OpenAI processed users' personal data "to train ChatGPT without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users". OpenAI had no immediate comment on Friday. It has previously said it believes its practices are aligned with the European Union's privacy laws. Last year the Italian watchdog briefly banned the use of ChatGPT in Italy over alleged breaches of EU privacy rules. The service was reactivated after Microsoft-backed OpenAI addressed issues concerning, among other things, the right of users to refuse consent for the use of personal data to train algorithms. (Reporting by Alessia Pe and Elvira Pollina; Editing by Alvise Armellini and Frances Kerry)
[12]
Italy Tells OpenAI to Explain Personal Data Use, Fines €15M
Italy's data protection authority (DPA) Garante Per La Protezione Dei Dati Personali has fined OpenAI 15 million euros for processing users' personal data to train ChatGPT without "identifying an appropriate legal basis" and violating transparency and related information obligations toward users. The DPA has instructed OpenAI to launch a six-month information campaign via radio, television, newspapers, and the internet to increase public understanding of how ChatGPT functions. This comprises disclosing information about the collection of user and non-user data for training its generative AI chatbot and the rights of data subjects (citizens), such as the rights to object, rectify, and delete their data. The fine was levied for breaking privacy laws under the General Data Protection Regulation (GDPR) of the European Union (EU), which Italy enacted by amending the Italian Personal Data Protection Code. The Italian regulator's decision follows the European Data Protection Board's (EDPB) recent opinion on the use of personal data for the development and deployment of AI models. Responding to the DPA's decision, OpenAI labelled the decision "disproportionate", explaining that the fine amounts to nearly 20 times the revenue it generated in Italy during the corresponding period, the Associated Press reported. In March 2023, the DPA opened an investigation into ChatGPT, imposing a temporary ban on OpenAI's processing of Italian users' data without any legal basis. Additionally, the regulator noted that the service's lack of age verification procedures exposed children to "inappropriate" responses. This contradicted OpenAI's terms of service, under which ChatGPT is addressed to users aged 13 and over. Besides this, the regulator flagged a data breach involving subscriber conversations and payment information. The DPA later instructed OpenAI to notify it of the company's compliance measures within 20 days, failing which a fine of 20 million euros would be levied. Later, in May 2023, after OpenAI made several changes to how the service operates, ChatGPT was reinstated in Italy. Among other changes, OpenAI introduced online forms for EU users to opt out of the data collection process and allowed users to disable their chat history on ChatGPT. Responding to the DPA's allegation that the company lacked age verification mechanisms, OpenAI also introduced a system enabling Italian users to provide their birth date to block users under 13 and to request parental permission for users under 18. In January 2024, the DPA notified OpenAI of breaches of the data protection law and gave the platform 30 days to submit its counterclaims concerning the alleged breaches. Besides Italy, countries like Poland, Japan, and Canada have also probed OpenAI over the alleged collection of individuals' sensitive data. In March 2024, the DPA launched another investigation into OpenAI's text-to-video AI tool, Sora. The regulator sought clarification on whether data collected from Italian citizens would be used to train Sora in the EU and whether the methods for informing users about such data processing comply with European regulations. Earlier, in February 2023, the DPA imposed a temporary ban on the AI-powered chatbot Replika, blocking the service from processing the personal data of Italian users. The regulator claimed the service posed risks to children, including factually incorrect and inappropriate responses, and violated the EU's data protection norms, such as ensuring transparency.
Separately, in March 2022, the DPA levied a fine of 20 million euros on Clearview AI for employing biometric data and monitoring Italian citizens. OpenAI's training of ChatGPT with users' personal data infringes on fundamental rights and freedoms, the EDPB noted in its preliminary report on OpenAI's compliance with the GDPR. Such data processing often involves 'web scraping,' which extracts information from publicly available internet sources, sometimes including sensitive user data. Additionally, as the use of personal data for training AI increases, so do concerns regarding potential data breaches and the monitoring, prediction, and analysis of human behaviour. Further, AI models trained on biased data risk reinforcing existing prejudices and discrimination. For instance, Microsoft's AI-enabled chatbot Tay began spouting racist and misogynistic tweets after learning from its interactions on Twitter (now X). Beyond the risk of using personal data in AI training, the use of non-personal data for similar purposes raises its own concerns: among other risks, the risk of deanonymization, the identification of individuals from data used to train AI models, rises as the line between personal and non-personal information in large datasets becomes increasingly hazy. Overall, countries have been aiming to safeguard their citizens' privacy and copyrights, with global calls for AI regulation amid the concerns posed by employing user data in AI training.
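As a rough illustration of the web-scraping concern described above, the following Python sketch (standard library only, with a placeholder URL) shows how a naive scraper can sweep up strings that look like personal data, such as email addresses, alongside ordinary page text; it is not OpenAI's pipeline or any regulator-endorsed method, just a minimal example of why scraped training corpora raise GDPR questions.

    # Minimal illustration of how scraping public pages can sweep up personal data.
    # The URL is a placeholder; this is not any company's actual data pipeline.
    import re
    import urllib.request

    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def scrape_page(url: str) -> str:
        """Fetch a public web page and return its raw text."""
        with urllib.request.urlopen(url) as response:
            return response.read().decode("utf-8", errors="ignore")

    def find_email_like_strings(text: str) -> list[str]:
        """Flag substrings that look like personal data (here: email addresses)."""
        return EMAIL_RE.findall(text)

    page = scrape_page("https://example.com")  # placeholder URL
    hits = find_email_like_strings(page)
    print(f"Scraped {len(page)} characters; found {len(hits)} email-like strings.")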
[13]
Italy's Privacy Watchdog Fines OpenAI for ChatGPT's Violations in Collecting Users' Personal Data
ROME (AP) -- Italy's data protection watchdog said Friday it has fined OpenAI 15 million euros ($15.6 million) after wrapping up a probe into the collection of personal data by the U.S. artificial intelligence company's popular chatbot ChatGPT. The country's privacy watchdog, known as Garante, said its investigation showed that OpenAI processed users' personal data to train ChatGPT "without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users". OpenAI dubbed the decision "disproportionate" and said it will appeal. "When the Garante ordered us to stop offering ChatGPT in Italy in 2023, we worked with them to reinstate it a month later," an OpenAI spokesperson said Friday in an emailed statement. "They've since recognized our industry-leading approach to protecting privacy in AI, yet this fine is nearly 20 times the revenue we made in Italy during the relevant period." OpenAI added, however, it remained "committed to working with privacy authorities worldwide to offer beneficial AI that respects privacy rights." The investigation, launched last year, also found that OpenAI didn't provide an "adequate age verification system" to prevent users under 13 years of age from being exposed to inappropriate AI-generated content, the watchdog said. The Italian authority also ordered OpenAI to launch a six-month campaign on different Italian media to raise public awareness about ChatGPT, specifically in regard to data collection. The booming popularity of generative artificial intelligence systems like ChatGPT has drawn scrutiny from regulators on both sides of the Atlantic. Regulators in the U.S. and Europe have been examining OpenAI and other companies that have played a key part in the AI boom, while governments around the world have been drawing up rules to protect against risks posed by AI systems, led by the European Union's AI Act, a comprehensive rulebook for artificial intelligence. AP Business Writer Kelvin Chan in London contributed to this report.
[14]
Italy fines OpenAI $15M over data protection, privacy breaches
Italy's data protection agency has fined OpenAI $15.7 million (15 million euros) and ordered the ChatGPT maker to launch a six-month public awareness campaign after a data collection probe of the firm's flagship artificial intelligence model. The Italian Data Protection Authority (IDPA), also known as the Garante, said in a Dec. 20 statement that its investigation found that OpenAI did not notify the agency about a data breach in March 2023. The watchdog said OpenAI also "processed users' personal data" to train ChatGPT without first identifying an "adequate legal basis" for the action, violating the "principle of transparency and the related information obligations towards users." The IDPA said its investigation found that OpenAI didn't have adequate age verification mechanisms to prevent underage users from using its services. "Furthermore, OpenAI has not provided mechanisms for age verification, with the consequent risk of exposing minors under 13 to responses that are unsuitable for their level of development and self-awareness," the IDPA said. As part of its corrective and sanctioning measures, the IDPA has ordered OpenAI to conduct a six-month public awareness campaign across radio, television, newspapers, and the internet, promoting "public understanding and awareness of the functioning of ChatGPT," "in particular on the collection of data from users and non-users for the training of generative artificial intelligence and the rights exercisable by the interested parties, including those of opposition, rectification and cancellation." At the end of the campaign, the IDPA said, users should be aware of how to oppose the training of generative AI with their data and how to exercise their rights under the European Union's General Data Protection Regulation (GDPR). Companies that violate the GDPR can be fined up to 20 million euros or 4% of their global annual turnover, whichever is higher. According to the IDPA, OpenAI's "collaborative attitude" during the investigation contributed to a reduction in the size of the fine. During the investigation, OpenAI moved its European headquarters to Ireland. The IDPA said its Irish counterpart, the Data Protection Commission (DPC), has become the lead supervisory authority for continuing any investigations. The IDPA's investigation began in March 2023, and the agency said it reached its conclusion after considering the Dec. 18 European Data Protection Board (EDPB) opinion on using personal data to develop and deploy AI models. In March 2023, Italy became the first Western country to temporarily block ChatGPT over privacy concerns, and the IDPA announced an investigation into suspected breaches of data privacy rules. Regulators in Italy were criticized for their ChatGPT ban. A few weeks after the initial clampdown, they said the ban would be lifted provided OpenAI met several transparency measures. On April 29, the AI chatbot was once again available in Italy. OpenAI did not immediately respond to a request for comment.
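For context on the fine ceiling mentioned above, here is a minimal Python sketch of the GDPR's upper limit on administrative fines, the higher of 20 million euros and 4% of worldwide annual turnover; the turnover figure in the example is hypothetical and not tied to OpenAI or any other company.

    # Illustrative sketch of the GDPR's upper fine limit described above: the higher
    # of EUR 20 million and 4% of worldwide annual turnover. The turnover figure
    # below is hypothetical and not tied to any real company.
    def gdpr_fine_ceiling(annual_turnover_eur: float) -> float:
        """Return the maximum administrative fine under the rule described above."""
        fixed_cap = 20_000_000            # EUR 20 million
        turnover_cap = 0.04 * annual_turnover_eur
        return max(fixed_cap, turnover_cap)

    hypothetical_turnover = 3_000_000_000  # EUR 3 billion, made-up example
    print(f"Fine ceiling: EUR {gdpr_fine_ceiling(hypothetical_turnover):,.0f}")
    # Prints: Fine ceiling: EUR 120,000,000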
[15]
Italy fines OpenAI EUR15 million after ChatGPT probe
ROME (AFP) - Italy's data protection authority on Friday said it had fined OpenAI EUR15 million over the use of personal data by ChatGPT, but the United States (US) tech firm said it would appeal. The Italian Data Protection Authority (GPDP) watchdog began an investigation in March 2023 when it temporarily blocked ChatGPT in Italy over privacy concerns, making Italy the first Western country to take action against the popular AI chatbot. Announcing the conclusion of its probe on Friday, the GPDP said it had "imposed a fine of EUR15 million on OpenAI, which was also calculated taking into account the company's cooperative attitude". It said OpenAI "did not notify the authority of the data breach it underwent in March 2023, it has processed users' personal data to train ChatGPT without first identifying an appropriate legal basis and has violated the principle of transparency and the related information obligations toward users". In addition, OpenAI "has not provided for mechanisms for age verification, which could lead to the risk of exposing children under 13 to inappropriate responses", it added. As well as the fine, the authority said it had ordered OpenAI to carry out a six-month broadcast, print and Internet campaign to promote public understanding of ChatGPT. In its own statement, OpenAI said the decision was "disproportionate" and said it would appeal. It noted how it had worked with the Italian authority after ChatGPT was suspended to secure its reinstatement after a month. "They've since recognised our industry-leading approach to protecting privacy in AI, yet this fine is nearly 20 times the revenue we made in Italy during the relevant period," it said. "We believe the Garante's (GPDP) approach undermines Italy's AI ambitions, but we remain committed to working with privacy authorities worldwide to offer beneficial AI that respects privacy rights."
[16]
Italy fines OpenAI 15 million euros over privacy rules breach
MILAN, Dec 20 (Reuters) - Italy's privacy watchdog said on Friday it fined ChatGPT maker OpenAI 15 million euros ($15.58 million) after closing an investigation into use of personal data by the generative artificial intelligence application. The authority, known as Garante, is one of the European Union's most proactive regulators in assessing AI platform compliance with the bloc's data privacy regime. The Garante said it found OpenAI processed users' personal data "to train ChatGPT without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users". OpenAI had no immediate comment on Friday. It has previously said it believes its practices are aligned with the European Union's privacy laws. Last year the Italian watchdog briefly banned the use of ChatGPT in Italy over alleged breaches of EU privacy rules. The service was reactivated after Microsoft-backed OpenAI addressed issues concerning, among other things, the right of users to refuse consent for the use of personal data to train algorithms. ($1 = 0.9626 euros) Reporting by Alessia Pe and Elvira Pollina; Editing by Alvise Armellini and Frances Kerry
[17]
Italy's Privacy Regulator Wishes OpenAI "Merry Christmas" With A €15 Million Fine
After more than a year of investigations, the Italian privacy regulator - il Garante per la protezione dei dati personali - issued a €15 million fine against OpenAI for violating privacy rules. Violations include lack of an appropriate legal basis for collecting and processing the personal data used for training their genAI models, lack of adequate information to users about the collection and use of their personal data, and lack of measures for collecting children's data lawfully. The regulator also required OpenAI to engage in a campaign to inform users about the way the company uses their data and how the technology works. OpenAI announced that they will appeal the decision. This action obviously impacts OpenAI and other genAI providers, but the most significant long-term impact will be on companies that use genAI models and systems from OpenAI and its competitors -- and that group likely includes your company. So here's what to do about it:
Job #1: Obsess about third party risk management
Using technology that is built without due regard for the protection and the fair use of personal data poses significant regulatory and ethical questions. It also increases the risk of privacy violations in the information generated by the model itself. Organizations understand the challenge: in Forrester's surveys, decision-makers consistently list privacy concerns as a top barrier to the adoption of genAI in their firms. However, there is more on the horizon: the EU AI Act, the first comprehensive and binding set of rules for governing AI risks, establishes a range of obligations for AI and genAI providers and for companies using those technologies. By August 2025, providers of general-purpose AI (GPAI) models and systems must comply with specific requirements such as sharing with users a list of the sources they used for training their models, results of testing, and copyright policies, and providing instructions about the correct implementation and expected behaviour of the technology. Users of the technology must ensure they vet their third parties carefully and collect all the relevant information and instructions to meet their own regulatory requirements. They should include both genAI providers and technology providers that have embedded genAI in their tools in this effort. This means: 1) carefully mapping technology providers that leverage genAI; 2) reviewing contracts to account for the effective use of genAI in the organization; and 3) designing a multi-faceted third party risk management process that captures critical aspects of compliance and risk management, including technical controls.
Job #2: Prepare for deeper privacy oversight
From a privacy perspective, companies using genAI models and systems must prepare to answer some difficult questions about the use of personal data in genAI models that run much deeper than just training data. Regulators might soon ask questions about companies' ability to respect users' privacy rights, such as data deletion (aka "the right to be forgotten"), data access and rectification, consent, transparency requirements, and other key privacy principles such as data minimization and purpose limitation. Regulators recommend that companies use anonymization and privacy-preserving technologies like synthetic data when training and fine-tuning models.
Firms must also: 1) evolve data protection impact assessments to cater for traditional and emerging AI privacy risks; 2) ensure they understand and govern structured and unstructured data accurately and efficiently so that they can enforce data subject rights (among other things) at all stages of model development and deployment; and 3) carefully assess the legal basis for using customers' and employees' personal data in their genAI projects and update their consent and transparency notices appropriately.
Forrester can help: Here's what to read, and if you have questions, let's talk! If you have questions about this topic, the EU AI Act, or the governance of personal data in the context of your AI and genAI projects, read my research -- How To Approach The EU AI Act and A Privacy Primer On Generative AI Governance -- and schedule a guidance session with me. I would love to talk to you.
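As a hedged sketch of the anonymization and data-minimization recommendations above, the Python example below pseudonymizes direct identifiers and drops unneeded fields before records reach a training pipeline; the field names are hypothetical, and pseudonymized data generally remains personal data under the GDPR, so this illustrates one mitigation step rather than a complete compliance solution.

    # Hedged sketch of pseudonymization plus data minimization before training data
    # is handed to a pipeline. Field names are hypothetical; pseudonymized data is
    # generally still personal data under the GDPR, so this is one mitigation step,
    # not a full anonymization or compliance process.
    import hashlib

    DIRECT_IDENTIFIERS = {"name", "email"}   # fields to pseudonymize
    UNNEEDED_FIELDS = {"phone", "address"}   # fields to drop (data minimization)

    def pseudonymize(record: dict, salt: str) -> dict:
        """Hash direct identifiers, drop unneeded fields, keep everything else."""
        cleaned = {}
        for key, value in record.items():
            if key in UNNEEDED_FIELDS:
                continue
            if key in DIRECT_IDENTIFIERS:
                cleaned[key] = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            else:
                cleaned[key] = value
        return cleaned

    record = {"name": "Jane Doe", "email": "jane@example.com",
              "phone": "555-0100", "feedback": "Great product"}
    print(pseudonymize(record, salt="rotate-this-salt"))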
[18]
Italian Authority Fines OpenAI $15.6 Million for Alleged GDPR Violations | PYMNTS.com
OpenAI said Friday (Dec. 20) that it will appeal a fine imposed by an Italian authority that alleged that the company violated the European Union's General Data Protection Regulation (GDPR). Italy's data protection agency, the Garante, fined OpenAI 15 million euros (about $15.6 million), saying it found that the company used personal data to train ChatGPT without informing consumers, Reuters reported Friday (Dec. 20). The agency also found that OpenAI did not have an adequate age verification system in place to prevent children from being exposed to inappropriate content, according to the report. The Garante said that OpenAI cooperated with its investigation, and that its cooperation was taken into account when determining the amount of the fine, per the report. Under the GDPR, companies can be fined up to 20 million euros (about $20.9 million) or 4% of their global turnover, the report said. In addition to imposing the fine, the agency ordered OpenAI to begin a six-month campaign on Italian media to raise awareness about how the company collects data to train algorithms, per the report. Reached by PYMNTS, an OpenAI spokesperson said in an emailed statement that the Garante's decision is "disproportionate" and that the company will appeal. "We believe the Garante's approach undermines Italy's AI ambitions, but we remain committed to working with privacy authorities worldwide to offer beneficial AI that respects privacy rights," the statement said. In an earlier action, in March 2023, Italy became the first Western country to outlaw ChatGPT after the Garante announced an investigation of the chatbot's alleged breach of GDPR privacy rules and age-verification practices. About a month later, in late April 2023, OpenAI said ChatGPT was again available in Italy after the company fulfilled the demands of the country's data protection authority. The OpenAI spokesperson referred to this incident in the Friday statement. "When the Garante ordered us to stop offering ChatGPT in Italy in 2023, we worked with them to reinstate it a month later," the statement said. "They've since recognized our industry-leading approach to protecting privacy in AI, yet this fine is nearly twenty times the revenue we made in Italy during the relevant period." In another, separate action involving the GDPR, the Dutch Data Protection Authority said Wednesday (Dec. 18) that it fined Netflix 4.75 million euros (about $4.95 million). The regulator said the company did not give its customers enough information about what it does with their personal data. Netflix said it objected to the decision.
Italy's data protection authority fines OpenAI €15 million for GDPR violations related to ChatGPT's data collection and processing practices, highlighting growing tensions between AI advancement and regulatory compliance.
Italy's data protection authority, Garante, has levied a €15 million ($15.66 million) fine against OpenAI for violations of personal data privacy in its ChatGPT application 1. The penalty comes after a thorough investigation into ChatGPT's training process and data collection practices, highlighting the growing tensions between AI advancement and regulatory compliance.
The Garante's investigation revealed several breaches of the European Union's General Data Protection Regulation (GDPR): processing users' personal data to train ChatGPT without an adequate legal basis, violating the principle of transparency and the related information obligations toward users, failing to notify the authority of a security breach that occurred in March 2023, and lacking an adequate age verification system to keep children under 13 from being exposed to inappropriate AI-generated content.
In addition to the fine, Garante has ordered OpenAI to conduct a six-month awareness campaign across various media outlets. This campaign aims to explain ChatGPT's functionality, data collection methods, and users' rights regarding personal data 2.
OpenAI has criticized the ruling as "disproportionate" and announced plans to appeal. The company argues that the fine is nearly 20 times its revenue in Italy during the investigation period 5. Despite this, OpenAI has expressed its commitment to working with privacy authorities worldwide to offer beneficial AI that respects privacy rights.
This case underscores the challenges faced by AI companies in navigating complex data protection regulations: the Garante is among the EU's most proactive regulators on AI and data privacy, it temporarily banned ChatGPT in March 2023 before access was restored roughly a month later, and the European Data Protection Board has now issued guidance on when the GDPR applies to AI models trained on personal data.
As AI technology continues to advance rapidly, this case highlights the ongoing challenge of balancing innovation with data protection and privacy concerns. It also serves as a reminder of the importance of compliance with data protection regulations for companies operating in the AI space.