2 Sources
[1]
Italy's antitrust authority closes probes into DeepSeek, Mistral, and Nova AI
The AGCM accepted binding commitments from all three chatbot providers, establishing a concrete benchmark for what 'adequate' hallucination transparency must look like in practice, and a 120-day compliance window before potential fines.

Italy's competition and consumer protection authority, the AGCM, has closed its investigations into three AI chatbot providers: China's DeepSeek, France's Mistral AI, and Turkey's Scaleup Yazilim (operator of Nova AI). Each company agreed to binding commitments designed to improve how users are warned about the risk of AI hallucinations. The closures were published in the AGCM's official bulletin.

The three cases, PS12942 (DeepSeek), PS12968 (Mistral Le Chat), and PS12973 (Nova AI), were each opened on the basis that the companies' chatbots had failed to inform users clearly, immediately, and intelligibly that their AI systems could generate inaccurate, misleading, or entirely fabricated content. That failure, in the AGCM's view, constituted a potentially unfair commercial practice under Articles 20, 21, and 22 of Italy's Consumer Code, because it prevented users from making informed decisions about whether to use the services, particularly in high-stakes areas such as health, finance, and law, where overreliance on AI outputs could cause direct harm.

None of the three cases resulted in a formal finding of infringement or a fine. All three were resolved through the commitment mechanism available under Article 27(7) of the Consumer Code, under which companies propose remedies the authority deems sufficient to address its concerns. The AGCM accepted those proposals. Non-compliance with the commitments within a 120-day window, however, would reopen the cases and expose each company to fines of up to approximately $11.6 million. The commitments differ by company, reflecting the specific transparency failures identified in each case.
DeepSeek, operated by Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, agreed to the broadest package: prominent warnings about hallucination risk added directly to its chat interfaces and website in Italian, a full Italian-language translation of relevant disclosures, internal compliance training workshops, and, unusually, an active technical commitment to invest in reducing hallucination rates. The AGCM explicitly acknowledged that current technology cannot eliminate hallucinations entirely, making DeepSeek's technical commitment a forward-looking obligation rather than a present-state claim. DeepSeek also agreed to submit a full compliance report to the AGCM within the 120-day deadline.

For Mistral's Le Chat, the French company's commitments developed along four lines under AGCM decision No. 31864: the inclusion of in-chat disclaimers (specifically phrasing to the effect of 'Le Chat may make mistakes. Please check responses'); strengthening and Italian localisation of its terms of service, with explicit reference to the potential unreliability of outputs; improved accessibility of those terms throughout the user journey, including the homepage, login, registration, app store pages, and the chat interface itself; and a full Italian translation of its website and help centre. The AGCM's emphasis was on what it called 'contextual' transparency: users must be warned at the moment and place where risk materialises, not merely in terms and conditions buried at the end of a sign-up flow.

For Nova AI, operated by Scaleup Yazilim Hizmetleri, the commitments addressed two distinct transparency failures. The first was the same as in the other cases: warnings about hallucination risk had been absent from the chat interface.
The second was specific to Nova AI's product architecture: the service is a cross-platform aggregator that provides a single interface for accessing multiple underlying AI models, including ChatGPT, Gemini, Claude, and DeepSeek, but this was not made clear to users, who may have believed they were interacting with a single, proprietary AI. Scaleup committed to making the service's aggregator nature explicit, including disclosing that it does not itself aggregate or process the responses from the underlying models, alongside the standard hallucination disclosure requirements.

The AGCM's three-case sweep is the first time a European regulator has extracted binding, specific commitments from AI companies on hallucination disclosure as a consumer protection obligation, and the first to do so simultaneously across companies from three different jurisdictions (China, France, Turkey), applying the same standard to all. The conceptual framework Italy has established is transferable. The argument is simple: if a consumer product can cause harm through user overreliance on its outputs, then informing users of that risk at the point of use is a basic consumer protection obligation, not optional transparency.

The AGCM has been among Europe's most aggressive regulators in the AI consumer protection space. Alongside the hallucination probes, the authority launched a separate abuse-of-dominance investigation in July 2025 into Meta's integration of Meta AI into WhatsApp (case A576), imposing interim measures in December 2025 to suspend WhatsApp Business Solution terms that blocked rival AI assistants from the platform. The European Commission opened its own antitrust case into Meta's WhatsApp AI integration in December 2025. Italy is consistently moving faster than Brussels.
The practical standard Italy has now articulated through these commitments, that hallucination warnings must be contextual, meaning present in the chat interface at the moment of use rather than buried in terms of service, is likely to inform how other EU regulators, and eventually the European Commission under the AI Act's transparency obligations for general-purpose AI, approach the same question. Article 53 of the AI Act requires providers of general-purpose AI models to document and provide adequate information about their models' capabilities and limitations. The AGCM's consumer-code enforcement arrives first and sets a concrete precedent for what 'adequate' means in practice. For AI companies operating in Europe, the message is clear: a disclaimer in the terms of service no longer satisfies the obligation. The warning must be where the user is, at the moment the risk is live.
[2]
Italy closes antitrust probes into AI firms after commitments on 'hallucination' risks
Italy's antitrust regulator has concluded probes into three AI firms. DeepSeek from China, Mistral AI from France, and Scaleup Yazilim Hizmetleri from Turkey faced scrutiny over AI hallucinations. The companies have now committed to better informing users about potential inaccuracies. They will display permanent disclaimers on their chatbot services. DeepSeek will also invest in technology to reduce these risks.

Italy's antitrust authority said on Thursday it had closed investigations into three AI companies over allegedly unfair commercial practices involving generative artificial intelligence, after accepting binding commitments from them. The regulator, known as the AGCM, also polices consumer rights. It said it had targeted China's DeepSeek, France's Mistral AI SAS and Turkey's Scaleup Yazilim Hizmetleri Anonim Şirketi over risks of so-called AI hallucinations, the generation of inaccurate or misleading content.

In response, the three companies have agreed to better inform users about hallucination risks via their websites and apps, adding permanent disclaimers to their chatbot services, the authority said. DeepSeek also agreed to invest in technology to reduce the risk of hallucinations, while acknowledging that current technology cannot prevent them entirely. As part of its commitments, NOVA AI, the cross-platform chatbot service offered by Scaleup, agreed to make clear to consumers that its service provides a single interface for accessing several chatbots and does not aggregate or process their responses, the AGCM said.
Italy's competition authority has closed investigations into three AI companies after securing binding commitments on transparency around AI hallucinations. DeepSeek, Mistral AI, and Nova AI must now display permanent warnings about inaccurate or misleading content, setting a new benchmark for consumer protection in AI chatbot services across Europe.
Italy's competition and consumer protection authority, the AGCM, has closed antitrust probes into AI firms DeepSeek, Mistral AI, and Nova AI after each company agreed to binding commitments designed to address AI hallucination risks [1]. The investigations targeted China's DeepSeek, France's Mistral AI, and Turkey's Scaleup Yazilim (operator of Nova AI) over their failure to adequately warn users that their chatbot services could generate inaccurate or misleading content [2]. This marks the first time a European regulator has extracted specific, binding commitments on hallucination transparency from AI companies across three different jurisdictions simultaneously, applying the same standard to all [1].
The three cases, PS12942 (DeepSeek), PS12968 (Mistral Le Chat), and PS12973 (Nova AI), were opened because the chatbot providers failed to inform users clearly, immediately, and intelligibly about AI hallucinations [1]. The AGCM determined this constituted a potentially unfair commercial practice under Articles 20, 21, and 22 of Italy's Consumer Code, preventing users from making informed decisions about whether to use the services [1]. The authority emphasized particular concern in high-stakes areas such as health, finance, and law, where overreliance on AI outputs could cause direct harm. None of the cases resulted in formal infringement findings or fines, as all were resolved through the commitment mechanism under Article 27(7) of the Consumer Code [1]. However, non-compliance within a 120-day window would reopen investigations and expose each company to fines of up to approximately $11.6 million [1].

DeepSeek, operated by Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, agreed to the most comprehensive package among the three companies [1]. The commitments include prominent warnings about AI hallucination risks added directly to chat interfaces and websites in Italian, full Italian-language translation of relevant disclosures, internal compliance training workshops, and an active technical commitment to invest in reducing hallucination rates [1]. The AGCM explicitly acknowledged that current technology cannot eliminate AI hallucinations entirely, making DeepSeek's technical investment a forward-looking obligation [2].
Mistral AI's commitments for Le Chat developed along four lines under AGCM decision No. 31864: in-chat warnings stating 'Le Chat may make mistakes. Please check responses'; strengthening and Italian localization of terms of service, with explicit reference to potentially unreliable AI outputs; improved accessibility of those terms throughout the user journey, including the homepage, login, registration, app store pages, and chat interface; and full Italian translation of its website and help center [1]. The emphasis was on contextual transparency: users must be warned at the moment and place where risk materializes, not merely in terms of service buried in sign-up flows [1].

Nova AI's commitments addressed two distinct transparency failures [1]. Beyond standard hallucination warnings, Scaleup Yazilim committed to making explicit that Nova AI functions as a cross-platform aggregator providing a single interface for accessing multiple underlying AI models, including ChatGPT, Gemini, Claude, and DeepSeek [2]. The company must also disclose that it does not itself aggregate or process responses from these models, addressing concerns that users may have believed they were interacting with a single, proprietary AI [1].
The Italian antitrust action establishes a transferable framework that other regulators may adopt [1]. The conceptual argument is straightforward: if a consumer product can cause harm through user overreliance on its outputs, then informing users of that risk at the point of use becomes a consumer protection obligation, not optional transparency [1]. This precedent could influence how AI assistants across Europe handle transparency obligations, particularly as the AGCM has been among Europe's most aggressive regulators in AI consumer protection [1]. The three companies have now committed to better inform users about hallucination risks via their websites and apps, adding permanent disclaimers to their chatbot services [2]. As AI adoption accelerates, regulators and industry observers will be watching whether these transparency standards become the baseline expectation for all chatbot providers operating in European markets.