Italy closes antitrust probes into DeepSeek, Mistral, and Nova AI over hallucination warnings


Italy's competition authority has closed investigations into three AI companies after securing binding commitments on transparency around AI hallucinations. DeepSeek, Mistral AI, and Nova AI must now display permanent warnings about inaccurate or misleading content, setting a new benchmark for consumer protection in AI chatbot services across Europe.

AGCM Sets New Standard for AI Hallucination Transparency

Italy's competition and consumer protection authority, the AGCM, has closed antitrust probes into AI firms DeepSeek, Mistral AI, and Nova AI after each company agreed to binding commitments designed to address AI hallucination risks [1]. The investigations targeted China's DeepSeek, France's Mistral AI, and Turkey's Scaleup Yazilim (operator of Nova AI) over their failure to adequately warn users that their chatbot services could generate inaccurate or misleading content [2]. This marks the first time a European regulator has extracted specific, binding commitments on hallucination transparency from AI companies across three different jurisdictions simultaneously, applying the same standard to all [1].


Understanding the Consumer Protection Obligation

The three cases, PS12942 (DeepSeek), PS12968 (Mistral Le Chat), and PS12973 (Nova AI), were opened because the chatbot providers failed to inform users clearly, immediately, and intelligibly about AI hallucinations [1]. The AGCM determined this constituted a potentially unfair commercial practice under Articles 20, 21, and 22 of Italy's Consumer Code, preventing users from making informed decisions about whether to use the services [1]. The authority emphasized particular concern in high-stakes areas such as health, finance, and law, where overreliance on AI outputs could cause direct harm. None of the cases resulted in formal infringement findings or fines, as all were resolved through the commitment mechanism under Article 27(7) of the Consumer Code [1]. However, failure to honor the commitments within the 120-day implementation window would reopen the investigations and expose each company to fines of up to approximately $11.6 million [1].

Company-Specific Commitments and Disclaimers on Chatbot Services

DeepSeek, operated by Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, agreed to the most comprehensive package of the three companies [1]. The commitments include prominent warnings about AI hallucination risks added directly to its chat interfaces and websites in Italian, full Italian-language translation of relevant disclosures, internal compliance training workshops, and a technical commitment to invest in reducing hallucination rates [1]. The AGCM explicitly acknowledged that current technology cannot eliminate AI hallucinations entirely, making DeepSeek's technical investment a forward-looking obligation [2].

Mistral AI's commitments for Le Chat, set out in AGCM decision No. 31864, cover four areas: an in-chat warning stating 'Le Chat may make mistakes. Please check responses'; strengthened terms of service, localized into Italian, with explicit reference to potentially unreliable AI outputs; improved accessibility of those terms throughout the user journey, including the homepage, login, registration, app store pages, and chat interface; and full Italian translation of its website and help center [1]. The emphasis was on contextual transparency: users must be warned at the moment and place where the risk materializes, not merely in terms of service buried in sign-up flows [1].

Nova AI's commitments addressed two distinct transparency failures [1]. Beyond standard hallucination warnings, Scaleup Yazilim committed to making explicit that Nova AI functions as a cross-platform aggregator providing a single interface for accessing multiple underlying AI models, including ChatGPT, Gemini, Claude, and DeepSeek [2]. The company must also disclose that it does not itself aggregate or process responses from these models, addressing concerns that users may have believed they were interacting with a single, proprietary AI [1].

Implications for AI Assistants and Consumer Rights

The Italian antitrust action establishes a transferable framework that other regulators may adopt [1]. The conceptual argument is straightforward: if a consumer product can cause harm through user overreliance on its outputs, then informing users of that risk at the point of use becomes a consumer protection obligation, not optional transparency [1]. This precedent could influence how AI assistants across Europe handle transparency obligations, particularly as the AGCM has been among Europe's most aggressive regulators in AI consumer protection [1]. The three companies have now committed to better informing users about hallucination risks via their websites and apps, adding permanent disclaimers to their chatbot services [2]. As AI adoption accelerates, regulators and industry observers will be watching whether these transparency standards become the baseline expectation for all chatbot providers operating in European markets.
