Italy Fines AI Chatbot Replika's Developer $5.6 Million for Data Privacy Violations

Italy's data protection agency has imposed a €5 million fine on Luka Inc., the developer of AI chatbot Replika, for violating user data protection rules. The fine follows an investigation into the company's practices, particularly concerning children's access to the service.

Italy's Data Watchdog Cracks Down on AI Chatbot Replika

In a significant move highlighting the growing scrutiny of AI technologies, Italy's data protection agency, Garante, has imposed a €5 million ($5.64 million) fine on Luka Inc., the developer of the AI chatbot Replika. This action comes as part of a broader effort to enforce data privacy regulations in the rapidly evolving field of artificial intelligence.

Replika: The AI 'Virtual Friend'

Replika, launched in 2017 by San Francisco-based startup Luka Inc., offers users customized avatars capable of engaging in conversations. Marketed as a 'virtual friend' that can enhance users' emotional well-being, the platform has gained popularity but has also raised concerns about data privacy and child safety.

Investigation Findings and Violations

The fine follows an investigation by Garante, which uncovered several breaches of data protection rules:

  1. Lack of Legal Basis: Replika was found to be processing users' data without a proper legal foundation.
  2. Absence of Age Verification: The platform had no system in place to verify users' ages, potentially exposing children to inappropriate content.
  3. Prior Service Suspension: In February 2023, Garante had already ordered Replika to suspend its service in Italy, citing specific risks to children.

Broader Implications for AI Regulation

This case is part of a larger trend of increased regulatory scrutiny of AI platforms in Europe:

  1. Further Investigation: Garante has announced a separate investigation into whether Replika's generative AI system complies with EU privacy rules, particularly regarding the training of its language model.
  2. Proactive Regulation: The Italian watchdog is emerging as one of the EU's most proactive regulators in assessing AI platforms' compliance with data privacy rules.
  3. Previous Actions: Last year, Garante fined ChatGPT maker OpenAI €15 million after briefly banning the chatbot in Italy over alleged breaches of EU privacy rules.

Industry Response and Future Outlook

At the time of reporting, Replika had not responded to requests for comment on the fine and the investigation's findings. This case underscores the challenges AI companies face in navigating the complex landscape of data protection regulations, especially when their services involve personal data and potential interaction with minors.

The action against Replika serves as a reminder to AI developers and companies of the importance of building robust data protection and age verification systems into their products from the outset. As AI technologies continue to advance and integrate into daily life, we can expect to see more regulatory actions and the potential development of more comprehensive AI-specific regulations across the EU and globally.
