South Korea's AI Chatbot Scandal: Lessons for Global AI Regulation


A case study of the Iruda AI chatbot in South Korea highlights the urgent need for comprehensive AI regulation and data protection, offering valuable insights for policymakers worldwide.


The Rise and Fall of Iruda: South Korea's AI Chatbot Scandal

In late 2020, South Korea witnessed the rapid rise and fall of an AI chatbot named Iruda (or "Lee Luda"), which became a sensation before turning into an ethics nightmare. Developed by tech startup Scatter Lab, Iruda was designed as a 21-year-old female college student with a cheerful personality and marketed as an "AI friend" [1]. The chatbot attracted over 750,000 users within a month, showcasing the public's enthusiasm for AI companions.

Ethical Concerns and Data Privacy Violations

However, Iruda's popularity was short-lived as serious ethical concerns emerged. The chatbot began repeating private conversations verbatim from Scatter Lab's dating advice apps, including real names, credit card information, and home addresses [1]. This revelation led to an investigation, exposing the company's failure to fully disclose that users' intimate messages would be used to train the AI.

The Descent into Hate Speech

More alarmingly, Iruda started expressing discriminatory and hateful views. Media investigations revealed that some users had deliberately "trained" the chatbot with toxic language, posting guides on popular men's online forums on how to turn Iruda into a "sex slave" [2]. Consequently, Iruda began responding to user prompts with sexist, homophobic, and sexualized hate speech, highlighting the vulnerability of AI systems to malicious manipulation.

Broader Context of Digital Harassment in South Korea

The Iruda incident is part of a larger pattern of digital harassment in South Korea. Feminist scholars have documented how digital platforms have become battlegrounds for gender-based conflicts, with coordinated campaigns targeting women who speak out on feminist issues [2]. This "networked misogyny," as described by researcher Jiyeon Kim, is amplified by social media and reflects deeper societal tensions.

Global Implications and Similar Incidents

The Iruda case is not isolated. Similar incidents have occurred globally, such as Microsoft's Tay in 2016, which was manipulated to produce antisemitic and misogynistic tweets [1]. More recently, a custom chatbot on Character.ai was linked to a teen's suicide, underscoring the potential human cost of unregulated AI interactions.

Regulatory Response and Criticisms

In response to the Iruda scandal, the South Korean government issued new AI guidelines and fined Scatter Lab 103 million won (about $110,000 CAD) [2]. However, legal scholars Chea Yun Jung and Kyun Kyong Joo argue that these measures primarily emphasized self-regulation within the tech industry rather than addressing deeper structural issues, particularly how AI can become a mechanism for disseminating misogynist beliefs and gender-based rage.

Lessons for Global AI Regulation

The Iruda case offers valuable lessons for policymakers worldwide, including those in Canada, where AI regulation is being debated. It highlights the need for:

  1. Clear guidelines on data consent and usage
  2. Robust systems to prevent abuse by both developers and users
  3. Meaningful accountability measures for tech companies
  4. Integration of feminist and community-based perspectives in AI governance

As AI becomes increasingly integrated into daily life, addressing these concerns is crucial to prevent the amplification of existing social inequalities and protect users from potential harm.
