South Korea launches world's first comprehensive AI law amid startup concerns over compliance


South Korea enacted the AI Basic Act, becoming the first nation to implement comprehensive legislation to regulate artificial intelligence. The law mandates watermarks for AI-generated content and human oversight for high-impact AI systems. However, tech startups warn that vague legal language and compliance burdens could stifle innovation, while civil society groups argue the protections don't go far enough.

South Korea Enacts World's First Comprehensive AI Law

South Korea has officially enacted the AI Basic Act, becoming the first country to implement what it bills as the world's first comprehensive AI law [1][5]. The legislation, which took effect on January 22, arrives amid growing global unease over artificially created media and automated decision-making, positioning South Korea ahead of comparable efforts like the EU AI Act, which is being applied in phases through 2027 [2]. The law represents a central pillar of South Korea's ambition to become one of the world's three leading AI powers alongside the US and China; government officials maintain the legislation is 80-90% focused on promoting industry rather than restricting it [1].

Source: Korea Times


Mandatory Watermarks for AI Content and Human Oversight Requirements

The AI Basic Act introduces strict requirements to regulate artificial intelligence services and ensure safe AI usage [5]. Companies providing AI services must add invisible digital watermarks to clearly artificial outputs such as cartoons or artwork, while realistic deepfakes require visible labels [1]. The law mandates transparency measures to prevent misinformation and deepfakes, establishing watermarks for AI content as what the Ministry of Science and ICT calls "the minimum safeguard to prevent side effects from the abuse of AI technology" [5]. High-impact AI systems, including those used for medical diagnosis, hiring, loan approvals, nuclear safety, and transport, will require operators to conduct risk assessments and document how decisions are made, with mandatory human oversight [1][2]. Companies that violate the rules face penalties of up to 30 million won (approximately $20,400, or £15,000), though the government has promised a grace period of at least a year before imposing fines [1][2].

Source: ET


Startups Sound Alarm Over Compliance Burdens and Vague Legal Language

Despite the government's assurances, tech startups have raised significant concerns about compliance burdens and the practical challenges of implementation [2]. A December survey from the Startup Alliance found that 98% of AI startups were unprepared for compliance, with co-head Lim Jung-wook expressing widespread frustration: "There's a bit of resentment. Why do we have to be the first to do this?" [1]. Companies must self-determine whether their systems qualify as high-impact AI, a process critics say is lengthy and creates uncertainty [1]. The vague legal language has left businesses in limbo, with unclear guidance on critical definitions [4]. Jeong Joo-yeon, a senior researcher at the Startup Alliance, noted that the law's language was so vague that companies may default to the safest approach to avoid regulatory risk. Professor Lee Seong-yeob of Korea University warned that the regulatory framework risks dampening innovation if engineers begin second-guessing whether their work might inadvertently breach the law [4].

Competitive Imbalance and Global Tech Giant Requirements

A particular point of contention involves competitive imbalance: all Korean companies face regulation regardless of size, while only foreign firms meeting certain thresholds must comply [1]. Global companies offering AI services in South Korea must designate a local representative if they meet any of the following criteria: global annual revenue of 1 trillion won ($681 million) or more, domestic sales of 10 billion won or higher, or at least 1 million daily users in the country [5]. Currently, OpenAI and Google fall under these criteria [1][5]. However, much deepfake or misleading content comes from overseas apps beyond South Korea's legal reach, and only a few global tech giants meet the high threshold for local representation requirements [4].

Civil Society Groups Argue Protections Fall Short

While startups worry the law goes too far, civil society groups maintain it doesn't go far enough to protect citizens [1]. Four organizations, including Minbyun, a collective of human rights lawyers, issued a joint statement arguing the law contained almost no provisions to protect citizens from AI risks [1]. The groups noted that while the law stipulated protection for "users," those users were hospitals, financial companies, and public institutions that use AI systems, not the people affected by AI-generated content [1]. The country's human rights commission criticized the enforcement decree for lacking clear definitions of high-impact AI, noting that those most likely to suffer rights violations remain in regulatory blind spots [1]. This criticism carries particular weight given that South Korea accounts for 53% of all global deepfake pornography victims, according to a 2023 report by Security Hero [1].

Governance Structure and Future Implementation

The law creates a national governance structure centered on an AI committee chaired by the president and mandates an AI master plan every three years [4]. The Ministry of Science and ICT stated it expects the law to "remove legal uncertainty" and build "a healthy and safe domestic AI ecosystem," adding that it would continue to clarify the rules through revised guidelines [1]. Science minister Bae Kyung-hoon, a former head of AI research at electronics giant LG, told a press conference that the law will provide a "critical institutional foundation" for South Korea's ambition to become a top-three global AI powerhouse. The ministry has said it plans a guidance platform and a dedicated support centre for companies during the grace period, and authorities are considering extending the grace period if domestic and overseas industry conditions warrant it. Alice Oh, a computer science professor at the Korea Advanced Institute of Science and Technology (KAIST), noted that while the law was not perfect, it was intended to evolve without stifling innovation [1]. The legislation charts a distinct path from the EU's strict risk-based regulatory model, the US and UK's largely sector-specific approaches, and China's combination of state-led industrial policy and detailed service-specific regulation [1]. As global divisions persist over how to regulate artificial intelligence, with the US favoring a light-touch approach to avoid stifling innovation and China proposing a body to coordinate global regulation, South Korea's experiment in balancing public trust with industry promotion will be closely watched by policymakers worldwide [3].

Source: Korea Times


TheOutpost.ai

© 2026 Triveous Technologies Private Limited