South Korea sets world-first AI safety rules as it begins enforcing its AI Act
SEOUL – South Korea will begin enforcing its Artificial Intelligence Act on Jan 22, becoming the first country to formally establish safety requirements for high-performance – or so-called frontier – artificial intelligence (AI) systems. The move sets the country apart in the global regulatory landscape.
According to South Korea’s Ministry of Science and ICT, the new law is designed primarily to foster growth in the domestic AI sector, while also introducing baseline safeguards to address potential risks posed by increasingly powerful AI technologies.
Officials described the inclusion of legal safety obligations for frontier AI as a world-first legislative step.
“This is not about boasting that we are the first in the world,” said Mr Kim Kyeong-man, deputy minister of the office of AI policy at the ICT ministry, during a study session with reporters in Seoul on Jan 20.
“We’re approaching this from the most basic level of global consensus.”
The Act lays the groundwork for a national-level AI policy framework. It establishes a central decision-making body – the Presidential Council on National Artificial Intelligence Strategy – and creates a legal foundation for an AI Safety Institute that will oversee safety and trust-related assessments.
The law also outlines wide-ranging support measures, including research and development, data infrastructure, talent training, start-up assistance, and help with overseas expansion.
To reduce the initial burden on businesses, the South Korean government plans to implement a grace period of at least one year. During this time, it will not carry out fact-finding investigations or impose administrative sanctions.
Instead, the focus will be on consultations and education. A dedicated AI Act support desk will help companies determine whether their systems fall within the law’s scope and how to respond accordingly.
Officials noted that the grace period may be extended depending on how international standards and market conditions evolve.
The law applies to only three areas: regulation of high-impact AI, safety obligations for high-performance AI and transparency requirements for generative AI.
High-impact AI refers to fully automated systems deployed in critical sectors such as energy, transportation and finance – areas where decisions made without human intervention could significantly affect people’s rights or safety. At present, the South Korean government says no domestic services fall into this category, though fully autonomous vehicles at Level 4 or higher could meet the criteria in the future.
What distinguishes South Korea’s approach from that of the European Union is how it defines “high-performance AI”. While the EU focuses on application-specific risk – targeting AI used in areas like healthcare, recruitment, and law enforcement – South Korea instead applies technical thresholds.
These include indicators such as cumulative training computation (the total amount of computing used to train a model), meaning only a very limited set of advanced models would be subject to the safety requirements.
As of now, the South Korean government believes no existing AI models, either in South Korea or abroad, meet the criteria for regulation under this clause. By comparison, the EU is rolling out its own AI regulations gradually.
Enforcement under the South Korean law is intentionally light. It does not impose criminal penalties. Instead, it prioritises corrective orders for non-compliance, with fines – capped at 30 million won ($26,200) – issued only if those orders are ignored. This, the government says, reflects a compliance-oriented approach rather than a punitive one.
Transparency obligations for generative AI largely align with those in the EU, but South Korea applies them more narrowly. Content that could be mistaken for real, such as deepfake images, video or audio, must clearly disclose its AI-generated origin.
For other types of AI-generated content, invisible labelling via metadata is allowed. Personal or non-commercial use of generative AI is excluded from regulation.
Mr Kim emphasised that the purpose of the legislation is not to hinder innovation but to offer a basic regulatory foundation that reflects growing public concerns. “The goal is not to stop AI development through regulation,” he said. “It’s to ensure that people can use it with a sense of trust.”
He added that the law should be seen as a starting point, not a finished product. “The legislation didn’t pass because it’s perfect,” Mr Kim said. “It passed because we needed a foundation to keep the discussion going.”
Recognising concerns from smaller companies and start-ups, Mr Kim said the South Korean government plans to stay engaged throughout implementation.
“We know smaller companies and ventures have their own worries,” he said. “As issues come up, we’ll work through them together via the support centre.” THE KOREA HERALD/ASIA NEWS NETWORK

