South Korea officially implemented the AI Basic Act on January 22nd, marking a global milestone. While the European Union (EU) passed the world’s first AI law in 2024, it deferred its high-risk provisions until late 2027. This positions South Korea as the first nation to operate a comprehensive legal framework governing the management, industrial fostering, and safety of artificial intelligence.
Under the new legislation, AI service providers are mandated to display visible watermarks on all AI-generated audio, images, and video content to ensure transparency. The Act also introduces a critical classification, “High-Impact AI,” covering systems used in sectors that significantly affect public safety and fundamental rights, such as energy, healthcare, criminal investigations, and transportation.

In these high-stakes sectors, the law stipulates that relevant businesses must provide prior notification of AI usage to all affected users and are legally required to establish robust risk management measures to mitigate potential harm. Should a company fail to comply with these safety and notification obligations, it may face administrative fines of up to KRW 30 million.
The gaming industry is subject to specific disclosure requirements under the new framework. Developers must now include guidance text stating that a game partially utilizes generative AI, or explicitly label AI-created characters with “Made with AI” tags. Furthermore, when AI chatbots are integrated to interact with players, the interface must clearly mark the interaction as a “conversation through generative AI.”
However, the Act provides an exception for AI used purely as a tool for internal work efficiency. For instance, the use of video-generation AI for minor background edits in a film does not trigger a mandatory disclosure, as the law distinguishes between AI as a core consumer-facing product and AI as a supportive productivity tool.
Despite the government’s proactive stance, domestic tech companies have expressed significant apprehension. The primary criticism centers on the ambiguity of the legal standards, with many arguing that the criteria for compliance remain poorly defined. The controversy over the scope of High-Impact AI is particularly acute: the law’s test for what constitutes a significant impact in sectors like energy or healthcare is widely seen as subjective.
Furthermore, fears of reverse discrimination are mounting within the local tech ecosystem. There is growing concern that while domestic firms will be strictly “shackled” by the new regulations, enforcing the same standards on global AI giants may prove difficult. This regulatory discrepancy could leave Korean companies at a competitive disadvantage in their own market. In light of these concerns, the South Korean government has announced it will postpone regulatory enforcement, including fact-finding investigations and the imposition of fines, for a grace period of at least one year to allow for industry adjustment.