South Korea has become the first country to bring into force a law governing the safe use of artificial intelligence (AI) models. The Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness, known as the AI Basic Act, took effect on Thursday, establishing a regulatory framework to counter misinformation and other risks posed by the fast-growing AI field.
The new law holds companies and AI developers accountable for addressing AI-generated deepfakes and misinformation, and empowers the government to levy fines or launch investigations in cases of non-compliance. It also introduces the concept of "high-risk AI": models used in areas such as hiring, loan assessments, and medical guidance, where decisions can significantly affect individuals' daily lives and safety.
Operators of high-risk AI systems must disclose to users that their services are AI-based and are responsible for ensuring safety measures. Content generated by AI must also carry watermarks identifying its AI-generated origin, a safeguard against misuse such as deepfake content. Global companies that provide AI services in South Korea and meet certain revenue or user thresholds, with current examples including OpenAI and Google, are required to appoint a local representative.
Violations of the law can draw fines of up to 30 million won, though a one-year grace period on penalties is planned to give the private sector time to comply with the new regulations. The law also includes provisions for the government to support the AI industry, requiring the science minister to present a policy roadmap every three years.
