Korean civil society’s position on the drafting process and content of the subordinate legislation under the AI Framework Act
“The AI Framework Act must ensure the protection of citizens affected by AI risks through genuine and meaningful public consultation.”
The drafts of the subordinate legislation under the “Framework Act on the Promotion of Artificial Intelligence and the Establishment of a Trust-Based Environment” (hereinafter, the “AI Framework Act”), which will come into effect on January 22, 2026, were released on September 17. From the first discussions of the AI Framework Act in the 21st National Assembly through its passage in the 22nd National Assembly, we, civil society organizations, have consistently called for the protection of citizens’ safety and human rights from the risks posed by artificial intelligence. On April 2, we also submitted a document titled “Civil Society’s Opinion on the Subordinate Legislation of the AI Framework Act,” outlining our views on the direction of the enforcement decree. However, the Ministry of Science and ICT (MSIT) has failed to engage in genuine consultation with the civil society groups that have long emphasized the need for institutional safeguards to protect the safety and rights of citizens affected by AI risks. In light of this, and ahead of the consultations on the draft enforcement decree that MSIT will hold on September 26 with various stakeholders, including industry representatives and civil society, we hereby present our position.
The MSIT, the government body in charge, stated that it has been “widely collecting opinions from various sectors on the drafting of the AI Framework Act’s enforcement decree and related guidelines since April 11, 2025.” In practice, however, the consultation process has been heavily skewed toward industry interests, excluding the civil society organizations that consistently voiced their views throughout the legislative process of the AI Framework Act. Moreover, industry representatives have submitted strongly worded petitions calling for the postponement of the law’s implementation itself, and Bae Kyung-hoon, the current Minister of Science and ICT and a former industry executive, argued during his confirmation hearing that certain provisions, such as administrative fines, should be postponed or relaxed.
Under the Lee Jae-myung administration, the first meeting between civil society organizations and the MSIT took place on August 12. However, this was not a genuine consultation on the enforcement decree of the AI Framework Act; it was merely a session in which MSIT responded to our inquiries and to our request to meet with the minister ahead of his confirmation hearing. Subsequently, the document titled “Direction for the Subordinate Legislation of the AI Framework Act,” released by MSIT on September 8, did not include a single protective provision proposed by our organizations. While industry representatives were consulted in 20 sessions, civil society was consulted only twice. Despite this, the document explicitly listed the names of the civil society groups that participated in the meetings, creating the misleading impression that their input had been meaningfully considered. Such an approach raises serious doubts about the ministry’s intentions: the so-called consultation appears to have been less about genuine engagement than about securing procedural legitimacy by demonstrating that civil society had been “consulted.”
Meanwhile, the draft enforcement decree of the AI Framework Act released by MSIT on September 17 is deeply problematic in its overall content.
In South Korea, the risks that AI products and services pose to citizens’ safety and human rights have already become evident — including the spread of AI-generated deepfake sexual exploitation materials, the circulation of generative disinformation, controversies over AI-based textbooks, the unfairness of AI hiring systems, mass layoffs triggered by AI adoption, and delivery robot accidents. These risks are expected to grow even more severe in the future. In this context, the enforcement decree of the AI Framework Act must establish mechanisms that can at the very least safeguard citizens’ safety and fundamental rights.
In particular, for the AI Framework Act to function properly as a foundational law, it is essential that necessary risk mitigation measures be clearly stipulated within the Act itself and its subordinate legislation. This is because such provisions will enable future sector-specific regulations — for example, those governing the use of AI in recruitment — to operate in harmony with the overarching framework established by this law. If, however, the Act and its subordinate legislation include too many exemptions or carve-outs from regulation, there is a significant risk that future special laws addressing AI-related risks in specific domains will either conflict with the AI Framework Act or fail to function effectively alongside it.
However, the draft enforcement decree states that its legislative direction is to “focus more on promotion rather than regulation, establish only the minimum necessary regulations in a reasonable manner, and introduce a flexible regulatory framework” (Legislative Direction, p.3). This orientation not only contradicts global regulatory trends — such as the European Union’s adoption of comprehensive regulatory frameworks to address AI-related risks — but also seriously undermines both the stated purpose of the AI Framework Act (“to protect the rights, interests, and dignity of the people,” Article 1) and the legislative intent to strike a balance between “fostering the AI industry” and “building a foundation for safety and trust.”
Given the lack of meaningful consultation with civil society — including those who may be directly affected by AI-related harms — the industry-oriented outcome reflected in the current draft was hardly unexpected.
In particular, the current draft enforcement decree contains the following serious problems, which must be corrected through subsequent rounds of public consultation.
First, unlike the European Union’s AI Act, South Korea’s AI Framework Act does not prohibit AI systems that pose significant risks of human rights violations — such as facial recognition in public spaces, systems that exploit vulnerabilities, or emotion recognition technologies used in workplaces and schools. Therefore, at the very least, the enforcement decree’s provisions on the definition of and responsibilities related to “high-impact AI” should include clear and comprehensive measures to address AI systems that endanger safety and human rights. Nevertheless, although the AI Framework Act explicitly delegates authority to further specify the list of high-impact AI systems through the enforcement decree, the current draft fails to include any additional categories or examples of such systems.
Second, the draft subordinate legislation interprets the term “user operator” more narrowly than the law itself provides. As a result, businesses that use AI products or services in the course of their operations are treated merely as “users” and are thereby exempted from any legal obligations. This means that hospitals, recruitment agencies, financial institutions, and other entities that use AI for professional purposes are not required to fulfill responsibilities toward those affected, such as patients, job applicants, or loan applicants, including risk management, providing explanations, and ensuring human oversight. Unlike end users, who simply use AI products or services “as provided, without modification to their form or content,” businesses that use AI “for operational purposes” and thereby exert a direct impact on affected individuals should bear appropriate responsibilities as “user operators.”
Third, while the AI Framework Act excludes AI developed for national defense or security purposes from its scope of application, no legislative efforts are currently underway to regulate such systems, raising serious concerns about a regulatory vacuum for the AI technologies that pose the most severe risks to human rights. Despite this, the draft enforcement decree broadly recognizes exempt categories such as “AI developed or used solely for national defense or security purposes,” including systems classified as core national security technologies. Dual-use AI systems must not be exempted under the national defense or security exception.
Fourth, the draft enforcement decree sets an extremely narrow threshold for defining frontier AI, the category of advanced AI systems subject to safety obligations, by specifying a total training compute of at least 10²⁶ operations. It is questionable how many AI systems currently in existence actually meet this threshold; by comparison, the EU AI Act presumes systemic risk for general-purpose AI models trained with more than 10²⁵ floating-point operations. Even if the criteria are later adjusted as technology evolves, the threshold should be set at 10²⁵ operations from the outset so that major frontier AI systems are covered by the regulatory framework.
Fifth, enforcement decrees, ministerial notifications, and guidelines each carry a different legal status. With respect to the obligations of high-impact AI operators, for example, the Act stipulates that the measures under Article 34, Paragraph 1 “must be fulfilled,” whereas compliance with the measures under Article 34, Paragraph 2 is merely “recommended” through a ministerial notification. When a measure is merely recommended, especially one that involves significant costs, businesses cannot realistically be expected to comply, and it becomes difficult to impose sanctions for non-compliance. Nevertheless, the draft enforcement decree in many cases fails to stipulate important matters explicitly, relegating them instead to ministerial notifications or guidelines. Matters delegated by the law, and matters that directly affect citizens’ rights and obligations, must be stipulated in the enforcement decree itself. The key obligations of high-impact AI operators currently described in notifications and guidelines should therefore be incorporated into the enforcement decree.
Sixth, the draft enforcement decree introduces exemptions from fact-finding investigations, despite the absence of any such delegation in the law, and allows for a lengthy (yet unspecified) grace period before enforcement. In effect, even if safety incidents or human rights violations occur due to AI products or services, the state intends to forgo even the most basic administrative investigations or, in practice, to refrain from imposing administrative fines (Legislative Direction, p.5). Given that businesses have repeatedly complained about fact-finding investigations and administrative fines, these exemptions and postponements appear to prioritize corporate concerns over citizens’ safety and human rights. Taken as a whole, such a regulatory design effectively signals, at the level of national policy, that companies may bring AI products or services to market without fulfilling key obligations, such as providing explanations, ensuring human oversight, or preparing and retaining documentation, even when those systems could harm consumers or other affected individuals.
We, the undersigned organizations, express our deep disappointment that the current draft of the subordinate legislation sacrifices the safety and human rights of citizens affected by AI risks in favor of promoting industrial development. In the remaining stages of drafting the enforcement decree, ministerial notifications, guidelines, and other subordinate legislation under the AI Framework Act, it is imperative that robust safeguards be introduced to protect citizens’ safety and fundamental rights.
September 25, 2025
MINBYUN – Lawyers for a Democratic Society (Digital Information Committee)
Institute for Digital Rights
Korean Progressive Network Jinbonet
People’s Solidarity for Participatory Democracy (PSPD)