

2026/03/05

[Joint Statement] Disappointment over the Constitutional Court’s dismissal of the constitutional challenge against the Biometric AI Identification and Tracking System for Immigration Control

 

The use of AI in public spaces to track individuals’ faces and movements remains a serious violation of human rights

 

On February 26, the Constitutional Court dismissed a constitutional complaint filed against the Ministry of Justice’s AI Identification and Tracking System for Immigration Control. The case arose after media reports revealed that, between 2019 and 2021, the Ministry of Justice, the Incheon Immigration Office, and the Ministry of Science and ICT had promoted the “Artificial Intelligence (AI) Identification and Tracking System Development Project.” In the course of this project, personal information—including nationality, date of birth, gender, and facial photographs—amounting to 57.6 million records of Korean nationals and 120 million records of foreign nationals was provided to multiple private companies as training data for AI algorithms, triggering significant public controversy.

The petitioners in the constitutional complaint, consisting of both Korean nationals and foreign nationals, argued that transferring personal data—such as photographs collected and stored for immigration inspection purposes—as training data and allowing private companies to process it violated their fundamental rights, including the right to informational self-determination.

The Ministry of Justice and other relevant authorities failed to properly respond to the petitioners’ requests to access and verify whether their personal data had been processed by the system, as well as to the personal data dispute mediation process. They claimed that it was impossible to identify the petitioners within the massive dataset, or that they could not confirm whether the data had been processed because the system and the data had already been destroyed.

In a situation where the state had used, for purposes beyond the original intent, large volumes of personal data collected and stored for immigration inspection—providing them to private companies for algorithm development—while making it difficult even to confirm the resulting harm, the petitioners ultimately appealed to the Constitutional Court to determine the constitutionality of the practice. However, the Constitutional Court dismissed the case on formal grounds, turning away from what could amount to a large-scale human rights violation affecting the majority of both Korean nationals and foreign nationals who have experienced immigration procedures in the country.

The Constitutional Court held that there was no longer any interest in the protection of rights, reasoning that the project in question, under which facial data had been used as training data, had already been terminated following concerns raised by civil society, and that the facial data had been destroyed. However, holding the respondents accountable for their use of facial data up to the point when the project was terminated and the data were destroyed required a finding on the unconstitutionality of their actions. It is therefore unreasonable to conclude that the interest in the protection of rights has ceased.

The Constitutional Court also assumed that the interest in the protection of rights had disappeared and then concluded that there was no interest in adjudication, stating that it was difficult to determine the risk of repetition. However, the very possibility that facial data may be used in such ways itself poses a risk to the right to informational self-determination. The risk of repetition therefore clearly exists, and it is difficult to accept the Court’s conclusion that there was no interest in adjudication.

The Constitutional Court also held that a duty to enact legislation prohibiting the use of personal data such as facial data is difficult to derive from the Constitution. However, duties of legislative action may be derived from explicit constitutional provisions, from the interpretation of relevant fundamental rights and statutes, and from the state’s obligation to protect fundamental rights. The state should therefore be recognized as having a duty, arising from the constitutionally guaranteed right to informational self-determination and from its obligations under international human rights treaties, to protect citizens’ right to informational self-determination through legislation regulating the use of facial data.

Such reasoning by the Constitutional Court can only be regarded as an inappropriate interpretation that narrowly construes the state’s constitutional responsibility to protect human rights in the age of artificial intelligence.

The Constitutional Court’s decision to dismiss the case is an irresponsible ruling that disregards the potential violations of fundamental rights arising from the processing of training data for AI, a data-driven technology. By leaving only the algorithm while destroying all of the data and records of its processing, the state acted in an opaque and unaccountable manner—yet this conduct has not even been subject to constitutional review.

The state processed vast amounts of data belonging to both Korean nationals and foreign nationals for AI training purposes, but instead of providing remedies for the resulting harm, it avoided even the most basic verification of the facts. This will likely remain a highly inappropriate precedent in terms of the principles of accountability and transparency in the age of artificial intelligence.

Despite the Constitutional Court’s disappointing decision, it remains clear that AI systems that track individuals’ faces and movements constitute a serious violation of human rights. In particular, identifying individuals through sensitive biometric information and tracking them in real time in public spaces such as airports can have a large-scale impact on the fundamental rights of many people, including the right to informational self-determination. Moreover, the very fact of being subject to continuous surveillance can restrict not only the general freedom of action but also the free exercise of fundamental rights such as the freedom of assembly.

Today, more than ever, AI is rapidly penetrating the everyday lives and labor of ordinary citizens. In this context, it is deeply regrettable that the Constitutional Court has squandered an opportunity to establish standards regarding the human rights implications of AI.

As an institution entrusted with the mission of protecting the freedom and rights of the people from unchecked state power, the Constitutional Court should have taken a more proactive approach and reached a substantive determination on human rights standards concerning AI training data. Given the real possibility that projects to advance AI using data held by public institutions could be pursued in the same or similar ways in the future, the need for constitutional clarification was urgent. Nevertheless, the Constitutional Court has failed to fulfill its unique constitutional responsibility.

Civil society will remain vigilant regarding the human rights risks posed by AI. In particular, if law enforcement agencies, including the police, use biometric technologies to identify and track individuals in real time in public spaces based on their faces or movements, we will continue to question and challenge the constitutional legitimacy of such practices.

AI systems that threaten the human rights of citizens cannot become our future. (End)

 

March 3, 2026

Digital Justice Network, Digital Information Committee of Lawyers for a Democratic Society, Institute for Digital Rights, People’s Solidarity for Participatory Democracy
