
Issues Discussed in the AI Session at the 2025 IGF

2025/07/15

– Byoung-il Oh (Korean Progressive Network Jinbonet)

 

The 2025 IGF, the 20th annual meeting, was held in Lillestrøm, Norway, from June 23 to 27, 2025. Personally, it was my first time attending since the 2014 IGF in Istanbul—after a gap of 11 years. While the overall structure of the forum has remained largely the same, the formats of the sessions have become more diverse, and the duration of each session seems to have shortened to around an hour.

At this year’s IGF, I primarily attended sessions related to artificial intelligence. In recent years, AI has emerged as one of the most prominent global issues, and accordingly, many sessions at this IGF focused on matters concerning AI governance. By examining the key issues and perspectives raised in these sessions, we can gain a rough but meaningful understanding of the current state of AI governance.

Currently, the IGF website hosts the outcomes of the forum, including individual session reports, so further details on specific sessions can be found there. The DiploFoundation also used AI to compile and summarize key IGF issues, including the content of each session.
Therefore, rather than providing a detailed account of every session, this report focuses on the issues and perspectives that I personally found noteworthy.

Day 0 Event #261 Navigating Ethical Dilemmas in AI-Generated Content

In this session hosted by RNW Media, the focus was on the Haarlem Declaration, which outlines ethical principles and recommended actions for the use of AI technologies and tools in media and journalism. The declaration presents six core principles, along with illustrative examples:
– Ensuring transparency and explainability
– Promoting ethical data practices
– Safeguarding information integrity & content authenticity
– Minimizing bias, harm, and discrimination in use of AI tools
– Centring people over technology
– Balancing environmental impact of AI use
Given that Korean civil society, including JinboNet, has recently been exploring principles for the ethical use of AI technologies and tools, this declaration offers valuable reference points.

This session also featured specific case studies, one of the most compelling being the example presented by 7amleh, a Palestinian digital rights organization. 7amleh shared research findings on platform accountability during conflict situations and the development of localized AI models for managing Hebrew and Arabic content. They introduced an AI-based language model tool designed to monitor the spread of hate speech and violent content on social media in both Hebrew and Arabic, within the specific context of Palestine. After the workshop, I looked into their work further and found that they operate a platform that tracks and visualizes trends in hate speech on social media in real time. Their work raises interesting questions about the concrete technical approaches that nonprofit organizations can take in developing AI tools for public-interest purposes.
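For readers curious about what such a monitoring tool might look like in practice, the following is a minimal sketch of the general approach, not 7amleh's actual implementation. It assumes a fine-tuned multilingual hate-speech classifier is available through the Hugging Face transformers library; the model name and the "HATE" label below are placeholders for illustration.

```python
# Minimal sketch (not 7amleh's actual system): screening social media posts
# with a fine-tuned Arabic/Hebrew hate-speech classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="example-org/arabic-hebrew-hate-speech",  # placeholder model name
)

def flag_posts(posts, threshold=0.8):
    """Return posts the model labels as hate speech above a confidence threshold."""
    flagged = []
    for post in posts:
        result = classifier(post, truncation=True)[0]
        # The label name depends on the model's training setup; "HATE" is assumed here.
        if result["label"] == "HATE" and result["score"] >= threshold:
            flagged.append({"text": post, "score": result["score"]})
    return flagged
```

Flagged posts could then be aggregated over time to drive the kind of real-time trend visualization 7amleh's platform provides.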

Open Forum #82 Catalyzing Equitable AI Impact: The Role of International Cooperation

This session, held as a preparatory event for the AI Impact Summit scheduled to take place in India in February 2026, explored ways to address AI inequality and promote equitable access for developing countries through international cooperation. Key barriers to fair access were identified, including lack of infrastructure such as connectivity, electricity, and GPUs; gaps in technical capacity and education; and limited availability of relevant datasets. To overcome these challenges, the session emphasized the importance of inclusive multilateral cooperation through organizations such as UNESCO, the International Telecommunication Union (ITU), the OECD’s Global Partnership on AI, and various UN initiatives.

A particularly resonant remark came from Sharad Sharma, founder of iSPIRT and a panelist at the session. He stated that we have failed to adequately address the harmful impacts of AI proliferation on social media. According to him, the current system has tended to empower states over citizens and businesses over consumers. He emphasized that we must not continue the practices that have led to these failures.

While I agree that international cooperation is essential to addressing AI inequality, the reality is that intense competition in AI development is unfolding across both corporations and nations globally. Moreover, there are significant challenges in establishing effective and enforceable norms for AI governance. In this context, it remains deeply unclear what realistic solutions might exist for overcoming AI inequality.

WS #219 Generative AI & LLMs in Content Moderation: Rights & Risks

This session focused on the human rights implications of using large language models (LLMs) for content moderation on social media platforms. Unlike high-resource languages such as English, speakers of languages with limited training data face a higher risk of human rights violations—particularly concerning freedom of expression, privacy, and protection against discrimination—when content moderation is driven by LLMs. Cases were shared such as Instagram erroneously labeling content related to Al-Aqsa Mosque as linked to terrorist organizations, and a Palestinian construction worker who was unjustly detained due to a mistranslation by Facebook—both of which have already been reported in Korea. A trade-off exists between accuracy and coverage in LLM-based moderation. For instance, following the Hamas attack on Israel on October 7, Meta reportedly lowered the confidence threshold of its hate speech classifier for Arabic content from 85% to 25%, leading to the mass deletion of comments from Palestinian users. This suggests that platforms may manipulate algorithmic thresholds to avoid accountability, resulting in large-scale content suppression.
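To make that trade-off concrete, here is a purely illustrative sketch, not a description of Meta's actual system, of how lowering a classifier's confidence threshold expands the set of removed posts. The comments and scores are invented for illustration.

```python
# Illustrative only: a threshold-based removal rule over classifier scores,
# where each score is the model's estimated probability of hate speech.
def removed_posts(scored_comments, threshold):
    """Remove every comment whose hate-speech score meets the threshold."""
    return [comment for comment, score in scored_comments if score >= threshold]

comments = [("comment A", 0.90), ("comment B", 0.60), ("comment C", 0.30)]

# At an 85% threshold only near-certain cases are removed; at 25%, even
# low-confidence predictions are removed, sharply inflating false positives.
print(len(removed_posts(comments, 0.85)))  # 1
print(len(removed_posts(comments, 0.25)))  # 3
```

The point is that a single configuration change, invisible to users, can shift a system from cautious removal to sweeping suppression.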
While technical solutions—such as incorporating input from local communities and accounting for low-resource languages in LLM development—can certainly be part of the response to these issues, I believe the more crucial factor is strengthening transparency and accountability on the part of the platforms themselves. As the Meta example illustrates, this is not merely a technical problem, but one that can be influenced by political decisions made by platform companies. In particular, the business models of these platforms can shape how content moderation algorithms are designed and deployed. In this regard, the risk assessments required of very large online platforms (VLOPs) under the EU Digital Services Act deserve close attention. They may offer a potential mechanism for addressing these systemic problems by requiring platforms to evaluate the societal and human rights impacts of their systems.

Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action

This session centered around the Joint Statement on Artificial Intelligence and Human Rights 2025, issued by the Freedom Online Coalition (FOC) in June 2025. The FOC is an intergovernmental alliance committed to promoting human rights and online freedom, currently comprising 42 member governments. South Korea joined the coalition in 2023. However, the Yoon Suk-yeol administration has faced strong criticism for suppressing freedom of expression both online and offline, earning a reputation—especially among civil society groups—for its repressive stance on dissent.

The statement expresses concern that “(T)oday, AI systems are used systematically to suppress dissent, manipulate public discourse, amplify gender-based violence, enable unlawful and arbitrary digital surveillance, and reinforce inequalities and discrimination.” It declares a commitment to “strive for frameworks that are firmly rooted in and in compliance with international law, including international human rights law, developed responsibly through inclusive, multistakeholder processes and serve human needs and interests while respecting full enjoyment of human rights and fundamental freedoms.”

In this context, the statement welcomes global initiatives such as the UN General Assembly Resolution 78/265 on trustworthy AI, as well as the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. The statement also underscores the role of private companies, asserting that “they bear meaningful responsibility to respect human rights, guided by frameworks such as the UN Guiding Principles on Business and Human Rights, and to identify, mitigate, and prevent adverse human rights impacts of their operations by integrating safety-by-design principles into both system design and governance models.” As of now, 25 countries have signed the statement—South Korea is not among them.

This session highlighted the need for human rights-based AI governance in response to threats posed by AI, including arbitrary government surveillance, disinformation, and bias and discrimination against marginalized groups. Key international governance mechanisms were discussed, such as the EU AI Act, the UN Global Digital Compact, and the Council of Europe’s Framework Convention on AI. The session also addressed the responsibility of private companies across the entire AI lifecycle—from design and development to deployment and oversight.

These government efforts are certainly a welcome development. However, in other arenas, there remains a contrasting narrative—one that treats calls for safeguards to protect human rights from AI-related risks as regulatory obstacles to innovation, as if AI posed no real danger at all. For trustworthy AI governance to become a reality, this double standard must be abandoned, and both governments and corporations must fulfill their responsibilities.

WS #187 Bridging Internet & AI Governance: From Theory to Practice

This session explored how the core values of the internet could be applied to AI governance.
Discussions focused on the differences between the fundamental nature of the internet and that of AI, and how the values and lessons of internet governance might inform the development of governance frameworks for AI. The internet was built on principles of openness, decentralization, transparency, and interoperability. In contrast, contemporary AI systems—particularly large language models—are being developed in centralized and opaque ways, largely controlled by a handful of major corporations.

The question of how principles of internet governance could be applied to AI governance was initially intriguing—but as the discussion progressed, I found it increasingly confusing. While structural differences between the internet and AI were highlighted, I began to wonder: are these differences truly inherent? The values and principles of internet governance—such as openness, freedom, and respect for human rights—were not embedded in the technology itself from the start; rather, they were shaped by communities committed to those ideals. Although the open and decentralized nature of the internet is often credited with enabling freedom and innovation, we must also acknowledge that this very structure eventually gave rise to an environment dominated by a few monopolistic platforms. Similarly, while today’s AI systems are often centralized and opaque, there is nothing inherently necessary about that. AI does not have to be built this way.

Perhaps these kinds of questions arise because there is a growing demand to rethink the way AI systems are being developed and governed—currently in ways that are often unequal and opaque. Even if they have not always been upheld in practice, the principles of internet governance reflect values that communities have collectively agreed upon over time. Grounding AI governance in similarly shared and negotiated values would be both meaningful and necessary. In that spirit, this session also put forward several key proposals: enhancing transparency and explainability in AI systems; developing interoperability standards; ensuring that AI chatbots do not act as gatekeepers but instead preserve diversity of information sources; implementing regulations to address AI-related risks; and promoting the inclusion of the Global South through a multistakeholder approach.

Clearly, current AI governance remains vague, and in many respects, the ways in which AI systems are being developed and deployed diverge from the foundational principles of the internet. While the structural differences between the internet and AI must certainly be taken into account, the principles of internet governance can still offer valuable guidance in shaping more desirable and accountable AI governance frameworks.

WS #362 Incorporating human rights in AI Risk Management

Hosted by the Global Network Initiative (GNI), this session focused on strategies for integrating human rights into AI risk management practices. The GNI is a multistakeholder initiative that brings together academia, civil society, companies, and investors to promote accountability, shared learning, and collective advocacy at the intersection of technology and human rights. GNI has developed implementation guidelines for AI and human rights principles and participates in the OECD’s AI expert network. It is also involved in the B-Tech Project’s Generative AI Human Rights Due Diligence initiative, which aims to provide practical guidance for companies conducting human rights impact assessments of AI systems.

One particularly memorable moment from the session was when a panelist remarked, “we tend to think of ethical principles, wonderful, and we love them, but they’re very a la carte, whereas human rights frameworks and international human rights law has been agreed by everybody, and as a point of departure, it really is a very good place to start…” I found myself strongly agreeing with this point—especially given that in Korea, both government and industry often emphasize AI ethics over human rights. While it is certainly positive for organizations to develop and follow their own ethical principles, it becomes problematic when such principles are used as a justification to avoid meaningful AI regulation.

Another meaningful aspect of this session was the broad consensus around the need for mandatory human rights impact assessments (HRIAs) for high-risk AI systems. Participants expressed strong support for a risk-based regulatory approach, similar to that of the EU AI Act. During the open floor discussion, I shared that the National Human Rights Commission of Korea released an AI Human Rights Impact Assessment tool in 2024. I had been involved in both the development of this tool and in a pilot assessment earlier this year. Based on that experience, I emphasized that HRIAs are not merely checklists, but valuable processes that enhance communication among stakeholders and help mitigate risks by bringing together diverse perspectives. However, I also pointed out that the tool is not yet widely used—primarily because there is no legal obligation to conduct such assessments. Even Korea’s Basic AI Act only recommends, rather than mandates, the use of HRIAs. I argued that in order for HRIAs to be meaningfully implemented, they need to be made mandatory.

The session also raised an important point: a human rights-based approach alone may not be sufficient to capture the full range of AI’s societal impacts. While AI certainly affects individuals, its influence may be even more significant at the societal level—in areas such as education, employment, and social structures. Human rights impact assessments (HRIAs) may have limitations in fully addressing these broader effects. In my opinion, it may be necessary to distinguish between two types of assessments: one focused on the human rights impact of a specific AI system, and another focused on the broader societal impact of a particular class of AI technologies. For example, we might differentiate between an HRIA for a specific chatbot like ChatGPT, and a broader evaluation of the societal implications of AI chatbots as a whole.

Open Forum #27 Make Your AI Greener: a Workshop on Sustainable AI Solutions

This session, hosted by UNESCO, focused on identifying practical solutions for achieving sustainable AI. Given that discussions on the environmental sustainability of AI are still relatively limited in South Korea, I was particularly interested in this session. Several key proposals were put forward:
– Shifting from large, energy-intensive models to smaller, domain-specific ones
– Developing new performance metrics that account not only for accuracy but also environmental impact
– Encouraging open-source collaboration and data sharing to avoid duplicated efforts
– Ensuring transparency in reporting AI energy consumption to support informed decision-making
– Establishing comprehensive governance measures—including procurement policies, regulatory frameworks, education, international cooperation, and incentive structures—to promote the sustainable development and deployment of AI
A few days after this session, UNESCO released a report on resource-efficient generative AI.

Open Forum #79 Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative

Autonomous Weapon Systems (AWS) represent one of the most serious threats posed by artificial intelligence, yet public debate around the issue remains limited—largely because it is tied to national security concerns. One of the most critical and contentious issues in discussions around AWS is that of human control, and this session reflected clear differences in perspective among participants. Human rights organizations emphasized that allowing machines to autonomously make life-and-death decisions violates human dignity, and that responsibility must always rest with the human actors deploying the system. In contrast, industry representatives argued that military command and control have long operated on delegated autonomy, and that autonomous weapons should be viewed not as a revolution, but as an evolution. They also contended that risks associated with AI weapons could be managed through explainable AI and improvements in precision.

An industry panelist argued that while engaging in an AI arms race is deeply undesirable, it would be far worse to lose that race to authoritarian states. In response, the Chinese panelist objected to framing certain countries as inherently “good” or “bad,” and also opposed the tendency to group Russia and China together as a single bloc.

There was broad consensus on the urgent need for international efforts to regulate Autonomous Weapon Systems (AWS). However, geopolitical tensions and the rapid pace of technological development continue to hinder meaningful progress in regulation. The Austrian ambassador underscored the urgency of the moment, stating, “This could be our generation’s Oppenheimer moment.”

It is essential that all stakeholders—including civil society—actively participate in these discussions and apply pressure for the establishment of international regulations on Autonomous Weapon Systems (AWS). For reference, Jinbonet organized a workshop on military AI issues at the 2025 Korea Internet Governance Forum, held on July 3, 2025.

Open Forum #17 AI Regulation: Insights from Parliaments

This session featured updates and perspectives from various regional and national parliaments—including those in Europe, Egypt, Uruguay, Bahrain, and several African countries—on the current state of AI regulation and related challenges.

The European Union passed its AI Act in 2024 and has begun phased implementation. However, a surprising development is that discussions are reportedly underway within the EU about delaying enforcement. In particular, companies have raised concerns about not having sufficient time to adapt to the regulations concerning high-risk AI systems. This appears to be linked to the intensifying global race in AI development, especially in the context of growing regulatory competition among countries—exacerbated by the deregulatory stance of the second Trump administration.

In South Korea, the Basic AI Act was passed in December 2024 and is scheduled to take effect in January 2026. Although the law has been widely criticized as a weak, industry-friendly measure with little regulatory substance, major industry players in Korea are still calling for a three-year postponement of its enforcement.

Multistakeholder Dialogue: Aspirations for the India AI Impact Summit

The AI Impact Summit is scheduled to take place in New Delhi, India, on February 19–20, 2026. It follows a series of previous global summits on AI: the AI Safety Summit held at Bletchley Park, UK, in November 2023; the AI Seoul Summit in May 2024; and the AI Action Summit in Paris in February 2025.

In parallel with the open forum titled “Catalyzing Equitable AI Impact: The Role of International Cooperation”, a stakeholder consultation meeting—though not an official session—was held on June 25 from 5:30 to 7:00 p.m. This meeting was organized to gather input from various stakeholders on key issues, the preparatory process, and potential outcomes for the upcoming AI Impact Summit.

Unlike previous AI summits, the Indian government has made a notable effort to engage stakeholders—including academia, civil society, industry, and the tech community—early in the planning stages. This inclusive approach is a welcome and positive development.