
IGF 2024 Open Forum #26: High-level review of AI governance from Inter-governmental Processes

    Theater
    Duration (minutes): 60
    Format description: The session will be held as a panel discussion. We need 60 minutes to hear opinions from the floor and to hold a two-way discussion.

    Description

    Following the multi-stakeholder consultation on the Hiroshima AI Process held at the 2023 IGF Kyoto meeting, and leading up to IGF 2024, discussions on AI governance have progressed in multilateral frameworks such as the Hiroshima AI Process Comprehensive Policy Framework, the UN Resolution on AI, the UN High-Level Advisory Body on AI, the Global Digital Compact, and the G7, G20, and OECD. Yoichi Iida, who chaired the Working Group of the Hiroshima AI Process, a major initiative on the governance of advanced AI systems such as generative AI, will review the progress of AI governance discussions since last year. He will also discuss with experts from various communities what AI governance measures should be taken, focusing on the monitoring mechanism discussed in the Hiroshima AI Process as a means of engaging private companies in AI governance, which will be important for ensuring effectiveness.

    1) How will you facilitate interaction between onsite and online speakers and attendees? Our session plans to take questions from online participants.

    2) How will you design the session to ensure the best possible experience for online and onsite participants? Our session plans to actively take questions from the floor, from both online and onsite participants.

    3) Please note any complementary online tools/platforms you plan to use to increase participation and interaction during the session. Our session will explore using AI technology to increase participation.

    Organizers

    Organizational Affiliation: Ministry of Internal Affairs and Communications, Japan

    Moderator and Organizer: Yoichi Iida (main), Yuichi Tsuji (sub), Satoka Kawahara (sub), Honoka Ninagawa (sub)

    Stakeholder: Government

    Regional Group: Asia-Pacific

    Speakers

    Organizational Affiliation: Ministry of Internal Affairs and Communications, Japan

    Moderator: Yoichi Iida

    Speakers:

    • Ms. Audrey Plonk, Deputy Director, STI, OECD (online)
    • Mr. Henri Verdier, Ambassador for Digital Affairs, France
    • Mr. Levy Syanseke, Zambia Youth IGF
    • Ms. Melinda Claybaugh, Director of Privacy Policy, Meta
    • Ms. Thelma Quaye, Director of Infrastructure, Skills and Empowerment, Smart Africa

    Onsite Moderator

    Yoichi Iida

    Online Moderator

    Satoka Kawahara

    Rapporteur

    Yuichi Tsuji

    SDGs

    1. No Poverty
    8. Decent Work and Economic Growth
    9. Industry, Innovation and Infrastructure
    10. Reduced Inequalities
    17. Partnerships for the Goals

    Targets: While AI has the potential to dramatically improve productivity and transform our society, it also raises issues such as increased discrimination based on data bias and high energy consumption. Appropriate AI governance is essential for realizing the opportunities brought by AI, and it is also expected to contribute to the reduction of poverty and inequality (SDGs 1, 10), the promotion of inclusive economic growth and innovation (SDGs 8, 9), and the realization of sustainable development (SDG 17).

    Key Takeaways

    The session recognized that there have been growing discussions about risks and safety as AI becomes more widespread, but challenges remain across various aspects of society, including harmful impacts on human rights, privacy, and copyright.

    There are also challenges related to the development and deployment of AI systems in societies in all regions of the world, but Africa is one of the regions facing the most serious problems, such as data shortages and skill gaps. The session agreed that we need to work together and support those in need to resolve these challenges while balancing the benefits and risks of AI.

    Consolidate discussions on AI and involve youth to collaborate in building AI governance. Companies also have a responsibility to conduct advanced modeling studies and assessments and ensure transparency of risks.

    Call to Action

    Consolidate discussions on AI and involve youth to collaborate in building AI governance.

    Companies also have a responsibility to conduct advanced modeling studies and assessments and ensure transparency of risks.

    Session Report

    As an introduction to this session, Mr. Yoichi Iida from the Japanese government introduced four participants, including those attending online. Following that, he reviewed the progress of AI governance discussions since last year and focused on the monitoring mechanisms debated in the Hiroshima AI Process. A discussion was then held on how stakeholders should be involved in AI governance to ensure its effectiveness.

     

    【Question 1】
    The panelists were asked to comment on the general status and challenges of AI governance, both domestically and internationally.

    • Mr. Henri Verdier, Ambassador for Digital Affairs, France, stated the following:
      • Perhaps the primary challenge is remarkably simple: it is the question of "what should not be done." Can we confidently ensure that this astonishing AI revolution will lead to progress—not just innovation or power, but genuine progress for humanity? This responsibility likely falls to governments.
      • Balancing innovation and safety, economic growth and equity, efficiency and diversity is critical, and we are striving to achieve this balance.
      • For instance, while the main theme of last year’s discussions on AI governance was "existential risks," the focus has now shifted more toward issues like "equitable development" and whether it addresses the needs of emerging economies. What I consider most important is recognizing the many challenges we face and approaching them with a broad perspective.
      • When we think about safety, it is not merely about preventing AI from going rogue and attacking humanity. It also encompasses issues such as cybersecurity and whether unbiased training data is being used. We must ensure that current inequalities are not perpetuated. Additionally, cultural and linguistic diversity must be taken into account.
      • We will also need to reconsider environmental impacts and intellectual property in the near future. Moreover, concrete policies may be required to cultivate skills and capabilities in emerging economies—not just training engineers but preparing future citizens to adapt to this new world and enabling them to become free-thinking, empowered individuals.
    • Ms. Claybaugh from Meta stated the following:
      • Meta operates as an open-source AI company in the AI field. Our AI technologies are provided as open source, allowing anyone to use them freely. For example, our large language model "Llama" is available in various versions and can be downloaded for free by anyone. We believe this approach is the best way to advance AI innovation.
      • First, for developers, this enables customization and fine-tuning to meet local needs. Economically, we provide an ecosystem where diverse AI tools are accessible, preventing dependence on closed models controlled by a few companies. By building on our technology, others can avoid reliance on external operating systems, which ultimately benefits us as well.
      • Regarding the state of AI governance, there are both positive aspects and challenges. On the positive side, discussions on AI safety are becoming more harmonized at the global level, and there is a deeper understanding of safety risks and measures to mitigate them. Moreover, there is a growing shared understanding of the need for a coordinated approach to this global technology.
      • However, despite the abundance of discussions, they are not always interconnected, which presents a challenge. Progress is being made in establishing industry standards and AI safety research organizations, which are important for providing scientific assessments and benchmarks, but integrating these efforts remains an issue.
      • Lastly, a key challenge is how to reflect the realities of the AI value chain in governance frameworks. Specifically, in the case of open-source AI, there is less control and visibility over downstream usage compared to providers of closed models. Therefore, it is essential to clarify the roles and responsibilities of model developers, model deployers, and downstream application developers.
    • Ms. Quaye, Director of Infrastructure, Skills and Empowerment, Smart Africa, stated the following:
      • In the context of Africa, I would like to compare AI to "water." Just as water nourishes us and helps crops grow, AI enhances efficiency, processes vast amounts of information, and enables "leapfrogging" development in Africa.
      • In Rwanda, AI and drones are being utilized to deliver blood supplies to rural areas, reducing mortality rates in regions that are difficult to access by car. Similarly, in Ghana, AI is used in precision agriculture, making possible initiatives that current infrastructure could not support.
      • However, like water, AI has "dualities," and if not properly managed, it could cause disasters, just as water can flood crops and homes.
      • That said, efforts are already underway. The African Union (AU) has developed a "Continental AI Strategy," and Smart Africa has created an AI blueprint, with some countries formulating their own national AI strategies. Nevertheless, questions remain as to whether these approaches are harmonized and sufficiently linked as multilateral initiatives.
      • One of Africa’s challenges is data. The number of data centers across the continent is low, equivalent to that of Ireland. This situation needs to be addressed by strengthening infrastructure. While it is possible to use other countries’ infrastructure, doing so risks a loss of sovereignty. To achieve ethical and fair AI, it is crucial to keep data under domestic jurisdiction.
      • There is also a skills gap. Five years ago, there was a focus on training programmers, but now that AI can write code, we need to consider new methods of skills development.
      • Another issue is data sets. Many AI tools rely on data sets from other regions, and there is a lack of data sets unique to Africa. As a result, AI often fails to reflect African cultures and stories. Since the performance of AI is determined by the data it is given, it is essential to build data sets that are appropriate for Africa.
    • Mr. Syanseke from Zambia Youth IGF stated the following:
      • From the perspective of young people, many are making significant contributions to the development of AI, playing roles in the stages of innovation and system operation. On the other hand, while some youth misuse AI for shortcuts to achieve leapfrogging, others utilize it with integrity, transparency, and accountability.
      • While many in the Global North are developing AI systems, the Global South is often used for training purposes, and investment in the South remains insufficient. Moreover, Africa faces a shortage of data centers, and data localization is not functioning effectively in practice.
      • In this context, a key priority and expectation from the perspective of young people is how to address the issue of data generated at the local level being managed externally to Africa. Many governments and civil society organizations in African countries rely on global corporate platforms rather than local tools. Consequently, while data generated in Africa is managed in other regions, data governance and regulatory frameworks are being developed within Africa. Bridging this gap is of critical importance.
      • Balancing global data management with local perspectives on data is a significant challenge. Additionally, when considering the extent to which African data contributes to AI systems, a major issue is how much Africa can benefit from the resulting value.
    • Ms. Plonk, Deputy Director, STI, OECD stated the following:
      • Regarding the OECD, there have been significant changes in the field of global internet governance over the past five years. We initially adopted the AI Recommendation, which was revised earlier this year. Additionally, new policy issues have emerged, such as the establishment of the Safety Institute and the integration of the Global Partnership on AI (GPAI) into the OECD’s work program.
      • At the OECD, a community of over 400 experts is actively engaged, and recently, a group specializing in privacy and data in AI was established. This reflects the OECD’s broad sharing of challenges faced across different regions worldwide. Furthermore, the OECD AI Policy Observatory (OECD.AI) collects data and evidence on AI trends, enabling policymakers to use these resources to create supportive policy environments.
      • The OECD aims to provide interoperability and harmonization among different approaches in technology standardization and policy frameworks, applying an analytical and data-driven approach to AI.
      • The adoption rate of AI across industries is still around 8%, with most usage limited to large enterprises. To increase this adoption rate, it is essential to make AI more trustworthy while establishing frameworks to ensure safety, security, and fairness.

    【Question 2】
    The panelists were asked to comment on the measures, actions, and responsibilities that various stakeholders should undertake within the framework of AI governance, based on the responsibilities, roles, and plans of their respective communities.

    • Mr. Verdier, Ambassador for Digital Affairs, France, stated the following:
      • We must recognize that AI is not only a highly promising technology but also a source of power, a subject of competition, and an arena for contests of leadership—whether among companies, between models in a geopolitical context, or among international organizations.
      • For France and many others, one of the greatest threats to the future of AI and its governance is the fragmentation of global governance. Such fragmentation could weaken regulations and governance amid competition, leading to a "race to the bottom." To prevent this, we must remain united and continue exchanging ideas.
      • France places great importance on the political framework provided by the OECD and recognizes the significant achievements of the OECD and G7. However, universal dialogue is also necessary. The Paris Summit, scheduled for 10th and 11th February 2025, is expected to be the largest international summit of its kind to date. The summit will invite 110 heads of state and government leaders, with around 80 expected to attend. In addition, the heads of most international organizations are also set to participate. Furthermore, a very active multi-stakeholder dialogue involving 1,000 to 2,000 representatives from academia, industry, the private sector, and civil society is planned.
      • The agenda consists of four topics: the first focuses on risks, safety, and security; the second on sustainable AI; the third on broad governance; and the fourth on the need for public goods and digital public infrastructure.
    • Ms. Claybaugh from Meta stated the following:
      • Companies bear a critical responsibility. They need to actively participate in ongoing international frameworks and initiatives, adhere to them, and collaborate with their national safety research institutions to advance the research and evaluation of cutting-edge models. It is also essential for developers to transparently disclose how large-scale models are developed, what they are capable of, what risks they entail, and how these risks are being addressed. This represents a significant responsibility for developers.
      • Furthermore, partnerships and public-private collaboration are vital for developing research capabilities and building data that represents the entire world. For instance, Meta is working with the Gates Foundation to develop training data in Africa. Such collaborations aimed at advancing shared goals will become increasingly important in the future.
    • Ms. Quaye, Director of Infrastructure, Skills and Empowerment, Smart Africa, stated the following:
      • From a governmental perspective, one of the main challenges is the significant gap between "documenting governance" and "implementing governance." From the African viewpoint, it is crucial to consider how governance can be executed effectively. No matter how much time is spent on policies and governance, it is meaningless if they are not implemented.
      • Additionally, a multi-stakeholder approach is essential. AI has the potential to connect the world even more than the internet does. Therefore, a universal approach to AI governance is necessary. We fully support a "universal approach that involves everyone." At Smart Africa, we believe it is essential to involve the private sector, civil society organizations, and governments to build a universal model that extends beyond Africa.
    • Mr. Syanseke from Zambia Youth IGF stated the following:
      • From the perspective of young people, there are two key points. First, it is essential not to exclude youth from discussions on governance. In particular, in Africa, many young people feel that governance is being introduced to suppress innovation. Therefore, governments need to support youth innovation and engage in discussions with them about safety measures for addressing risks.
      • Second, it is vital to create an environment where young people can continue innovating using new technologies and AI. Many foundational technologies and infrastructures have been built by young people, so involving them throughout the entire process—from start to finish—is crucial. Especially in policymaking, it is important to establish mechanisms that ensure the voices of young people are reflected.
    • Ms. Plonk, Deputy Director, STI, OECD stated the following:
      • Governance is more than just regulation. While regulation is highly important, governance also encompasses other tools. On this occasion, I would like to discuss the "Hiroshima AI Process International Code of Conduct Implementation and Reporting Framework," which is now in its final stages. The purpose of this framework is to provide a mechanism through which companies and institutions can publicly report activities related to the international code of conduct. This will help move beyond mere principles to build an information ecosystem that informs policymaking.
      • We often work in a vacuum. While we, who work with AI every day, know a great deal, there is still much we do not know. Bridging this information gap is a critical step toward achieving effective governance and regulation. This effort aims to make different systems as interoperable as possible.
      • This will enable researchers and the general public to share information and utilize it in a comparable format. For example, it will allow us to examine what is happening on a global scale, facilitating the transition from negotiation to concrete implementation.

    Finally, Mr. Yoichi Iida, Ministry of Internal Affairs and Communications, Japan provided a summary.

    • Once the Hiroshima AI Process International Code of Conduct is implemented and operated alongside a monitoring mechanism, it will serve as an experimental framework in which the private sector and governments can collaborate to ensure safe, secure, and trustworthy AI systems. While we understand that this is not the sole solution, we are making significant efforts to build a comprehensive and trustworthy governance framework.
    • Such initiatives should be advanced through the cooperation of various stakeholders, including not only governments but also industry, civil society, academia, and youth. We aim to continue working together to build an open, free, and trustworthy global AI ecosystem that is not fragmented.