IGF 2023 Main Session on Artificial Intelligence

    Time
    Tuesday, 10th October, 2023 (06:15 UTC) - Tuesday, 10th October, 2023 (07:45 UTC)
    Room
    Plenary Room
    About this Session
    Speakers:

    Seth Center - Deputy Envoy for Critical and Emerging Technology, U.S. Department of State
    Arisa Ema - Associate Professor, University of Tokyo
    James Hairston - Head of International Policy & Partnerships, OpenAI
    Thobekile Matimbe - Senior Manager, Partnerships and Engagements, Paradigm Initiative
    Clara Neppel - Senior Director, European Operations, IEEE


    Co-Moderation:
    Christian Gmelin, GIZ
    Maria Paz Canales, Global Partners Digital
     

    In today's world, Artificial Intelligence (AI) plays a pivotal role in transforming industries and daily life. By emulating human cognitive functions such as learning, reasoning, and problem-solving, AI has become a powerful tool for driving innovation and addressing complex challenges. To maximize AI's positive impact on society, responsible and ethical development grounded in human rights principles is essential and must be supported by a framework of policies, standards, legislation, and regulation. Translating these into binding agreements would provide a stronger basis for safeguarding rights, promoting fairness, and mitigating the risks associated with AI. Enforceable laws and policies ensure that AI technologies are used responsibly and that those who fail to comply are held accountable. It is therefore imperative to take a globally collaborative and harmonized approach to the governance framework in order to achieve the AI we want.

    Today, the evolving landscape of AI principles, norms, standards, and frameworks for addressing ethical concerns, transparency, and human rights is disjointed, with initiatives being put forward by various governments, organizations, and advocates. However, any development of meaningful global standards requires "effective participation from all countries, including developing and developed countries, and inputs from regional initiatives, as well as the engagement of all stakeholders" (see the IGF 2022 Addis messages).

    Since global agreements on AI are not easily achieved, a central place to discuss, shape, and review policies needs to be identified. As the IGF continues to play a crucial role in shaping the digital landscape, the ever-growing impact of AI demands focused attention.

    The goal of this main session is to discuss how AI could be designed, regulated, and harmonized to better serve all humans, and how this can be achieved. It is necessary to assess the state of development of standards and regulations, share knowledge and best practices, and provide a platform for multistakeholder exchange on how to develop common principles for the AI we want, as well as to ensure that the right institutions are in place to translate them into binding standards and regulations.

    Policy questions that will be addressed during this main session:

    • How could the IGF, as the global multistakeholder and inclusive mechanism within the UN system, be the intermediary to create such a global framework and build or equip the responsible organizations in charge of implementation, monitoring, oversight and compliance?
    • What measures do we want in place, and how should they function, to ensure the effective implementation of global AI policy action that is for the greater good?

    Dr. Seth Center is Deputy Envoy for Critical and Emerging Technology. His previous government service includes serving as a member of the State Department’s Policy Planning Staff, where he helped develop the Department’s cyberspace and emerging technology strategic framework, and as senior advisor to the National Security Commission on Artificial Intelligence, where he led the writing of the commission’s final report. Center also served as Director for National Security Strategy and History on the White House’s National Security Council staff, and as a historian on the NSC staff and for the State Department. Outside of government, he was most recently a senior advisor at the Special Competitive Studies Project, and before that a Senior Fellow at the Center for Strategic and International Studies. Center received his PhD from the University of Virginia and his BA from Cornell University.

    Arisa Ema is an Associate Professor at the University of Tokyo and a Visiting Researcher at the RIKEN Center for Advanced Intelligence Project in Japan. She is a researcher in Science and Technology Studies (STS), and her primary interest is investigating the benefits and risks of artificial intelligence by organizing an interdisciplinary research group. She is a board member of the Japan Deep Learning Association (JDLA). She is also a member of the Cabinet Office's Council for Social Principles of Human-centric AI, which released the “Social Principles of Human-Centric AI” in 2019, and of the Japanese government's AI Strategy Council launched in May 2023. Internationally, she is an expert member of the working group on the Future of Work of GPAI (Global Partnership on AI).

    James Hairston is Head of International Policy and Partnerships at OpenAI. Prior to this, they worked at Meta in multiple roles, including Senior Director of Policy, Reality Labs; Director of Policy, Reality Labs; Head of AI & AR/VR Policy; and Manager of Global Policy Development. Hairston also served as Head of Public Policy at Oculus VR. Earlier in their career, they worked as a Program Analyst at the Hurricane Sandy Rebuilding Task Force and as Policy Advisor to the Administrator at the US Small Business Administration. Hairston started their career as a Research Associate in Economic Policy at the Center for American Progress. They earned a Bachelor of Arts degree in Social Studies from Harvard University in 2007 and went on to attend Stanford Law School from 2007 to 2010, earning a Juris Doctor degree.

    Thobekile Matimbe is a human rights lawyer, researcher, and social justice activist from Zimbabwe, serving at Paradigm Initiative as Senior Manager of Partnerships and Engagements. She has advocacy experience engaging on national, regional, and international platforms. She is the Membership Officer for the African Digital Rights Network and a member of several digital rights coalitions. She manages several digital rights projects, including the Digital Rights and Inclusion Forum (DRIF), and is an Open Internet for Democracy Leader.

    Dr. Clara Neppel is the Senior Director of the IEEE Europe headquarters in Vienna and Head of the IEEE Technology Center for Climate. IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. She contributes to technology policy work at several international organizations, such as the OECD, the European Commission and Parliament, and the Council of Europe. She is involved in efforts related to emerging technologies, entrepreneurship, and education, as well as the societal implications of technology. She joined IEEE after many years at the European Patent Office, where she worked on various aspects of innovation, intellectual property, and public policy in the field of information and communication technologies. Dr. Neppel holds a Ph.D. in Computer Science from the Technical University of Munich and a Master in Intellectual Property Law and Management from the University of Strasbourg.

    For more than 15 years, Christian Gmelin has been advising German Development Cooperation (GIZ) projects on how to exploit the innovative potential of a digital society. To this end, he has always been committed to bringing together unlikely allies from the tech-savvy and analog worlds to the advantage of both. As a true believer in lifelong learning and a digital aficionado, he is keen on harnessing the benefits of digital transformation, and in particular generative artificial intelligence, for better education. In his former life, Christian studied economics and philosophy in Heidelberg and Berlin, and he still firmly believes in the close connection between early romanticism, endless education, and the open source movement.

     

    Key Takeaways
    Inclusivity in AI governance, a strong role for standards and certification, and adaptive AI governance are imperative.
    Call to Action
    Capacity building, strengthening the multistakeholder approach, and establishing feedback mechanisms and accountability.
    Session Report

    Session Title: The AI We Want
    Date & Time
    10th October 2023, 15.15-16.45 (3.15-4.45 pm)


    Speakers: Arisa Ema, Thobekile Matimbe, Seth Center, James Hairston, Clara Neppel

    Moderation: Maria Paz Canales and Christian Gmelin

    This report summarizes the key takeaways and calls to action from the session, highlighting the
    critical importance of inclusivity, standards, and adaptability in the governance of AI technologies.

    -by Yug Desai and Micah Nanbaan


    Key Takeaways:
    1. Inclusivity in AI Governance: The discussion emphasized the significance of inclusivity in
    AI governance, both at the national and global levels. Inclusivity ensures a diversity of
    perspectives and enables the development of policies that consider the unique needs
    and challenges of various communities and regions.
    2. Role of Standards and Certification: Technical standards and certifications were
    highlighted as essential tools in addressing AI-related challenges. These standards
    encompass common terminologies, transparency, and certification processes that
    promote the responsible development and use of AI technologies.
    3. Adaptive Governance: The ever-evolving nature of AI technology requires adaptive and
    agile governance approaches. Participants stressed the need to continuously evolve AI
    governance models to keep pace with technological advancements, all while upholding
    fundamental rights and maintaining accountability.

    Calls to Action:
    1. Capacity Building: Encourage capacity building, particularly within the private sector.
    Private organizations should invest in educating their workforce and developing expertise
    in AI to better comprehend its potential risks and benefits.
    2. Multistakeholder Engagement: Promote multistakeholder engagement by actively
    involving governments, civil society, private organizations, and international bodies in AI
    governance. This approach ensures a more comprehensive and inclusive perspective in
    policymaking.
    3. Feedback Mechanisms and Accountability: Establish feedback mechanisms to collect
    input and concerns from various stakeholders. These mechanisms should be responsive
    to address emerging issues, and robust accountability measures should be implemented
    to ensure the responsible development and use of AI technologies.