IGF 2023 WS #572 Workshop on AI Harms and Regulatory Solutions

    Organizer 1: Sasa Jovanovic, Brown University
    Organizer 2: Ayden Férdeline, 🔒
    Organizer 3: Gabriella Ziccarelli, Stripe
    Organizer 4: Matthew Wong, Singapore Consulate to UN
    Organizer 5: Ines Jordan-Zoob, Venable
    Organizer 6: Shreya Nallapati, Neveragaintech
    Organizer 7: Ekene Chuks Okekke, Internet Law and Policy Foundry
    Organizer 8: Ogbeifun Matt, Internet Law and Policy Foundry
    Organizer 9: Nora Grundy, Venable

    Speaker 1: Sasa Jovanovic, Civil Society, Western European and Others Group (WEOG)
    Speaker 2: Ayden Férdeline, Civil Society, Western European and Others Group (WEOG)
    Speaker 3: Gabriella Ziccarelli, Technical Community, Western European and Others Group (WEOG)
    Speaker 4: Matthew Wong, Government, Asia-Pacific Group
    Speaker 5: Nora Grundy, Private Sector, Western European and Others Group (WEOG)
    Speaker 6: Ines Jordan-Zoob, Private Sector, Western European and Others Group (WEOG)

    Moderator

    Shreya Nallapati, Civil Society, Western European and Others Group (WEOG)

    Online Moderator

    Ogbeifun Matt, Civil Society, Western European and Others Group (WEOG)

    Rapporteur

    Ekene Chuks Okekke, Civil Society, Western European and Others Group (WEOG)

    Format

    Panel - 60 Min

    Policy Question(s)

    A. What is the current landscape of AI risks and harms?
    B. Where do the current technical and policy solutions fall short in addressing risks posed by AI?

    What will participants gain from attending this session? Current discussions of AI risk and harm are fairly siloed or piecemeal; this session will provide an organized conversation on these topics at large, shedding light on shared challenges and opportunities across several AI-related areas. This orienting exercise is particularly important to mitigate the risk of conflating distinct AI issues, as the term "AI" is increasingly used broadly to refer to anything from automated decision-making systems to recommendation systems to generative AI. Additionally, youth perspectives have had little representation in the AI debate, and attendees may come away with new insights given the panelists' unique positioning.

    Description:

    As artificial intelligence (AI) advances across a range of applications, including safety, privacy, and cybersecurity, it has become a growing concern for public- and private-sector stakeholders worldwide. This panel seeks to answer two questions: "What is the current landscape of AI risks and harms? Where do current technical and policy solutions fall short in addressing the risks posed by AI?" The first half of the session will address the AI harms currently facing society, providing a taxonomy of AI risks based on whether each relates to social equity, institutional resilience, or economic growth. The integration of AI with other emerging technologies, such as quantum computing, will be a particular focus in the context of security issues.

    In the second half of the session, panelists and audience members will raise potential regulatory solutions for the AI risks they consider most pressing, and panelists will debate the advantages and disadvantages of each approach. The discussion may touch on emerging regulatory frameworks from around the world and the feasibility of implementing those frameworks in practice. Panelists will draw on their perspectives as digital natives and their diverse professional experience across academia, government, industry, and advocacy to offer paths forward for multistakeholder collaboration on AI regulation that works in an evolving geopolitical and security landscape.

    Expected Outcomes

    The Center for Technological Responsibility (CNTR) at Brown University may use this session as a call for participation in a 2024 working group on topics pertaining to AI risk, harm, and regulation. This working group may involve the stakeholders listed above as well as interested audience members.

    Hybrid Format: We plan to leverage our onsite and online moderators to ensure equitable participation among attendees and speakers. The onsite moderator will lead the first half of the discussion and the online moderator the second half, ensuring a hybrid presence, especially as most panelists will attend in person. The onsite moderator will be in communication with the online moderator throughout the event so that questions and comments from online participants are incorporated into the discussion. We will use a survey tool such as SurveyMonkey to crowdsource and 'temp check' the direction of the discussion, for example by prompting attendees to rank the AI harms that are top of mind for them and their organizations. A Q&A session at the end of the discussion will likewise be facilitated by an online tool.