IGF 2020 WS #258 Smart but liable: liability in machine-learning applications

    Subtheme

    Organizer 1: Laranjeira de Pereira José Renato, Laboratory of Public Policy and Internet - LAPIN
    Organizer 2: Alexandra Krastins Lopes, LAPIN
    Organizer 3: Thiago Moraes, Laboratory of Public Policy and Internet

    Speaker 1: Verónica Arroyo, Civil Society, Latin American and Caribbean Group (GRULAC)
    Speaker 2: Andrea Renda, Civil Society, Western European and Others Group (WEOG)
    Speaker 3: Nathalie Smuha, Civil Society, Western European and Others Group (WEOG)

    Moderator

    Laranjeira de Pereira José Renato, Technical Community, Latin American and Caribbean Group (GRULAC)

    Online Moderator

    Thiago Moraes, Civil Society, Latin American and Caribbean Group (GRULAC)

    Rapporteur

    Alexandra Krastins Lopes, Technical Community, Latin American and Caribbean Group (GRULAC)

    Format

    Break-out Group Discussions - Flexible Seating - 90 Min

    Policy Question(s)

Among the questions to be discussed, we suggest this non-exhaustive list: Who should be responsible for a machine-learning system’s learning outcomes? A developer? Its seller? Its data controller? For how long after a product or service is purchased should liability for machine-learning systems extend? How should these systems be developed in order to avoid undesirable learning outcomes? How should rules be designed in order to allow for more explainable machine-learning applications? What legal obligations should developers retain after the product or service is launched on the market?

Artificial intelligence-based systems have been applied to almost every human and non-human activity. Machine learning is one of its most widely used applications: such systems are capable of predicting behaviours, creating user profiles, allowing a car to drive on its own and processing human language. However, due to their ability to learn, these technologies occasionally give rise to unpredicted outcomes that may cause damage to consumers. This brings new challenges to the liability frameworks of legal systems around the world. By organizing break-out group discussions, we expect to discuss these and other issues. We also intend to identify possible paths to protect consumers and allow for effective liability frameworks for machine-learning-based technologies.

    SDGs

    GOAL 8: Decent Work and Economic Growth
    GOAL 9: Industry, Innovation and Infrastructure
    GOAL 11: Sustainable Cities and Communities

    Description:

This 90-minute session aims to debate the issues arising from the development of Artificial Intelligence (AI) based systems and how to establish both technical and legal solutions to address liability for damages. The rise of new AI technologies such as machine learning, which has “the ability to learn without being explicitly programmed”, may lead humanity to remarkable social advances but also creates unprecedented concerns about human rights. The session will discuss the possible risks posed by these applications by mapping potential issues of AI-based systems and the difficulties of establishing developers’ liability in this rapidly developing context. These challenges may derive from the technical aspects of AI, including the lack of explainability of many solutions, or from conflicting and frequently changing regulatory arrangements on liability in different countries. By organizing break-out group discussions with experts holding different views (both technical and humanitarian) in the field of AI-based systems, we expect to reach valuable conclusions about how liability rules should be designed in order to keep pace with AI’s development. The session will be split into three parts. In the first part, the panel’s methodology will be explained, with brief introductions from the moderators and guest speakers. In the second part, three groups will be formed from the audience, each led by a guest speaker. The participants will discuss one of the following topics and reflect on innovative methodologies to tackle them: (i) technical challenges for AI explainability; (ii) jurisdictional challenges for AI-based applications; (iii) regulation and enforceability challenges. In the final part, each group will name a rapporteur to present its findings.

    Expected Outcomes

The proposed session shall result in new ideas for addressing liability for artificial intelligence systems. By addressing (1) the main technical challenges AI applications face, such as the explainability of automated decision-making processes, and (2) the urgency of updating regulatory frameworks to keep pace with technological development, we expect to gain clearer insight into how liability rules should be designed in order to hold AI developers, data controllers and sellers liable for the damages to which their applications give rise. The session would also help participants test their ideas and initiatives among their peers in a participative and inclusive manner, allowing diverse experiences to be shared. The outcomes of the debate could thus be applied back in each participant’s community to develop new and more effective approaches to regulating AI in their home countries.

In the first part, the organizers will introduce the methodology and give each guest speaker 5 minutes to present their view on the topic. In the second part, the organizers will help moderate the groups, rotating between them to promote the debate. The organizers should avoid leading the debate, since the idea is for each group to come up with ideas by itself; their role is merely to encourage the discussion. In the third part, the organizers will moderate as the groups’ representatives present their findings.

Relevance to Internet Governance: In accordance with the Tunis Agenda for the Information Society, Internet governance shapes the evolution and use of the Internet, which makes it relevant to discuss the regulatory challenges of Artificial Intelligence (AI) usage at the Internet Governance Forum. It is fundamental for society to take advantage of all the Internet’s benefits, and to that end an appropriate regulatory framework needs to be put in place. To render AI-based systems safe and ethical, legal and technical standards should be developed that allow for their sustainable development, promoting inclusion through responsible innovation.

Relevance to Theme: Addressing liability for artificial intelligence-based systems is relevant to the "Trust" Thematic Track, since it concerns the safety and security of people affected by a rapidly developing industry with wide social impact. Collaborating to regulate the topic through a multistakeholder approach provides the tools to protect digital and human rights and to establish proper liability without prejudice to innovation and economic development.

    Online Participation


Usage of IGF Official Tool. Additional tools proposed: StreamYard for online moderation on YouTube.