IGF 2019 OF #44 Disinformation Online: Reducing Harm, Protecting Rights

    Description

    This session will be a 60-minute discussion on approaches to tackling disinformation.

    This session will consist of short presentations from each of the four speakers, each focusing on a different policy angle. The discussion will then be opened to the floor. The session will focus on the following areas, with suggested questions for panellists below:

    1. Addressing vulnerabilities in the online environment: public pressures, technological solutions and industry’s role.

    • How can technology be used to tackle disinformation? 

    • What role should service providers play in tackling disinformation on their platforms?

    • How can Internet platforms and media outlets work together to fight disinformation? 

    2. Developing regulatory approaches to tackling disinformation while upholding freedom of expression.

    • What is the role for regulation in tackling disinformation?

    • How can regulatory regimes ensure freedom of expression is protected? 

    3. Audiences: Impact, public perspectives on the problem, and the role of education.

    • How can audiences’ resilience to disinformation be increased?

    • Who are the vulnerable audiences?

    • Are audiences informed about disinformation? How do the public perceive the problem? 

    • How can the impact of disinformation on audiences be measured?

    4. Emerging challenges, deepfakes and VR technology: an international approach.

    • What are the key emerging challenges in this area? 

    • How should we collectively respond to emerging technological challenges, including those that do not yet exist?

    • How can we respond to wider forms of online manipulation?

    Organizers

    Department for Digital, Culture, Media and Sport, UK Government
    Atlantic Council's Digital Forensic Research Lab (DFRLab)

    Speakers

    Moderator: Jakub Kalensky - Senior Fellow, Digital Forensic Research Lab

    • Damian Tambini - Associate Professor, London School of Economics

    • Miranda Sissons - Director of Human Rights, Facebook

    • Sebastian Bay - Senior Expert, NATO Strategic Communications Centre of Excellence

    SDGs

    GOAL 3: Good Health and Well-Being
    GOAL 4: Quality Education
    GOAL 9: Industry, Innovation and Infrastructure
    GOAL 10: Reduced Inequalities
    GOAL 16: Peace, Justice and Strong Institutions
    GOAL 17: Partnerships for the Goals

    1. Key Policy Questions and Expectations

    The challenge of disinformation: how can we reduce harm and protect human rights?

    Disinformation is a multifaceted problem with no single solution. It is a global issue, with many countries concerned about its potential harmful impact on security, health and societal cohesion. The objective of this panel session is to discuss approaches to tackling disinformation, drawing on international examples and views from government, industry, civil society and academia, and encouraging cooperation and collaboration among partners.

    2. Summary of Issues Discussed

    The panel discussed the growing problem of disinformation and manipulation online, with agreement that hostile actors’ tactics are constantly evolving and that countering them requires diverse partnerships with representatives from a range of sectors. There was significant discussion of efforts by platforms, particularly Facebook, with recognition that while platforms had taken significant steps, more could be done. The increasing number of companies selling manipulation services online was raised, with some debate on the need to regulate such companies. Participants agreed that it was critical that the impact on human rights be closely considered before any action is taken.

    3. Policy Recommendations or Suggestions for the Way Forward

    The debate considered whether governments may wish to place additional requirements on social media platforms, including potentially changing the liability of platforms, increasing regulation to set consistent standards, or reviewing competition policy. The panellists also considered other steps that could help prevent disinformation, including support for high-quality journalism and increased provision of media literacy education. Other suggestions concerned steps platforms could take, such as increasing transparency, granting data access to researchers, or introducing content labelling to raise awareness of potential source biases. There was wide agreement that the whole community could consider standardising the terminology used to describe these issues.

    4. Other Initiatives Addressing the Session Issues

    The discussion highlighted a number of measures that are already being taken, including:

    • A WhatsApp announcement that it will sue commercial companies that seek to undermine its platform.

    • Facebook measures, including the new content oversight board, the ‘remove, reduce, inform’ policy, increased and publicly announced takedowns of coordinated inauthentic behaviour, and growing partnerships with researchers.

    • The International Fact-Checking Network, which is professionalising standards for fact-checking organisations.

    It was also noted that some countries are introducing legislation to counter disinformation, including Vietnam, Singapore and Nigeria.

    5. Making Progress for Tackled Issues

    The discussion outlined that progress could be significantly improved through increased partnerships between the various communities working on disinformation, including experts from cyber security, technology, human rights, media and journalism. In addition, a greater understanding is needed of how users interact with information on social media platforms, including the cues that users need to make decisions about the veracity of content.

    6. Estimated Participation

    Please estimate the total number of onsite and online participants: 140

    Please estimate the total number of women present onsite and online: 60

    7. Reflection to Gender Issues

    There was some discussion of the need to protect human rights on social media platforms, particularly the right to freedom of speech, alongside recognition of the need to be able to access truthful information. There was no specific discussion of gender.