The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> MOLLY LESHER: Just testing that everyone can hear me?
>> SANDER VAN DER LINDEN: Yes. I'm sharing my screen now.
>> MOLLY LESHER: If everyone can test their mic.
>> MARK UHRBACH: Yeah, I'm fine, Shelly.
>> Yes, I can, perfect.
>> Worthy? I see you there.
>> Yes.
>> MOLLY LESHER: And Pablo, I see you, and Hanna. Pablo, you're not able to use the mic?
>> PABLO FERNANDEZ: Yeah, now I am. Thanks, Molly.
>> MOLLY LESHER: Super. Okay. I'm sorry it took so long to log in. I'm sorry.
>> PABLO FERNANDEZ: No worry. No worry. Thanks.
>> MOLLY LESHER: So I think we're waiting, then, for Julie. And I think that's it on mine. Maybe we'll just ‑‑ it's not yet 11:50. We'll just wait another minute or two.
>> MOLLY LESHER: So hi, everyone. I'd like to welcome you here today. I'd like to welcome those who are on site at Addis Ababa. I was supposed to be there, and I'm very sorry that I couldn't make it in the end, but I'm happy to be with you online. My name's Molly Lesher, and I'm an economist. I work at the Organisation for Economic Co-operation and Development in Paris, and I've been working on trying to measure and analyze digital transformation for quite some time. It's a great pleasure to welcome you to the workshop on fighting the creators and spreaders of untruths online. While the dissemination of falsehoods is not a new phenomenon, the Internet has reshaped and amplified the ability to create and perpetuate such content in ways that we're only just beginning to understand.
Such inaccurate and misleading information can intensify social polarization. We're seeing that in a lot of our countries. It can erode public trust in democratic institutions and harm people in society more broadly. And as we've seen in the war in Ukraine, unfortunately this type of information can be used in new and deadly ways. 21st century wars are being fought on the ground, of course, but also now in cyberspace. So I just wanted to, before we start, think a little bit about why this is important. And why do we need access to real, true information? The main reason we see here at the OECD is about fundamental rights. There is the right to freedom of speech, thought, and expression, coupled with a free and independent press. The Universal Declaration of Human Rights grants citizens the right to choose leaders in free, fair, and regular elections, as well as the right to access accurate information from parties, candidates, and other actors that may influence voting. There is also the right to health. Here we can think about COVID. We all heard a lot of mis- and disinformation about COVID, but also about other health dangers like tobacco smoking. And false and misleading information has unfairly interfered with the right to privacy and data protection of the users of online platforms.
Now, beyond fundamental rights, there are other reasons that we need accurate information, why it's important. Key issues include information about climate change, which is timely in the aftermath of COP27, including its causes and impacts, as well as conspiracy theories related to emotional events such as the origins of the 9/11 attacks, and hoaxes of many different types. Now, the OECD, which is where I work, has been looking at this issue from different perspectives. My colleague, Hanna Pawelec, who is on site in Addis Ababa, and I started to think about measuring this phenomenon. And the more we looked into it, the more confusing it all seemed to become. Academics, researchers, and the popular press used a cocktail of terms to describe the different types of false and misleading content that circulates online, and it made it hard to think about measuring something for which there seemed to be no agreed-upon definitions.
So we set about producing a taxonomy of false and misleading content online to support its measurement and the collection of data. This type of information assumes different forms based on the context, source, intent, and purpose. It's important to distinguish between the various types of untrue information to help policymakers design well-targeted policies and to facilitate measurement efforts to improve the evidence base in this important area. And I just want to underline that last point. We severely lack a good evidence base in this area.
We identified five different types of what we call untruths with corresponding definitions, largely with the view to trying to help measurement efforts, and we can put in the chat a link to the paper we produced on this topic. These definitions support a typology of false and misleading content online that can be differentiated along two main axes. One is whether or not the disseminator, or spreader, of the information intends to cause harm, and the second axis is the degree of fabrication, if any, by the creator of the content; that is, altering photos, writing untrue articles, creating synthetic deepfake videos. This distinction, we feel, is important because the remedies will likely be different depending on whether the creator and spreader really intended to harm people with the content.
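As a rough illustration of the two axes described here, the following minimal Python sketch encodes a content item by intent to harm and degree of fabrication. It is hypothetical: the category labels and the describe helper are illustrative only and are not the OECD's own definitions of the five types.

```python
from dataclasses import dataclass
from enum import Enum


class Fabrication(Enum):
    """Degree of fabrication by the content creator (second axis)."""
    NONE = 0      # genuine content, possibly shared out of context
    PARTIAL = 1   # e.g. altered photos, misleading edits
    FULL = 2      # e.g. entirely fabricated articles, deepfake videos


@dataclass
class ContentItem:
    """A piece of online content placed on the two axes of the typology."""
    intent_to_harm: bool       # first axis: does the spreader intend harm?
    fabrication: Fabrication   # second axis: how fabricated is the content?


def describe(item: ContentItem) -> str:
    """Rough illustration of how the two axes separate cases that
    would likely call for different policy remedies."""
    if item.intent_to_harm and item.fabrication is not Fabrication.NONE:
        return "deliberately fabricated and spread to cause harm"
    if item.intent_to_harm:
        return "genuine content weaponised to harm (e.g. shared out of context)"
    if item.fabrication is not Fabrication.NONE:
        return "fabricated content spread without harmful intent"
    return "accurate content spread in good faith"


# Example: a synthetic deepfake video shared to mislead voters
print(describe(ContentItem(intent_to_harm=True, fabrication=Fabrication.FULL)))
```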
Now, as I mentioned before, we're actually right now thinking about how to use this taxonomy to develop cross‑country comparable indicators of some facets of this great challenge to our societies and to democracies more broadly. Now, today we have an absolutely fantastic lineup of panelists from a very varied range of backgrounds and areas of expertise to help us kind of think through this complicated issue and to try to identify perhaps some potential solutions that are emerging.
We'll start with the keynote speech. We'll have a short Q&A after that. And then we're going to move to a panel format where there will be time for questions from the audience on site as well as online. So please do ask your questions in the chat. We will be collecting them to pose to the speakers in due course. So without further ado, I'd like to introduce our keynote speaker today. That's Mr. Sander van der Linden. Sander is a Professor of Social Psychology in Society in the Department of Psychology at the University of Cambridge and Director of the Cambridge Social Decision-Making Lab. He's published over 150 papers. He's very well known in this area, and he also has a new book coming out very soon called "Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity." So Sander, I'd like to give the floor to you and let you share your screen.
>> SANDER VAN DER LINDEN: Thanks so much, Molly. Yeah, thanks so much for inviting me. Let me share my screen. And can everyone see this? Is this okay?
>> MOLLY LESHER: Yes.
>> SANDER VAN DER LINDEN: Perfect. Yeah. So, you know, I would like to start by very briefly talking about the problem and then diving into some potential solutions that we have been exploring through this idea of psychological inoculation that I'll come back to.
I'm sure I don't have to reiterate the harmful consequences of misinformation to people on this call. Here in the UK, people have set cell phone towers on fire, over 50 of them, because of false links between 5G and the spread of the coronavirus. In Iran, hundreds of people have died because they were ingesting methanol, including children. In the U.S., people have been proclaiming that the vaccine changes your RNA. One of my favorite ones, of course, here in the UK is this video which actually purports to show that the needle disappears when you get the vaccine. So, you know, it's a self-disappearing needle. Then, of course, with the Ukraine/Russia war, we've seen a huge amount of disinformation. This is a shallow fake that purports to show a military convoy heading to Ukraine but, in fact, this is California. We've seen lots of disinformation coming from Putin on the war, not to forget the Capitol riots in the United States, which were partly fueled by misinformation. And so I think, you know, even though some research shows that not everyone's affected by misinformation, the consequences can be quite severe.
And I think as Molly was just saying, you know, when you look at the field of research that I'm in, people use different definitions of what's misinformation and what's disinformation. And at least for the research that we do, a lot of people look at fact-checked articles to determine what's fake and what's true. Other means of doing this are looking at the source, so the producer, because there are lists, and, for example, NewsGuard rates the trustworthiness of different sources, and so we can say, well, this source is trustworthy and this one isn't. But both of those come with some problems in terms of trying to target a stable definition of truth. So let me give you an example.
So this is an article from Health Impact News that's been fact checked: vitamin C doesn't protect against coronavirus. So by the first two definitions, this would be a false article. However, things get a bit more tricky when you look at news like this. A healthy doctor died two weeks after getting COVID vaccine. CDC is investigating why. Now, the Chicago Tribune is a reputable outlet. So, you know, based on the source, there's nothing dodgy going on here. In fact, based on the headline, the headline itself is not wrong. A healthy doctor did die two weeks after getting the COVID vaccine, and the CDC is investigating why. But what's being used here clearly is a manipulation technique to try to frame this article in a way to get people riled up about the vaccine. And, in fact, this is not an exception. This was the most frequently shared news story on Facebook during the first quarter of 2021.
So in a lot of our research, we're not so much focused on what's fact checked as true or false, but we're focused on media manipulation more generally, and that's the definition that I maintain in this talk and in our research: it's about the presence or absence of common misinformation techniques. So we don't necessarily want to promote binary ideas about truth. You know, as we've seen throughout the pandemic, the scientific understanding of what's true or not evolved. For example, you know, the effects of ibuprofen on COVID-19: first they thought it was damaging, and then there was the consensus that it wasn't. So, you know, sometimes as our scientific understanding evolves, so do our definitions of which claims are true or not.
So we figure a better way to do this is to actually point people to the cues that are used to produce misinformation more generally and help people calibrate their judgments in terms of how accurate or how manipulative a certain piece of information is. So that's kind of the framework in which we're operating.
Now, the tools we have at the moment that are used are fact checking and debunking to correct misinformation. Very important tools. You know, I fully endorse these tools because they're very important, but they also come with some limitations. And one of those limitations that the literature has identified through lots of meta-analyses is what we call the continued influence effect of misinformation. So once people are exposed to a falsehood, it continues to linger in their memories even when they acknowledge having seen a correction.
Now, there are two kinds of explanations for that, both rooted in memory. One is that people fail to correctly link the fact check to the myth in their memory, so they're not properly linked. The other is called a retrieval error, which is that people do connect them, but they fail to correctly retrieve the correction alongside the myth from memory. And this happens because the myth has become more prominent in memory than the correction. When you keep repeating the myth, it strengthens the connections that the myth has in your memory with lots of other things that you know. And when the correction forms only a minor part of the fact check or the debunk, what it typically does is strengthen the myth, because you keep repeating it, and people fail to adequately retrieve the correction, right? So either people fail to link it in the first place or there is an error in the retrieval process. Now, there are ways to try to optimize for this, which is to really make the correction as prominent as possible and try to avoid talking about the myth altogether. But the problem with debunking is that you are forced into a rhetorical frame where you now have to repeat the misinformation in order to debunk it.
So to try to get around this issue, plus the fact that, practically, it takes two seconds to produce misinformation while detailed fact checks sometimes take a few days or even weeks to produce, and you can't fact check every single piece of misinformation that's out there, we focus on this idea called pre-bunking, which is the opposite of debunking, through a process known as psychological inoculation.
So what is inoculation? Inoculation follows the medical analogy exactly. It was first introduced by a social psychologist in the '60s, who labeled it a vaccine for brainwashing, though he never tested it in the context of misinformation, so that was interesting. But the analogy stands. The idea is that just as with a regular vaccine, where you inject people with a weakened dose of the virus or an inactivated strain of the virus to trigger the production of antibodies and help induce resistance against future infection, it turns out you can do the same with the mind. By preemptively exposing people to weakened doses of misinformation, or the techniques used to spread misinformation, people can build up cognitive antibodies over time. So the idea really is that, you know, when you get a vaccine, it's all about showing your immune system examples of the pathogen that's threatening the body, right? The more examples your immune system sees, the stronger the immune response it can mount. And it's really the same with the mind. The more examples your mind sees of what misleading content looks like, the better it is at countering it.
Now, the important thing here is not to overwhelm the psychological immune system by duping people with misinformation. You want to show them weakened or inactivated strains of the virus. And that's why it differs from fact checking, right? We're not telling people what the facts are. We're giving them a weakened dose, a weakened simulation of the misinformation, and showing how to potentially refute it.
Now, the psychological variant includes a forewarning, which is important to jump-start the psychological immune system. People are busy; they are not paying attention. What we know about people's attention is that if you tell them there are people out there trying to deceive them, it triggers what we call deception monitoring. So people become more likely to scrutinize information and more receptive to whatever you're going to say next, which is the pre-bunk, which in the literature is called refutational preemption. You try to refute a future falsehood. That's a bit of a mouthful, so we just termed it pre-bunking.
Now, initially the news came out with articles like "Cambridge scientists consider fake news vaccine," with one article suggesting it would be possible to vaccinate Americans specifically. One journalist even called us asking if this was fake news. So, you know, we asked people to read the article and see what they think.
So I won't have time in this talk today to go through all of the research that we've done in the past. We started out in the context of climate change. So, you know, we warned people in advance that there are political actors trying to deceive them and that they use specific techniques to try to sow doubt on climate change. Then we gave people what we call the microdose: examples of what this deception looks like. Then we tested them in the lab later on with the full dose, and we found that people became more immune. Not full immunity, but people became more resistant.
There are lots of review articles in the literature for people who are interested in reading about this idea of inoculation theory and the lab studies that we've done. I'm just showing you a few here. But at the end of the day, these were lab studies, right? What I'm interested in sharing with you is our latest fieldwork. We took some inspiration from the Harry Potter novels, from Professor Snape, who basically said your defenses must be as flexible and inventive as the arts you seek to undo. You know, people don't come into the lab and read a 600-word pre-bunking essay in order to get their immunity. So we wanted to turn this into a real-world intervention, so we produced a game called Bad News.
Bad News is a real-world social media simulation where people are exposed to weakened doses of the larger strategies that are used to produce misinformation. I'll show you those in a second. This was a gamified intervention that millions of people were exposed to, and that we could evaluate at a larger scale. We did some games during the pandemic with the United Nations and the World Health Organization, called Go Viral, which is based on the same principle, and during elections, on political misinformation, with the Department of Homeland Security in the U.S. And, you know, here you see basically what happens in the game. You make use of weakened doses of the strategies. This is impersonation. So you are impersonating Donald Trump, for example, here. You can see the Twitter handle is manipulated. Most people don't notice this the first time around. It's very interactive, so people interact with you. You create your own echo chamber, and you learn about some of these tactics: polarization, impersonation (impersonating doctors, celebrities, politicians), floating conspiracy theories, trolling, fearmongering. And so people are inoculated against all of these strategies.
Now, we had lots of papers evaluating this approach and its long-term effectiveness. But we still weren't quite there yet in terms of scaling this approach and implementing it. It turns out not everyone wants to play a game. And so what we did is we started building computational models of how you could actually scale this, kind of like an epidemiological model: if enough people in a network are inoculated against misinformation, do you get herd immunity?
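The sketch below is a minimal toy version of that idea, not the published model: it assumes a random Erdős–Rényi contact network, a fixed per-contact transmission probability, and inoculated users who simply cannot be "infected" by a piece of misinformation, then asks how the final share exposed changes as the inoculated fraction grows.

```python
import random
import networkx as nx


def simulate_spread(n=1000, avg_degree=8, p_transmit=0.1,
                    inoculated_fraction=0.3, n_seeds=5, steps=30, seed=42):
    """Toy simulation: how far does misinformation spread through a
    contact network when a fraction of nodes is inoculated?"""
    rng = random.Random(seed)
    g = nx.erdos_renyi_graph(n, avg_degree / n, seed=seed)

    # Inoculated nodes never adopt or pass on the misinformation.
    inoculated = set(rng.sample(list(g.nodes), int(inoculated_fraction * n)))
    susceptible = [v for v in g.nodes if v not in inoculated]
    infected = set(rng.sample(susceptible, n_seeds))  # initial spreaders

    for _ in range(steps):
        newly_infected = set()
        for v in infected:
            for u in g.neighbors(v):
                if u not in infected and u not in inoculated:
                    if rng.random() < p_transmit:
                        newly_infected.add(u)
        if not newly_infected:
            break
        infected |= newly_infected

    return len(infected) / n  # share of the network ultimately exposed


for frac in (0.0, 0.2, 0.4, 0.6):
    share = simulate_spread(inoculated_fraction=frac)
    print(f"inoculated {frac:.0%}: final share exposed ~ {share:.2f}")
```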
We had some positive answers there, but it really depends on how far you can scale this. So we decided to team up with Google and test this in the wild. And these are some of the latest results that I'm excited to show you today. So what happened with Google is that Google said, look, these games are fun, but they're 15 or 20 minutes; on social media, you know, you have a minute with people, 30 seconds. So we needed to do something that we could scale. So we created videos. These videos follow the inoculation script very closely. So there's a warning that people may be targeted with misinformation. Then they are exposed to a weakened dose of some of these strategies, for example, scapegoating minority groups. Scapegoating is a technique that was used a lot during the pandemic; think of Donald Trump saying Chinese virus, for example. And then people get the microdose, examples of how this occurs in the wild, and that's sort of the inoculation process.
So we recently published this paper, which had six lab studies and one field study. I'm not going to go through all of these studies, but I just want to give you the basic gist of how these studies work and what our findings are. So basically people were randomized into either the inoculation group, where they were shown one of these videos of about a minute and a half, or a control group, which was shown an unrelated video; I think it was about freezer burn. And then they got items to rate as either manipulative or not manipulative, and then they went through the rest of the survey.
I don't have time to show the videos here, but let me explain the concept to you. They're about 30 seconds. They're fun, they're animated. And what they do is they look at some of these techniques in a completely inactivated context. When you work with social media companies, they are very hesitant to touch on real-world issues. They don't want to take any risk. We have a video on false dichotomies. Google owns YouTube, right? So they identified that on YouTube the problem is not so much news headlines, because YouTube is video, right? But there are a lot of political gurus using misinformation strategies to get people into more extremist modes of thinking.
So, for example, they will use emotional language or fearmongering, or they will present false dichotomies to people, or they'll make use of fake experts or conspiracy theories. An example of a false dichotomy is: either you join ISIS or you're not a good Muslim. There are obviously more than those two options, but when people in the moment are targeted with a false dichotomy, the reaction tends to be, oh, yeah, okay, maybe that makes sense. And so what we try to do in these videos is pre-bunk these techniques.
So what we do is we show people a clip from Star Wars, Revenge of the Sith. This is the scene where Anakin says either you're with me or you're my enemy, and Obi-Wan Kenobi replies that only a Sith deals in absolutes. This is a completely safe context, but it's the exact same false dichotomy that's used in real-world issues.
So that's the weakened dose that we present to people.
And then we test them with actual items. So: why give illegal immigrants access to social services? We should help homeless Americans instead. Now, the non-manipulative version of this, which is actually taken from social media, would read: can we address both immigrant access to social services and our domestic homelessness problem? There's no reason why you can't address both, so the first version is a false dichotomy. Another item: politicians may be doing important work, but they're really parasites; they feed off others; they create no wealth of their own. This is the scapegoating technique. So here they're scapegoating politicians. And, again, we're not concerned about true or fake but about these sorts of toxic conversations that are happening online that make use of these manipulative strategies. Okay.
So here are some of the results. I'll end soon. For emotional language, incoherence, false dichotomies, and scapegoating, you have here discernment, which is a variable that captures people's ability to tell manipulative content from non-manipulative content, essentially the difference between how they rate the two. And so we want discernment to go up. So you can see here that people are better able to recognize these techniques. Here's the manipulation technique recognition and discernment. People have higher confidence in their abilities. They find it more trustworthy. And for most of the techniques, not all of them, they indicated lower willingness to share.
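As a worked example of that discernment measure, under the simplifying assumption that it is just the mean manipulativeness rating given to genuinely manipulative items minus the mean rating given to neutral items (the published studies may score it differently), it can be computed like this:

```python
from statistics import mean


def discernment(ratings):
    """ratings: list of (is_manipulative, rating) pairs, where rating is how
    manipulative the participant judged the item to be (e.g. on a 1-7 scale).
    Discernment = mean rating of truly manipulative items
                  minus mean rating of neutral items; higher is better."""
    manipulative = [r for is_m, r in ratings if is_m]
    neutral = [r for is_m, r in ratings if not is_m]
    return mean(manipulative) - mean(neutral)


# One participant's ratings of four hypothetical test items
ratings = [(True, 6), (True, 5), (False, 2), (False, 3)]
print(discernment(ratings))  # 3.0 -> manipulative items correctly rated higher
```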
So here are the effect sizes, if people want to see them, and I'll show you a different effect size in a second. People said, this is great, but what about on social media? So we got YouTube to actually allow us to test this approach in the wild on YouTube. You know when you get those annoying ads on YouTube and you can't skip them? That's where our pre-bunking video was entered. And what happened was that people were randomly exposed. So we had an ad campaign with about a million views on these videos on YouTube. They allowed us to randomly select 30% of viewers for one survey question on the YouTube platform within 24 hours of being exposed to the video. The median exposure-to-survey time was about 18 hours, so it was not directly after. So in this ad space, you were exposed to the video, and then within 24 hours, you got one of those surveys on YouTube.
Interestingly, YouTube normally only uses those surveys for companies to measure brand recognition, but we were able to hijack this for scientific purposes to actually see if people can now identify these techniques. And you can see an example here. This was during the World Cup. You can see the countdown of these ads and the surveys. So what did we find? In the wild, on YouTube, when people are, you know, distracted, we found that these videos boost people's technique recognition by about 5%. That's much smaller than in the lab, but it's still pretty good. If you look at brand recognition numbers, companies are generally happy with a 1% boost in brand recognition, and they spend millions of dollars on that. So a 5% boost was pretty favorable. And YouTube is now rolling out this approach on a larger scale with Google, for example, on the Ukrainian refugee crisis.
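One way to read that field result is as a difference in the proportion of correct technique identifications between treated and control viewers. The sketch below uses made-up counts, not the study's data, and a two-sample proportions z-test from statsmodels purely as an illustration of how such a lift might be quantified; the paper's actual analysis may differ.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: respondents who correctly identified the manipulation
# technique in the one-item YouTube survey (treated group vs. control group).
correct = np.array([660, 610])
shown = np.array([1000, 1000])

stat, pval = proportions_ztest(correct, shown)
lift = correct[0] / shown[0] - correct[1] / shown[1]
print(f"absolute lift: {lift:.1%}, z = {stat:.2f}, p = {pval:.4f}")
```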
So what they're doing here is they're modeling the inoculation process in a video. These are available in different languages. So, again, rather than a fact check, these are people having a conversation about how you can inoculate people at home. What happens here is one person says, oh, there are all these refugees from Ukraine coming to Europe, they're going to steal jobs, it's going to be terrible. And then the other person says, hey, do you know about this technique called scapegoating? It's when you pick a group of people, usually a minority, and you blame them for all of our problems. And this is a technique that's common in a lot of misinformation. Then they have a conversation and inoculate each other against this technique. And it also models how people can do this at home. We just have a guide out today with Google and the BBC, a practical guide to prebunking, if people want to implement this. It goes through all of the steps: what misinformation do you want to prebunk? Who's your audience? What are the right measurements for you? So it's a very detailed, practical guide.
We have one with NATO as well for, you know, during conflicts. I'm not going to go into the Ukraine example, but there is some information on that. Twitter was prebunking until they all got fired by Elon Musk. We are doing some prebunking with Facebook at the moment around climate change in very much the same way. And the last thing I'll say: inoculation.science is where you can find all of our materials for free. It's all open. And as my own propaganda, if people are interested in this research, I have a book coming out called "Foolproof," in which all of it is described. Sorry I went a little bit over, but hopefully it was useful. Thanks so much.
>> MOLLY LESHER: It was great. Thanks so much, Sander. It's always fascinating to listen to you and all of your great research you've done. There's a lot to unpack in that. And I think I want to turn to the audience, although one thing that really stuck in my mind was, you know, your comment at the beginning, once you see a falsehood, it lingers in our minds even if you've seen a correction, which is a really, really horrible thing to think about as we try to, you know, address the negative effects of this type of content online.
I am not on site in Addis Ababa, although I see a very full room. My colleague, Hanna Pawelec, is there. I'm going to pass to her if there are any comments in the room for Sander.
>> HANNA PAWELEC: Just raise your hand if any of you have a question. If you are sitting behind, you will need to go closer to the microphones. And yes, please.
>> Thank you. Thank you, Sander. What I want to raise in my question is: when these kinds of studies took place, how were cross-cultural issues addressed? And the next point is, I think truth is relative. So how can we check which facts are true and which are not, try to control for that, and try to reach a universal consensus across the global Internet connection and the Internet community? This is my question. Thank you.
>> SANDER VAN DER LINDEN: Yeah. So the global question, it's a very good question. So some of these prebunks are being adapted into different languages. But it's not so straightforward, because some of the jokes, some of the humor, will be different in different countries. And so it's not just a matter of translating; we've started to put subtitles in, but that's not ideal, because we want to adapt it to different cultures.
That's why we team up with people with expertise. We worked with the BBC Media Action Lab. They do a lot of research in the global south, for example, also in Africa. And so they are able to work with local partners on different continents to adapt this approach and work with local universities and organizations to make sure that it fits the right context. We are doing something similar with Google. With our games, we team up with local organizations. So we work with partners in Ukraine, for example, on the Ukrainian and Russian versions of our games. We have versions in pretty much every language at the moment for the games because we work with local partners.
But, you know, studying the global efficacy is a good question. In a lot of Western European contexts, we find that the results are pretty similar. But then, you know, we did one intervention in rural India, for example, that didn't really work out as we had hoped, because the digital literacy levels were very different from urban areas. So we really have to adapt our interventions. I would say, and this is also what we mentioned in our recent guide, that you do really have to carefully adapt these interventions for different cultural contexts and test them, and maybe even pretest them sometimes. That's what we tend to do to see if they're appropriate. And then to answer your second question quickly, which I think was about truth being relative: you can do issue-based prebunks. So, you know, you inject people with the facts beforehand, and it's about a specific fact. And you take a stance and you say what the truth is. That is usually based around some scientific consensus. That's possible.
In scaling this, we have taken the approach of not talking about specific facts or claims but rather just inoculating people against these underlying tactics that have been used for over 2,000 years, since Aristotle's time, and are still used today. Things like scapegoating, false dilemmas, conspiracy theories, polarizing groups.
So we typically don't tell people what they need to believe or what the truth is. We just tell them these are the techniques that are used across all sorts of information to mislead people, and we help people spot those techniques. And we find that that's a very nonpolarizing way to get people to think about this idea of prebunking. But, you know, sometimes you have to do an issue‑based thing. So, for example, during elections, you might have specific issues that need to be addressed. But, yeah. I agree, if you can, we kind of want to sidestep that truth debate.
>> HANNA PAWELEC: Yes. Maybe we can take both questions first, and then we'll have time to answer them both.
>> Sure, thank you. I'm the head of strategic communications for the government office of Latvia. First I wanted to say hi to Molly. We met in Paris at the OECD. I really appreciate your work, and I also appreciate the work of Sander and his team, especially the games. We have adapted two of the three games you mentioned into the Latvian language. To this end, I wanted to also share our experience and our observations. One game in particular, Go Viral, did very well. It was picked up by the audience very well, it was played very well, and we saw very good results last year, when COVID was still a thing, at least in Europe. However, this year, in the run-up to our parliamentary elections, our general elections, we also made the decision to adapt Harmony Square. And this game somehow was not really picked up. So this actually brings me to my question.
In your career and your observations, have you observed that one of the three games does better than the others, or that some promotional tactics for the games work better than others? Could you elaborate a bit on this if you have anything to share? And while I'm here, I'm a big fan of your work on the games and short videos. I just want maybe to have a sneak peek: what else is cooking in the kitchen, so to say, behind the scenes?
What's the next thing after short videos, after games? Thanks.
>> SANDER VAN DER LINDEN: Yeah. Thanks a lot.
>> HANNA PAWELEC: Maybe the second question, and that will be the end of the questions from here in the room.
>> Thank you. My name is Owen Bennett. I work with Ofcom, which is the UK communications regulator. I have two specific questions around deployment. One is: does your research suggest whether this model of prebunking could be deployed beyond just pure disinformation? I'm thinking particularly about content that is harmful to children, like self-harm content, or does it only work where there is a fact of the matter?
And secondly, in the collaborations you've had with online service providers, have they raised any questions about legal or regulatory risks which would make deployment more difficult? And I ask that because obviously for this to work, they have to effectively be in some respects promoting disinformation. So I'm curious whether they have any concerns about that. Thank you.
>> SANDER VAN DER LINDEN: Yeah. Those are great questions. Maybe I'll start chronologically with the first. Yes, and I remember John may have worked with the Latvia team also to do those translations. That's great. I mean, we love getting feedback on this also. I don't know why Harmony Square was less effective. We are making some slight changes, in fact, to Harmony Square at the moment to try to update it a little bit. We have generally received very positive feedback about Go Viral during the pandemic in different countries.
So that seems to work well. We try to improve our interventions when we get feedback that in some cultural contexts they were less popular. It also depends on how much investment there is in the rollout. And I will say it also generally just depends on the partner. So, you know, Bad News, which was our original intervention and which was not done with any partners except the university, remains one of the most popular ones and the most continually evaluated one. But, you know, we have noticed that when we team up with external organizations, there can be a bit more criticism sometimes, because people have conspiracy theories about the WHO, for example, or they have feelings about the government. So Harmony Square was done with the U.S. State Department. When people don't like the party that's in power, they are going to be distrustful of anything coming out of the government.
And so we have to deal with this added layer of who is delivering the inoculation. And so we've been experimenting with, well, when actors are contested in some form, maybe there are better, quote/unquote, virtual needles, better ways of delivering the intervention through other parties that resonate more across the political spectrum. And so we've been thinking about who the ideal person or organization is to deliver an inoculation. So I'm not sure how Harmony Square was promoted. What I'm saying is that sometimes we can see that there's less uptake of the interventions if people are skeptical about the source of who's distributing it. And so Bad News is very popular because it's very organic and not promoted by any organization. Then again, Go Viral did really well despite some conspiracies. And for us it's also sometimes hard to know what makes a cultural adaptation effective and when it's not so effective. But, yeah, it's very useful to receive that feedback. On new directions, very quickly: we feel we have enough games now, so we're trying to optimize them. Based on the feedback, we try to optimize them and do the translations, right. Same with the videos.
The next thing we're thinking about is deepfakes: how can we inoculate people against deepfakes?
Deepfakes are out there, but they're not super prominent yet. But now is the time to start inoculating against deepfakes. So that's what we're working on. And the second thing is what we can do for other platforms like TikTok. We have videos, but they're not going to be suitable for TikTok. It's just a different way of how people interact. So we're trying to think about how to engage that generation on TikTok and how to do even shorter, even more scalable versions. So those are two things that we're working on at the moment.
Then for the second question, from the gentleman from Ofcom, on self-harm: you know, this approach lends itself well to any area where you can discern techniques. So extremism and radicalization, for example, because there are recruitment strategies, there are stages in which people are recruited into a cult or extremist organization. We can break those down into weakened doses and refute them. That's why it works well there. It doesn't work so well when there's no systematic approach we can break down or no identifiable techniques. So I don't know enough about the self-harm online space and how that happens. But if there are systematic ways in which people are exposed to content that leads to self-harm, which you could break down into tactics that people can be inoculated against, I think it could be useful. But if not, then sometimes it doesn't lend itself in a very straightforward way. In terms of the issue-based prebunk, obviously if there are facts around this that you can use, that also works. But I don't know enough about the self-harm literature to be able to say off the top of my head whether this is going to work or not. But we are teaming up with the BBC on a youth campaign to try to promote this idea among teenagers across a variety of issues, in a big kind of developmental campaign, to see if we can prebunk toxic content more generally.
Do platform providers have legal issues? Yes. The reason why, when we work with Facebook or Google, we created this sort of completely inactivated strain is that they don't want to use weakened doses of actual controversial issues, which you sometimes have to do. So the weakened doses people get in our work with social media companies are clips from The Simpsons or Star Wars. We're not actually exposing them to a weakened dose of the disinformation itself, because social media companies identify that as a potential risk to them, and that's why we came up with that approach, which is basically risk-free. So when you do the prebunking, I think there are various strategies you can take. You can do a weakened dose of the actual misinformation, you can step back and focus on the strategies, or you can use a completely neutralized dose if you really need a completely risk-free strategy. So those are the layers that we've come up with. I see from Molly that I need to silence myself now. So thanks so much.
>> MOLLY LESHER: I could listen to you all day, but we've got so many other great speakers I've got to get to. But thank you so, so much for that. Just to plus one the whole translation issue: translating something into Hebrew, for example, is extremely complicated. I've had three professional native speakers translate the same thing three different ways. It's extremely complicated. So thank you very much. We'll move into the panel part of our workshop, and we'll hear from all of our panelists before moving to questions from the audience. So please do stay tuned and save up your questions.
First up, we have Julie Inman Grant, who is connecting very late at night from Australia, and we're so grateful to have you here. Thank you for doing that. Julie is Australia's eSafety Commissioner, and in this role Julie leads the world's first government regulatory agency committed to keeping its citizens safer online. And Julie, from the perspective of the Australian eSafety Commissioner's work, you have developed many innovative practical approaches, programs, and initiatives, some of which we've talked about before, to address online harms and risks. In your experience, it would be great to hear, you know, what works and what doesn't. Do you see any differences in the impacts of the interventions on children versus adults, and, getting to a question we see in the chat, on women as well? So, Julie, we're happy to hear from you next. Thank you.
>> JULIE INMAN GRANT: Great. Well, thank you so much for having me, and thank you for the fascinating presentation, Sander. I'm going to take a somewhat different approach. No brain science, and hopefully no brain overload. But certainly countering the negative and making way for the positive is really at the heart of our mission as the eSafety Commissioner. And, of course, our overall goal is to help keep Australians safer and having more positive experiences online.
So the focus of our workshop today on fighting untruths online really draws out an essential aspect of the challenges faced by regulators, service providers, and people around the globe. Now, it's been said that the Internet is nothing more than a reflection of society, a mirror to the world that we live in, and if we don't like what we see reflected, perhaps it's society we need to fix, not the mirror. After all, we're not born with racist, misogynistic, or extremist views. We learn them. But there's no doubt that the power of the Internet is magnifying and further entrenching these points of view. And we can and should acknowledge the many benefits that online services provide in our day-to-day lives. But I think we'd all agree that there are also costs.
And in the digital age, it's far easier to produce and disseminate harmful and misleading content and to spread untruths faster and further. And it's this misleading content that is drawing some users into an online house of mirrors where they can't tell the difference between truth and illusion. And this is where I really hold fears for the future world of VR, AR, and MR, where the real and the virtual are designed to be blurred, and in the area of deepfakes, as Sander mentioned, GANs and generative AI, where we will truly not be able to discern what is real and what is fake with the naked eye. And we know that detection tools are lagging behind the significant advances in these obfuscating technologies.
So another question that's difficult to answer is how much of this is due to (?) interference, how much is due to algorithms promoting negative content to make platforms sticky, knowing that outrage and conflict sell, and how much is a true reflection of deeper fault lines across society?
So as eSafety Commissioner, I'm particularly concerned about online untruths and abuse being used to silence brave voices who could add diverse perspectives to our public discourse. Now, of course, free speech is vital to the health of any modern democratic society, and there's been a lot of talk lately about free speech absolutism. But certainly, if you allow targeted online abuse and harm to proliferate, silencing marginalized voices and restricting online participation by those who are targeted, I think we all suffer.
Now, eSafety's research, as well as the seven years of complaints data we have from interfacing with the public on a daily basis, demonstrates that online harassment is intersectional. It disproportionately targets women, First Nations Australians, people with a disability, those in the LGBTQ+ community, and those of culturally diverse backgrounds.
And while more overt operations designed to shift elections have been in the public spotlight, one of our growing concerns is the escalation of online information operations: highly organized but subtle campaigns targeting influential individuals with systemic but diffuse trolling. And, again, this is designed to intimidate and silence. And we know with pile-ons and this kind of coordinated information campaign, it's the aggregate harm that impacts people over time.
So in Australia, we have recently seen this play out with state-based actors targeting researchers and human rights activists, putting online and offline safety (?) at this intersection of harms where online abuse, misinformation, and disinformation meet. We were set up seven years ago, when the Australian government recognized the growing risk of online harms and established the eSafety Commissioner as the world's first dedicated online safety regulator. In regulating digital platforms and services, we work closely with other regulators as well, like the Australian Communications and Media Authority, which currently oversees Australia's voluntary disinformation and misinformation code. But our efforts to keep citizens safer online are focused through three main lenses: prevention, protection, and what I call proactive and systemic change. Now, our work to prevent online harms occurring in the first place is supported through research and building that evidence base, education, and awareness-raising programs. From a young age, we aim to give our citizens the critical reasoning skills they need to discern the real from the fake, to avoid risky online situations, to know how to seek help, and to develop the digital confidence to be safe, resilient, and positive participants in the online world. We often talk about the four Rs: respect, responsibility, building digital resilience, and critical reasoning skills.
And we base this approach on a growing body of evidence so that we can deliver fit-for-purpose programs that are responsive to the needs of diverse groups and communities. So, for example, we've seen the chilling effect of online violence on the political ambitions and engagement of women and girls, decreasing the (?). So we have a program called Women in the Spotlight, or WITS, which seeks to redress this imbalance. It provides advice and support, along with social media self-defense training, to empower women and girls to stay online and participate in democratic processes.
We also know that meaningful societal change does take time, and until the online winds of change really start to blow, people suffering online harm will continue to reach out for help and protection. So it's worth mentioning that under Australia's Online Safety Act, which came into effect this January, we operate a number of world-first schemes to protect citizens from online harms, which also protects their ability to safely participate in our digital democracy.
So every day, eSafety investigators are helping Australians experiencing serious online abuse, from child cyber bullying, to the sharing of intimate images online without consent, to our new adult cyber abuse scheme, which also covers cyber stalking. All of these are designed to protect individuals from menacing and harassing content intended to inflict serious harm, because that's how harm plays out on social media, dating, and gaming sites.
We also work to combat illegal and restricted content, such as child exploitation material and material that promotes or advocates terrorist acts. Through these schemes, we support individuals by asking, and in some cases compelling through the use of our legal powers, social media platforms and websites to take down abusive and harmful content that meets a specific legislative threshold within 24 hours. We do not assess the content for truth or falsity. We do not adjudicate defamation or harm to reputation. And we're not set up to proactively police the Internet.
Instead, we operate as an important safety net. Most of our schemes are report-based and require a complaint first to the online service. Then, if it is not actioned, and we know that context and volume mean a lot of reports fall through the cracks, people can come to eSafety so we can advocate on their behalf. But we've also been given game-changing tools to target systemic safety issues as part of our focus on proactive change. Industry associations have recently submitted new codes designed to regulate the availability of illegal content on online services, and I'm currently considering whether or not these codes should be registered and whether industry standards should be determined.
And these new codes or standards will work hand in hand with our powers under the Basic Online Safety Expectations to help create an umbrella of protections for Australians online. We're currently reviewing responses to the first round of legal notices we've issued under this law to Apple, Meta, Microsoft, Snap, and Omegle, asking what they are doing, or not doing, to protect their users from online harm, particularly high-risk, high-impact harms, including child sexual exploitation.
So this is a potent transparency and accountability measure, considering that most of the transparency reporting we've seen from industry to date has been what I think might be characterized as selective and uneven. Because if we're really serious about moving the dial towards a safer and more civil online world, we really need a revolution, not an incremental evolution. And the catalyst for this revolution must be a renewed focus on safer product design. So just like the safety and design standards set down for industries like car manufacturing or consumer goods, we need similar rules and standards for the technology industry.
So eSafety has been leading the global charge to shift responsibility back onto the tech sector for putting user safety at the core of product design and development, by assessing risks and embedding protections up front with the help of our safety by design principles and risk assessment tools.
We're also continually scanning for new and evolving online threats, putting them under the microscope so we can strengthen our response, because we know that technology is always going to outpace law and policy. So we're about to release an in-depth look at the impact of recommender systems and algorithms, which have long been suspected of leading users down rabbit holes of polarizing content and stifling balanced debate and discourse. To the extent that algorithms serve as an engine for discovery and our data and preferences serve as the fuel, we will need more algorithmic transparency, too, to understand the path technology companies may be leading us down.
And in the online safety space, we are also seeing more jurisdictions joining Australia in setting up their own laws to protect citizens online. So eSafety has come together with regulators from the UK, including Owen from Ofcom, who is in the room there, Ireland, and Fiji to form a new Global Online Safety Regulators Network, which we anticipate will soon swell in numbers.
Now, this growing momentum gives me a great deal of optimism for the future, because if we truly hope to fix these fundamental global problems, we all need to join forces in a concerted and coordinated effort. I hope that gives you a basic overview of how we're approaching online harms, including misinformation and disinformation, from a different perspective and vantage point.
>> MOLLY LESHER: Julie, that was great. Thank you so much. I took a lot away from that: one, the need to develop critical thinking skills, which is something we're working on here; worries about extended reality and mental health online, which we're also working on; and, kind of recalling what Sander said, the need to embed safety up front, to inoculate in advance. This is all kind of linked together, I think. So thank you very much. We'll take questions at the end.
Now I'd like to give the floor to Rehobot, who is a professional fact checker based in Addis Ababa. She is also a consultant and a fact-checking trainer on countering disinformation. And I was wondering, and I know you're in the room, which is great, from the perspective of a very seasoned fact checker who fights against untruths online daily: how important is fact checking, given what Sander mentioned at the beginning, that we can't fact check everything because there's just too much of this content? And what do you see as the best modalities, and their limitations, particularly for non-Anglophone countries, in combatting untruths online?
>> REHOBOT AYALEW: Okay. Thank you, Molly. Can you hear me? Okay. Thank you. So to begin with, the importance of fact checking is not questionable, because we all know that in this era, as access to the Internet increases, the spread of disinformation is also increasing hand in hand. So the importance of fact checkers who debunk those issues that are circulating online is clear. But despite that importance, and despite how much impact fact checking has online, there are a lot of limitations and challenges that fact checkers are facing all around the world.
I can say the major challenge that fact checkers face is the lack of access to information, especially when we talk about Africa specifically. The lack of timely and credible information is a major cause of disinformation by itself and also makes the fact-checking work really difficult. The lack of awareness and the low literacy rate of the public is the other challenge, because when we talk about disinformation, when we talk about untruths online, most people are not aware of it, especially in Ethiopia and in Africa as well. So the lack of awareness about the problem itself is really challenging, because if people don't know the problem is there, we can't come up with a long-term solution.
So when we talk about low media literacy, it's really critical, because especially in developing countries like Ethiopia, most people think that Facebook is the Internet in general, and that anything that comes from Facebook and the Internet is true. So changing the mindset and the way that people perceive those social media platforms is, I can say, the major task we have.
In addition to that, even though fact checkers are trying their best to counter disinformation, their visibility and their reach are really limited. Since most people don't know about the importance of fact checking, its impact can also be questionable, and it's really challenging, because we are working to monitor every conversation online, we are working to debunk most of the violent and toxic conversations, but our reach and visibility are really low, which makes our impact really questionable.
This also leads us to the major point I want to talk about today, which is the lack of attention from the big tech companies and the social media platforms. When I say lack of attention, I mean allocation of resources, inclusivity, transparency about their efforts or how many moderators they have for each country, and willingness to collaborate with local actors.
For example, we all remember the issue about Facebook and Frances Haugen. I think I pronounced her name right. We all remember how Facebook was criticized for its negligence and its failure to moderate content that incites violence and spreads false information in Ethiopia, and how that failure to moderate this content really contributed to fueling the conflict on the ground, especially in the war and other conflicts.
That's an example of how those platforms and big tech companies are neglecting developing countries instead of allocating more resources to them.
So beyond the negligence, there is deep geographic and linguistic inequality. For example, there are more than 80 languages in Ethiopia, with five major languages spoken. But we don't even know how many of those languages Facebook is moderating, or how many people Facebook has to moderate that content. For the roughly 7 million Facebook users in Ethiopia, we don't know how many content moderators there are. And there are only two fact-checking organizations that work independently for Facebook, with only five fact checkers. So that's five fact checkers for around 7 million social media users. And not only that, the conversation on Facebook and social media doesn't stay there. It impacts the situation and the conflicts and everything on the ground. So it's really questionable.
When we talk about the inclusivity of language, only a small percentage of Facebook users are believed to be English speakers, but more than 85% of its misinformation-related resources go to the English language. So only around 13 to 15% of the resources go to other languages internationally, which also shows how developing countries and at-risk countries are neglected.
I believe the other challenge fact checkers are facing, not only in Ethiopia but also in Africa and in general, is the question of sustainability. As I said, the lack of access to information and that negligence from the platforms make our job hard, but at the same time we don't know when we might have to shut down, because we don't have the resources to continue our work. We rely on funding and grants and donations from other organizations. So the sustainability of fact checkers and small start-ups and fact-checking initiatives, especially the local ones, is really in question.
So since there is also a shortage of resources, of expertise, and of support, we are all facing the problem that even though fact checking is important to counter disinformation, we are still falling short. Yeah. I'm trying to watch my time. I think I'm running over.
For example, there are a lot of Civil Society organizations, a lot of media houses and media development organizations, even the platforms themselves, and governments, all trying to counter disinformation on their own rather than working together. This also creates a big gap in resource allocation and in coming up with a long-term solution for this worldwide problem. So in general, every stakeholder is working on their own, and every resource is being handled separately. The lack of collaboration and support for each other is also one of the reasons that fact checking is facing a lot of challenges.
Last but not least, I want to talk about the most neglected topic around fact checking, which is the mental health and wellbeing of fact checkers. As fact checkers, we face a lot of content that we normally wouldn't see if we were not fact checkers. There are a lot of toxic conversations, a lot of violent content, and a lot of lies, and especially in developing countries like Ethiopia, we don't have access to support on how to handle that content and how to keep our mental health safe. So it's one of the untold stories of fact checkers, but it's also one of the most critical challenges that we are facing, individually and in general as fact checkers.
In a nutshell, I just want to say that we are all trying to solve this worldwide problem, and the lack of collaboration between stakeholders is one of the major gaps we have to fill. I think we can come up with a better solution when we work together. So, thank you.
>> MOLLY LESHER: Thank you.
[ Applause ]
Thank you so much for that. That was really helpful. I think you highlighted really well the challenges you face: five fact checkers for around 7 million social media users, issues around lack of resources, which are very real, and this new issue about the mental health of fact checkers in developing countries, which is something I hadn't thought about. There's also the issue of fact checking in non-Anglophone languages, and I think that's a great segue to our next speaker, Pablo Fernandez, who is Executive Director and Editor in Chief at Chequeado. Pablo is also a Professor at the University of Buenos Aires, where he leads research on technology and media, among many other important roles. And I know you've gotten up very early to be with us, Pablo. I'm very grateful. Thank you for that. We'll look forward to hearing a bit from you about the Chequeado AI tool that facilitates fact checking in Spanish, and it would also be great to have a little bit of your views about the right balance between human intervention and digital technologies in the fight against untruths online.
>> PABLO FERNANDEZ: Yeah. Thanks, Molly. Can you hear me properly? Perfect. I will share something, but really simple. Let me see if it works. Yeah. Can you see it? Yeah, perfect. So first of all, thanks for having me. Thanks for having Chequeado. For us it's really important to be here, because as other panelists have said, things are different in the south, in developing countries. If we have challenges in developed countries, imagine what we have in developing countries. We are not only talking about the problems of how to tackle disinformation; sometimes we have problems accessing databases or getting replies from the government. So even getting to the false or true ratings is really complicated in some countries.
Luckily that is not the case in Argentina right now, but it used to be. When we started in 2010, and that's why I'm showing this slide, our statistics institute was being manipulated by the government, so we weren't able to trust the inflation rate. That's when we started. Part of our methodology was to embed alternative sources in every fact check, for example. And that is something that we brought afterwards to the whole Latin American region. In case you don't know, we are the first nonprofit fact-checking organization in Latin America and in the southern hemisphere. We have been working on this since 2010.
What we want to highlight is that we have a multidimensional approach, and now we are getting to technology. But the important thing is that we don't think there is a silver bullet. We think the solution comes from different angles. One is media and fact checking, but we are also focused on education. We talked about this in the panel before. We have been working on this in high schools but also in universities. Then we work on measuring impact, trying to see what is important about what we do. That's why we were really paying attention to what Sander said about prebunking.
That is something we are going to test in the near future also. And that is the core of this talk.
We work a lot with innovation and technology. And why? Because as Sander said, fact checks usually take a long time to be written. Sometimes a lie starts spreading, and then you have to wait two, four, five days or even a week. So what we built is Chequeabot, an AI tool that helps us find what to check fast in Spanish, and we are doing some development in Portuguese as well. Why is this important? Because, as you know, the lie or the false claim spreads really fast, so we need to be faster. And how can we be faster if we have limited resources, as my fellow panelist said? So we try to work in two dimensions. One is a network. This is LATAM Chequea, which was very important, for example, for covering COVID and the anti-vaccination movement. I was in Kenya a couple of weeks ago, because in Africa they have the Africa Fact Summit, and it's amazing. Everyone in this community of fact checkers is working in networks, because the problems are often really similar across different countries. And the other dimension is this: Chequeabot. There are now seven countries in our region using these tools, which help us find claims really fast. In Argentina, just as an example, we are able to find claims in more than 30 media outlets, presidential speeches, Congress speeches, YouTube channels, and Twitter in seconds. In seconds. That was something that in the past was just a dream. Five years ago we thought it was important to be paying attention all the time to four media outlets, and now we are paying attention to more than 30 outlets. That is one thing.
Then we are able to reply faster through a hub, which is huge in Latin America. We are able to monitor every social network, plus Twitter trends and Google trends, all in a single dashboard. So everything from one point of view, and that, for us, is really important. We are also able to reply in real time to people. And one thing that is also key is that we started developing this with only one developer. That's why I want to talk about this for just ten seconds: this is something that can be done even in the global south. You can develop technology. You don't need a team of 50 people. With the right people, with passion, and with the right knowledge, you can build these kinds of tools. Yes, we were lucky to have a developer who works with natural language processing and then with AI. But afterwards you need to pay attention to what the newsroom needs, what the fact checker needs. And that is also part of our work. We don't build technology for technology's sake; we build technology that we think is useful, and we ask the user whether it is useful or not.
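To make the claim-spotting idea concrete, here is a minimal, hypothetical sketch of the general approach Pablo describes: treat claim detection as sentence classification, score each sentence from a monitored speech or article for "check-worthiness", and surface the highest-scoring ones to fact checkers. This is not Chequeabot's actual code; the tiny training examples, the TF-IDF model choice, and the function names are illustrative assumptions only, and a production system would use far more labelled data and a Spanish-aware NLP model.

# Minimal, hypothetical sketch of claim detection as sentence
# classification. This is NOT Chequeabot's actual implementation;
# the training examples, model choice, and names are assumptions.
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: 1 = contains a checkable factual claim.
TRAIN = [
    ("La inflación anual fue del 88 por ciento según el instituto", 1),
    ("El desempleo bajó tres puntos en el último trimestre", 1),
    ("El 40 por ciento de los hogares recibe asistencia estatal", 1),
    ("Creo que el futuro del país es muy prometedor", 0),
    ("Gracias a todos por acompañarnos esta noche", 0),
    ("Vamos a trabajar juntos por un futuro mejor", 0),
]

def split_sentences(text):
    # Naive splitter; a real system would use a Spanish-aware NLP library.
    return [s.strip() for s in re.split(r"[.!?]+\s*", text) if s.strip()]

def build_model():
    texts, labels = zip(*TRAIN)
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(list(texts), list(labels))
    return model

def rank_claims(transcript, model):
    # Score every sentence and return them ordered by estimated
    # "check-worthiness" so fact checkers see the best candidates first.
    sentences = split_sentences(transcript)
    scores = model.predict_proba(sentences)[:, 1]
    return sorted(zip(sentences, scores), key=lambda pair: -pair[1])

if __name__ == "__main__":
    model = build_model()
    speech = ("Quiero agradecer a todos los presentes. "
              "La pobreza cayó al 35 por ciento este año. "
              "Estamos construyendo un futuro mejor.")
    for sentence, score in rank_claims(speech, model):
        print(f"{score:.2f}  {sentence}")

The point, as Pablo notes, is less the specific model than the workflow: continuously ingest many sources, rank sentences automatically, and let humans decide what actually gets fact checked.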
So in a nutshell, just to wrap it up, I want to highlight that Chequeabot is being developed in the global south. In the beginning it was a team of one, who at the same time was making sure the website didn't break. So this is something, again, that can be done. There are a lot of challenges around language. Even though Spanish is spoken by millions, there are far fewer tools, libraries, and technologies than there are for English. So we really need to share knowledge about this, and if anyone in the room wants to ask me something afterwards, you can write to me.
We also need to be able to see what our colleagues need, because in the region, what someone in Cuba needs, for example, is not the same as what someone in Chile needs. So we need to be able to tweak the technology for them while keeping a similar core, so as not to rebuild everything all the time.
So in a nutshell, to finish, we think that we need a multidimensional approach to this: journalism and fact checking, education, technology, and also working with academia to measure the impact of all this and keep finding out what tools to use. So thanks for this, Molly.
>> MOLLY LESHER: Thank you so much, Pablo. That was great, and it reflects a lot of the thinking we have at the OECD. It's also inspirational: with the right people and technology, we can make a difference and get it done. I think that's true, and I share that. I'd like to move to our last panelist, but certainly not the least, who is Mark Uhrbach, who is also connecting very early in the morning, but from Canada. So thank you, Mark, for being with us. Mark has been a friend of the OECD for a long time. He's head of the Center of Expertise for the Digital Economy at Statistics Canada. He has an awful lot of experience with innovative approaches for measuring various aspects of digital transformation, including wellbeing, among other things. And, Mark, I was hoping you could tell us a little bit about Statistics Canada's efforts to measure false and misleading content online and what we need to do to fill the measurement gaps.
>> MARK UHRBACH: Great. Well, thank you very much, Molly, and thank you for having me here today. It's been a real pleasure to hear the other speakers and all those different perspectives as well. In your opening remarks, you touched on the fact that getting any kind of indicators in this area is a real challenge, and this is identified as well in the OECD toolkit note: the who, what, where, when, and how of this information's spread is really challenging to get good metrics on.
So I'll talk a little bit about what Stat Can has done and where we might go in the future with this. The prevalence of misinformation, and the importance of understanding how it impacts the lives of individuals, what they believe to be true, and how it affects their behaviors, is a real challenge currently facing statisticians, national statistical organizations, and then policymakers as they seek accurate sources for this type of data in order to make proper decisions.
So at Statistics Canada, misinformation and trust in media are included as one of 16 indicators under the good governance pillar of the Quality of Life Framework that has been established and adopted. However, we still lack an ongoing data source to support this indicator. Early attempts to measure the phenomenon have been difficult due to the challenges of explaining the concept and the information sought to respondents when we attempt to collect it through survey questionnaires.
Although ad hoc data collection relevant to the indicator has been completed, it doesn't align directly with the definition under the Quality of Life Framework. And as we are having this conversation today, I think we recognize that nearly everyone sees misinformation, or claims to have, although this remains really difficult to measure since it really is a self-assessed phenomenon. These early attempts to measure misinformation and its consumption have been challenging at best, because while we can accept that everyone sees misinformation, only some may recognize it, and others may judge accurate information to be false based on their own values or beliefs.
So this set of circumstances makes questionnaire design very difficult and potentially misleading.
So as a result, Statistics Canada has chosen to focus survey work to date on the steps that individuals take before sharing information with friends and family and how they verify the information they find online, rather than looking only at the prevalence of misinformation or whether individuals have consumed it.
So I'll talk very quickly here about one study that we did during the height of the COVID-19 pandemic. In the summer of 2020, Statistics Canada fielded a new questionnaire as part of a panel survey program to better understand the degree to which Canadians were fact checking news they found online, how they verified the accuracy of that information, and whether they did so before sharing it online or by other means with friends or family.
So this module asked only about information related to COVID-19, and we found that, as a result, it was easier to have a more focused set of questions than we had attempted previously. Rather than attempting to ask about misinformation writ large, focusing on one topic allowed us to design a module that made more sense to the respondent.
So this, along with a short recall period, seemed to allow for more accurate reporting by respondents. And the results of this study were quite significant. What we saw was that while nearly every respondent said they had seen misinformation online, a much smaller percentage said they always checked the accuracy of the information they find online. Only about one in five Canadians said that when they find information online, they always go and make sure that what they've seen makes sense.
Others were much less rigorous about always checking the accuracy of the information they were finding. The most common reason identified by the 6% of Canadians who never verified the accuracy of the information was that they trusted the source. Among the other reasons, 22% reported that they did not even think about checking the accuracy of the information, 20% didn't care about checking, 11% said they did not know how to check it, and 10% said they simply didn't have the time to check.
And when they did choose to share this information, half of Canadians shared it with their friends and family networks without knowing whether it was accurate. We saw that older Canadians, those over 55, were the most likely to share information they found online about COVID-19 without taking a step to check the accuracy of the statements before doing so.
Although the practices of verifying information online varied by education level and age, we saw that the results were very similar by gender. So despite this exercise demonstrating that surveys can be a valuable tool for measuring some practices and behaviors related to misinformation, it also demonstrated, again, that surveys alone are not an ideal solution for this type of measurement. Survey programs remain limited in their ability to determine whether individuals can spot misinformation, and they are not an effective tool for assessing the prevalence of the misinformation that individuals are exposed to on a daily basis.
So to this end, national statistical organizations must continue to explore new and novel means of fully capturing the phenomenon. At Statistics Canada, there has been some exploration of using an app to capture information on the wellbeing of individuals, whereby respondents record their mood when prompted by an app on their phone that they voluntarily download. One thing we would like to look into is the usefulness of this type of application for data collection related to misinformation as well.
This type of near-realtime data collection could remove many of the issues related to the recall period, which diminish the quality of the data collected. Another method that has been discussed internally is the gamification of data collection, where respondents would attempt to identify misinformation through a type of online game, providing data on the capacity of individuals to sort through different types of information. It was really interesting to hear Sander put forward some of those ideas this morning, and the work they've done is really great to see as well.
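As a purely illustrative aside, the gamified-collection idea Mark describes could be sketched roughly as follows: respondents classify items as accurate or misleading against labels set by the survey designers, and the game records per-respondent accuracy. This is not a Statistics Canada tool or design; the item wording, data structures, and scoring here are assumptions for illustration only.

# Hypothetical sketch of gamified data collection: respondents label
# headlines as misleading or not, and the game records their accuracy.
# Not a Statistics Canada design; items and scoring are illustrative.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class QuizItem:
    headline: str
    is_misleading: bool  # ground truth assigned by the survey designers

@dataclass
class RespondentSession:
    respondent_id: str
    answers: List[Tuple[QuizItem, bool]] = field(default_factory=list)

    def answer(self, item, said_misleading):
        # Record what the respondent said about this item.
        self.answers.append((item, said_misleading))

    def accuracy(self):
        # Share of items the respondent classified correctly.
        if not self.answers:
            return 0.0
        correct = sum(1 for item, said in self.answers
                      if said == item.is_misleading)
        return correct / len(self.answers)

if __name__ == "__main__":
    items = [
        QuizItem("Miracle spice cures all known viruses, study says", True),
        QuizItem("Central bank raises its key interest rate by 0.25 points", False),
    ]
    session = RespondentSession("respondent-001")
    session.answer(items[0], said_misleading=True)   # correct
    session.answer(items[1], said_misleading=True)   # incorrect
    print(f"Accuracy: {session.accuracy():.0%}")     # prints "Accuracy: 50%"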
So while I think we can agree that neither of these offers a silver-bullet solution to developing indicators of this type, they really can offer a step in the right direction. We've gone through this process before with emerging technology trends and new types of indicators, where we have a bit of a breaking-in period of experimentation and exploration. But really, as practitioners, it's only through this exchange of lessons learned and these types of experimentation with data collection that we'll be able to capture more relevant information and produce appropriate indicators for evidence-based policy-making related to this important topic. Thank you.
>> MOLLY LESHER: Thanks a lot, Mark. It was great to hear about more of the innovative approaches that Stat Can is thinking about, like gamified surveys, recognizing that traditional surveys may not be the best approach in this area. I think I share that view. Thank you so much for that. I want to give one last word to my colleague, Worthy Cho, who is online. She is currently a law student at Harvard and formerly a data analyst in Meta's misinformation operations unit, and she worked with us for a little bit this summer on our own measurement and quantification exercise. As someone who's been on the inside, looking at some of this on a major platform, perhaps you could share a couple of thoughts reacting to the panelists' presentations. We don't have too much more time, and then we'll close if there are any questions on site. So, Worthy, great to hear from you now.
>> WORTHY CHO: Thank you so much for having me, Molly. It's been really wonderful to be here and hear all these different perspectives and ideas about this issue. I just wanted to start by saying that I really appreciated you framing this conversation around the idea that the reason this is so important is that it's about fundamental rights. In the United States in particular, we have been having a lot of conversations about how to balance preventing the harms of untruths online with protecting speech, and I think recognizing that allowing this kind of harm to proliferate infringes on fundamental rights as well really resonated with me.
I took a few notes as we were going along. One thing I wanted to say is that, when it comes to coming up with different regulatory and policy approaches to dealing with this issue, I know in the U.S., for example, we've been a bit slow in doing so. So it was really interesting, in Sander's work, to think about other ways we can help people become better at identifying untruths online while regulators and policymakers continue to think about how to address this issue.
Hearing from the fact checkers was also really interesting and really brought me back to my time at Meta, where I had the opportunity, from my perspective, to look into some of the work that fact checkers and content moderators were doing to help prevent the harms of misinformation. Some of the things you said really resonated with me, in terms of allocating resources in a way that isn't purely focused on English-speaking countries while neglecting to devote enough resources to other parts of the world that are similarly dealing with this issue, particularly when you consider that in some parts of the world, Facebook and other social media platforms are really the main place people go to get their news online. So I think it's really important for social media companies to factor that in when they're thinking about how they allocate resources. That's definitely one of many places where social media companies could really improve.
Although I'm in law school, I was an economics major before this, and I love data. So I really do love to hear about the ways in which regulators and multinational organizations are thinking about how data can be leveraged in a really effective way to help tailor and target solutions for dealing with misinformation online. I'm really excited to see that that work is ongoing now, and I'm looking forward to seeing how it develops and grows as time progresses.
>> MOLLY LESHER: Thank you so much, Worthy. It's great to have your insights, as always. I know we've hit time now. It's probably my fault as a moderator, but there was just so much exciting material that we didn't want to cut anyone off. So Hanna, I'm going to pass to you on site and see if there are any questions in the room.
>> HANNA PAWELEC: As before, please raise your hand if you have a question. Yes, please.
>> My question goes to the presenter. As users, for us to use your fact checking, first of all we have to know about your existence, and until today I didn't know that such fact checkers existed. So what do you do to increase your visibility? Have you, for example, appeared on mainstream media? What I'm asking is, how do you make your existence known? Thank you.
>> HANNA PAWELEC: Okay. Thank you.
>> When we talk about fact-checking organizations, there are only two local fact-checking organizations in Ethiopia, plus two other international fact-checking organizations working in the country. The local fact-checking organizations are called Ethiopia Check and HaqCheck. I have been working at HaqCheck for the past two years, and we try to reach as many people as possible on social media and also on mainstream media. For example, we started a weekly television show that covers the most important and most fact-checked issues of the week, so that we can create awareness for the public and reach more people through the mainstream media. So there are efforts to reach the public and more people, but as I said, there is a gap in resources, and it's really hard to manage everything with the number of people we have. There are only four people in the organization working on the fact checking, the publishing, the social media monitoring, and also hosting the show and everything. So the shortage of resources is what's keeping us from reaching other people as well. So, yes, there are efforts.
>> HANNA PAWELEC: Thank you. And we have our next question.
>> Thank you. My name is Grace, and my question goes to Julie. She mentioned society coming in to fight online untruths. So my question is: what is the role of society in fighting online untruths? What does society actually do in fighting them?
Thank you.
>> JULIE INMAN GRANT: That is the major existential question. I mean, I think, as tired as it is, and it's been said many times, we all have a responsibility to maintain the integrity of discourse, the civility of discourse, the relative truth of discourse. There will always be people who will spread malicious disinformation, and there may be people who unwittingly share misinformation for lack of education. But as we see it, we need to call it out, and if there are more people who serve as virtual moderators across society and demand integrity and demand truth, that is the role that society needs to play. And I think we need to decide what it is we're using the Internet and these technologies for. Is it to bolster society, to supplement the good and harness the benefits, whilst minimizing the risks? I think most people could agree with that.
>> HANNA PAWELEC: Thank you very much. And we are now five minutes past end time. So I will thank again all the speakers and all the participants that came here for all the questions and your participation. And thank you and have a good rest of the day.
[ Applause ]
>> PABLO FERNANDEZ: Thank you.
>> MOLLY LESHER: Thank you to all the speakers very, very much. You contributed to making the session a great success. And I'll be in touch. And I wish you a great rest of the day, evening, morning, wherever you are.
>> MARK UHRBACH: Thank you, Molly.
>> PABLO FERNANDEZ: Thanks, everyone.
>> WORTHY CHO: Thank you.
>> PABLO FERNANDEZ: Bye‑bye.