The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> PEACE OLIVER AMUGE: Thank you very much. My name is Peace Oliver Amuge. I work as the African regional lead, and I am an outgoing member. I'm very pleased to moderate this session. The audio, it's okay? Okay.
I will just briefly introduce the session to you. This is a collaborative session on Tackling Disinformation in Electoral Contexts. Channel 4, please. We are on channel 4. Channel 4. If you have just come in, get yourself the gadget.
Yes, as I said, this session is on tackling disinformation in the electoral context. As you are all aware, this year has been called the year of elections. We've had several countries going through elections, and we know that during elections human rights are at stake, and we've had growing problems with disinformation during elections. These are issues that put human rights and democracy at stake. So this is a very important and crucial discussion to have, and we are very happy to have you join us. We have the distinguished panelists that you see here, and we should have one panelist joining online. In this session we will discuss a couple of issues: the role of different stakeholders, the role of social media, the norms, principles, frameworks, and standards, some of the steps we can take to counter disinformation, and we will be sharing different contexts when it comes to disinformation. I can see the room is quite diverse, so I think we will have a wealth of discussion.
I want to just mention that we have an online moderator, Manuel, who is here in the room. We have rapporteurs who will be supporting us: Michelle from Zambia and Umut, who will support us with the rapporteuring. Big thanks to the organizers, some in the room and some not, from the Asia Pacific IGF, the Bangladesh IGF, the Caribbean IGF, the Colombia IGF, EuroDIG, the Gambia, South Sudan, and several others. I will not mention them all because we lost a little bit of time at the beginning. As for the panelists who are here, plus one who might join, we have Aiesha, who will introduce herself later, and we have Giovanni, Nazar, and Juliano here. Thank you very much for making time. I think we can already start our conversation.
I am keeping my fingers crossed that we don't have any technical glitches. Please just give me a sign if you can't hear me or you are having trouble. If anyone walks in and sits near you, let them know we are on channel 4, and, tech, please let them know that we are on channel 4.
Since Poncelet is not on yet, I will come to you, Giovanni, to open our discussion. The question to you is: how have existing regulations addressed disinformation during elections? What are the practices that balance combatting false information with protecting freedom of expression?
>> GIOVANNI ZAGNI: Now it's on. Now it works. Okay. Thank you for this question. Good afternoon. I will answer by making reference to what I know best, which is the European Union case, which is peculiar in many ways.
First of all, the European Union is not the place where the majority of very large online platforms are based, which is clearly the U.S., but at the same time the E.U. has always taken a very proactive approach when it comes to regulation. A common saying in this area is that the U.S. innovates while the E.U. regulates.
Secondly, 2024 was the year when a number of European states went to the polls for a variety of national elections, from Spain to Finland and from Greece to France, but also when a Europe-wide common election, so to say, took place for a new European Parliament, the only directly elected body of the European Union.
Thirdly, in 2024 important new E.U. regulations, like the Digital Services Act, DSA, and the Digital Markets Act, DMA, were not yet fully enforced, because even though they have been approved by the relevant institutions, the process of implementing them is still ongoing. So they were not able to affect the electoral processes that took place this year.
So how did the E.U. address the issue of disinformation in the crucial electoral year 2024? The main tool was the strengthened Code of Practice on Disinformation, which was promoted by the European Union. The code was presented in June 2022, and it is a voluntary and co-regulatory instrument developed and signed by 34 signatories at the time of adoption. Who are the signatories? Players from across the ecosystem: advertisers, ad tech companies, fact checkers, many, but not all, of the very large online platforms, civil society, and third-party organizations.
A few names: Meta, Microsoft, the European Fact-Checking Standards Network, the European Association of Communications Agencies, Reporters Without Borders, the World Federation of Advertisers, TikTok, Twitch, and Vimeo.
All these signatories agreed to form a permanent task force with the idea of ensuring that the code adapts and evolves in view of technological, legislative, societal, and market developments.
The task force is a place where representatives of the different stakeholders can exchange information, request specific action, and discuss the best way ahead. The code, therefore, is the key instrument of the European Union's policy against disinformation, and its two key characteristics are that it is a voluntary and a co-regulatory instrument. So coming back to your question, the second half of it is how you balance that with protecting freedom of expression. The European way, so to say, is to have all the relevant stakeholders around the same table and not to impose any kind of direct intervention from the authorities on specific content, but rather to have a forum where potentially damaging cases or potential threats or things that need to be looked after are discussed, and then the platforms decide whether to take action or not. I'll give you a very practical example to conclude.
The recent elections in Romania that took place a few days ago made headlines in Europe and beyond because, under the suspicion of foreign interference, they were annulled by the Romanian Constitutional Court, and the first round of the elections has to be redone. During this process, basically all the stakeholders involved in the code decided to set up a rapid response system. What that meant was that there was a mechanism through which any fact checker or civil society organization could say: look, in our day-to-day work we noticed that these particular suspicious activities happened on this particular social network platform. So now it's up to you, my dear social network platform, to check whether this particular phenomenon violates the terms of use. As you can see, there is no direct intervention, and no regulation or law yet by which you have to do something, but there is this co-regulatory and collective effort by the stakeholders involved to work together.
Thank you.
>> PEACE OLIVER AMUGE: Thank you very much, Giovanni, for that very informative account of the regulations and the collective actions being taken. I think it's very key to have these frameworks in place when we talk about disinformation. I've been told that we have Poncelet connected, and I would like us to hear from him. As we all know, sometimes tech can be difficult. I also wonder why we don't have Poncelet on the screen. Tech, if you could let us have Poncelet, and Poncelet, if you can hear us, would you please just say something? Poncelet, are you able to hear us? No, okay. Okay.
I think our online moderator is trying to sort that out. Then I will come to you, Aiesha. We've heard what Giovanni ‑‑ Poncelet, can you just open your mic and say something?
>> PONCELET ILELEJI: Yes, thank you very much, Peace. Sorry. I was waiting to be granted access. Can you hear me?
>> PEACE OLIVER AMUGE: Yes, we can hear you. Are you able to speak now so we can hear from you, or should we listen to Aiesha first?
>> PONCELET ILELEJI: Yes, yes. You can definitely hear from me. Thank you all. One of the most ‑‑
>> PEACE OLIVER AMUGE: Poncelet, this is the question that I would like you to take. Share with us the role that traditional media and social media play during elections, how effective they are, and how regulations have been used to address these issues of disinformation.
>> PONCELET ILELEJI: Speaking from a sub-Saharan point of view, you will notice that the role of social media in terms of misinformation is very important. We have to ask why this has become so important. Most young people and most political parties all around the world today disseminate information through social media. Whether it's Twitter, whether it's TikTok, whether it's X, they have used all of these to disseminate information.
Most people don't naturally use mainstream media; they use social media as the way in which they get information. One good example of a way to combat this is making sure that within countries we have what are called fact-checking websites. For instance, in the Gambia, coming into our last presidential elections, we worked with the Gambia Press Union to set up a fact-checking website that was supported by UNESCO. UNESCO has always been a good agency in supporting a number of countries in setting up fact-checking websites. It's also important to take the right approach in training journalists at the grassroots level, especially journalists working at the community level, using community radio, on how they can work with the fact-checking websites to be able to do this.
Unfortunately, I don't know why my camera is not coming on. It's shown here.
>> PEACE OLIVER AMUGE: Thank you, Poncelet. You talked about fact-checking and how important that is at election time, when there is widespread disinformation. You also mentioned how social media is an important tool that people use to access information, and that the same tool is used to spread disinformation. We will park that, and I would like to hear ‑‑ yes, we can all see you in the room now, Poncelet. Aiesha, we'll come to you. How can programs, initiatives, and values help people identify misinformation, engage diverse communities, and ensure electoral integrity?
>> AIESHA ADNAN: Hello, everyone. Great to be here, coming very far from the Maldives. This is an interesting topic because I come from an organization called Women in Tech Maldives, which initially started to support women and girls. Then we realized we needed to talk about safety, and when you talk about safety, you have to talk about everyone. That is where our work in this space, disinformation, actually began.
We then had the opportunity to communicate with a lot of organizations. One of the areas we work in is identifying disinformation and social media analysis. Coming to elections, one of the interesting things was highlighted by the last presidential election in the Maldives.
Traditionally we have had several election observer groups, and there was nothing about disinformation in most of their reports. But this time it was quite different, because we saw that a major percentage of the disinformation came from online media and very little from traditional media. We know this shift has been happening over the past few years. We don't see much in the traditional media; it goes through social media.
Then you asked about initiatives. When we talk about initiatives, I know that there are a lot of tools available, but can they really fit all countries? No. They have to be designed in a way that fits the cultural norms (inaudible).
>> PEACE OLIVER AMUGE: We're having a cut in the ‑‑
>> AIESHA ADNAN: Sorry about that. I hope you've heard some of my words. Okay? All right. So when we talk about misinformation, what comes to my mind is: what are the ways that we can really tackle it? Right now we are coming up with a lot of tools to debunk the disinformation and everything that you see, but I would like to see us build a culture where we promote information integrity across everyone. Especially when we talk about elections, everyone says it's the media, it's the media spreading the information, and yes, of course, some part is the media, but it's the citizens who believe in it. If they are not equipped with the knowledge and tools to really identify it, that's where we fail, because it's not only about elections; it's everywhere. Information integrity is an important factor to consider. As an organization we have conducted several sessions with the political parties and with the parliamentarians as well, on how they can support these kinds of processes within the communities, because in the Maldives we have remote islands, and the councils and the parliamentarians do travel across them. That is the way we can connect with the communities and run more programs.
In light of this, I also want to highlight two interesting guides. NDI has a very interesting guide to promote information integrity, and this guide has a lot of tools, like the tools they have supported to develop fact-checking and other frameworks. The other one I would like to highlight is UNDP's strategic guide on disinformation and its information integrity framework.
Going back to these kinds of supported initiatives, I would like to highlight one of the initiatives that NDI took, in Georgia. They partnered with local organizations to address the spread of misinformation during elections and its impact. It supported the efforts of private citizens to make informed decisions and reduced the effectiveness of misinformation aimed at influencing public opinion and behavior. We know these kinds of initiatives actually help.
One direction is definitely fact-checking tools and empowering citizens to make the right choices. I would also like to highlight that in the Maldives we are currently working with the community to develop fact-checking systems. Hopefully this will be a way forward for smaller countries like ours. We speak one language, so most of the time, when this kind of information is posted online in our native language, the algorithms cannot pick it up. I hope that all these platforms do consider us, because we also need to exist, and we need that support from everyone.
At the national level we are doing our work, but it is labor-intensive, so that support is needed frequently. Thank you.
>> PEACE OLIVER AMUGE: Thank you very much, Aiesha, for pointing out the gaps and the need for capacity building. When we talk about countering disinformation, we are in a multistakeholder space, and I think it would be nice to talk about that a little. I will come to you, Juliano. How can multistakeholder partnerships and public/private collaborations improve efforts to combat election disinformation and expand media literacy programs to reach all parts of society?
>> JULIANO CAPPI: Thank you so much. Well, I decided to bring a reflection here based on fiction. In the novel 1984, George Orwell states: who controls the past controls the future; who controls the present controls the past. In the dystopian reality he builds, the regime revises every piece of stored information and rewrites it if necessary to conform to the party's vision. He reminds us of the sets of institutions, disciplines, and propositions built throughout human history to organize discourse. More importantly, he sheds light on the control of social discourse as a strategy for maintaining or gaining power. It's at this point that we have to recognize that the internet has brought the challenge of governing speech to an entirely new level.
This is a challenge for society as a whole. In this sense, I understand that multistakeholder spaces are especially important to foster social arrangements capable of dismantling a highly developed industry, the disinformation industry. We have been working on the production of principles and guidelines to combat disinformation.
I guess what happened in 2018 was a trigger to promote debate at the international level on disinformation. Even before that, in 2017, the United Nations signed a joint declaration on freedom of expression, fake news, and disinformation. In the same year, the European Union sponsored a first major study on disinformation, Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making.
One year after that, in 2018, the European Union created the High Level Group on fake news and online disinformation, which produced an important first report called A Multi-Dimensional Approach to Disinformation. This is quite important. I ask you, please go to the next slide, because it was in 2018 that the Brazilian Internet Steering Committee created the Internet and Democracy Working Group, which produced a first publication, the fake news and elections guidebook for internet users.
The guide outlines the general problem, presents concepts, discusses risks, and offers guidelines to prevent the spread of disinformation. In 2018 we had the election in Brazil. The working group carried on working on the challenges imposed by disinformation and produced the report on disinformation and democracy, discussing international initiatives to combat the disinformation industry and proposing 15 directives to address the phenomenon while promoting what we now call information integrity.
In 2021 the working group presented another work, the contributions to combatting disinformation on the internet during electoral periods, looking specifically at electoral periods. Additionally, CGI.br participates in other efforts to combat disinformation. All the work that you can see here in the presentation is available on the internet, and although it is in Portuguese, you can easily translate a PDF into any language through internet applications.
Also, if any country or working group is interested, we could translate this work, but you should consider that it tries to address a specific reality, which is what happens in Brazil.
Still, my feeling is that we can do more. We should ask ourselves about the impact of the work carried out in multistakeholder spaces, which obviously includes the IGF, to combat disinformation. I believe we may find opportunities to improve processes and foster intersectional collaboration to better integrate these different forums. We carry out work every year and lots of people come to the IGF, and it is time for us to think about how to strengthen these forums, considering that in some measure we have failed as a society to combat disinformation.
Thank you.
>> PEACE OLIVER AMUGE: Thank you very much for that. I think you shared the need for research, for harmonizing strategies and efforts, and for embracing collaboration. We are about to open up to hear from you all, but let's first hear from Nazar. Nazar, what role should digital platforms and tech companies play in reducing disinformation during elections, and how can regulations ensure that they are held accountable? Thank you.
>> NAZAR NICHOLAS KIRAMA: Thank you for posing this question. Can you hear me? Okay. Thank you so much for organizing this. My name is Dr. Nazar Nicholas Kirama, from Tanzania. Why are we discussing misinformation in the electoral context? It is because elections have an enormous power to put people in power or to disempower candidates. They are fertile ground that attracts misinformation, disinformation, and fake news. An election is a sort of space where a lot of activity happens in a short period of time during the campaign. Look, for example, at the elections we had in the United States this year: there was a lot of misinformation, disinformation, and fake news.
I wouldn't want to comment much on that, but I think it led to one of the candidates' campaigns being sunk because of misinformation. So what do platforms need to do to ensure that electoral processes are free of all this misinformation and fake news? There are several areas where the platforms and the tech companies need to either invest or do more. Number one is being proactive in terms of content moderation. They need to implement advanced algorithms that will detect and flag any misleading information, so that people who are going to elect a candidate can understand that this is misinformation which has been put out there by a campaign.
I would like to say that, in terms of electoral processes, the tech companies and platforms have become a sort of electoral commission without regulation and without anybody to answer to. They are just out there. The campaigns or countries that have been affected have little or no means to hold the platforms and the tech companies to account for content that has been posted by proxies, by campaigns, or by bad actors working for a certain campaign. So I think regulation of what they do is very important.
Number two is transparency in algorithm policies. These tech companies and platforms need to be transparent about the algorithms they use, so that information is clear and out there, and so that misinformation and similar content is kept away from the candidates, for example.
Number three is collaboration with fact checkers. The platforms should collaborate with fact-checking organizations to verify the accuracy of content and label false information. This partnership ensures objectivity and credibility.
The tech platforms have the ability to make things go viral; they have the ability to reach thousands and millions of people around the world and within a country. Through transparency, partnership, and collaboration with fact checkers, they will be able to detect the deep fakes, the fake news, and the disinformation, and make sure the right information about candidates is put out there. That is very critical. Number four is having public awareness campaigns, within the platforms themselves, within the country, and among the actors: the politicians and the regular consumers on the tech and media platforms. Awareness is very key. Even ordinary citizens need to know when they should believe that a piece of information about a certain candidate is true. So that is very important.
Number five, there have to be some kind of real-time safeguards during electoral periods. The tech platforms have to collaborate, even with the electoral commissions, to ensure that these kinds of safeguards are in place and that the information that is out there is sound. Finally, we need measures to ensure accountability. The platforms and tech companies have to be accountable for the content that is posted on their platforms. This is because sometimes the tech companies and platforms tend to hide behind the veil of freedom of expression, but freedom of expression does not exclude them from accountability. They have to be accountable in some way for content that is posted and that is false.
I think that is very key in terms of making sure deep fakes, misinformation, and disinformation are rooted out of electoral processes, so that in the end citizens enjoy the right of electing candidates not on the basis of misinformation from bad actors working against a candidate, but on the basis of the actual policies that a candidate puts out there for ordinary citizens to consume. Disinformation, misinformation, and deep fakes have the ability to disenfranchise the citizens of a particular country, and right now the tech platforms are not accountable for that information, information on the basis of which people elect a president or a judge, for example; I know that in the United States of America judges get voted into office. That's why I said at the beginning that these tech companies and media platforms have become electoral commissions without regulation, without guardrails to ensure that the content delivered on the platforms resonates with what is actually happening on the ground.
With that, I thank you for this.
>> PEACE OLIVER AMUGE: Thank you very much, Nazar, for your input, and for highlighting some of the issues that are happening and the steps that need to be taken by platform owners and other stakeholders: for instance, valuing and focusing on accountability, transparency, fact checking, and awareness. I must say that, as someone who works in the African region, I've followed the elections that have been happening across Africa. We had over 18 countries going through elections, and disinformation was such a big threat to human rights. It undermined freedom of expression and civic engagement, with people deciding, as was just mentioned, based on rumors and fake news. I think that's a very important point.
Also, access to information: we use digital platforms to access information, and these things were very much undermined during elections. We will open up a bit. Do we have anyone online? If you have a question here, you can ‑‑ yes. One, two, three. Let's just check if anybody is online. Is there anyone? Okay. So you go first, and then ‑‑ you need a mic.
>> Thank you so much. I have a question for Giovanni. You spoke about the platform that more or less assesses whether misinformation or disinformation impacted the result of an election. Are its conclusions binding for the decision makers? Because decision makers may have an interest in the election: they may be candidates, and the sitting government may also be a candidate for a new term. Is the conclusion of the platform binding for the decision makers?
>> PEACE OLIVER AMUGE: Thank you. Let's take the three questions, and then the fourth one here.
>> Okay. Thank you. My name is Nnenna. I listened, and there is a lot of conversation around the right candidate. That sounds like bias towards a particular set of values, because I have observed elections for at least the past 15 years, and I've also worked on disinformation and misinformation for a long time. The fact remains that in every election all parties contribute to misinformation. Our personal biases might tend to show more from one side. I say our personal biases because even online, our clicks and the people we follow help tell the algorithms what to bring to our feeds. I'm wondering if we are looking at it objectively.
Then, secondly, around platform accountability: I agree with you, platforms should be held accountable, but I'm concerned. Accountable for what? We have to be clear what we're asking platforms to be held accountable for. If we decide to start holding platforms accountable for all forms of fake news posted on the platform, it's a roundabout way to stifle free speech. I say this because a platform might have the ability to pull down news that has been verified by fact checkers as fake news, but if there's no means of verifying it, it shouldn't be pulled down just because of my opinion. Do you see my point?
Recently X, formerly Twitter, introduced community notes, and anyone who has community notes access will know that people even misuse it, and this is supposed to be the court of public opinion. We have to be very careful. I say this because, as a Nigerian, I have seen different African governments find ways of trying to regulate and hyper-regulate social media.
When we open a door, we have to be very clear about where that door points, so that we don't open a Pandora's box that will be very difficult to shut. I agree with you about advanced algorithms to flag misinformation, but it's also very, very important that all algorithms come with explainability, because of cultural context.
There are words that mean one thing when I say them and something else when somebody sitting in Italy says them. I can tell someone, oh, you're so silly, you're so foolish, and it's banter, as we call it, but those same words are abuse and insult in a different language. Algorithms, while advanced, may not be the best tools for this. I do agree with collaborating with fact checkers, because it is very important that we have a human in the loop, a person who works on these issues. This is my contribution.
Just saying we should be a little bit more circumspect.
>> PEACE OLIVER AMUGE: Thank you.
>> Two interventions I would like to push across, following what the other colleague just said. On platform regulation: platforms like Meta, I have worked with them on something similar to this, on election content that is misinformation or disinformation. In terms of electoral content, they are doing something about regulating content: when content is reported and found not to be fact-checked, they pull it down on the basis that someone locally has flagged that particular content as not accurate. The other issue, in terms of the electoral context with respect to misinformation, is that I think there is a need for consistent and sustainable national civic education, because basically you tell your story better than anybody. With sustainable national education we can shift misinformation and disinformation.
>> PEACE OLIVER AMUGE: Thank you. There is a hand behind. The mic is close by, so let's take it there, and you'll have it last. Thank you for understanding.
>> Thank you for this. Can I move? Can I ‑‑
>> PEACE OLIVER AMUGE: Yes, please.
>> I'm Kosi. It's not normal to say the platform will be responsible for information that I put online. If I put something online, I am supposed to be the one responsible for it, not the platform. If a government requests information, the platform can share my name and the information I shared on it. It's very important for everybody to know that, because information is freedom. The information I'm sharing is mine, and I am responsible for it. That is very important. All the platforms now run their own regulation processes. We should let them do it better. Thank you.
>> Thank you. I'll tell you my favorite joke. What's the difference between a conspiracy theory and the truth? Six months. Remember how much we heard about COVID-19, and how much of that was treated as not factually correct, was called conspiracy theory, and later turned out to be true. Here I want to highlight the fact that we should be very precise about what we mark as disinformation or misinformation, especially when we are talking about elections, because, as was correctly mentioned before, elections are not about fact checking. They are about political battle and political bias, with all the parties, directly or indirectly, fueling the misinformation narrative across the media landscape just to win. Sometimes they are supported by the established authorities, who just want to sit in the chair for another four years. That's all it is. The intent should be to share all the possible experience we have for finding fakes and disinformation. There are even tools that find deep fakes: you upload a video, it scans the scenes, and it reports the probability of a face being deep faked, like 70 percent or 97 percent or whatever.
I would encourage everybody to be precise about what we label fake or not. Political fakes and electoral fakes are most of the time part of a battle in which somebody is trying to get more power, not a search for truth. Remember the lessons of COVID-19, where lots of conspiracy theories, even claims for which some people were prosecuted, fined, or even jailed, turned out to be absolutely true. Thank you.
>> PEACE OLIVER AMUGE: Let's give the panelists time. Giovanni.
>> GIOVANNI ZAGNI: I'll answer the man in the second row first. Regarding how the thing is framed in Europe, I have to be very clear that an occasion when a candidate says something false is not something that is addressed by the task force. Absolutely not. Neither these entities nor the general way in which the thing is framed in Europe has any interest in policing political debate or political expression in any sense. All candidates in Europe can say basically whatever they want, and there will not be any direct intervention established by this framework.
The things that the code and all the other stakeholders are involved in are things like transparency in funding for political advertising, for example, or flagging cases of possible coordinated behavior. So, for example, a civil society organization or a fact-checking organization can bring to the table something that they've noticed, say, a bot campaign: they think it's a bot campaign, or they think there's a specific account that is particularly active in spreading false news. There is no way to oblige the platforms to do anything; it's up to the platform to decide whether that specific campaign, that specific instance, that specific behavior violates the terms of use. So this is how things currently stand at the European level.
The second thing I wanted to mention is the thought about how countries should be regulating social media. My personal opinion, and probably not that of all the members of the panel or all the people in the room, is that countries should stay as far away as possible from directly regulating through law the spreading of false information on the internet, from any regime in which saying something false is punishable by law, with some exceptions like libel or slander. Generally speaking, freedom of expression has to be the most important value.
At the same time, though, I wanted to point out that basically no human platform or means of communication is completely unfiltered, or if it is, it very soon turns into something that nobody sane of mind would want to be in. All the countries that I know of have strong regulations on some kinds of speech, so unabated free speech doesn't exist. In terms of what we should do when it comes to disinformation, my personal idea is that something like labeling is probably the best thing to do.
Fact checking, in my opinion (thank you, you're very kind, helping me out with this), is not about handing out cards saying what's true and what's false; it is more about providing contextual information to the user and saying, okay, this is what's out there.
You can say that we never went to the moon. That's fine, but keep in mind that, according to this whole list of reputable sources, that doesn't appear to be what really happened. Then it's up to you to make up your mind and evaluate whether those sources are fine. Okay. But, still, I think that providing more information is always better than providing no information. This is just my personal opinion. With that, I'll shut up.
>> PEACE OLIVER AMUGE: Thank you very much, Giovanni. Did you want to take some questions? Yes.
>> JULIANO CAPPI: Yes, thank you. What are we dealing with here? What our colleague has said has everything to do with it: disinformation has everything to do with power. Some have power and want to maintain it, and there are those who want to gain it. I would like to address two points that were mentioned here in the panel. First, I couldn't say that there is no bias in platform models or business models, considering who has been gaining power in the last ten years in Brazil, in Europe, in the United States, and in many other countries in the world. We can see the groups gaining power in congresses and in the media. I mean, it's not just political power; it's communication power. I think there is a relation between the kind of business models established in digital platforms and the advancement of a certain political field. There is bias. It's biased. You cannot just imagine that there are no biases.
This is important because we can try to investigate where this money comes from. Follow the money is one of the things we have to do. You cannot turn a blind eye to who is financing disinformation.
The second point is that I wouldn't be concerned about an excess of regulation at this time, because any regulation is so difficult to get. Europe has done a great job, and even in European countries the challenge of producing any regulation is still great. In Brazil there is no platform regulation at all. This is a fact that we are trying to face, and it's very hard.
We should consider that regulations for the digital era should be based on principles, and I would like to bring up one principle which lies behind the European regulation: systemic risk. This is very powerful, because it is difficult to target specific kinds of content, or to believe that through algorithms we can find what is wrong and what is right, what is true and what is conspiracy theory. Instead, we hold platforms accountable for the systemic risks they put in place in society. I believe that we can find a fair equilibrium for regulating content moderation through the principle of systemic risk. There is a name for this principle, which has been used in some regulations, but I forgot it; it's something like duty of care. This is quite important, I would say.
Finally, to finish, I would like to mention the consultation that we have done in Brazil on digital platform regulation. We established three pillars for regulating the platforms. The first is disputability: as economic theory tells us, we cannot sort out the information problems while we still face the impact of companies that concentrate so much market share in society, like Instagram and WhatsApp, so we have to face the challenge of disputability. The second one is digital sovereignty: you have to look at infrastructure. There is a concept of digital public infrastructure that is gaining hype, and it's important to understand whether infrastructures, whatever their product, serve the public interest or business interests, given that some are serving business interests over the public interest. We should regulate this infrastructure even though it is private. Finally, we have to regulate content moderation, and I think this idea of systemic risk is a good starting point for discussing what kind of regulation we want in different countries. Thank you.
>> PEACE OLIVER AMUGE: Thank you. Let me just check if Poncelet wants to intervene. Poncelet, if you can hear me, do you have any comments or questions?
>> PONCELET ILELEJI: I think, overall, we have to realize that any disinformation in any electoral context has its impact at the grassroots level. Whether or not we use fact-checking sites, the most important thing is for communities to know how misinformation and disinformation can affect their lives. People always listen to what they hear from their community, and we have to work out how to reach those people, and that means getting the stakeholders involved.
Social media has been a game changer. I remember in 2006 when they said the person of the year was "you". Look at it today: it's very relevant, and it's still you, because of the amount of information online. Disinformation has really contributed to a lot of very unfortunate things in the world, especially in electoral processes. I don't have anything more to add: the main focus should be addressing people at the grassroots level. Thank you. Huge, huge thank you.
>> NAZAR NICHOLAS KIRAMA: I want to talk about the right kind of candidate; she was talking about the right kind of candidate. I was contributing that from the perspective of the information about the candidate, not the right candidate in the sense of the ideal candidate for that post. I meant that when there is disinformation against one candidate, the chances are that one of the two candidates will be disenfranchised in terms of the right kind of information being available at that time. I didn't mean having the right candidate, the ideal candidate for that post. I wanted to clarify that.
And one of the things that we have to look at is regulation. There has to be some form of regulation. What we should be against is over-regulation, because if you over-regulate, you stifle innovation and the growth of that particular space. I think regulation, transparency, and having people be accountable will make this space a level playing field, where you are able to interact and have the right, for example, to your vote, and also the right not to have anybody stomp on your toes. Just imagine if I walked into this room naked, yeah? Nothing written on the door says you can't walk in here naked, but if I walked in naked, people would be, like, you know? So we have to have some form of regulation, and these regulations have to be facilitative: they have to facilitate the tech companies as well as the platforms. But there is a red line, which is allowing the platform to be used for disinformation, because disinformation, unlike misinformation, is intentional. For example, I create content and disseminate it in order to disparage your personality: if you are a candidate, I say this guy is a rapist when the guy is not a rapist. If that content were to stay on the platform, its impact on end users would be to denigrate that particular candidate. So, in my opinion, there has to be some kind of regulation and accountability, and collaboration is very key, engaging the fact checkers to verify the information that has been put out there by third parties or about a particular candidate. Awareness and collaboration are very key in terms of where we are going in the future. That will be my fifty cents contribution.
>> PEACE OLIVER AMUGE: Thank you.
>> AIESHA ADNAN: Hello. This has become a very, very interesting discussion now. Yeah. We know how a lot of people try to exert influence. That's the reason I don't believe that we should try and force the platforms to get rid of content, because all these platforms are run by community guidelines, which are available and visible for everyone.
I believe it is more about society helping to debunk this information through the kinds of awareness campaigns we are talking about, because we cannot just let the platforms decide whether something is true or not, especially when they don't have enough information. That is my take on that.
We have talked a lot about regulation, and maybe that will hold the platforms accountable, but we have bigger work to do, which some members of the audience mentioned: communication and information integrity, through information literacy programs. This doesn't only impact elections; it's a general thing. We need to learn to identify what disinformation and misinformation are, and how to really understand deep fakes. That is what I believe we should be focusing on, with less reliance on the platforms. Yeah. Thank you.
>> PEACE OLIVER AMUGE: Thank you very much. I think we've had a very good conversation. We are actually just starting something, but we can't open it up further. She doesn't believe that we should put the emphasis on the platforms' content moderation; I don't know what you think. We have only ten minutes, so we can't go into this conversation. I have had fun moderating this session.
Just before I sum up some of the things that have come up, I want to give each of you one minute, if you want, for your parting shot.
(Inaudible audio).
>> NAZAR NICHOLAS KIRAMA: There is a debate about the battle with the platforms and tech companies and how they can root out the scourge of disinformation. I think that's important for all of us. Awareness, making sure we mitigate things from the end-user perspective, is very key. I hope this session raises awareness of fact-checking, platform accountability, and facilitating information integrity. Thank you.
(Inaudible audio).
>> JULIANO CAPPI: This has become sort of ridiculous: we start to face the problems and tell each other what we have (inaudible). I don't know if you can move up, but I would like to invite you to take some of the hard copies of the work that we have done, the consultation on digital platform regulation, if you are interested. It gives a scenario of the main disputes in Brazil. This is what we need to do: bring up what the disputes in place are and try to sort out these problems. Sorry for this final speech, but I'm really concerned. Thank you.
>> GIOVANNI ZAGNI: My final thought is that there is a strong regional and national specificity to these problems. The issue of disinformation is not at all the same even inside Europe. The problems that I can observe in my own country, as an Italian, are probably very different from what Norwegians see in their country, or from what is happening in Eastern Europe.
What happens in each of these regions is very specific, and I'm not even thinking about what's happening in the Maldives or in Brazil. One thing that I take away from this session is how the issue of disinformation can be kind of an academic and very theoretical thing from some perspectives, and one of the most relevant issues from another perspective. There have been cases in the past few years where disinformation has had such a concrete impact as to be a real problem for the whole society. I do think one of the most difficult things is to agree on common ground at a global level. I'm sorry about that, but it will probably mean much more listening, much more discussing. I think it's great that ‑‑
(Inaudible audio)