The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> DAISUKE FURUTA: Hello. Welcome to this session, Sharing Existing Practices Against Disinformation (EPaD). Okay. My name is Daisuke Furuta. I am the moderator of this session and the Editor‑in‑Chief of the Japan Fact‑Check Center.
At the G7 Ministerial Meeting held in April, it was decided that each country's efforts on countermeasures against disinformation would be shared and presented here at the IGF. So, this session is a result of that meeting.
I will explain the EPaD compilation, but not only that: today we have amazing panelists here and online for this session. We have Aribowo Sasmito from MAFINDO in Indonesia, Chay Hofilena from the Philippines, Madeline Shepherd from Microsoft, and Shinichi Yamaguchi from Japan. I will ask them how they work on countermeasures against disinformation.
First, I would like to share the EPaD. Okay. The G7 countries and the EU are practicing many measures, so not all of them can be presented today. Please use this QR code to see the list of all measures. The measures can be divided into four categories: Civil Society, social media platforms, research entities, and government. Today, I will introduce only some of the more distinctive measures. I will show the QR code again later, so don't worry.
First, about Civil Society. From Germany, they support projects such as the European Statistics Contest for young students and a webinar for journalists on mistakes in dealing with statistics. This is unique: statistics training for journalists. You know, many journalists, including me, are not very good at numbers, so it's really useful. Then from the UK, they developed resources to build resilience to disinformation, for example, "Be Internet Citizens," a fake news and misinformation advice hub, and the "Find the Fake" game from Internet Matters. Of course, this is important to prevent the sharing of disinformation.
And from Japan, the Japan Fact‑Check Center ‑‑ yeah, it's my own organization, JFC ‑‑ was established. Now we are a signatory of the International Fact‑Checking Network; every country needs its own fact‑checking organization.
And about social media platforms. The platforms are such a big part of this issue, so many countries have been introducing measures; I think this is a significant change in recent years. From the UK, promoting reliable information in search functions, for example, directing users towards gov.uk or the Electoral Commission in the lead‑up to elections. From the EU, improving large platforms' accountability through the Digital Services Act, the DSA. And from France, strengthening the accountability of platforms by requiring them to analyse the systemic risks that the operation of their services generates with respect to misinformation.
And then, research entities. From Canada, "Know It or Not!", a tool built in collaboration with Digital Public Square, a project by the University of Toronto, and MediaSmarts. From Germany, integration of topics of official statistics into bachelor's and master's programmes. And from Japan, they released videos in April to raise awareness about fake news, along with the G7‑related event "Fake News and Japan."
And from governments. From the EU, regulatory or co‑regulatory measures to ensure transparency and platform accountability: the Code of Practice on Disinformation and the Digital Services Act. From the U.S., developing official digital communications channels that ensure credible, fact‑based information is publicly available. And from Italy, AGCOM, the Communications Regulatory Authority, established a working group aimed at fostering pluralism and freedom of information on digital platforms as well.
Okay, so, yeah, that's just a brief summary of EPaD. Please jump to the QR code I mentioned earlier for the list of measures. The list provides a variety of initiatives by multiple stakeholders, such as governments, platforms, Civil Society, and research institutions.
Okay, I will now ask each of you, the presenters, the speakers, to share what your own organization is doing to ensure a healthy information space. So, first, Ari.
>> ARIBOWO SASMITO: Thanks. So, good afternoon, Kyoto, Japan time, and good day, good morning, good evening to whatever time zone you are in right now, for those joining online. My slides, please.
So, for this presentation, I'd like to first thank the MIC for inviting me to attend such an important event. Since it's going to be just a few minutes, I'm going to present just the highlights. So, the title of my presentation is the "Highlight of MAFINDO's Role in Today's Information Ecosystem."
So, MAFINDO was officially established on November 19, 2016. MAFINDO stands for Masyarakat Anti Fitnah Indonesia, a non‑profit organization mainly fighting disinformation and providing literacy education. At the moment ‑‑ sorry, the writing is a little bit small from here ‑‑ our chapters, or branches, are established in 40 cities with approximately 1,000 volunteers.
So, this is how I would like to describe the information ecosystem. We have the platforms; we have the government and related bodies; we have the media; and we have the consumers, the users, also commonly called netizens. And MAFINDO is thankful, grateful, to work with the members of the information ecosystem. Let's move on to the first member of the ecosystem, the platforms. With Google, we are working on various programmes; you will see some of them, since this is just the highlight and it is not possible for me to share every programme that we are working on with the platforms. With Meta, on Facebook and Instagram, as an IFCN‑certified organization, MAFINDO is one of the 3PFC, or third‑party fact‑checking partners. By the way, the IFCN is the International Fact‑Checking Network.
With WhatsApp ‑‑ and this is also quite common among other fact‑checking organizations ‑‑ the WhatsApp Chatbot is currently quite popular, because in many countries, Indonesia being one of them, we provide services through the WhatsApp Chatbot, and at the moment this is the most popular platform. Starting from early this year, we have been working on several programmes, such as Safety Workshop trainings, Introduction to Disinformation trainings for content creators, FGD sessions (focus group discussions), expert roundtables, an NGO Day event, et cetera.
With the government ‑‑ this is the next member of the information ecosystem. During the pandemic, or infodemic, we worked not just with the Indonesia COVID‑19 Task Force, but also with the WHO, UNESCO, UNICEF, and the CDC, and not only on fact‑checking; we also did some backstage work unseen by the public, meaning work that is not displayed or shared on our social media accounts. These activities include Misinformation Inoculation Training and providing SML, or social media listening, data.
The next member of the ecosystem is the media. CekFakta.com is a unique platform. It is a collaboration platform where MAFINDO works with more than 20 of Indonesia's national media outlets, supported by AMSI, the Indonesian Cyber Media Association; AJI Indonesia, the Independent Journalist Alliance of Indonesia; and GNI, the Google News Initiative.
On a daily basis, we are sharing resources, coordinating, sharing fact‑check articles, and also running other activities such as fact‑checking trainings for journalists and fact‑checkers and also digital literacy and fact‑checking trainings for the general public.
The last member of the information ecosystem, but not least: the consumers. We work with CSOs, NGOs, and communities as representatives of the public. MAFINDO works with Siber Kreasi, the Indonesian National Digital Literacy Movement, where more than 100 organizations are collaborating on digital literacy education.
Also, we have a programme called MEDIA, which stands for Media Empowerment for Democratic Integrity and Accountability, our programme with USAID. We established PESAT, the Paguyuban Ekosistem Informasi Sehat, or community of the healthy information ecosystem, in several cities to harness existing communities and bring them together in collaboration.
And not just through these representatives ‑‑ we are also working directly with the public on fact‑checking, or debunking. Professional fact‑checkers, the full‑timers, work with community fact‑checkers on the crowdsourcing side. Their work is stored in a central database. We have the turnbackhoax.id site, which is open and accessible to the public over HTTPS, meaning you can browse it with any browser. It is also available as an RSS feed, and you can also ask us for an API key; usually that is relevant for webmasters or website owners. And from turnbackhoax.id, we publish through our social media accounts: we have X, Facebook, Instagram, TikTok, and other social media accounts. Not only that, we work with a national radio weekly programme, and some of the radio stations also have podcasts, so you can play those back many, many times, whenever you want to listen. Also, we have the Hoax Buster Tools app, and once again, the WhatsApp chatbot.
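For a concrete sense of what that open access looks like, here is a minimal sketch of pulling recent fact checks from an RSS feed. The feed URL is an assumption (turnbackhoax.id appears to follow the common WordPress /feed/ convention); the real endpoint, and any API‑key access, should be confirmed with MAFINDO.

```python
# Minimal sketch: pull recent fact-check articles from an RSS feed.
# The feed URL is an assumption; adjust to the endpoint MAFINDO documents.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://turnbackhoax.id/feed/"  # assumed WordPress-style endpoint

def fetch_fact_checks(url: str = FEED_URL, limit: int = 5):
    """Return (title, link, date) tuples for the most recent feed items."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        tree = ET.parse(resp)  # RSS is plain XML; no extra dependencies needed
    items = []
    for item in tree.getroot().iter("item"):
        items.append((
            item.findtext("title", default=""),
            item.findtext("link", default=""),
            item.findtext("pubDate", default=""),
        ))
        if len(items) >= limit:
            break
    return items

if __name__ == "__main__":
    for title, link, date in fetch_fact_checks():
        print(f"{date}  {title}\n  {link}")
```

A consumer could poll a feed like this to mirror new debunks into their own tooling, which is exactly the kind of reuse that open HTTPS/RSS/API access enables.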
And it's not just debunking. Something has become a trend at the moment: it's called prebunking. Prebunking is a preventive way to empower people by inoculating them against misinformation. So, it's kind of like debunking misinformation before the misinformation actually appears. Prebunking is proactive, and debunking is reactive, so prebunking is actually something that is able to prevent misinformation. Together with fact‑checking, this is part of MAFINDO's initiatives, and we provide this training in several cities. This time it is in preparation for the general election next year, because misinformation is often tightly related to disturbances around elections and political events.
So, one thing that most likely you already know is that the members of the ecosystem depend on and influence each other, which means there is no single cause of the current condition of the media ecosystem. Everyone contributes and depends on each other. So, please keep in mind something that most likely you already know, because in the previous sessions we were also reminded that governments, the private sector, technology companies, CSOs, and everyone in other movements need to collaborate, collaborate, collaborate. Thank you.
>> DAISUKE FURUTA: Thank you, Ari.
(Applause)
Now, as a fact‑checking organization ‑‑ echoing my own Japan Fact‑Check Center ‑‑ we work not only on fact checks but also on media education, and we need collaboration. MAFINDO is a great example. I learned a lot from MAFINDO when I started the Japan Fact‑Check Center. Thank you.
Next, Chay, please.
>> CHAY HOFILENA: Yes, hello. Let me share my slides for a moment. Give me a minute, please. Sorry. Are you seeing my slides? Okay.
>> DAISUKE FURUTA: Yes.
>> CHAY HOFILENA: Okay, finally. I'd like to introduce myself briefly. I'm Chay Hofilena. I'm the Investigative Editor of Rappler, and I also handle training. I'm also one of the founders of Rappler, along with three other women, when we started Rappler about 10 or 11 years ago. I'd like to share with you the journey of Rappler in the fight against disinformation.
So, Rappler started on Facebook in 2011, but we created our own website in 2012. When we started out, we identified three important pillars of the organization. One is journalism. The second is community. And the third is tech and data. We believed then, as we do now, that journalism needs a community to thrive, most especially now, when trust levels for journalists and for journalism as a profession have dropped tremendously and dangerously.
We've told our Rappler journalists, our young reporters, that they need to be comfortable with technology and data if they want to be ahead of the game and to do cutting‑edge journalism. So, by mutually supporting and reinforcing each other ‑‑ Rappler journalists working with the community and using technology and data ‑‑ we hope to be able to build stronger and better communities of action that can bring about change. This is, after all, the essence of journalism.
So, we are purely online, and we turned 11 only last January. We have been recognised for the investigative and data stories that we do. We are also a verified signatory of the IFCN Code of Principles, and we are one of two fact‑check partners of Meta, or Facebook, here in the Philippines, the other one being VERA Files. We have remained independent despite the partnership with Meta, and we've done investigations on the platforms since 2016. And for accountability purposes, we have had a corrections policy since 2012.
Let me tell you about what we've tried to build these past few years. This did not happen instantaneously; even as I speak, we continue to build, create, and expand this network. What we've created is what we call Facts First PH. It's really a community built around fact‑checking and facts‑based reporting.
There are different layers, as you can see in the pyramid. The first layer is fact‑checking. The second layer is the mesh, the groups that distribute the fact checks. The third layer is research, and the fourth layer is accountability. So, essentially, it's media, civil society, academia, and even lawyers.
As Maria Ressa said in her Nobel speech, "Without facts, you can't have truth. Without truth, you can't have trust. Without trust, we have no shared reality and no democracy." So, it's really anchored on truth‑telling and democracy. And you will see fact‑checking at the very base of the pyramid.
What we've done here is bring together different newsrooms and journalist groups, not only in Metro Manila but also in the provinces, and we've worked with volunteers to help us in the fact‑checking effort. To expand this network, we've also done webinars online; we've found that webinars are more efficient and let us reach more students, teachers, and professionals. And once they graduate from the fact‑checking webinars, many of them become volunteers, although with varying degrees of activity.
But beyond the newsrooms, we've also expanded the effort. We've tapped NGOs, businesses, and faith groups that help spread awareness about disinformation and misinformation operations. As I said earlier, this is still a work in progress.
We have also brought in universities and researchers, because, after all, professors and researchers from academia are also interested in disinformation, except that they have difficulty popularizing their research. So, we've teamed up with some universities; we've popularized ‑‑ "storified," as we call it ‑‑ their research and published it on our website.
And finally, we have also pulled in lawyers and other legal groups. They help journalists who have been attacked, trolled, and threatened online. Filipino journalists, most especially under the Duterte administration, are accused of being Communists; the term we use is red‑tagging, and it's very common. So, lawyers have come to the defense of some of these journalists, because they believe that journalism must survive if democracy in the Philippines is to survive. Facts First PH, in general, is really a multisectoral approach to fighting the infodemic. The appeal of this community is to fight the lies that weaken institutions and, ultimately, democracy.
We also have a very, very young population. The Rappler audience is very diverse, but the majority of our readers come from the 18 to 24 age group, extending a bit into the 24 to 34 age bracket. We have felt the need to go beyond plain text; that just does not work anymore, especially for a population that has a very, very short attention span. The younger generation don't read; they don't read long form. They are attracted to video. They like things that are very, very visual. So, we've adapted and adjusted, and we've used visuals like the cartoons you can see on this slide. These were cartoons created and shared during the campaign period preceding the 2022 presidential elections. The hope was that we would make fact‑checking a little more interesting and engaging. The cartoonists and comics creators became very, very active during the campaign period. Whether or not they were successful is another question altogether, because we know who won the presidential elections in this country.
We also tapped what we call influencers online. We wanted to go beyond our usual echo chambers. What was important in working with select influencers is that there was supposed to be a shared value of truth‑telling and certain principles they also held. The objective is to go beyond the echo chambers and to reach communities that these influencers have access to; through them, we were hoping to reach new audiences. They cut across age groups, from the young to the more senior readers and followers, and these influencers have established a degree of credibility among both youthful and more mature audiences. We invited them to be part of the community, and they obliged.
We also found that TikTok, as was shared earlier, has become an exponentially popular platform where we need to be to reach a more diverse audience. The messages essentially have focused on debunking falsehoods and providing useful information.
So, we turn to TikTok. We realize that fighting disinformation is not just reacting to lies that are being spread online, but it also means having to condense, to explain, to summarize, and to popularize very, very complex issues. Not very easy to do, because how can you explain a very complex issue in one minute or 30 seconds or one minute and 30 seconds? It's really, really very, very challenging, but we've been forced to adapt and to adjust to our audience and our readers and to use the platform.
I will admit that, initially, we were very, very hesitant to use TikTok because of privacy and data concerns, but our audience has shifted there, and we know that, just like on Meta, we are also very dependent on TikTok's algorithm. It's a difficult choice, but we decided that we cannot not be on TikTok.
So, today, our young researchers and reporters, and the more senior Rappler journalists alike, use the platform to explain issues, such as the fact‑checking process, which you see on the left. In the middle, one of our researchers tried to explain the use ‑‑ or rather, the misuse ‑‑ of confidential funds in the budget, and this was in reference to Vice President Duterte. And in the third, we also tried to explain the importance of making audit reports very transparent. So far, the feedback has been quite positive, and the views have just been tremendous. So, maybe, you know, we just have to balance things, but so far, so good.
Finally, we've also tapped legal groups and lawyers. As I mentioned earlier, these lawyers are concerned about journalists, especially those who have been harassed, intimidated, and accused of being Communists. One particular group of lawyers, very active today, is called the Movement Against Disinformation. They helped file a case against Meta to compel it to disclose information about anonymous accounts that attacked an Editor‑in‑Chief ‑‑ the guy on the left, the first image there, the Editor‑in‑Chief of a provincial publication. He said, let me know who my anonymous attackers are. But of course, Meta has refused to disclose the information. At least the effort is there, and we will see where it goes.
Another case involves a former government official and her co‑host who had defamed another journalist and accused him also of being a Communist. So, this is the fad nowadays. You're a Communist if you criticize government. So, what he did was he filed a civil case. And take note, it was not a criminal case, but a civil case, for damages, because he does not believe in criminalizing libel.
This is essentially part of the pushback against the spread of disinformation and against the aggressive attempt to further weaken the rule of law and suppress democratic discourse. This is a work in progress, and we hope that the community continues to grow. Thank you.
>> DAISUKE FURUTA: Yes, thanks, Chay.
(Applause)
This is another case of collaboration: Facts First PH working with journalists, lawyers, and other professionals. And it is especially essential to work with influencers on social media, like TikTok, to reach young people who are vulnerable to disinformation.
Okay. Maddie, next, please.
>> MADELINE SHEPHERD: Thank you, Daisuke. And thank you so much for the opportunity to present today. I'm just going to share my screen. Can I get a thumbs up if that has worked? Yes? Fantastic! Okay. So, today I'm just going to provide a very brief overview of Microsoft's existing efforts to combat disinformation. And it's really heartening to hear about the fantastic work of organizations like MAFINDO and Rappler. And this is centered on the fact that ‑‑ sorry, can you still see my screen? No. Sorry. Let me go back. Apologies. Apologies. Okay. Thank you for your patience.
Okay. So, absolutely, Microsoft believes that the private sector has a responsibility to proactively and constructively engage in supporting democratic institutions around the world. The collaborative approach has been alluded to already, so we will build on that in our overview.
So, on this screen are the five principles that really guide our work when it comes to preventing disinformation. They illustrate where we think the private sector can add value, in addition to the important work that Civil Society organizations and governments are already doing in this space. We think it's absolutely crucial to be leveraging technology to help democratic institutions, because quite often it is technology that is causing some of the challenges in the first place. We want to play a leadership role in industry and make sure that other companies in other parts of the private sector are also doing their part. We think it's very important to develop strategic partnerships that cut across sectors, including partnerships with Civil Society and government. And, of course, we must be non‑partisan in our efforts and always work to support democracies around the world.
So, all of our efforts, when it comes to disinformation at Microsoft come out of what we call the Democracy Forward Initiative, and this initiative works to preserve, protect, and advance the fundamentals of democracy by promoting a healthy information ecosystem, by safeguarding open and secure democratic processes, and by advocating for corporate civic responsibility, both from ourselves and from other companies in this space.
I think we all acknowledge very strongly that disinformation erodes trust in the information that we rely on to keep us alive, often. And unfortunately, the local news outlets that many of us previously turned to are disappearing. And so, Microsoft and many other companies are dedicated to supporting a healthy information ecosystem where we can still access news that is trusted and information that is credible.
In June 2022, Microsoft announced its pilot Information Integrity Principles, which outline how we approach disinformation from foreign actors across our products and services. Quickly, these four principles are: Freedom of expression ‑‑ really making sure that we uphold our customers' ability to create, publish, and search for information using our platforms; the importance of authoritative content ‑‑ we always try to prioritize surfacing content that counters foreign cyber influence operations or disinformation campaigns; demonetization ‑‑ we will never willfully profit from cyber influence content or disinformation actors; and the fourth principle, proactive efforts ‑‑ we are always exploring opportunities to work more proactively to prevent our platforms and products from being used to amplify foreign cyber influence or disinformation campaigns.
So, the Democracy Forward Initiative collaborates with teams all across Microsoft, but also external partners to increase societal resilience against disinformation and develop technical solutions and drive impactful thought leadership. So, we do this under a number of different areas. The first one being societal resilience. And here, we're really focused on the development of partnerships across industries to create whole‑of‑society approaches to address the challenge that is disinformation, which, as we all know, is really a whole‑of‑society problem.
One example of a partnership here is our partnership with NewsGuard, a third party that provides credibility ratings and detailed "nutrition labels" for thousands of news and information websites around the world. These rated websites are at the moment quite concentrated in Europe and English‑speaking countries; in fact, they account for 95% of online engagement across the United States, the United Kingdom, Germany, France, and Italy, so there's a lot of value in being able to provide what we call nutrition labels for the content that people are seeing in these countries.
We are also, of course, signatories to the European Union's Code of Practice on Disinformation, and we have just published our report for the first half of 2023. The report notes that more than 6.7 million fake LinkedIn accounts were blocked or prevented from being created in the first half of 2023, and that Bing Search promoted authoritative information or downgraded less authoritative information in almost 800,000 searches relating to the war in Ukraine. Those are just some examples of the proactive efforts we have leveraged on our own products and services.
Another key area of our work in the information integrity space is data integration: working with internal and external stakeholders to detect and learn from disinformation campaigns and leverage those findings to develop new solutions that help take these actors offline. We are consistently conducting research and creating reports on threats and on the attacks that Microsoft and our Digital Threat Analysis Centre have taken action against, and we are increasingly looking at the intersection between cyberattacks, or security breaches, and information influence operations. I think it's fair to say the traditional techniques used in cyberattacks are now being deployed by those running information influence operations and targeted disinformation campaigns as well.
And then, finally, technical solutions are a really important part of the information integrity approach. Microsoft is a founding member of the Coalition for Content Provenance and Authenticity, the C2PA, alongside companies like Adobe, Intel, Twitter, the BBC, and many others.
Earlier this year, the Coalition launched the first version of an open-source content provenance tool, which allows creators to claim ownership of their content while empowering consumers to make informed decisions about what kind of digital media they should trust. We think this is really important as more and more of us use generative AI: there needs to be a lot of transparency around what content has been generated by AI, so consumers have that knowledge.
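To give a flavor of the provenance idea, here is a toy sketch of the core check: a manifest binds a content hash to a creator claim, so any edit to the bytes invalidates the claim. Real C2PA manifests are cryptographically signed and embedded in the asset itself; this simplified, unsigned version is only an illustration, not the C2PA tool's actual API.

```python
# Toy illustration of content provenance, loosely inspired by C2PA.
# Real C2PA manifests are cryptographically signed and embedded in the asset;
# here a detached JSON "manifest" simply records a SHA-256 of the content.
import hashlib
import json

def make_manifest(asset_bytes: bytes, creator: str) -> str:
    """Producer side: record who made the asset and a hash binding the claim to it."""
    return json.dumps({
        "creator": creator,
        "sha256": hashlib.sha256(asset_bytes).hexdigest(),
    })

def verify_manifest(asset_bytes: bytes, manifest_json: str) -> bool:
    """Consumer side: the claim only holds if the hash still matches the bytes."""
    claim = json.loads(manifest_json)
    return hashlib.sha256(asset_bytes).hexdigest() == claim["sha256"]

if __name__ == "__main__":
    original = b"photo bytes ..."
    manifest = make_manifest(original, creator="Newsroom X")
    print(verify_manifest(original, manifest))             # True: untouched
    print(verify_manifest(original + b"edit", manifest))   # False: content changed
```

The design point is that trust attaches to the signed claim rather than to the file: a consumer does not have to judge pixels by eye, only to check whether an intact, verifiable provenance chain accompanies them.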
Another really important aspect of our work is information literacy. Our goal here is to build trust in the information ecosystem by enhancing consumers' media literacy and information consumption skills. We see this part of our work as addressing what we call the demand side of disinformation. Obviously, there's lots of work we're doing on the supply side to target the campaigns and gather intel, but on the demand side, we really need to build resilience in the population so people can take in information in a critical manner, and that's in addition to the great work that other Civil Society organizations do.
Here, we have a multilayered approach: partnering with lots of different organizations to embed information literacy campaigns and concepts into products and training around the world; utilizing our own platforms to help educate consumers on how to find and consume trusted information correctly; and sourcing, developing, and sharing best practices based on industry research, both internally across our company and with external partners across the information space.
A couple of quick examples of how we've done this: we've provided in‑client advertising space across various platforms that we have, including Microsoft Start and Outlook, to organizations that promote information literacy resources and skills. In the programme's first 12 months, we reached over 130 million Microsoft consumers with information literacy resources and skills campaigns.
And a little later in 2023, we're very excited to be launching a Minecraft Education information literacy game along with accompanying educator materials, which we know will be very popular amongst younger children, starting to give them the skills and resilience they need to be critical consumers of information. So, I will leave it there, and I'm looking forward to the discussion. Thank you.
(Applause)
>> DAISUKE FURUTA: Yeah, thanks, Maddie. As Maddie said, technical solutions are very important, because disinformation spreaders are using AI. So, we need AI and other technologies to prevent it, and the platforms are the best at these things.
Okay, next, Shin, please.
>> SHINICHI YAMAGUCHI: Thank you. Hi, everyone. I'm Shinichi Yamaguchi. Today I'd like to talk about disinformation and misinformation in our society. I'm sorry, but I'd like to speak in Japanese, so could you please use this one? Thank you.
Let me introduce myself. I work at GLOCOM, an international research institute that does social science research. I have a PhD in economics, and computational economics is my field. Using these methods, I study disinformation, misinformation, and online bullying ‑‑ empirical studies of real society.
On today's topic, I have engaged in various joint research projects with the Japanese government, and media and information literacy textbooks were also edited with the government.
What I want to talk about today is a project I did with Google Japan called Innovation Nippon, and some of the outcomes of this research. This joint project started in 2015, and in 2019 we shifted our focus to misinformation and disinformation. Every year, we survey more than 10,000 respondents to analyse people's behavior.
What I want to share today focuses on 2022 and 2023, the latest outcomes of our research. This work was about COVID‑19 vaccination and political information. We picked up six pieces of misinformation on each topic and six conspiracy theories, so in total we had 18 survey items to analyse people's behavior.
The first thing we learned concerns whether people recognize misinformation and disinformation after encountering it. For political misinformation and disinformation, only 13% of respondents understood that they were encountering incorrect information. Even for conspiracy theories and COVID vaccine misinformation, only 40% realized, after being exposed, that they were getting wrong information. Also, people in their 50s and 60s are more vulnerable to misinformation and disinformation, so this issue is not limited to young people.
After reading this misinformation and disinformation, 15% to 35% of respondents shared it. As for how they shared it, the most frequent method was direct conversation with the people around them. So, misinformation and disinformation spread beyond the Internet; this is an ecosystem problem for the entire society.
Now, we can use mathematical models to analyse how disinformation spreads. People who believe misinformation and disinformation are more likely to spread it, compared with those who recognize it as incorrect. Also, people with lower literacy are more likely to spread disinformation and misinformation.
Our information environment is thus greatly affected by people with lower literacy and by those who are deceived into believing misinformation and disinformation.
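As a rough illustration of the kind of model this points to, here is a minimal branching simulation in which the probability of sharing rises with belief and falls with literacy. All parameter values below are invented for the sketch, not estimates from the Innovation Nippon surveys.

```python
# Toy branching model of misinformation spread: each exposed person shares
# with a probability that rises with belief and falls with literacy.
# All parameter values are illustrative, not survey estimates.
import random

def share_probability(believes: bool, literacy: float) -> float:
    """Believers share more; higher literacy (0..1) dampens sharing."""
    base = 0.35 if believes else 0.10   # cf. the 15-35% sharing range above
    return base * (1.0 - 0.6 * literacy)

def simulate_spread(seeds=10, fanout=8, generations=4, p_believe=0.6):
    """Count total exposures over a few sharing generations."""
    exposed, current = seeds, seeds
    for _ in range(generations):
        sharers = sum(
            1 for _ in range(current)
            if random.random() < share_probability(
                believes=random.random() < p_believe,
                literacy=random.random(),
            )
        )
        current = sharers * fanout      # each sharer exposes `fanout` new people
        exposed += current
    return exposed

if __name__ == "__main__":
    random.seed(42)
    print("total exposures:", simulate_spread())
```

Even in this crude form, raising average literacy or lowering the share of believers visibly shrinks total exposures, which is the qualitative point the survey findings make.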
Then, what kind of impact does this misinformation and disinformation have on society? We took two particular pieces of disinformation/misinformation and compared people's perceptions before and after reading them. One was negative about conservative politicians; the other was negative about progressive politicians. Quite a few people changed their opinions after reading the disinformation: they no longer supported those politicians. Mild supporters, especially, are swayed much more by disinformation and misinformation. As you may know, mild supporters are actually the majority of voters, so they have a big impact on election results. These people are swayed by misinformation and disinformation and change their opinions easily, which means disinformation and misinformation are very influential on election outcomes.
Now, generative AI is widely used, and this is going to intensify the issue, because now everyone can manipulate public opinion using fake images and videos. One concern I have is that AI‑generated misinformation and disinformation can no longer be recognized as such by humans. We are not able to tell whether content is AI‑generated or not, so we need new technology to detect AI‑generated misinformation and disinformation.
Now I would like to talk about what our organization is doing. We conduct these verification studies together with IT companies and the Japanese government, and the outcomes are shared with many stakeholders. Beyond that, as you can see on the left, we prepared educational materials with the Japanese government to help students fight disinformation. We also collaborate with YouTube creators on the topic of mis/disinformation for educational purposes. And, as you can see on the right‑hand side, we hold events where many stakeholders can have a discussion, and the outcomes of those discussions are shared with the general public.
And there are many other initiatives in Japan. For example, multiple stakeholders are invited onto a committee to discuss mis‑ and disinformation, and there is the Japan Fact‑Check Center, of which Mr. Furuta is the Editor‑in‑Chief, which is also a member of the IFCN.
There are many things that we can do going forward. First, we need to secure transparency, not only at the global level but also at the local, Japanese level; each country also has to have transparency from the platforms. Second, media and information literacy education has to be expanded. Third, we need to come up with technology to counteract these problems.
Fourth, we need to set up mechanisms to prevent ad revenue from flowing to disinformation sites. Fifth, we need to make fact‑checking initiatives more efficient.
And sixth, we need to engage many stakeholders in collaboration. And of course, international cooperation is very important. The Ministry of Internal Affairs and Communications announced EPaD, and I think that will play a critical role as a foundation. Thank you very much.
(Applause)
>> DAISUKE FURUTA: Yeah, thank you, Shin. This data shows not only that literacy is important, but also what kind of literacy is useful and how. Intelligence‑sharing of such data is very important to make measures more effective.
Okay, now we only have seven minutes for discussion, so I have a lot of questions for each of you, but I think I can ask only one question of you all. My question is: what is needed to deepen international cooperation, not only inside your country, but also globally? Any opinions would be greatly appreciated. So, how about you, Ari? Do you want to start?
>> ARIBOWO SASMITO: Yes. In the previous session, they said it's time to start. So, I think after this good event, such an important event ‑‑ like what Maria said ‑‑ let's take some concrete steps so we can actually start handling disinformation and misinformation.
Earlier, I said that basically everyone is at the same point. Disinformation and misinformation, no matter the country, are basically the same; it's just the local context that differs. So, I think this is a good start, a good place to begin any type of collaboration on handling misinformation by the members of the ecosystem that I presented earlier. Thanks.
>> DAISUKE FURUTA: Yeah, thank you. Actually, we already sometimes exchange fact‑checking knowledge among Asian organizations. So, what do you think, Chay? How can we deepen our collaboration and cooperation?
>> CHAY HOFILENA: Yeah, I think what's essential is really to strengthen journalists and newsrooms, because it's journalists who produce the information. Earlier, prebunking was mentioned. Related to prebunking is really making sure that journalists and newsrooms, not just in specific countries but in the region and even worldwide, have the tools and the resources to do the job they need to do, and that includes investigations and being able to track the players and actors who are part of the disinformation network.
Doing this is very expensive; not all journalists have the skills, and not all newsrooms have the resources, to do these types of online investigations. So, if there are private groups, companies, or even IT companies who have the resources and can share them with journalists, that would go a very, very long way. We have to be able to do our jobs well; and if we're able to do our jobs well, then we can proactively prevent the spread of disinformation.
>> DAISUKE FURUTA: Thank you. I have an additional question. Independence is really important for journalists and news organizations. Do you have any advice on how to deepen cooperation with other organizations while maintaining independence?
>> CHAY HOFILENA: Well, at least for southeast Asian newsrooms, and even across Asia, Rappler has offered fellowships, for example, with the help of grants and funding ‑‑ upskilling of reporters and journalists. We've found that, not just in the region but even within the Philippines, the skills of journalists are very, very uneven. We should work together and share what we know best, especially with the advent of AI; AI is going to be a very, very serious threat to newsrooms, and we have to be prepared to deal with it. So, collaboration and sharing ‑‑ I don't know if this can be done through training or even exchanges. Maybe reporters can work in one newsroom for a specific period of time, just to be immersed and to learn what newsrooms are capable of doing, and then the skills can be shared with other colleagues. That could probably help.
>> DAISUKE FURUTA: Thank you. Maddie, as Chay said, AI technology is essential, but many newsrooms are reluctant to hire their own engineers; they are not good at using technology. So, how can Microsoft and the platforms support newsrooms or fact‑checkers with technology?
>> MADELINE SHEPHERD: Yeah, it's a great question. And I think it really does emphasize the importance of these kinds of conversations, where we have the opportunity to connect industry with civil society organizations that have excellent ideas in this space but perhaps lack the resources to make them scalable and take them to all corners of the world and to all sorts of different newsrooms. So, these sorts of initiatives are a really important starting point to bring us all together. But then, absolutely, technology has disrupted the way that people get their news, so there is an obligation on technology companies to do some work to support newsrooms as they advance into the next chapter. AI will be beneficial to many of them, but it will also present lots of new challenges. So, those partnerships between technology companies and journalists are very important. Microsoft has lots of them, and we're always interested in identifying new journalist partners in other countries, so I welcome anyone to reach out to me after this panel today.
But I guess the other point is that that's all looking at the supply side of the information ‑‑ where the information is coming from. Another really important aspect for us is trying to tackle the demand side: particularly with young children and generations growing up with AI, making sure that they develop the digital resilience and information literacy skills they will need to use this technology responsibly as they move forward.
>> DAISUKE FURUTA: Yeah, thank you. I think there are many journalists and newsrooms really interested in working with platforms on using AI and other technologies. Thank you.
So, the question for you, Shin, is: from a researcher's point of view, what collaboration and measures are needed for our information ecosystem?
>> SHINICHI YAMAGUCHI: Right. The problem of misinformation and disinformation certainly does not stay within one country. A problem overseas can be imported, and vice versa; there is a lot of impact spreading across borders.
Now, AI‑generated content is very quickly going to exceed human‑generated information in volume, so the volume of misinformation and disinformation can go up exponentially. With AI as a keyword: how do we leverage AI? How do we bring safety and security to AI? What development processes and standards do we want for AI? We need an international standard for that. So, sometimes we may have to work together to put a set of rules together for the development of AI ‑‑ and for fact‑checking, too, right? If international collaboration among fact‑checking organizations is possible, I think the fact‑checking process can be much more efficient.
And as somebody mentioned previously, sharing knowledge across borders about detection processes, approaches to identification, and the fact‑checking process would be very helpful. Thank you.
>> DAISUKE FURUTA: Thank you all. We are running out of time, so let me wrap up the session. Information transcends borders, so measures should also cross borders. What came up in today's session was multistakeholder collaboration. So, we would be happy to have exchanges with the audience after the session. Thank you for your participation. Please give a round of applause once again to all of the speakers. Thank you.
(Applause)