The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> CHANTAL JORIS: Good afternoon, everyone, all the participants in the room, also good morning, afternoon, or evening for everyone joining online. My name is Chantal Joris with freedom of expression ARTICLE 19 and I will be moderating this session today. In today's session, we want to explore some of the current challenges posed to the free flow of information, specifically during armed conflicts. And I want to start with making a couple of opening remarks as to where we are at.
We do know that conflict parties have always been very keen to control and shape the narrative during conflicts, perhaps to garner domestic and international support, to portray in a favorable light how the conflict is going for them, and of course, also often to cover up human rights violations and violations of international humanitarian law. So, this is nothing new. Yet, what has changed, of course, is what armed conflicts look like in the Internet age. We see an increased use of digital threats against journalists and human rights defenders, mass surveillance, content blocking, Internet shutdowns, and even the way that information is manipulated has become much more sophisticated with the tools that parties have available today.
And of course, at the same time, civilians rely at an unprecedented level on information communication technologies to keep themselves safe, to know what's going on during the conflict, where fighting takes place, and also to communicate with their loved ones and see that they are okay.
And also, I want to emphasize a little bit that these issues are not necessarily limited to just the top five to ten conflicts that tend to make the headlines; there are currently about 110 active armed conflicts in all regions of the world. And beyond conflict parties, even states that are not party to a conflict have to grapple with questions. For example, we've seen recently: should they sanction propaganda, or impose bans on media outlets? So, this is an issue that concerns all states and the whole world.
And also, what we have seen is that digital companies have become increasingly important actors as well in conflicts, and they do need to find strategies to avoid becoming complicit in human rights violations and violations of international humanitarian law.
So, to discuss some of these challenges, I'm very happy to introduce the panelists of today. Also, I do want to make a quick remark in this context that we notice that many of our partners from conflict regions have not been able to come to IGF in person and have these discussions in person, although we talk a lot about the need for an open and secure Internet, including, of course, during conflicts, and they are often the stakeholders that are most affected, and they are not really able to join these discussions, except online.
Similarly, some of our speakers ‑‑ most of our speakers on this topic that we really wanted to have at the table are also joining us online today.
The first speaker joining us online is Tetiana Avdieieva. She's Legal Counsel at the Digital Security Lab Ukraine, an organization that has been established to address digital security concerns of human rights defenders and organizations in Ukraine.
We also have Khattab Hamad, a researcher focusing on Internet rights and Internet governance, who is working with the Open Observatory of Network Interference and Code for Africa.
We have Joelle Rizk joining us, from the Protection Department of the International Committee of the Red Cross. And next to me in person is Elonnai Hickok, Managing Director of the Global Network Initiative, of which ARTICLE 19 is also a member. I will let her also introduce what this multistakeholder initiative is all about.
Also, we were supposed to have Irene Khan, the Special Rapporteur on Freedom of Expression, here with us. Unfortunately, she had to be in New York in person at the same time, and we were struggling to remove her from the programme, so apologies for that. But she has been focusing on these questions as well, and I encourage you to read her report from last year on disinformation in armed conflicts. And she continues to engage in this discussion as well.
So, a quick breakdown of the format of the session. We have about 75 minutes to discuss these challenges. I will address a couple of questions to the speakers, but it is really meant as an interactive discussion. It is meant to be a roundtable. So, I will also be asking some of the questions to you as well after the speakers have been able to express themselves on the issues, so throughout the discussion, and then at the end there will be a chance, obviously, to give input on what we might have missed and what other questions there are for the speakers.
So, perhaps, let's start with discussing sort of the main digital risks that we see and also the risks to the free flow of information during conflicts. And I will first have Tetiana from Ukraine and Khattab from Sudan talk about this, but also, again, I will be very keen to hear from you what in your areas of work or from the regions you're from, what you have been observing as sort of the key challenges in this respect.
So, Tetiana, if I can start with you.
>> TETIANA AVDIEIEVA: Yeah, hi, everyone.
>> CHANTAL JORIS: Hi.
>> TETIANA AVDIEIEVA: It's my great pleasure to be here today and to talk about such an important topic. So, first of all, I wanted to share, like, a brief overview of what is going on in Ukraine currently regarding the restrictions on free speech, free flow of information and ideas, which were introduced long before the full‑scale invasion, since the war in Ukraine started in 2014 with the occupation of Crimea and after the full‑scale invasion as a rapid response to the change in circumstances.
So, basically, restrictions in the Ukrainian context can be divided into two parts. The first part concerns the restrictions which are related to the regime of martial law and derogations from international obligations. And the second part relates to so‑called permanent restrictions. For example, there is a line of restrictions based on origin, particularly concerning Russian films, Russian music, and other related issues.
Also, there are restrictions serving as a kind of follow‑up to Article 20, for example, the prohibition of propaganda for war, the prohibition of incitement to aggression, et cetera. The problem, especially with the restrictions which were introduced after the full‑scale invasion, is that restrictions drafted in a rush are often poorly formulated, and therefore, there are lots of problems with their practical application.
However, what concerns me the most in this discussion is the perception of restrictions of this kind by the international community. The problem often is that people don't take into account the context of the restrictions. And when I'm speaking of the context, it is not only and purely about missiles flying above someone's head; it is about the motives which drive people to be involved in armed conflicts, and that is a very important reservation to be made at the very beginning of this discussion, because we have to speak about the root causes.
And I often make this comparison. For me, armed conflicts can be compared to the law of conservation of energy: armed conflicts do not appear from nowhere, and they do not disappear into nowhere. So, when, for example, a certain situation starts, we have to understand that there are motives behind the aggression on the side of the aggressor, and therefore, we have to work with those motives to prevent further escalation and to prevent repetition of the armed conflict, to prevent re‑escalation, basically.
In this case, assessment of the context is, unfortunately, not basic math; it is rather rocket science. In the Ukrainian context, the preparation of fertile ground for propaganda, for Russian interference, has been done in the information space for at least the last 30 years of Ukrainian independence, when, at the entire European level, it was said that Ukraine is basically not a state, that it has no right to sovereignty, that statehood was basically a gift to the Ukrainian nation; that all the representations in front of the international community from the side of the post‑Soviet countries were done by Russia, et cetera, et cetera, et cetera.
What does it mean? It means that there was a particular narrative which was developed, a narrative with which we have to work. Why is this important? Because, usually, restrictions are treated, I would say, rather in a vacuum. So, we are trying to apply the ordinary human rights standards to the speech which is shared, to the narrative which is developed in the context of the armed conflict. And it is very important, because at the very end of the day, what any country which is in a state of war faces is the statement that, as soon as the armed conflict is over, all the restrictions have to be lifted. And here we miss a very important point, the point about the transition period, the so‑called exit strategy, which is very frequently substituted by automatic cancellation of the restrictions. And that actually is part of the discussion on the rebuilding of Ukraine, in terms of reinforcing the democratic values, re‑establishing human rights which were restricted, et cetera.
So, at this particular point, it is very important to mention that we have to think about the transition period of lifting the restrictions from the very beginning of the armed conflict, because when the restrictions are introduced, we have to understand that they cannot end purely when there is a peace agreement. Otherwise, it won't make any sense from the practical standpoint, because the narratives will still be there in the air. Therefore, we have to develop this exit strategy and understand that post‑war societies are very vulnerable to any kind of malicious narrative, and they cannot be left without protection, even after the end of the war.
And finally, a brief overview of the digital security concerns. I will try to summarize it in one minute, not to steal a lot of time. Currently, there are lots of problems on the digital security side. For example, there are attacks on databases, attacks on media, which not only target the media's websites for sharing information, but also target the journalists, which is more important, because people experience a chilling effect, and they are super afraid of sharing any kind of idea, because they potentially might be targeted.
Indeed, I mean, from the side of the aggressor state, because currently, in Ukraine, at least ‑‑ in the Ukrainian context ‑‑ the biggest threat is stemming from Russia, especially for those journalists who are working on the front line and who can be captured, who can be tortured, who can be killed, and there are lots of examples of such things happening.
Also, there is a problem of digital attacks on websites, which actually interrupt the work of the websites and disable a sustainable connection. There were attempts to share malware, again, in order to track individuals, in order to check what topics they're working on, and in order to prevent, basically, the truth from being distributed to the general public.
And finally, there are coordinated disinformation campaigns on social media, on platforms and messaging services, including Telegram, which is another important topic, and probably a topic for a separate discussion, so I won't dwell on that for my entire speech, but just mention it for you to understand that this discourse is very extensive and there are lots of things to talk about.
I will stop here. I will give the floor back to Chantal. Thank you very much for listening, and I'll be happy to share further ideas in the course of the discussion.
>> CHANTAL JORIS: Thank you very much, Tetiana. Khattab, if I can bring you in and have you share your observations about the situation in Sudan as well, following the recent outbreak of hostilities a couple of months ago.
>> KHATTAB HAMAD: Thank you, Chantal. Hi, everyone. So, I want to welcome you and the other participants, and it's really an honour for me to speak at the IGF. So, to keep everyone updated: Sudan is going through a war between two forces that had been allied since 2013, and the alliance came to an end on April 15th, due to differences over the security arrangements related to the unification of the armies in Sudan. So, this put the Sudanese people in a bad position, because the parties to the war are not following the rules of war, in addition to the war's impact on basic services, including electricity and communication. So, this contributed to widespread manipulation of the war narrative and the spread of misinformation, in addition to intense polarization.
So, to answer your question, in Sudan right now we have the targeting of telecom workers, Internet shutdowns, disinformation campaigns, and also privacy violations. And unfortunately, these practices are used by both sides of the war, not only one side, whether the RSF, the Rapid Support Forces, or the armed forces.
So, regarding Internet disruption: Internet disruption is not a new experience for the people in Sudan, who are used to Internet shutdowns during exams and civil unrest. And this time, due to the ongoing conflict, there have been numerous Internet disruptions in Khartoum, the capital of Sudan, and other cities. These events are considered an effort at information control during the war. However, some disruption cases in Khartoum are related to the security concerns of the telecom engineers and other telecom‑related workers, as they may face violence when moving around for maintenance.
So, the absence of an Internet connection opens a wide door to offline misinformation, as people cannot verify information that they get from local sources. Moreover, disinformation during the conflict also exists in cyberspace, and it has several actors, but there are two main players here: the SAF, the Sudanese Armed Forces, and the RSF. They are using proxy accounts and influencers on social media platforms to promote and propagate their narratives regarding the war.
Actually, this practice puts civilians at risk, because getting wrong information may impact their decision to move around their neighborhood or their decision about displacement. Moreover, what I observed is that disinformation is threatening the humanitarian response. So, for example, the ICRC in Sudan posted on Facebook, warning people not to follow disinformation.
So, also, during this war, several privacy violation cases have happened, such as physical phone inspection ‑‑ a lot of cases of physical phone inspection by soldiers from both sides ‑‑ and also the use of spyware. Actually, we couldn't verify the use of spyware until now, but there are claims of that. But the important thing to mention here is that the RSF imported spyware from Intellexa, an EU‑based company that is providing intelligence tools. And also, this is not the first use of spyware in Sudan, as the National Intelligence and Security Service imported the remote control system of the Italian company (?) in 2012.
So, I think that's it from my side, Chantal. Back to you.
>> CHANTAL JORIS: Thank you very much. And thank you also for this account and explaining how disinformation threats can also lead to offline violence and concrete harms to civilians.
So, same question to the people in the room. What have you seen or perceived as, in your experience, the main risks to the free flow of information, be it through surveillance, propaganda, or Internet shutdowns? What's your perspective?
>> AUDIENCE: Hi. Thank you so much for great presentations. I'm from Access Now, and we are also working on the issue of content governance in times of crisis. And we have been recently mapping a number of prevailing trends in the field that in one way or another put either freedom of expression and fundamental rights in danger, and we looked specifically at this issue from the perspective of international humanitarian law.
And so, we are witnessing several issues, especially from parties to conflicts, which are actually very much the instigators of those. One of them is, of course, the intentional spread of disinformation as part of a war tactic. We have different case scenarios that we are supporting with case studies that really happened in the field, such as, for instance, claiming or warning that an invasion will take place when, in reality, that invasion never occurred. There is a very specific example from Israel in 2021, where even international media were convinced that this invasion took place and reported on it, when it was just part of the military strategy. There are other examples from regions around the world where we see that.
Another one is, of course, using platforms for the purpose of moving parts of the population from one territory to another, which from the perspective of international humanitarian law, at least in the context of non‑international armed conflict, is not even permitted. So, we see those cases as well.
Of course, there is the entire issue of content depicting prisoners of war, which was very widely reported, and which can, again, endanger the privacy of those people's identities, and so the safety and security of the individuals depicted in the video content being shared. And there are another two or three case scenarios that we identified in the field, for which we are still gathering case studies, and this will all be summarized in our upcoming report that we are hoping to publish in the following weeks. I don't want to overcomment, so I am happy to elaborate further later and give space to others as well.
>> CHANTAL JORIS: Thank you very much for the excellent point. Anyone else? Yes, go ahead.
>> AUDIENCE: Thank you for giving me the floor, the opportunity to speak and express myself. I'm from Russia. And what I can say about Internet shutdowns and Internet restrictions in times of conflict: it's pretty obvious that any country involved in a conflict will ensure that there are some restrictions on Internet websites, media, and so on. But frankly speaking, it is not as restricted as it could seem from abroad, as you can't stop information from flowing around, through Telegram messenger, social media, and such.
And lots of Ukrainian media and Ukrainian Telegram channels are still effectively available in Russia, so I can't say it is a super restricted environment in the Russian media sphere. At the same time, as the Ukrainian speaker said, we face lots of security threats coming, obviously, from Ukraine in the same way. Like denial‑of‑service attacks, some sophisticated attacks on governmental and non‑governmental, private web services companies. Recently, hackers leaked the database of a Russian company that was a service provider for all the airline tickets and airline connections, so basically all the imaginable personal data, including names, dates, and all the flight information of Russian citizens, was published on the Internet, on Telegram, and was available to any malicious actors.
And so far, we see lots of threats and insecurity from disinformation campaigns and fakes, which are used as a weapon in the informational war happening alongside the real war between Russia and Ukraine. And it is sad that this kind of informational war, and the kinds of weapons used in it, is not described in any international law and is not even somehow imagined and prescribed. The situation is like this: there is international law for real wars and real warfare, but there are no international laws for informational warfare. And the citizens of both our countries, both Ukraine and Russia, suffer from this Internet warfare. So, the situation is that both parties use these kinds of weapons in the informational war between our countries.
For example, this year, working in a non‑profit organization which focuses on countering disinformation and fakes in Russia, we have found more than 3,000 disinformation narratives threatening the Russian Federation and Russian citizens in different ways. That is the number of narratives. But separately, we have counted each post and message in social media, and the number of messages, posts, and reposts placed in social media exceeds 10 million copies in the Russian media sphere.
>> CHANTAL JORIS: Thank you. I think there will be probably quite some disagreement in the room. And also, I will let Tetiana perhaps respond and react to some of the remarks. Certainly, there is a gap in international law as to how to deal appropriately with information manipulation, actually both in times of peace and in times of armed conflicts.
I don't know if we have any ‑‑ yes.
>> TETIANA AVDIEIEVA: Yeah, just a brief response. First of all, I found it particularly interesting how the discussion around incitement to aggression, propaganda for war, and incitement to hatred turns into a discussion around the disinformation campaigns spread inside Russia, which for me is a slight shifting of the context. Because when we are speaking of the aggression issues per se, we have to take into account the narratives which are primarily aimed at actually instigating the armed conflict, and also narratives which are spread inside Russia connected to, for example, inviting people to join the Russian armed forces, or connected to actual incitement to commit illegal activities, which predominantly are shared in Russian media, especially those that are state‑backed.
Also, as regards the digital security threats and digital security concerns, what concerns me the most is the attempt to basically substitute the actual topic of harming civilians, and the topic of trying to suppress activists, opposition, human rights defenders, and journalists, with the fact that there are restrictions which have targeted the entire community in Russia. First and foremost, because among the Russian community itself, there is extensive support for the invasion. Even the Russian independent media outlet Meduza, in its findings and research, stated that from 70% to 80% of Russian citizens actually support the invasion.
When assessing the restrictions in this context, the proportionality analysis, in my opinion, would differ a little bit compared to the situation when we are just declaring facts without providing the appropriate context for them. So, I will stop here, and I probably won't create a battle out of the discussion here, but I think it's very important to clearly define the things we are talking about and to clearly indicate in which context they're done, to whom they're attributable, what the specific consequences of the actions taken are, and what the reason behind those actions is. Thank you.
>> CHANTAL JORIS: Hello? Yes. Thank you very much. As mentioned, when we go to the factual scenarios of specific conflicts, for sure, there can be a lot of disagreements as to what specifically the issues are.
I will take one more contribution and then I will ‑‑ then let's hear from Joelle Rizk from the ICRC.
>> AUDIENCE: I'm (?) from Internews here. This may be a niche issue, but one of the frustrations we hear from our media and journalist partners particularly, though also from Civil Society, is around overenforcement by social media platforms, where legitimate news reporting or commentary on conflict is taken down, and legitimate news sources have their accounts suspended or restricted from amplifying or boosting content. Sometimes it's through automation, in cases like Palestine or Afghanistan, where you can't report on the news without mentioning dangerous organizations; we find a lot of media outlets wind up getting their pages restricted. And other times it's through mass reporting and targeting of those news sources, which results in their pages incorrectly being taken down.
Sometimes, people do actually violate the rules of the platform, too, maybe posting pictures of dead bodies and things like that, that do violate the rules, but in a conflict setting, it's often complicated. So, just in terms of the free flow of information, that's another issue.
>> CHANTAL JORIS: Thank you. Yes, absolutely. I mean, promoting a certain narrative or sharing violations for propaganda purposes, for example, is obviously something very different than reporting on them to make them publicly known. But given how often automated tools are used in content moderation, it's very difficult to make that distinction properly.
Joelle, let me turn to you, and perhaps ask you as well: hearing about the situations in Ukraine and Sudan, are those also the sorts of threats that you have perceived globally as a humanitarian organization? And what sort of specific risks has the ICRC identified in terms of how these digital threats can harm civilians?
>> JOELLE RIZK: Thank you, Chantal, and thank you also for the (audio breaking up) interesting contributions. I will maybe focus a little bit more on the harms to civilians (?) rather than on the nature of the threats. Because, of course, our concern is not only about the use of digital technology, but also the lack of access to it, especially to connectivity, particularly when people need reliable information the most to make life‑saving decisions.
We shared the information dimension of conflict ‑‑
>> CHANTAL JORIS: Joelle? I'm sorry, we have a little bit of ‑‑ you're breaking up a little bit. I don't know if there's anything ‑‑ I don't know if it's the connection or if there's anything you can do with the mic that will make it ‑‑
>> JOELLE RIZK: Let me change the mic setting. Is it better like that?
>> CHANTAL JORIS: Yes. Yes, much better.
>> JOELLE RIZK: I see you nodding. All right, great. Thank you. Sorry, it was a mic setting, I believe. So, I was saying that the information dimension of conflict has also become, in a way, part of the digital front lines, because digital platforms are used to amplify the spread of harmful information at a wider scale and speed than we've ever seen before. And that is a concern because it compromises people's safety, their rights, their ability to access those rights, and their dignity.
And the difficulty is that this happens in various ways that are very difficult to prove. Tetiana spoke about attribution a little bit. It is very difficult, indeed, not only to do that, but also to prove how harmful information is actually causing harm to civilians affected by conflict. And I'll try to speak about that a little bit.
We see that different actors, whether they are state or non‑state, are leveraging the information space to achieve information advantage, as you said earlier, Chantal, but also to shape public opinion and the dominant narrative, and to influence people's beliefs, their interests, and their behaviors, which is where, in situations of conflict, this really becomes an issue of potential risk to civilians.
The information space, in that sense, is an extension of the conflict domain, and it impacts people that are already in a vulnerable situation because they're already affected by conflict. And with the digitalization of communication systems, there is basically a convergence of the information and digital (?).
That being said, not all harmful and distorted information, whether it's misinformation, disinformation, malinformation, or hateful and offensive speech, is the result of organized information operations, right? Not all of it is state‑sponsored. The use of digital platforms really involves a mix of state and non‑state actors, and in addition to the organized spread of narratives, there is also an organic spread of harmful information.
Maybe also to add a caveat: all of this makes it very complex from a humanitarian angle, again, to identify, to detect that a narrative is harmful, but also to assess what the harm is to civilians, and then to think of an adequate response, given all of these complexities that I just mentioned.
And what we've seen in past years is how, in countries affected by armed conflict, the spread of misinformation and disinformation, and also of hateful and offensive speech, can aggravate tensions and intensify conflict dynamics, which, of course, has a very important toll on the civilian population. For example, harmful information can increase pre‑existing social tensions and grievances, and can even take advantage of pre‑existing grievances to escalate social tensions and exacerbate polarization and violence, all the way to the point of disintegration of social cohesion.
Information narratives can also encourage acts of violence against people or encourage violations of humanitarian law, and you already mentioned quite a few examples. The spread of misinformation and disinformation can increase the vulnerabilities of those affected by conflict, through the distress and the psychological weight it can cause, which is often invisible. For example, think of how harmful information may feed the anxiety, fear, and mental suffering of people who are already under significant distress.
We see that the spread of harmful information can also trigger threats and harassment, which may lead to displacement and evictions, and I think a couple of examples were already given in the room.
We also worry about stigmatization and discrimination. Think of survivors, for example, of sexual violence. Think of families that are thought of as belonging to one or another group, an ethnic group, for example, where they may be stigmatized; or of people being denied access to essential services as a result, only because they belong to a group that is the subject of an information campaign.
We also see that, with distorted information in times of emergencies, people's ability to access potentially life‑saving information is heavily compromised today. People may not be able to judge what information they can trust, at a time when they really need accurate and timely information for their safety and for their protection. For example, to understand what is happening around them, where danger and risks may be coming from, which roads are open or not safe, the locations of checkpoints, et cetera, and how and where they may find assistance, whether medical or another type of assistance, or take measures and make timely decisions to protect themselves or even to search for help.
So, the digital information space can also become a space where behaviors that are counter to international humanitarian law may occur, including ‑‑ and I will not give contextual examples ‑‑ the incitement to target civilians, to kill civilians, making threats of violence that may be considered as terrorizing the civilian population, but also information campaigns, whether they are online or offline. And I would like to underscore online and offline. These can also disrupt and undermine humanitarian operations. Khattab spoke a little bit about that, but I want to say that undermining humanitarian operations may also hinder the ability to provide humanitarian services to the people most in need of them, and of course, also compromise the safety of humanitarian aid workers.
One last point I'd make on this is that even the approaches adopted to address these phenomena ‑‑ and Chantal, you mentioned that in the beginning ‑‑ may themselves, intentionally or not, impact people's access to information. They may fuel surveillance or tracking of people, and crackdowns on freedoms, on media, on journalists, and of course also on political dissent and potentially on minorities.
As a humanitarian actor, we believe this is an issue that requires specific attention, not only because of the implications it has for people's lives, their safety, and their dignity, but also because of how complex the environment is. And from that angle, a conflict‑sensitive approach will be necessary.
We are used to discussing the impact of disinformation a lot, for example from the point of view of public health campaigns, election campaigns, freedom of speech, et cetera. But when it comes to conflict, a conflict‑sensitive approach will be necessary ‑‑ in other words, an approach that really helps us ask how best to assess the potential harm in the information dimension of conflict, and how that may impact civilians who are already affected by several other types of risks, mostly offline. And of course, to think of adequate responses that will not cause additional harm or amplify harmful information, whatever the type of that information may be. And I'm happy, of course, to talk a little bit more about that and how it connects to other risks later in the hour. Thank you.
>> CHANTAL JORIS: Thank you very much, Joelle. I do find this point very interesting. As a freedom of expression organization, we look at something like disinformation obviously through the lens of the human rights framework and the test to apply to restrict freedom of expression. But it's interesting to think about it from the perspective, again, of the potential harm: what are the adequate responses, and are they the same as the ones we would normally identify, as a freedom of expression organization, as the adequate responses to disinformation that do not have any unintended negative consequences?
With that, let me move to Elonnai. So, I know that some GNI members are obviously telecommunication and Internet service providers or also hosting platforms. So, I'm just curious to hear, like, what discussions have you had at the GNI specific to conflicts? And perhaps, can you talk a bit about what pressures have companies reported to be facing if they operate in these conflicts, from the conflict parties?
>> ELONNAI HICKOK: Yeah, sure, thanks, Chantal. And thanks for the opportunity to be on this panel. Maybe to start, just to say, GNI is a multistakeholder platform working towards responsible decision‑making in the ICT sector with respect to government mandates for access to user information and removal of content. We bring together companies, civil society, academics, and investors. All of our members commit to the GNI principles on freedom of expression and privacy, and our company members are assessed against these principles in terms of how they are implementing them in their policies, their processes, and their actions. We also do a lot of learning work and policy advocacy.
And so, as part of some of our learning work, we started a working group on the laws of armed conflict to examine responsible decision‑making during times of conflict and the challenges that many of our member companies were facing. We are also holding a learning series organized by GNI, the ICRC, and SIPRI, which is meant to be an honest conversation about the ways that ICT companies can have an impact and be impacted in the context of armed conflict. And that's really to say that I'm coming to this conversation as GNI not really being, or not necessarily being, an expert in IHL or in working in times of armed conflict, but we are trying to bring together the right experts, ask the right questions, and have the conversations that are necessary to help companies and other stakeholders navigate these really complicated situations.
So, I think to answer your question, Chantal, as we've heard from a number of our speakers today, armed conflicts are really complex, and there is a lot at stake. Technology companies may offer services that support critical functions and provide critical information for citizens, but they can also be used to directly or indirectly facilitate violence, spread false information, and potentially prolong and exacerbate conflicts, and that's just a few of the potential impacts. There are a number of different risks that companies may need to navigate during times of conflict, and they often have to take difficult decisions that require balancing a number of stakeholder interests. This includes risks to people ‑‑ individual users, journalists, vulnerable communities, societies ‑‑ as well as risks to the company, including its infrastructure, services, and equipment, but probably most importantly its personnel. Especially for telecom companies, which have offices on the ground, personnel are often at risk.
And I think companies may need to navigate a whole range of questions about whether they operate in a context and what that impact might be. I don't think there's a clear‑cut answer. On one hand, they may be providing access to critical information; they might be the more rights‑respecting alternative; but they also might be used to facilitate violence.
They have to navigate questions about how they operate and function during times of conflict, including how they respond to government demands. These can take many different forms, including requests for access to user information, giving access to networks for surveillance purposes, shutting down networks, carrying messages on networks, removing content, and more.
I think that we've seen that these demands may be informal. The legal basis for a demand may be unclear. The duration of the required measure may not be specified; for example, it might not be clear when a network shutdown should end. The scope of the demand may be extremely broad.
And I think something that was said by another speaker that's important is that these demands can come from both sides of a conflict, not just one government. And so, as companies manage risks to people and to the company, their ability to respond to government mandates in ways that might be available to them during times of peace can be really limited. For example, during a time of peace, you could say a company should request clarity on the legality of the request and communicate with the government to determine the exact requirements. They should respond in a way that is minimal ‑‑ refuse to comply, partially comply, or challenge requests through the ordinary channels ‑‑ disclose information about receiving the request to the public, or notify the user, and maintain a grievance mechanism for when the privacy and freedom of expression of users is impacted by complying with the request.
But I think in times of conflict, as they face these different risks that they have to manage, it can be really difficult for them to undertake these measures.
And I think just from discussions that we've heard, things that are useful include companies having risk management frameworks in place, clear escalation channels, clear thresholds to understand what triggers different actions, working with other actors to understand the legality of requests, working with other companies to coordinate actions in a specific context, and importantly, engaging with experts, including to understand the implications of different decisions and ensuring formal and constant review of decisions on how to improve their actions going forward.
And I think another challenge that we've heard in our discussions is that it can also be challenging to understand when to pull back or to de‑escalate different measures that are in place, because it's not always clear when a conflict ends.
>> CHANTAL JORIS: Thank you very much. And I do also really support, in these contexts, the necessity of a multistakeholder approach, because, say, the ICRC may not classically be an expert in content moderation ‑‑ or maybe not yet; maybe that's still to come ‑‑ and ISPs are not necessarily experts in conflict settings. Both of them may not understand typical threats around disinformation. So I do think it's extremely important that different actors work together.
Let me go back to Tetiana, maybe focus this sort of second half of the discussion a bit more on trying to identify gaps where we need more clarity and also have Tetiana and Khattab speak to the role of ICT companies specifically in the context of their conflicts. Tetiana, over to you.
>> TETIANA AVDIEIEVA: Yeah. Thank you very much. And I particularly liked how the discussion is currently going. What I wanted to briefly follow up on, and maybe start the discussion around how ICT companies, how platforms generally, have to respond, is that we have to make a clear distinction between the organic spread of harmful information and the spread of actually illegal content. And probably this line has to be specifically identified for the context of armed conflict, where the effect of organic harmful information is amplified by the very context in which it is put.
As regards ICT platforms, in Ukraine there is no actual mechanism to engage with the platforms at the state level, in the sense that we do not have jurisdiction over most tech giants, and that creates the biggest problem, because there is no opportunity to communicate with the platforms otherwise, except through voluntary cooperation from their side. That is probably the biggest challenge we ‑‑ we as the international community ‑‑ have to resolve. Because usually, states which face armed conflicts or civil unrest ‑‑ and we can expand this context even to other emergency situations ‑‑ do not have the legal mechanisms to communicate with the platforms, and that is the primary stage for the discussion. We have to understand, first, whether companies have to respond to governmental requests, and to the requests of which governments ‑‑ especially when there is suspicion, or when we actually know, that the government in question is an authoritarian one. When the government, and the state generally, has a very high index of human rights breaches, should companies be involved in discussions with such a government, with such a state, at all? So, that is the primary point we probably have to think about.
The second thing is to what extent IHL and IHRL have to collaborate when we are speaking about the activities of ICT companies. For example ‑‑ and I can share the link in the chat ‑‑ our organization, Digital Security Lab Ukraine, has done extensive research on disinformation and propaganda for war under international humanitarian law, international criminal law, and international human rights law. There is a big discourse about what the definitions are, which legal regime is applicable, and how states and the international community generally have to react when this kind of speech is delivered. With companies, it is even more difficult, just because ‑‑ and I can absolutely understand why it happens ‑‑ they are rather waiting for international organizations, for example UNESCO, the OECD, the Council of Europe, to say whether there is genocide, whether the threshold is reached or not. And that is actually a big plus for multistakeholder collaboration, because there are certain actors which are empowered, which are put in place, to call particular legal phenomena by their proper names.
We have to understand that ‑‑ I mean, I wish I could say that there is incitement to genocide in what Russia does in Ukraine, but unfortunately, domestic NGOs probably won't be the most reliable and trustworthy source in this case. So, that's the point at which international organizations have to step in ‑‑ both Intergovernmental Organizations and international NGOs who can elaborate on those issues ‑‑ and that might be a potential solution for how ICT companies might deal with prohibited types of content and prohibited kinds of behavior, which is usually called coordinated inauthentic behavior online. So, most probably, they need assistance at the more global level, as well as assistance at the local level, in order to better understand the context.
For example, when we are speaking about slur words, it is most probably more reasonable to resort to the assistance of local coordinators.
And finally, there is the issue of enforcement. And here, my main point in any discussion is that, unfortunately, we are usually trying to blame and shame companies which are already good‑faith ones. For example, we are constantly pushing Meta to do even more and more and more. And it is nice that Meta is open to discussion. But on the other hand, we have companies such as Telegram and TikTok, which are more or less reluctant to cooperate, or, in the case of Telegram, are actually closed to cooperation with either governments or civil society. And we also have to solve this issue in particular, because there is a big problem of people migrating from the safe spaces, which are moderated but have certain gaps in moderation, to the spaces which are absolutely unmoderated, just because people feel overcensored in the moderated spaces, and this overcensorship is often caused by our blaming and shaming strategy. The very same approach has actually been seen when Meta was blamed for its increased moderation efforts in Ukraine. I mean, it is good that ICT companies are finally starting to do something, and our main task is not to blame and shame them for not doing the same in other regions, but rather to encourage them to apply the very same approach in all other emergency situations, to develop crisis protocols, to initiate, basically, discussions about IHL and IHRL perspectives, to say publicly what kind of problems they face, and probably to launch public calls for cooperation, to which local NGOs can apply and through which local NGOs can themselves engage with content moderation teams, policy teams, and oversight teams, in case the ICT company has any.
So, that's probably my main point to all the actors involved: when we see a good behavioral pattern on the part of an ICT company, we have to encourage them to expand this good behavioral pattern to other contexts, rather than shame them for acting this way only in one situation.
>> CHANTAL JORIS: Thank you very much. And I do echo the calls on companies to take all situations of conflict equally seriously and not focus more on the ones that tend to make headlines or where there are bigger geopolitical pressures behind them.
So, also over to Khattab. Then I have two last questions for Elonnai and Joelle. If you can keep your interventions relatively short so we have a couple of minutes also for questions from the audience, that would be appreciated. Khattab, over to you.
>> KHATTAB HAMAD: Thank you, Chantal. And thank you, Tetiana, for the great intervention. So, I will start with the challenges that ICT companies face during the conflict in Sudan, to be specific. The major challenge that ICT companies are facing in Sudan during the war is electricity, to be honest. Before the war, the National Grid was providing only 40% of citizens with power. And after the war started, there was clearly a huge shortage in power supply, and this impacted network stability ‑‑ by network, I mean the telecom network, not the power network ‑‑ and data centre availability, which affected the (?) service in Sudan and other basic government services.
However, the ICT companies adapted to the power shortage by equipping base stations and data centres with uninterruptible power supplies, known as UPS, and power generators. But due to the circumstances of the war, as I mentioned earlier, the companies could not deliver fuel to the power generators because of security concerns for the workers. This led a company like MTN Sudan, an ISP in Sudan, to announce that they had a service failure due to the impossibility of delivering fuel for power. And now I will transition to the role of social media platforms in the ongoing conflict.
So, social media platforms actually played a major role in ousting the National Congress Party of Sudan, which had been ruling Sudan for 30 years, and they also assisted us in our pro‑democracy movement. However, these platforms are now the main tools of opinion manipulation during the ongoing conflict, as both conflict parties are using them to promote their narrative of the war.
However, what is new here is that there is a foreign actor playing a major role in Sudan's cyberspace, which is Meta. Meta took down the official and other related accounts of the Rapid Support Forces, and they justified that by saying that RSF is considered a dangerous organization, according to Meta's website. And yes, I confirm that RSF is a dangerous organization, and we know its human rights record and how bad it is, but this step from Meta indirectly contributed to the efforts of SAF to control information and the narrative of the war, as now there is only one side of the information: you can get information from SAF, while RSF is suppressed.
My concern is that, yes, both sides are bad, but we should be making a free information environment where people can get the information they want and filter it by themselves, not taking decisions that indirectly contribute to prolonging the war and to the process of polarization. So, taking a decision without considering the local context is a big mistake.
I also have another concern. RSF itself was part of SAF, as SAF founded RSF in 2013. So it makes sense that both are dangerous organizations. How can you take down one organization and leave the other?
Also, the decision impacted the free flow of information. For example, fact‑checkers cannot find information to verify claims, as there is only one side of the information, and it also has a security impact on the people on the ground.
So, there are some gaps that I want to raise, and that I think should be filled. In this era, the right to access information is tied to cyberspace, so the frontliners of access to information are the telecom workers ‑‑ the telecom engineers and other telecom‑related workers ‑‑ because they are the people who provide and operate the infrastructure which allows us to access information. Those workers should be given special protection under international law, like lawyers, journalists, and human rights defenders.
Moreover, in Sudan, we need more and more training for our people because, unfortunately, we don't have enough human resources to grow our Internet governance, and the knowledge is limited to specific people. And unfortunately, these people are using their knowledge to restrict the free flow of information and freedom of expression.
We also have to amend our laws, like the right to access act, the Cyber Crimes Law, and the Law of National Security, which have been abused by the same people who have this knowledge. So, I think that's it from my side. Back to you, Chantal. Thank you.
>> CHANTAL JORIS: Thank you very much. I think, yeah, it's interesting. We've heard now twice of these complications around ICT companies potentially being de facto asked to choose sides between the parties to a conflict, as Elonnai also mentioned earlier. And also, I think it's a very interesting point about the key importance of the staff who are in charge of keeping these ICT systems going, and perhaps them needing even specific protections to be able to do that.
Elonnai, the GNI does refer to the UN Guiding Principles on Business and Human Rights, which are key also to the GNI Principles as to how companies should respect human rights. But they only make very brief reference to humanitarian law. So, an open question: do you feel there is a sense from companies that they need more guidance as to what it means for them to respect humanitarian law, in addition to human rights?
>> ELONNAI HICKOK: I mean, yes, I think that is very central to a number of conversations that happen at GNI. I would say that many technology companies approach risk identification and mitigation through the lens of business and human rights, and this includes relying on frameworks such as the OECD Guidelines for Multinational Enterprises and the UN Guiding Principles, like you just mentioned. And I wanted to highlight that there are a couple of relevant principles, and parts of the commentary, of the UNGPs for companies and states with respect to operating in conflict‑affected areas.
Importantly, according to the UNGPs, a core principle of the corporate responsibility to respect human rights is that in situations of armed conflict, companies should respect the standards of international humanitarian law. The UNGPs also state that when operating in areas of armed conflict, businesses should conduct enhanced due diligence, given the potentially heightened risk of negative human rights impacts. And there is emerging guidance from civil society organizations on how companies can undertake this enhanced human rights due diligence (EHRDD) through a conflict lens.
I think IHL can inform tech companies in situations of armed conflict about the risks to which they might expose themselves, their personnel, and other people. But like you mentioned, more guidance is needed as to how due diligence processes can incorporate IHL, and more work can be done on articulating what IHL means for ICT companies.
>> CHANTAL JORIS: Thank you very much. Joelle, as the main guardian of IHL, I know the ICRC is also looking into some of the legal and policy challenges that have arisen through these cyber threats. Can you talk a bit about the Global Advisory Board which has supported the ICRC in addressing some of those? Can you perhaps share some of the initial findings?
>> JOELLE RIZK: Of course. Would you like me to focus more on ICT companies, since that's where the discussion went?
>> CHANTAL JORIS: Yes, yeah, sure.
>> JOELLE RIZK: Okay. So, thanks, it's a good question, Chantal. The ICRC set up a sort of Global Advisory Board about two and a half years ago, so between 2021 and 2023. We brought together, at a really senior level, experts from the legal, military, policy, tech company, and security fields, advising the president and the leadership of the ICRC on emerging and new digital threats and helping us improve our preparedness to engage on these issues, not only with parties to armed conflict but also with new actors that we see playing a very important role in conflict situations, including, of course, civil society, but also tech companies.
Throughout these two years, we've hosted about four different consultations with the Advisory Board, and hopefully, next week, on October 19th, we will publish the discussions and recommendations. They're not ICRC recommendations ‑‑ they won't be ICRC recommendations ‑‑ but they will be the Advisory Board's recommendations on digital threats and their effects in armed conflict.
I will broadly mention the four different trends discussed in these consultations between the Global Advisory Board and the ICRC, and then I will focus a little bit on the recommendations linked to the information space and then to ICT companies. And I will try to be quick because I'm aware of time.
So, the first trend that was discussed between the ICRC and the Global Advisory Board is the harm that cyber operations cause to civilians during armed conflict ‑‑ focusing, again, on the emerging behavior of parties to armed conflict in cyberspace, but also of other actors in that space, in disrupting infrastructure, services, and data that may be essential to the functioning of society and to human safety. And there we consider that there is a real risk that cyber operations will indiscriminately affect widely used computer systems that connect civilians and civilian infrastructure, in a way that goes beyond the conflict. As a result, they may interrupt access to essential services, but also hinder the delivery of humanitarian aid and cause offline harm, injury, and even death to civilians.
The other trend that was discussed is the question we are discussing today: connectivity, the digitalization of communication systems, and the spread of harmful information. Similar to what we already discussed at length in this session, it recognises that information operations have always been part and parcel of conflict, but that the digitalization of communication systems and platforms is amplifying the scale and speed of the spread of harmful information, leading to the distortion of facts, influencing people's beliefs and behaviors, raising tensions, and all of what we have already discussed ‑‑ but really stressing that the consequences of this are online as well as offline.
The third issue discussed ‑‑ and this is really an issue that we hold very close to heart at the ICRC ‑‑ is the blurring of lines between what is civilian and what is military in the digital dimensions of conflict, and seeing that civilians and civilian infrastructure are increasingly becoming targets of attacks in the digital dimension of conflict.
And of course, this is an issue of growing concern as digital front lines are really expanding, and they're also expanding, let's say, conflict domains. The closer digital technologies move civilians to hostilities, the greater the risk of harm to them; and the more digital infrastructure or services are shared between civilians and the military, the greater the risk of civilian infrastructure being attacked ‑‑ and, as a consequence, harm to civilians, but also the undermining of the very premise of the principle of distinction between civilians and military objectives.
And finally ‑‑ not by any means the least important ‑‑ the fourth issue, very important to us as a humanitarian actor and to all humanitarian organizations, is the way in which, in the cyber domain, cyber operations, data breaches, and information campaigns are undermining the very trust that people and societies put in humanitarian organizations, and as a result, the ability to provide life‑saving services to people.
So, some of the recommendations ‑‑ the board had 25 recommendations, of course. I will not go through them now, but I would invite you to have a look and read the report that will be launched on October 19th. I think it's really the beginning of an important conversation between multiple stakeholders in that field.
I will maybe speak a little bit on the recommendations in relation to the spread of harmful information, and, after listening to you now, I will also add a few recommendations specific to ICT companies.
So, of course, there are recommendations to parties to respect their international legal obligations, but also to assess the potential harm that their actions and policies cause to civilians and to take measures to mitigate or prevent it. This is, of course, a broad recommendation.
But more specifically, there is a recommendation to states and societies to build resilience against harmful information in ways that uphold the right to freedom of expression and protect journalists. And by resilience, of course, we understand a multistakeholder approach that also involves civil society and companies alike ‑‑ thinking about it as a 360‑degree approach to addressing the information disorder.
Another recommendation, to the platforms, recognises the fact that a lot of this misinformation and disinformation is spreading through social media and digital platforms, and calls on them to take additional measures to detect signals, analyse the sources and methods of distribution of different types of harmful information, take contextual approaches to managing it, and analyse what may exist on their own platforms, particularly in relation to situations of armed conflict. And I think Khattab's example is a classic example of the importance of contextualizing these policies. These policies and procedures, including when it comes to content moderation, as Khattab mentioned, should align with humanitarian law and human rights standards, as you also mentioned, Chantal.
And lastly on that, there is a recommendation to us and to humanitarian organizations at large to strive to detect signals of the spread of harmful information, and also to assess its impact on people, keeping in mind that any response to harmful information must not amplify harmful information itself or cause additional or other unintended harm. And of course, there is a call to contribute to the resilience‑building of affected people in conflict settings.
If I still have a couple of minutes, I'll maybe just mention some of the recommendations to ICT companies at large, more linked to the cyber domain and not necessarily to information operations or harmful information. Some of these recommendations include the segmentation, where possible, of data and communication infrastructure between what serves military purposes and what is used by civilians; awareness among companies of the risks and legal consequences around their role, their actions, and the support they may provide to military operations and private clients, and of the consequences that their involvement and the use of their products and services in situations of conflict may have; and ensuring that restrictive measures that may be taken in situations of conflict ‑‑ sanctions or self‑limitations ‑‑ do not impede the functioning and maintenance of medical services and humanitarian activities, and of course, the flow of essential services to the civilian population.
I'll stop here. Thank you, Chantal, for giving me the opportunity to elaborate on that.
>> CHANTAL JORIS: Thank you very much. I know we're basically out of time, but I do want to, before we get kicked out, see if anyone has something they would like to add, something that you think has been missing from the discussions and should be taken into account by the people working on this or questions, of course, also to the speakers, if they can stick around for five more minutes.
>> AUDIENCE: Yeah, thank you. My name is Julia. I work for German Development Cooperation. And I would have one question. Yesterday morning, Maria Ressa said we need more upstream solutions for the disinformation topic. And we have heard a lot now about downstream solutions: content management, taking down certain profiles, et cetera. So, my question would be, what are your views on questions of the design of platforms? How do we talk about redesigning algorithms, business models, et cetera, and what are your perspectives on this aspect? Thank you.
>> ELONNAI HICKOK: I would just say I think it's really important that companies start to build in the capacity to apply a conflict lens to the development of their products. And I know that the ICRC, for example, is working with companies to build out this capacity. So, I think we have to consider both upstream and downstream solutions.
>> CHANTAL JORIS: Khattab, Joelle, Tetiana, do you want to comment on this question quickly?
>> Joelle RIZK: I will just say very briefly, it is in line with a 360‑degree approach, of course ‑‑ one that recognizes, in the upstream thinking, that the very business model in a way shapes how these policies can be enforced. So, from that angle, I would tend to agree. But realistically, I think this would be a very challenging discussion that also requires expertise that may not be in the hands of those currently conducting that feedback loop with the tech companies.
>> CHANTAL JORIS: Thank you very much. I would perhaps see if there's any other quick questions in the room? Yes, go ahead.
>> AUDIENCE: Hi. I'll be super quick. Lindsey Andersen from BSR. We help companies implement the UNGPs and conduct due diligence. I just want to flag a resource that might be useful on this topic. About a year ago, we published a toolkit for companies on how to conduct enhanced human rights due diligence in conflict settings, which we developed alongside Just Peace Labs, another organization. It's very detailed and obviously targeted at companies, but it might be useful for those who are advocating with companies and want to understand, under the UNGPs specifically, what they should be doing and what enhanced human rights due diligence looks like in practice. If you Google BSR conflict‑sensitive due diligence, you'll find that resource.
>> AUDIENCE: Hi, I'm Farsad (?). I'm working on a project related to USAID, which is looking at human‑centered approaches to digital transformation. They want to understand what that can look like and how they can actually engage with local communities when doing this digital transformation work. One part of that is dealing with crisis. But the challenge we see with human‑centered approaches and human rights analysis is that, especially in countries that are war zones, getting in touch with communities, receiving their feedback, and having that kind of stakeholder consultation is extremely difficult. I want to know if there are actual recommendations out there. And also, how can we use these human rights mechanisms and human‑centered approaches so as not to leave anyone behind? Because we are not talking about Afghanistan anymore. And this is maybe ‑‑ so, thank you so much for this session, because I've been thinking about Sudan and I've been thinking about Afghanistan and how sanctions affect them during crisis. In this meeting, we need to talk more and more about them so that they won't be forgotten. So, thank you for this session, but also, it would be great to have recommendations on how to get in touch with the community and address their needs, both when we are doing digital development and afterwards, during a crisis. Thank you.
>> CHANTAL JORIS: Thank you very much. I know a lot of material has been mentioned that will come out, and some of it, I think, also focuses on stakeholder engagement. But I think you're absolutely right, there is still a lot more to be learned and improved. So, does anyone have anything to offer in this sense? Yes.
>> AUDIENCE: Yeah, thank you for giving me the floor. I want to support Tetiana's words, and I think the information society should put more pressure on global media platforms, because they basically control what people think with their recommendation algorithms. Facebook could effectively start a revolution in a click by altering the news feeds of certain accounts in a given country. We analyse this, and we see that global media platforms are extremely resistant to publishing their recommendation algorithms. And it was mentioned before that some global media platforms take sides in the informational wars happening all across the globe. There is a baseline expectation that their intent is to be neutral, because there is no bad and good side; there is side A and side B in every conflict. Yet we see that global media platforms tend to take sides, to alter recommendation algorithms to the benefit of one of the warring sides, but they are not doing it publicly ‑‑ they try to obscure it, pretending to be unbiased and neutral when they are not. So, I think that global society ‑‑ and here I support Tetiana 100% ‑‑ should put more pressure on global media platforms. Thank you so much.
>> CHANTAL JORIS: Yes. Thank you very much. And I do think there have been longstanding calls for more transparency when it comes to recommender systems. We've had the Digital Services Act just adopted in the EU; let's see if it will bring improvement. And I know that you have strong views on this as well.
>> AUDIENCE: Since a couple of us mentioned several resources, I mainly wanted to flag one. Together with Article 19, we co‑drafted the joint declaration of principles on content governance and accountability in times of crisis. We did not manage to come up with a shorter title. The document is available on our website. It's a joint effort of a number of civil society organizations that either have firsthand experience with crisis or, similarly to Access Now and Article 19, have global expertise in this area. And even though it's a declaration, we managed to put together ten pages of relatively detailed, in some instances, rules for platform accountability.
The declaration ‑‑ and this is why I'm mentioning it ‑‑ is specifically addressed to digital platforms that find themselves operating in situations of crisis. It has different recommendations for what should be done prior to escalation, during the escalation, and post crisis, emphasizing, as the speaker from GNI correctly mentioned, that there is no clear start or end point of any crisis. So, there are a couple of detailed rules. It's already one year old, but I think some important principles and rules can be found in there that can serve at least as a guiding light. Thank you.
>> CHANTAL JORIS: Thank you so much. I've been told to close. Perhaps also to say that Article 19 is working on two reports, one specifically on propaganda for war and how it should be interpreted under the ICCPR, and the other trying to identify and address some of the gaps that exist when it comes to the digital space and armed conflict. So, as you can tell, a lot more material is coming out, and still it is not enough ‑‑ it is just the start of a process. So, thank you to our excellent speakers, Joelle, Tetiana, Elonnai, Khattab, it was a pleasure to have you. And thank you to everyone in the room and online who participated. We'll be speaking about this topic for years to come, for sure.