The following are the outputs of the real-time captioning taken during the Eleventh Annual Meeting of the Internet Governance Forum (IGF) in Jalisco, Mexico, from 5 to 9 December 2016. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
Internet Governance Forum 2016.
Enabling Inclusive and Sustainable Growth.
Jalisco, Mexico.
6 December 2016.
Workshop 160.
Social media and youth radicalization in the digital age.
12:00.
>> Hi, everybody. Good afternoon. We are about to start the session. For the people in the second row, please stand up; we have this microphone for the questions. And the people at the table, please press the button so you can talk. Please wait for the microphone, because we are streaming online, and for quality reasons we need you to speak into the microphone. Yes? So wait for the microphone, and then speak, please.
That's all from me. We will start in two minutes. Thank you very much.
>> Hello, and a very good morning to all of you. Thank you for packing the house like this. We feel important, because all of you are here.
But today we are going to deal with an extremely important topic. Let me contextualize it briefly, as the moderator, because I don't want to cut into the speaking time of our illustrious speakers.
Social media, radicalization and the youth is a topic of great importance these days. We at UNESCO organized, in June 2015, the first conference on the Internet, youth and radicalization, at a time when people thought the topic was taboo, that it was too sensitive. Indeed it is still sensitive, because the definitions of radicalization themselves are very diverse; they differ from country to country, context to context. That conference gave birth to a framework action plan. Following that, as you know, the United Nations Secretary-General presented a U.N. action plan for preventing violent extremism and radicalization last December. And then of course things have taken a turn for the better. More and more people are discussing it. The issues are very complex. We have to ensure that we protect freedom of expression and human rights, while at the same time minimizing the risks of violent extremism and radicalization on the Internet, especially targeted at youth.
Today we have an extremely high-level panel, and a very diverse one too. We hope that we will be able to look a little deeper into the issues that we face vis-a-vis social media and radicalization. The first speaker is going to be Mr. Guy Berger, my colleague at UNESCO. He is responsible for the Organization's global work on the safety of journalists, gender in media, and general education. He does not need an introduction. He was an illustrious journalist in South Africa, as well as a professor. Let me give the floor now to Guy.
>> GUY BERGER: Thank you very much, Indrajit. Could I have the PowerPoint screened, please? Would some of the people like to come and sit at the back here? It might be more comfortable. Up to you.
Good morning. I'm going to cover quite a complex topic and it will take me about 12 minutes, I think, but it will tell you where UNESCO is coming from on the question and some of the research that we have been doing.
Let me jump in straight away. You may have seen this headline yesterday in the newspaper; this was in the Financial Times. It shows what a hot topic this is. The headline is slightly misleading, but it is interesting if you follow this particular story.
To start with over here, I think it's quite evident to everybody that nobody is born radical, of whatever persuasion. Of course the word radical has come to mean a particular thing in recent times, but even then it should not be attached to a particular religion or a particular political outlook, or necessarily to particular forms of action.
We need to be careful about assuming that radicalization means a fundamentalist Islamic terrorist combination.
Radicalization refers to a process which involves convincing people to reach a point of certain violent action against citizens. That violent action may or may not be called terrorism. These concepts, as I'll discuss, are interpreted differently by different people. But it is about identity and action. It is not that identity gives rise to action, or that action is only the sequel to identity. In this radicalization process these two things are clearly working hand in hand.
To come to the focus today, which is social media, we are looking at the question of media and the mediatization of radicalization processes. But of course we know even before we start that we should not take a media-centric approach. In fact we have to keep a holistic perspective, which I'll come back to shortly.
To start with the most important foundations, particularly because UNESCO is part of the UN: the norm is that people should have the freedom to seek, receive, express and impart information. Limitations should always be exceptional, and that balance always has to be kept in mind.
In terms of limiting any speech in the name of combating radicalization, a limitation has got to meet three tests, otherwise it's not legitimate. These are international law tests under Article 19 of the ICCPR: a restriction must be provided by law, serve a legitimate aim, and be necessary and proportionate.
There is also a category of speech which is outside of freedom of expression, which it is not only seen as legitimate to prohibit but in some cases required to prohibit. Article 20 covers propaganda for war, and advocacy of hatred that constitutes incitement to discrimination, hostility or violence, that is, incitement to one of those three things. You can see these are quite broad terms and quite difficult to apply to particular acts of speech. However, that gives us the fundamental human rights framework: the norm is freedom, limitations are exceptional, and the limitations themselves have to meet certain qualifications.
If limitations don't meet these tests, we begin to violate the fundamental right of freedom of expression. Let's look at another dimension of this, which arises when you come to this question of media and radicalization. People tend to blur these, and of course they are not completely distinct, but let's acknowledge that you can more or less define, at least as ideal types, news media, social media and entertainment media.
News media, we know, deals primarily in the realm of knowledge and information, and to work it has got to be credible and reliable. Social media, we know, is very much about communication, more about communication than information, although of course they are not completely separate; it works on attitudes. One could say entertainment media works on emotions.
This is an ideal type in a way, but it helps us understand, if we are going to talk about social media, where social media fits primarily within the total media landscape.
Having said that, we need to make a further distinction about what impact there is on people in terms of knowledge, attitudes and emotions. There is a formula called KAP, knowledge, attitudes and practices, which asks, when you are speaking about media, be it social media or news media, to what extent it impacts on these three dimensions of humans. Of course it is not so linear; it is not as if you know that smoking is bad for you, you feel guilty about smoking, and then you stop smoking. It doesn't work as a causal chain in all cases. It is exactly the same with radicalization.
There can be content that reaches people's heads, and content that reaches people's hearts, that doesn't translate into action. At the same time, knowledge can sometimes disrupt attitudes. We know that news information can supply that kind of knowledge into social media, but it can also go into reverse.
Social media can play a role not only in terms of impacting how people think and behave, but also in the actual logistics of organizing.
That is my graphic. We have to keep in mind how complex this whole thing is. We are looking at different kinds of media and different media functions and the way that they impact on people in this way.
Finally, to complicate it further, in terms of the scientific literature you have to distinguish between different kinds of media effects on people. Strong effects are easy to grasp: short-term emotions of anger, arousal, whatever. Weak effects are short-term because audiences select what they attend to, and often they use media to reinforce what they already know or believe.
Then you have indirect effects, which relate to media through multi-step flows, multiple media, agenda setting. Then you have the deep effects, which are about identity, roles, subjectivities, the frames in which things are constructed.
This is all very complicated, because all of these can combine in radicalization processes, and equally in counter-radicalization processes. But what this means is that media becomes a site of contestation around this question of radicalization.
In terms of the responses to radicalization, one could categorize the responses as fourfold. Protection is one response: you block or remove content to protect those whom you regard as vulnerable. Another is to empower the receivers through MIL, media and information literacy, which is a big program of UNESCO: we empower people to be able to discriminate when they engage with content, or when they produce or receive content.
The third response is counter-speech. The fourth response is to strengthen the role of journalism, so that knowledge can rule the day. We are, after all, supposed to be in the age of rationality and reason, so knowledge should be something that guides people's actions, at least ideally.
These are broad things that I'm going to talk about. I'm going to come shortly to the research that we have been doing in these areas and what we found about this.
To sum up so far: you have got this very complicated issue of radicalization, which works in different areas and attracts different responses, and it is important to get knowledge about all of these different dimensions.
We have done a little bit of research at UNESCO, and you will find some of these brochures around here. This is looking at some of these responses; this one is called Countering Online Hate Speech. It is an interesting policy brochure based on a book. There is another publication which Rebecca MacKinnon herself did for UNESCO, with colleagues like Allon Bar, and that is on the role of intermediaries.
The idea of this project is to build the skills, knowledge and values young people need to resist radicalization, to have their own self-esteem and confidence, and to help fight for peace and defeat extremism. She is focusing on how one can intervene to build identity, so that young people are not creations of the way they are interpreted or appealed to, to become consumers or particular versions of men, particular versions of women or particular versions of religion, but can actually make their own identities, drawing on the combination of offerings that they get from their engagement with life and with the media.
I won't go into this particular project now, but it is aimed both at building understanding with young people about radicalization and counter-radicalization, and at capacitating them. That is part of the bigger framework within which UNESCO is working and doing research on these questions. Here is new research being commissioned, led primarily by French academics; these are the people. We regret that Divina couldn't make it to present this research, so I'm presenting it on her behalf. These are some of the goals of this research. We said: please map the assumed roles, because there is a lot of assumption that the media and social media are indeed the cause of radicalization, and we said this is an assumption; can you map it? Secondly, we said: can you look at social media and its role in the communications and information ecosystem?
That is, in terms of how it interacts with news media and entertainment media; as you know, the whole entertainment media genre has been drawn into the radicalization literature. We said to this team of Divina Frau-Meigs: look at the challenges of protecting freedom of expression when there are responses like protectionist responses and counter-speech responses. We said: please discuss the stakeholders, governments, academics, researchers, IT people, companies and so on, and what recommendations we can make to them to deal with this radicalization issue, bearing in mind that actions can often have unintended consequences.
Very quickly, this is how the researchers went about it: they developed various key words to begin this research. They searched Google Scholar since 2012, using the key words you see over here. They used academic databases. They looked at academic research from 2001 to 2016. If you look at the bottom of this slide, they also looked at the gray literature: they looked at about a hundred organizations and think tanks that provided information on this question.
We went through a review of the scientific understanding of this whole process. I won't go into it, but it is interesting: they discovered different approaches to these questions, some from political science, and, as you see at the bottom, some sociological.
What did they find? This is the interesting part. They found there is a lot of gray literature. The academic literature is lagging behind, because there is always a time lag: radicalization came to the forefront in the past few years, and academic literature takes a year to be published, let alone the time to do the research. They found there is little theorization: people are using extremism, terrorism and radicalization without defining them. There is little empirical research, and the definitions are as I said. There is little theorization in terms of processes.
There is no scientific evidence of clear causal connections between what happens in social media and the radicalization process. There is no evidence‑based result that they could generalize from on this.
The best they could say is that they found the Internet facilitates, rather than drives, processes of radicalization. They found that social media cannot be separated from offline processes in particular. I'll say more about this. On counter-measures, they said there is no clear evidence that censorship actually reduces radicalization.
They identified a growing trend towards counter-narratives that challenge these messages. They also looked at educational responses, and they found that these are actually doing quite interesting work, but there is very little documentation, and again very little assessment of how efficacious the educational responses are.
They also found research gaps: there is information on hate speech and incitement to discrimination, violence and hatred, but there is less on privacy and participation; little focus on gender issues; and a lack of research from Africa and the Middle East and North Africa, although these are the most affected areas.
The draft recommendations, and I'll begin to wind up now, are quite interesting. The researchers have general recommendations: they said one needs global dialogue to balance rights on the Internet. Rights to security, rights to freedom of expression and rights to privacy need to be balanced, particularly on the Internet, and they propose to use the UNESCO model ROAM, which stands for rights, openness, accessibility and multistakeholder participation. They said any particular policy, law or practice that deals with counter-radicalization should be looked at in terms of how it balances rights, but also in terms of its impact on openness and accessibility and whether it is done on a multistakeholder basis.
They call for new narratives around global citizenship as an identity that will maximize commonality and reduce the us and them. They call for gender actions to counter patriarchy: a lot of the discourse is patriarchal and constructs masculinity as an aggressive normality. On media, they advocate that there should be games and engagement with young people. There should be more critique of the news media for producing stereotypes about Islamic terrorists, or whatever other kind of stereotype, and for taking fear-mongering from political or security industry people and regurgitating it without any qualification, creating a media panic.
They call for involving journalism schools and for critical glossaries of how terms are being used in this contestation. As for the recommendations for social media companies: social media companies should be much more assertive about the legality of their actions in terms of tracking, disclosing and sharing identity. They should be more specific about what they mean when they talk about these terms. They should give guidelines that could help people recognize problematic content. There should be a clear statement that unpopular, controversial or divergent views do not amount to hate speech per se, depending on how you define hate speech. And they should put humans behind moderation, and human oversight of algorithms.
My last slide is the conclusion. It is clear what this research is pointing us to. You have got to have a holistic response; you can't only have the protective response of blocking and filtering, and it is not only about social media. Secondly, all responses have to be human rights oriented, and I mentioned that recommendation. Thirdly, more resources are needed in this area.
Lastly, and this is the most important: if you are speaking about policy and action on radicalization, knowledge should be in the driving seat, because there are so many other factors determining this discourse and these actions, things like global politics, national politics, populism, fear-mongering. There are assumptions that A causes B when in fact these media effects are more complicated. There is the role of the security industry, broadly conceived here, but essentially, as you would understand, most interest groups will advance their own interests. It is clear that many people in the security industry are promoting security responses to this particular problem of radicalization, and that can be at the expense of other kinds of actions.
That is in a nutshell the latest research that UNESCO is doing on this. We hope it will be published early next year. Please watch our Web site. I look forward to hearing what our panel has to say in relation to some of these points. Thank you.
(applause).
>> INDRAJIT BANERJEE: Thank you, Guy. Unfortunately, I can't praise your presentation too much because you are my colleague, but I thought you did an excellent job of breaking down how we absolutely must look very carefully at this whole question of the Internet's role in promoting radicalization. There is a lot of excitement about it, and there are knee-jerk reactions, in that everybody thinks that the Internet is solely responsible for radicalizing young people. For the benefit of the audience: we did a conference recently in Quebec which was focusing on Internet use and radicalization. Please go online and check the Call of Quebec, which was released by the Premier and Foreign Minister of Quebec and which summarizes what different stakeholders, different people, hold as a view. I think Mr. Guy Berger broke it down very well in terms of understanding cause and effect, the different variables, and the need to be very cautious before we jump the gun and believe that the Internet, and especially social media, is responsible for all the ills of society.
So thank you very much, Guy. Our next speaker is Sophia, coordinator of the Portuguese Safer Internet Centre, which plays a crucial role in Internet policy, developing awareness-raising resources for a safer Internet, among other initiatives. You have the floor.
>> Hi, everybody. In my presentation I will give you, sorry, I will give you an introduction to the No Hate Speech Movement and how, at the national level, Portugal is managing this campaign. For this I bring you some words from the Portuguese national committee. As you may know, the No Hate Speech Movement is a Council of Europe campaign to address and combat hate speech by engaging young people to speak up for human rights and democracy online, and to reduce the acceptance of hate speech by reporting and analyzing it. The No Hate Speech Movement has been run by the Council of Europe's youth sector since 2012. It aims to combat online racism and discrimination by engaging young people and youth organizations to recognize and act against human rights violations. The project is based on youth participation and encouragement. The new phase of the campaign, which will end in 2017, is part of the Council of Europe action plan on the fight against violent extremism and radicalization leading to terrorism. In addition, the campaign contributes to the action plan for building inclusive societies and to the Council of Europe strategy on Internet Governance, which advocates for an open, inclusive, safe and enabling online environment.
The campaign is designed to promote freedom of expression online, by providing a safe space for people to express themselves free from fear of hate speech. The campaign seeks to decrease the levels of acceptance of hate speech online and off‑line. It combats hate speech in all forms, including those that most affect young people, such as cyber bullying and cyber hate.
The campaign is based on human rights education, youth participation and media literacy. The campaign mainly operates on an online platform, where anyone can join and share resources and experience. It also hosts Hate Speech Watch, an online tool for reporting, monitoring and education on hate speech. The campaign also relies on offline activities such as training courses, seminars, conferences, youth events, festivals and flash mobs.
The campaign highlights the importance of involving school communities as well as non-formal education and youth workers. It is composed of national campaigns in over 40 countries, which together with European partners and online activists work to implement the campaign objectives and priorities for 2016 and 2017.
Very quickly, the campaign objectives are to support human rights education activities for action against hate speech and the risks it poses to democracy and the well-being of young people; to develop and disseminate tools and mechanisms for reporting hate speech, especially online, including at national level; and to mobilize national and European partners to prevent and counter hate speech and intolerance online and offline.
And lastly, to promote media literacy and digital citizenship, and to support youth participation in Internet Governance. The campaign in Portugal is being implemented both online and offline, based on some strong moments such as online activist training, seminars and exchange of information, and dissemination to the general public, following the European dynamics through various other initiatives related in particular to the European Action Days.
For the new phase of the campaign, the national campaign committee in Portugal decided to focus on improving communications and making better use of online tools to give more visibility to the campaign. In addition, the national campaign committee will develop more awareness-raising actions for young people and the population in general, as well as conducting training sessions on human rights education and digital citizenship for educators, teachers and youth workers.
The Portuguese campaign also supports both online and offline involvement in activities such as the European Action Days on countering sexist hate speech, International Roma Day, the International Day against Homophobia, and International Youth Day. Other activities include the promotion of the No Hate Speech Movement campaign at fairs and exhibitions all over the country, music festivals, seminars and others; the production of materials and tools in Portuguese to support the campaign's actions, including videos; the translation into Portuguese and publication of the Council of Europe's Guide to Human Rights for Internet Users; and the translation into Portuguese of the Council of Europe publication Bookmarks, a manual for combating hate speech online through human rights education.
The Portuguese version of Bookmarks is going to be launched next week, and in addition we will have a three-day training session. Thank you so much.
>> INDRAJIT BANERJEE: Thank you very much, Sophia. I think your presentation actually highlights how important this IGF is in terms of multistakeholder partnership and participation. The amount of work you are doing on the ground is what is going to make a difference in the long term. We can keep speaking at high-level fora, but congratulations on the excellent work you are doing. I hope we can get back to it and discuss some of the things you are doing and what the implications and the impact are, what you have seen in terms of accomplishments on the ground. Our next speaker is Mr. Will Hudson from Google. He is an advisor for international policy at Google and focuses on Internet Governance and other international policy issues. Before joining Google, he was director for international cyber policy at the National Security Council, where he was responsible for coordinating the Government's implementation of technology policies, including those relating to Internet Governance and freedom, human rights issues associated with data privacy and online surveillance, and cyber capacity-building. He also served in a variety of positions in the federal government and advised clients on legal and policy issues associated with emerging technologies. It will be interesting to see Mr. Hudson's perspectives given that he has been both on the Government side and now on the Google side. Mr. Hudson, the floor is yours.
>> Thanks. Can everyone hear me? I'm terrible with mics, I apologize in advance. UNESCO's leadership on this difficult issue is appreciated, as is that of everyone else here around the table and hanging off the rafters in the back. I'll talk about Google's approach to this problem and the way that we are trying to go beyond the problem and be part of the solution.
I'm going to try not to talk too long, because I'd rather hear from you guys and deal with it that way.
The majority of people who use Google's platforms do so for legitimate and lawful purposes. Unfortunately, as we know, and it's why we are here today, there is a minority out there exploiting them and using them to spread hate and incite violence. First and foremost, and we say this a lot but it's important to keep saying it, we are concerned about this problem too. We care about it. We are troubled by violence. We are working with governments and with NGOs to understand the issues. That is why studies like this are helpful: to figure out how you can tackle the problem and move towards a solution, you need to have a good understanding of what the problem is.
So we can figure out how to help. Second, we are committed to responsibly addressing the problem. Google, I'm proud to say, is committed to access to information. We think it's critical that people can come together on platforms like YouTube to understand what is happening in the world, build communities and find a voice. At the same time, it is not, and never has been, an anything-goes environment on YouTube. We have community guidelines, which are there for everyone to look at. They state clearly the content that is prohibited on the platform. We can go into specifics later if you would like.
But they are clearly spelled out. We cooperate with governments, pursuant to valid legal process, as they conduct their investigations. That is publicly available; you can look at the forms to see how governments would request that type of information.
More generally, we are simply always working on trying to get better at this. This is an evolving target. The problem is changing. People are changing. Something that worked decently well a year ago may not work at all now. We have to be committed to never feeling satisfied and always feeling like there is more to do.
I think, Rebecca, you are going to talk about this later too, and I'm sure it will come up in questions, but an example of the new things we could be doing is the recent announcement we made with several other companies about sharing digital fingerprints, or hashes, of the most extreme and egregious content on our platforms.
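To make the idea of shared digital fingerprints concrete, here is a minimal sketch in Python of how hash-based matching against a shared database could work. It is an editorial illustration only, under the assumption of a simple cryptographic hash; the system announced by the companies was not specified in this session, and industry systems typically use perceptual hashes that survive re-encoding.

```python
# Illustrative sketch only: hash-based matching against a shared database of
# "digital fingerprints" of removed content. Real deployments typically use
# perceptual hashes rather than SHA-256, which only matches exact copies.
import hashlib

shared_hash_db = set()  # fingerprints contributed by participating platforms


def fingerprint(content: bytes) -> str:
    """Return a hex digest used as the content's fingerprint."""
    return hashlib.sha256(content).hexdigest()


def share_fingerprint(content: bytes) -> None:
    """Contribute the fingerprint of removed content (never the content itself)."""
    shared_hash_db.add(fingerprint(content))


def needs_review(upload: bytes) -> bool:
    """Flag an upload for human review if its fingerprint is already shared."""
    return fingerprint(upload) in shared_hash_db


# Platform A removes an item and shares its fingerprint; platform B checks a new upload.
share_fingerprint(b"bytes of a removed video")
print(needs_review(b"bytes of a removed video"))     # True: route to human review
print(needs_review(b"bytes of an unrelated video"))  # False
```

In a design like this, a match would only trigger review rather than automatic removal; the decision would still rest with each platform's own policies and human moderators, which is consistent with the questions about human oversight raised later in the panel.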
Third, and most important, it's not just about the problem; it is about investing in solutions. That is why, and this is what I want to focus the bulk of my remarks on, we have our counter-speech efforts, which are one way we are finding that technology, for all the nastiness that the Internet can bring out in some people, is also an amazing platform to build up communities and be a vehicle for true solidarity and understanding.
With that, first and foremost, we are strongly of the belief that technology can and must be part of a force for good. I've already talked about the policies we have for removing content. But at the end of the day, to be blunt, we don't think that taking down content gets us very far, and censorship is not and never will be the answer. People talk about what is going on online and online radicalization, but the reality is that online content is a reflection of what is happening in the bricks-and-mortar world.
While we recognize we can't do it alone, we want to be part of the solution of addressing some of those underlying issues, which again have a digital component but are manifestations of what is going on in the real world. One of the more promising strategies is counter speech. It is what it sounds like, about making sure that content that promotes truth and understanding eventually drowns out the content that is promoting violence, promoting hate or promoting fear.
We want people in the community to be able to find hopeful messages online, not just harmful and scary ones.
With that, and we can go into specifics, in September of this year we announced a program called Creators for Change, which is a global counter-speech campaign, at the first-ever YouTube summit for social change in London, an event that brought together 150 NGOs, YouTube creators and other change makers, which is a term I can't believe we use, but change makers, to kick off the initiative.
As part of this we have committed equipment and production grants so YouTube creators can make more content. We announced a $2 million Google.org charitable fund, called global voices, which gives grants to nonprofits working on efforts to counter hate. And, most importantly, we are investing in local programs in several countries to activate local networks and build up communities that can tackle some of this stuff.
We have got other projects which we can talk about too. But the fundamental message here is that there are some who doubt: what is the value of a YouTube video, right? What is the value of a good YouTube video versus a bad YouTube video? There is data showing, and this is a panel about youth, that for young people, for millennials, 50 percent of them say that they have watched a video on YouTube that has changed their life in a meaningful way.
You can say it's just videos, and of course it is only part of the solution. But this is an impactful way to reach some of the most at-risk young people, whom we are at risk of losing to hatred and violence. We can talk more about the specifics; I'd rather have you guide the conversation.
The last point: Google is one company; there are many companies and other stakeholders involved in this, and so we want to find ways that we can collaborate and work with those other folks to figure out all the things that we don't know, because we don't have all the answers. We don't fully understand the problem, which is why this research is helpful. There is a program, and Rebecca will talk about it, because I see pamphlets there: a multistakeholder initiative called the Global Network Initiative, a group of stakeholders from different sectors that is trying to bring people together to discuss best practices, to come up with recommendations, and to figure out how in this evolving landscape we can continue to try not just to play catch-up but one day get ahead of the game.
With that, I'm going to pass the floor and I'm happy to take questions later. Thanks so much for letting me be here.
(applause).
>> INDRAJIT BANERJEE: Thank you, Mr. Hudson, for that very different perspective and for being so concise and brief. I think you have a very challenging position: while you create the problems, you also have to find solutions. You don't create the problems, but (chuckles) you have a platform on which people put up whatever they want, and so it's an unenviable position, challenging and difficult, I can imagine. Like you said, every day new challenges emerge. Nobody can anticipate them. Radicalization is one of those major challenges which are emerging. Thank you very much. I hope we have some time for questions and answers later on, because these are extremely interesting presentations, and they present a diversity of perspectives. Our next speaker is Barbora Bukovska, Article 19's Senior Director for Law and Policy since 2009. She leads the development of all Article 19 policies and provides legal oversight and support for legal work across the organization. She has extensive experience working with various organizations on a range of human rights issues, including protection from discrimination, access to justice, deprivation of liberty, reproductive rights and community development. The floor is yours.
>> BARBORA BUKOVSKA: Thank you very much. Thank you, UNESCO, for inviting Article 19 to this interesting event.
I want to make a few comments on behalf of Article 19, which is a freedom of expression organization, and comment on the UNESCO study and presentation from our perspective.
I want to make three points in this respect. First, I want to start by discussing this concept of radicalization, which Guy mentioned at the very beginning as a concept that is very undefined. We really need to start this discussion by looking at the core of what we are discussing, because if the concept is so poorly defined and unclear, it is open to human rights abuses and human rights violations. Guy is one person who was once investigated by the South African authorities as a dangerous radical, because he was advocating for racial equality in South Africa, which was at the time against social norms, and probably very few people in this room would defend the law enforcement tactics which were used against him, because by the standards of his era he was radical, but he was also right. Radicals and radical ideas take many different forms. What is considered radical is sometimes considered aggressive, but it sometimes also addresses real problems in society, and the verdict of history rarely tracks with the present view. We need to start from the point that societies often require radical change, and advocating for such radical change is often not connected with violence and terrorism. We see this frequently today, because the term radicalization is indeed, in this day and age, used not only to describe violent groups but also civil rights advocates and NGOs, social movements and religious institutions. As I noted at the beginning, this concept is not new and has been used through the years. We need to learn from these episodes, and to learn that increasing governmental secrecy and restrictions will only undermine all the efforts being led in this area.
However, I want to note that it does not help when the term radicalization is used as a one-size-fits-all label to group or merge together various different groups and ideologies, blurring distinctions between nation states, or between many different terrorist groups, or even merging ISIS with Al-Qaeda or with Kurdish groups in the Middle East. The fact that these groups are often at war with each other means this does not help us analyze the problem or find useful solutions, and it also artificially inflates the scope of the activity, which then distorts what is actually happening and what solutions we can propose.
That is my first point.
The second point is about the measures which are applied to fight this radicalization, however it is identified. I'm glad to see in the UNESCO presentation that blocking, filtering and censorship of content are highlighted as ineffective tools in this area, and not recommended as a solution, because this is what Article 19 sees in our work: not only are these blocking and filtering measures often applied in a very arbitrary manner, they are also very ineffective, and the risk of overblocking is large. We oppose these measures at the level of principle, but since freedom of expression can be limited under some strict circumstances, we have also issued a policy document, which we are launching here at the IGF, which looks at the circumstances under which blocking and filtering can be compatible with international standards on freedom of expression. I encourage everybody to check it. I won't discuss it in detail since we don't have much time.
This would be the key recommendations in terms of censorship.
Secondly, I really appreciate that UNESCO highlighted the range of positive policy measures, the various counter-speech and engagement methods, as a much more effective tool to fight the underlying social causes here. However, we also need to look, and this is what Article 19 always recommends, at the underlying causes that lead to this social phenomenon, and in this particular case we need to mention that one of the richest soils for the emergence of violent groups is military conflict and civil war; many such groups emerge from such conflicts. We need to look at what the causes are and how we can fight them.
Importantly, as a freedom of expression organization we advocate that one of the key and very effective strategies we need to adopt is to provide objective, evidence-driven information about the nature and scope of the threats we are facing at this time, and to take decisions accordingly: programs that are not showing clear results or impact should be scrapped and abolished, and the resources should instead be diverted to situations where we have evidence of wrongdoing. I'm looking forward to reading the final version of this UNESCO study once it is issued.
(applause).
>> INDRAJIT BANERJEE: Thank you very much for that very concise presentation. The point you make is absolutely essential: one has to be clear about the definition of radicalization, because in law there is a very basic principle that if something is not clearly defined, then there is no right or wrong and it cannot be debated in a court of law. Definitions are very crucial. I would like to bring your attention to a report that was done independently, it's not a UNESCO report, on the regulatory frameworks which exist for dealing with issues of radicalization; you can find it online, and its introductory chapter deals extensively with the question of the definition of radicalization. These are issues which we must confront and discuss instead of sweeping them under the carpet, because that doesn't help. Our final speaker is Ms. Rebecca MacKinnon, director of the Ranking Digital Rights project at the New America Foundation's Open Technology Institute, where they are developing a methodology to rank Internet, telecommunications and ICT sector companies on free expression and privacy criteria. She is also an author, researcher, Internet freedom advocate and co-founder of the citizen media network Global Voices Online. The floor is yours.
>> Thank you very much. Great comments. Thank you, Guy, for previewing the research. One very important finding is that there is no evidence that censorship is actually having an impact on radicalization.
I'd like to start by picking up on some things that were mentioned by our colleague from Article 19 about concerns over the way that censorship is taking place. Wearing my hat as co-founder of Global Voices, which is an international citizen media network, we have very active communities in the Middle East and North Africa and Sub-Saharan Africa and all over the world, but also in countries that have been a particular focus of this issue, and a lot of members of our community who are activists, journalists and bloggers are getting content taken down and accounts deactivated because they mention these issues, not because they are actually involved with these activities.
There is a lot of over-censorship happening, and one of the more ridiculous examples is that late last year Facebook deactivated the accounts of an unknown number of women named Isis; the ones we found out about were the ones that got reported in the media and made enough of a stink that they got their accounts reinstated. That is one example of the clumsy over-censorship that happens. But there have also been a number of prominent cases of well-known journalists who had content censored on social media because they were reporting on extremist and terrorist activities in different places.
This is one of the big concerns that comes with the increased pressure on companies to do something: companies then try to do things, and there is collateral damage that is not part of the intent.
In our Ranking Digital Rights research, we released our first index last year, assessing 16 companies including the major Internet platforms. We found that while companies are being somewhat transparent about the Government demands they receive through formal channels, when it comes to what is being taken down according to terms of service, which is primarily the way a lot of this type of content is dealt with because its legality or illegality is unclear, there is no transparency: no data is being reported about the amounts, numbers and nature of things being taken down. It is also unclear whether companies are doing any kind of human rights impact assessment of the way in which their terms of service are constructed and enforced, which is a big problem.
There are also poor mechanisms for remedy. Within the Global Voices community and other communities, people whose accounts are deactivated, one could argue not in keeping with the intent of the terms of service, often have difficulty getting their content reinstated, unless they have friends who work for Article 19 or Human Rights Watch or the like, who can then get in touch with the right people at the companies.
We need clear grievance and remedy mechanisms that can serve as a check, for the people who are becoming collateral damage victims of this pressure on companies.
Secondly, I'd like to point out, as our colleague from Google already mentioned, that the Global Network Initiative has released some guidelines, aimed primarily at governments but also reflecting the perspective of companies, on concerns about how governments are approaching the countering of extremist content online, and on the need for demands to come through legal channels, not informally. There needs to be clear due process, there needs to be transparency, etcetera.
You can go to their Web site and look at the full set of guidelines. I'd also point out that the NGO Access Now has come out with a set of recommendations for companies and governments around these issues, and they overlap a lot with what's been said today in terms of the need for dialogue, education and transparency, respect for users' privacy, and also one principle: avoid coercion of private industry to undermine free expression principles.
I would suggest the hypothesis here that the hashing system the companies have announced is due to pretty strong pressure from governments: if they don't do something, then much worse things will happen to them.
And maybe our colleague might not be willing to confirm that, but maybe he won't deny it completely.
Which brings us to this new system that has been reported in the news. There are a number of questions. It has not actually been set up yet; it is just an announcement of the intent to build the system. There is no clarity right now on how it is going to be decided what images get put into the system, or how exactly the automated system will flag content that human beings will then review and make decisions about. Is there going to be any independent oversight over the process of determining what images go into this system, and over how decisions are being made?
What are the appeals and remedy mechanisms? I would suggest that the companies that are part of this system need to work much harder on remedy mechanisms that really work, to address the inevitable collateral damage that is going to result, even despite the best of intentions.
Which brings me to a final point. There was discussion of whose fault all of this is: is it the Internet's fault, or is it the fault of actors on the ground in actual countries? I would point out that in a lot of the countries where radicalization is the greatest concern, Civil Society is under attack. Feminists, religious minorities, people who are trying to connect with global Civil Society to promote visions of society, promote reforms, promote social justice, advocate or just report the truth, and independent journalists around the world: the Committee to Protect Journalists is reporting record numbers of journalists being killed, and journalists are being jailed around the world in record numbers. The independent media and Civil Society voices who would be creating the content that might help to counter these radical voices are not online as much as we would like, because they are getting killed, being put in jail, or being threatened and silenced, in many cases by their own Government.
This is why, ultimately, freedom of expression, protection for human rights, protection of the rights of Civil Society to organize and advocate, and protection of journalists to report independently on what is going on, including corruption by their own governments and social justice problems, is absolutely vital to this question of extremism. Otherwise, all you have is Government voices on one side and extremist voices on the other. And the extremists are also attacking the same Civil Society groups, killing them and so on, so they are being squeezed both by the Government and by the extremists. In some societies you have just extremist voices and Government voices, and Civil Society is increasingly small and weakened and becomes collateral damage of overblocking and removals in social media. They are being squeezed from all ends.
I would make a real appeal: if we really want to deal with extremism, human rights protections, not just online but also offline, and legal protections for Civil Society and for independent journalism, are the key, and actually much more important to solving the problem in the long run than Google and Facebook setting up more mechanisms to police content online. That is a Band-Aid over the real illness. Thank you.
(applause).
>> INDRAJIT BANERJEE: That was an excellent way to end this panel presentation, Rebecca. But I think you have complicated our lives by taking us back to the bone of contention, which is what restrictions on freedom of expression are acceptable. I greatly appreciated your use of the term collateral damage. That is something which is to be avoided at all cost. The fundamental principle should be the protection of human rights and freedom of expression, and the minor, I would say, fringes of radicalization should be dealt with, without ever thinking that they are the mainstream or the core of the content. That is a real challenge, because as several of the presentations highlighted, we risk a lot by trying to deal with minor elements. I mean, let's be honest, I'm going to say it bluntly: there are three billion people on the Internet, and just as in the real world, there are crooks online too. Let's not delude ourselves: there are good people in real life and there are not so good people, and the same thing will happen on the Internet.
But at the same time, just because of a minority of fringe elements, I think trying to suppress freedom of speech and expression is not the solution. The solution lies elsewhere. The solution, as part of Guy's presentation dealt with, is media and information literacy, greater awareness, advocacy, and basic regulatory and legal protections in place. I think that can take care of the matter.
But it is a complex issue, so Rebecca, I think you have brought us back to it. We have a good 20 minutes, so I think it will be very interesting to take some questions from the floor. Wow. Okay. Keep your questions brief, and introduce yourself, please. We will respond to as many as possible.
>> Should I use this? Does that work? Okay. My name is Gabriela, I'm also from South Africa. I wanted to talk about another form of response by governments to hate speech, which is of course the criminalization of hate speech. South Africa is currently debating a bill criminalizing hate speech. Germany obviously has some forms of criminalization. But I'd be interested to hear about criminalization in an online context as well.
>> INDRAJIT BANERJEE: We will take ‑‑ yes, please go ahead. Get a mic.
>> I work for a think tank in the UK called Demos. The question is simple. I don't think I heard a word of support for censorship in that entire debate, yet that is obviously what is happening most at the moment; Twitter is playing whack-a-mole, regularly shutting down over 100,000 ISIS accounts. I wanted to ask each panelist: are they recommending that we stop any kind of blocking, any kind of takedown, at all? None, absolutely none?
>> Hi. I'm Gaea, I work with APC. I wanted to address the elephant in the room, because the word religion was not mentioned very clearly anywhere, and that is something we simply have to address. It is one of the issues around which radicalization pretty much mobilizes; it might not be the only issue, but it certainly is one. One of the key points there is really the application of blasphemy laws, and laws relating to hurting religious sentiments, to online spaces. If you look at the case of the bloggers in Bangladesh, there is no ability now in that society to counter the narrative of extreme religious views. So my question is: can we really be looking at radicalization without looking at how laws are being applied in online spaces to curb religious expression? In the name of countering hate speech, what is happening in states in Asia especially is a genuine trampling of legitimate speech which touches on religion. It might be uncomfortable and unpopular, but it is speech. One of the studies we have done at APC looks at nine countries across Asia to see how freedom of expression and religion intersect. There was also a point made by Guy that I wanted to question, about balancing rights, because the former Special Rapporteur on religion refutes that notion of balancing rights, because the rights exist together and they should both be protected.
I'm going to leave fliers here and we will launch the report, feel free to come by and share your views as well.
>> INDRAJIT BANERJEE: Okay. Too many questions. Very important questions, first one, criminalizing hate speech, second one, should there be no censorship at all. Third question, on issues of religious extremism and our approach to it.
As our South African colleague asked the question about criminalizing hate speech, Guy, you go first.
>> GUY BERGER: Any limitations must be necessary and proportionate. You have to ask whether criminalization is a necessary and proportionate response. The African Court on Human and Peoples' Rights has said criminalization is possible for speech such as defamatory speech, but that imprisonment is not a proportionate response. You have to apply these criteria to each case.
On the question of censorship, it depends how you use the word censorship. People would agree with the takedown of certain content, whether it's child abuse material or genuinely defamatory content. But you spoke about whack-a-mole with this particular kind of content, which begs the question whether that is an appropriate response or a distraction. Then a quick response on the balancing of rights: it is always a question of how you establish the integrity of a right when it is up against another right, and again you have to look at the necessity and proportionality between them, and because it is the Internet, you have to look at whether there is an impact on openness and accessibility and whether there is multistakeholder participation.
But rights, whether you are speaking about the right to privacy versus the right to information and transparency, or the right to security versus the right to freedom of expression, all these rights can come into contestation with each other. You need a methodology to deal with that.
>> Cedric, some remote questions? Oh, you're the guy.
>> My name is Victor, from the youth IGF program in Brazil, from the University of São Paulo. I'd like to ask about the influence of filter bubbles on hate speech. I'll try to keep it short. For those who don't know, the idea of filter bubbles was developed by a writer from America. He says that, based on data collection, platforms try to keep the user on the platform by personalizing content. The content the user is exposed to is based on the user's previous interactions: everything they click, write, read or search is stored and processed by an algorithm that chooses the content that will be shown to that user. This creates a bubble where we are only exposed to the things we probably like and are isolated from the things we probably will not like. When we move this to the political realm, we have echo chambers: the opinions I am exposed to on the Internet will only come from the network I am sharing with, because that is what the algorithm determines. Adding one more thing, I read research from MIT and a university in Beijing showing that hate is the most viral and most influential emotion on the Internet.
The thing that I'm trying to frame here is that we are living in filter bubbles, and these filter bubbles probably contain more hate speech than any other content. I would like to know the opinion of the table about filter bubbles and this issue. Thank you. Sorry for the long ‑‑
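For readers following the mechanism the questioner describes, here is a minimal, purely illustrative sketch in Python of an engagement-based ranker; the topics, scores and update rule are assumptions for illustration, not a description of any platform's actual algorithm. It shows how repeatedly boosting whatever a user engages with can narrow the feed over time.

```python
# Toy engagement-based ranker illustrating the feedback loop behind "filter bubbles".
# Topics, starting scores and the update rule are illustrative assumptions only.
from collections import defaultdict

user_affinity = defaultdict(lambda: 1.0)  # learned from clicks, reads and searches


def rank(items):
    """Order candidate items by the user's learned affinity for their topic."""
    return sorted(items, key=lambda item: user_affinity[item["topic"]], reverse=True)


def record_click(item):
    """Each interaction strengthens the topic, so similar items rank higher next time."""
    user_affinity[item["topic"]] *= 1.5


candidates = [{"topic": "politics_a"}, {"topic": "politics_b"}, {"topic": "sports"}]

for _ in range(3):
    feed = rank(candidates)
    record_click(feed[0])  # the user engages with whatever is shown at the top

# After a few rounds, one topic dominates the top of the feed.
print([item["topic"] for item in rank(candidates)])  # ['politics_a', 'politics_b', 'sports']
```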
>> Still on? Okay, great.
>> Let's get remote.
>> Is anybody going to read it out? Or do they speak for themselves?
>> Microphone?
>> Filter bubbles, okay. Thanks for the question. This is a concern that you hear a lot, and it is a valid concern, particularly given all the issues we have talked about today, to which you helpfully drew a connection. One, at the outset: it has always been the case that information has been targeted to specific kinds of users. But speaking just for Google's part, I can't talk for others, for us, if you go to Search and you enter a query for something and the information that is returned is not relevant to you, then we have failed. Right? The question is how do you return a relevant result?
For us, we have long found that for Search, the query itself, what the person types into the search bar, is actually a more reliable indicator of what a relevant result is than any data we have about the user themselves.
For example, when you search for today's news signed into one account versus signed into another account, in general our experience is that you don't get dramatically different results.
That said, sorry, I lost my place. That said, the question of what data companies have about users, how that information is being used and for what purposes, is I think an issue where transparency has to be a central component.
Again only talking for Google's part, I don't know what the other companies are doing exactly, for us you can go in and go into your my account as a logged in user and see the pieces of information Google has about you, how it was acquired, what it is being used for and should you like to delete that piece of information, delete all the information or take out that information and go somewhere else. It's a imperfect solution. I want to get to other questions. We can talk off line if you would like.
But transparency has to be part of the solution. And give the users a feeling, of a actual degree of control.
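The point the Google speaker makes, that the typed query can carry far more ranking weight than stored user data, can be sketched as a toy scoring function. This is only a hypothetical illustration of that general idea, not Google's actual ranking; the documents, weights, and user profiles below are invented.

# Toy sketch of query relevance outweighing per-user profile data in ranking.
# Purely illustrative; not Google's algorithm. Documents and weights are invented.
DOCS = {
    "doc_news_today": "today's top news headlines world politics",
    "doc_football": "football league results and highlights",
    "doc_cooking": "easy pasta recipes for dinner",
}

QUERY_WEIGHT = 0.9    # weight given to the typed query
PROFILE_WEIGHT = 0.1  # weight given to stored user interests

def score(query, doc_text, user_interests):
    query_terms = set(query.lower().split())
    doc_terms = set(doc_text.split())
    query_match = len(query_terms & doc_terms) / max(len(query_terms), 1)
    profile_match = len(user_interests & doc_terms) / max(len(user_interests), 1)
    return QUERY_WEIGHT * query_match + PROFILE_WEIGHT * profile_match

def search(query, user_interests=frozenset()):
    return sorted(DOCS, key=lambda d: score(query, DOCS[d], user_interests), reverse=True)

# Two users with very different stored interests get the same top result,
# because the query term match dominates the combined score.
print(search("today's news", {"football", "league"}))
print(search("today's news", {"pasta", "recipes"}))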
>> INDRAJIT BANERJEE: Thank you. There were three questions, this side. You go first. Okay.
>> More of a comment than a question. I work with the UNICEF Office of Research. We do research with children and parents on their use of the Internet. We look at information literacy and civic engagement. We find that a lot of children actually can't discern which news online is true and which is not.
Information literacy is key to it. But also, listening to the debate today, it resembles in a way the debate we have had for years around child safety online: when to block, when not to block, what grooming is. There are very similar approaches to grooming for radicalization and grooming for sexual solicitation.
My question here is, who is the most vulnerable? Are we talking about children under the age of 18, or are you talking about youth? We know that young people encounter content and risks online, but they do not necessarily get harmed by those risks, because they are not necessarily vulnerable. Vulnerabilities exist offline, and in a similar way the people who encounter those trying to groom them for radicalization, terrorism, or other purposes have specific vulnerabilities.
How is UNESCO addressing these issues? I've heard that in your research you are looking at knowledge, attitudes, and practices. But the missing link is why certain people act upon this and some don't.
>> Yes, thank you. I work with the U.N. human rights office. I wanted to congratulate all of you; your comments were really all spot on. As an office we also share the concerns you expressed about the lack of definitions, and it is not only the lack of a definition of radicalization. If you want to take it further, violent extremism has never been defined. Not even terrorism has an accepted definition. You can approximate one if you compile a number of conventions, but there is no agreed definition.
The second point I want to make is that, of course, we shouldn't stop taking measures to prevent hate speech. There are some kinds of speech that may be prohibited, and some kinds of speech, if they incite violence or discrimination, that must be prohibited. I'm sure that is beyond question.
The problem is that some of these measures are overly broad, repressive, and counterproductive, because they end up feeding a repressive environment that feeds into the problem. We need to respect human rights not only because it is the right thing to do and because it is the norm, which it is, but also because it makes sense; otherwise you become part of the problem. Thank you.
>> Thank you very much. This has been a very interesting enlightening panel with a lot of good comments from the audience as well. I'm glad to be here. My name is Alan, I'm here with privacy fundamentals, we are a new organization focused on the development and implementation of best practices as to privacy and security, with a focus on transparency and accountability, all stuff we have been hearing about here.
Our initial focus is on the education and school sectors. I'm especially interested in the tensions between privacy and security and speech interests.
There is a free speech majority in this room, I believe, but I'm aware that governments respond to a broader base of different perspectives, and they try to move the ball in their own directions. It can be troublesome for companies to cooperate with governments. Who decides what is good speech and what is bad speech? How do we balance that and protect journalism and various other kinds of speech? This is what Rebecca mentioned in the context of avoiding collateral damage. But what about the privacy of the alleged hate speakers, the risks of misidentifying them, and whether children have a special right of protection? How do you define that right, and how do you limit it? If they do, do schools have a special responsibility to limit certain access to speak or to receive speech?
Those are the questions, and it comes down to what is going to work? Can we gather data to give us answers about what might work? I'd suggest maybe Google is in a better position than anybody to gather that sort of data.
>> I liked the presentation very much. Thank you. I'm Raymond from Hong Kong. I run an investments program and study government and politics. I want to ask one question. Everyone focuses on youth radicalization, but from my experience in Hong Kong it is not only the young; people of different ages and genders also radicalize. Is there any study showing the relation and the effect on both the old and the young, not only social media and youth radicalization? It has been suggested that censorship or social media may not have a clear connection, but from my experience, it is both young and old, and that relation is very important.
>> INDRAJIT BANERJEE: One last question before we go to responses.
>> Thank you very much. What do you suggest we do with incitement to terrorism when you don't have enough time to go to the court and ask the judge what to do? Who is supposed to detect it, and who is supposed to do something about it?
>> INDRAJIT BANERJEE: I'll take the lead and respond to that question quickly.
I think on that, people are pretty clear: incitement to terrorism is dealt with at a national level generally. There are laws in every country on incitement to violent acts, terrorism especially, and every national intelligence service is working on it. I come from India. I know we have a lot of people who are inciting terrorist activities. They are taken to a court of law, and if found guilty, they are punished. That is where we stand today.
I think there are still gray areas, because, as the gentleman from the United Nations asked, how do you define what is incitement to terrorism and what is incitement to violence? The boundaries are rather blurred. But I think governments are dealing with these issues as best they can currently, before any international norms are established, terrorism also being a cross-border issue.
That is where we stand today, as far as that question concerns international legislation and regulation. Now .... (pause) I thank the audience for the excellent questions, which have spanned the whole discussion.
>> I wanted to make one comment and recommendation to UNESCO. A lot of the discussion was focusing on online measures to fight radicalization or extremism, but how does the effectiveness of online measures compare with offline measures? For example, what we see in the UK is an increasing lack of funding for youth clubs and for things you do offline, while the focus and the funding go to filtering everything online and to the allegedly useful online tools. It might be good for UNESCO, if there is still scope in your study, to look at that effectiveness and also at how the funding is decided here.
But I want to respond to one of the colleagues from the youth movement, who said that he wants to fight hate, which is the most viralized emotion online. Take the example of the two ways you can fight a virus. One is to isolate the audience and let those who are infected by the virus die, or keep them in prison all the time, but this is not very effective. The other is to immunize or inoculate the audience, and that is what I think UNESCO is doing with those studies and with its recommendations of positive measures, because they are much more effective than censorship in general.
>> To respond to the question about whether we think no censorship at all is the solution: definitely, there need to be some rules, right? Otherwise, there are cases that have happened of people trying to use social media in one village to organize an attack against another village. It is definitely right that there are rules. In my project with Ranking Digital Rights, we are certainly not giving the highest score to the company that removes the least content or blocks the least things. That is not how our methodology works.
What we are looking for is transparency and accountability. What are your rules? You need to be clear about what they are and why you have them. There need to be clear appeals processes if people think the rules have been unfairly enforced. The rules need to be created in consultation with stakeholders, including human rights groups and a range of user communities; basically, do a human rights impact assessment on how your rules are put together and how the process for enforcement is put together. There also needs to be transparency about who in particular is making the requests and the demands to take down content.
So everything in that ecosystem around content being removed, how rules are created, who is enforcing them, under whose authority, and what the redress mechanisms are, whether they are legal remedies or remedies that the companies provide and so on, needs to add up to a system by which the creation of rules and the enforcement of rules is accountable. Right? Because I don't think any of us are advocating for a state of nature online in which life is basically nasty, brutish, and short for anyone who is a minority, a young person, or female.
So yes, if we want everybody's rights to be respected and protected, there do need to be rules. But just as offline, there are unaccountable and unjust rules, and there are unaccountable and unjust enforcement processes, and it is never perfect. But you strive toward accountability, transparency, and justice. We need to have the same thing in the online environment, and it is complicated, because these environments are run by the sovereigns of cyberspace rather than by governments in a lot of cases, and they interact with a patchwork of governments who all have different demands and different human rights standards, or different levels of respect for human rights standards.
So we really need to figure out how we can jointly work to create an online environment in which rules are set, and enforced, in a way that is accountable and transparent and just. And we are a long way from being there.
>> Thank you. To respond to two points: I think context is such a critical thing. The point about vulnerability is critical, and I hope our researchers are looking into what the literature says about context and how context impacts vulnerability.
I think, though, that in the same way that all children ultimately have to learn to cross the road by themselves and not be protected forever, everybody needs to develop a minimum level of competence to recognize attempts to shape their identity, and to become master of their own identity. I think that applies to vulnerable and stronger people alike.
Context is most important to the question of what you do when there is incitement to imminent violence, and I would refer you and others to the Rabat Plan of Action from the UN Office of the High Commissioner for Human Rights, because the plan of action is useful in giving a granular set of criteria for when speech may legitimately be limited; not every shout of fire in a crowded theater is going to lead to actual action. But the Rabat Plan of Action is a useful guide. Thank you.
>> INDRAJIT BANERJEE: Thank you. Let me request all of you give a big round of applause to our excellent panelists, and to yourselves, because ‑‑
(applause).
I think your presentations were also equally interesting and insightful. Thank you very much for being here with us. I guess it's time for lunch. Thank you for staying with us all through this time. I'm sure all of you, like myself at least, are famished, so thanks a lot and have a good IGF.