The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> DAVID WRIGHT: Okay. Let's have cozy conversations. Okay. Can you hear me? Yes. Okay. Excellent. Thank you. I can hear myself now. Good morning, or indeed probably good afternoon or good evening, to everyone that's joining us today.
This particular workshop is entitled Bridging the Gaps: AI and Ethics in Combating NCII Abuse. It is about non-consensual intimate image abuse, a subject that we're going to be exploring over the course of this panel.
I'm David Wright. I am CEO of a UK charity, SWGfL. My colleagues will explain some of the things that we do, and we will clearly cover some of those gaps. I'm joined on the panel by a number of very esteemed guests and panelists. I'm going to introduce them to you to start with. We've got a series of questions that we will be asking the panelists.
First, let me introduce Nighat Dad. Nighat is the founder of the Digital Rights Foundation, a member of the Meta Oversight Board, and part of the UN Secretary General's AI High-level Advisory Board.
If I next turn to Karuna Nain, joining us online: she has years of experience in Internet safety, policy, governance affairs, and communications. She consults with tech companies and non-profits on their strategy, policies, and technologies to make the Internet safer. Karuna worked on global safety policy at Facebook/Meta, where she spent nearly a decade working on child online safety and well-being, women's safety, and suicide prevention. At Meta she partnered with SWGfL to help victims of NCII abuse. She has also worked at India's first 24/7 news channel, New Delhi Television. She is a graduate of the University of Delhi. Welcome to Karuna.
Also joined by Deepali Liberhan. She's been with Meta for over a decade. She works on policies, tools, partnerships, and regulation across core safety issues.
Also joined by one of my colleagues, Sophie Mortimer, online from the UK, where it is rather early. Thank you, Sophie. Sophie is manager of the Revenge Porn Helpline and content service at SWGfL. She coordinates a team supporting adults in the UK who have had intimate images shared without consent and who face online abuse and harms. She works with NGOs around the world to support their understanding of StopNCII and the help it can give victims and survivors in their communities. They share learning and best practice to ensure that StopNCII is a tool that works for everyone, wherever they are.
Finally, if I turn to my far right, there's an expert in the field of online safety who serves as the head of engagements and partnerships at SWGfL, the UK-based charity: Boris Radanovic. He works with the UK Safer Internet Centre. Boris has worked across European countries, including at the Croatian Safer Internet Centre, and with government officials and NGOs in countries such as Belarus and Serbia. His focus is on protecting children from online threats and scams, as well as empowering professionals through workshops and keynote speeches.
His key contributions include leading online safety education efforts that emphasise evolving risks in the digital world, such as grooming and intimate image abuse, reflecting a commitment to helping prevent the non-consensual sharing of intimate images. Introductions complete. I'm going to invite all of the panelists to give us a couple of minutes of introduction, and then we've got a series of structured questions that we'll put to each of the panelists.
Then we'll open it up to everyone in the room here and also to those of you online. We will be having a really in-depth conversation about this, drawing on what you can now understand is a very esteemed panel on this particular subject. Nighat, over to you, just for a two-minute introduction. Thank you.
>> NIGHAT DAD: Many of us have been working on non-consensual intimate imagery, not only on the issue itself but on addressing it and looking into solutions.
Of course, the helpline in the UK and the Digital Rights Foundation in Pakistan collaborate on this together as well. In 2016, we started this. The idea was to address the online harms that young women and girls face in a country like Pakistan. There are so many cultural and contextual nuances that platforms are often unable to capture. That was the main reason why we started the helpline: not only to address the complaints of young women and girls in the country, but also to give a clear picture to the platforms of how they can look into their products, their reporting mechanisms, and the remedies they are providing to different users around the world. I think I'll just say one thing and stop there.
Over the years, we have seen that online harms, in this case tech-facilitated gender-based violence and non-consensual intimate imagery, have very different consequences in different jurisdictions around the world.
In many parts of the world, it limits itself to online spaces, but in some jurisdictions it turns into offline harm, especially against marginalised groups like young women and girls.
In the last couple of years, I think the very concerning thing is how easily AI tools are accessible to bad actors. They are making deepfakes and synthetic imagery of women, not just ordinary users but also women in public spaces. Even identifying those deepfakes is a challenge, and not only for people who have been working on the issue; look at the larger public, who have no idea how to verify this. They believe what they see online. This is the challenge that we are all facing at the moment. I'll stop here.
>> DAVID WRIGHT: Thank you very much. Yes, we'll get into the subject without any doubt. I'm next going to throw it to Karuna, who is joining us online. Just a couple of minutes of introduction. Thank you.
>> KARUNA NAIN: Thank you so much, Nighat, Boris, Sophie. It is great to be on the panel. I want to give a shout-out to you for organising the discussion on this topic. I don't think we've done enough, or had enough dialogue, on how the power of artificial intelligence can be used to prevent some of the distribution of intimate imagery, to deter perpetrators and stop them in their tracks, to educate and prevent this kind of harm from being perpetrated online, and lastly to support victims.
We've heard time and time again how absolutely debilitating it can be when images are going to be shared online, or when they have been and you've just come to know. There's so much we can do to support people in that moment, to give them resources, to let them know what tools they have in their hands. It is paralysing to know this is happening to you. Kudos to you for organising this important discussion.
I'm looking forward to hearing what comes out of the workshop and the kind of ideas generated that leverage the power of artificial intelligence against harm being perpetrated online.
>> DAVID WRIGHT: Thank you very much. You are very kind. Let's try to harness the power here, rather than necessarily just the challenges that we always see as well. Thank you very much. Next we're going to turn to Deepali.
>> DEEPALI LIBERHAN: Thank you.
That was accurately put, as usual. Our approach to safety over the years has been multipronged. We think about a couple of things when we're thinking about safety. Do we have the right policies in place, about what is okay and not okay to share on the platform? We have tools and features to give users choice and control over what they are seeing and to customise their experience.
And the third, most importantly, is that we have partnerships, where we have worked with experts over the years to be able to address some of the harms that we've been seeing. I just want to step back a little bit and talk about how StopNCII came to be. Meta heard loud and clear from experts and users that NCII is a huge issue, and Karuna was one of the people working on this. We were able to move beyond just addressing this issue at a company level on our platforms and address it at a cross-industry level. I think that's really a genuine place for industry and civil society to come together to address some of these harms in a very scalable way, on something as important as non-consensual intimate imagery.
We've also come together to try to understand, as has been said, the ways you can use technology to actually help victims, provide education, or provide resources. We do that currently on the platforms.
For example, if you look at something like NCII, or let me give you an example around suicide and self-injury: we're able to use proactive technology to identify people who have posted content which could contain suicidal content or content referring to eating disorders, to catch the content and send them resources as well as connect them with local helplines. That's such an important way that we can use technology to make sure that people who need the help are able to get it. At the same time, there are no quick solutions. It takes time to have these discussions and work together.
It is a combination of technology and the advice of experts who are actually working on this issue to come up with solutions, both to prevent the harm, to address the harm, and to provide resources and support to victims. I know I took the long way to this. I just wanted to provide some context.
>> DAVID WRIGHT: Thank you very much. Next I'm going to throw it, Sophie, to you in terms of a two‑minute introduction. Thank you.
>> SOPHIE MORTIMER: Thank you, everyone.
I think we need to provide support with caution. There are gains to be made in providing support and addressing content at scale and speed. However, it is also important to remember that victims and survivors can be abused with these tools. They may not want to engage with them while seeking support; trust is understandably degraded.
In fact, we have previously looked at developing an AI support tool, but the risks were not outweighed by the benefits, not at this time. We couldn't be sure that the technology could adequately safeguard people in their time of need. I hope this will change; there's huge potential here, and we can revisit the concept. It is imperative that we have trust in the security of such a tool and that it prioritises the safety and well-being of users.
>> DAVID WRIGHT: Thank you, Sophie. Boris.
>> BORIS RADANOVIC: Thank you. Thank you for organising it.
It is morning. If I'm going to call for anything in my introduction, it is that we all, especially the policy and governance sector, need to wake up to the benefits and potential threats of AI. If we have learned anything in the last couple of decades of online safety and the protection of children and adults, it is that modalities of harm are changing rather rapidly.
Speaking about the application or the benefits of AI, we are missing something, and I'm really glad that this is on the last day of IGF; I hope this conversation will continue. We are missing governance, structures, and frameworks coming from, and being supported by, yes, the industry, yes, the NGOs, and nation states across the world. If I can jump off a point, we need a broader conversation about this: understanding, yes, the potential threats, but also emphasising the benefits and how AI can be utilised to better protect people and align with policy.
I would agree with my dear colleague Sophie from the Revenge Porn Helpline: currently the threats do outweigh the benefits. We need to make sure we are advocating for the proper use of tools such as StopNCII.org and other ways of stopping these problems by AI and with AI, or at least with its support; that is going to be imperative. The possible support coming out of the technological capabilities of AI is tremendous. We need to rein that in and understand it much, much better than we do now.
>> DAVID WRIGHT: Okay. Boris, thank you.
We are also joined, from a moderation perspective, by our colleague Niels, who is managing the online aspects of this. So for those of you joining us online, if you are asking any questions, Niels will articulate those at some stage; you will need a microphone, though. We have one. Okay. By way of diving into this particular issue, you've heard a brief introduction.
In terms of specific questions, as we get down into the aspects of AI, particularly in the context of non-consensual intimate image abuse, I'm first going to turn to Nighat. Nighat is the founder of the Digital Rights Foundation and a member of the Meta Oversight Board. Your advocacy places you at the forefront of this debate. How should AI systems for NCII detection be adapted ethically to fit varying legal and cultural contexts?
>> NIGHAT DAD: I think Sophie touched on that. Yes, we can use AI systems to our benefit as well; I mean harness them in terms of giving speedy remedies to the victims and survivors of tech-facilitated gender-based violence.
At the same time, I think in our context we have to be extra careful and cautious. AI systems need to account for cultural nuances. We know that current models are trained on English and other Western contexts and languages. I'm hopeful that the conversations we are having will lead to a new generation of AI that better understands cultural and linguistic nuances. Sitting on the UN Secretary General's AI High-level Advisory Board, we've had these conversations over the last year. We brought the global majority perspective from different angles, so that conversations around AI are not only happening in the Global North.
In some Global North countries, the global majority countries are not really part of the conversations. Until and unless you are part of the conversations, you don't know how to address different issues while using AI technologies, or how to be aware of the threats and risks of these technologies. These conversations are happening in different spaces, and I'm glad that we are talking about this as different helplines and as those who are addressing NCII. I think it is also important that we understand we can't solely rely on AI to combat NCII. Social media platforms still need to commit to human moderators and human review. They need to create easy pathways for users to escalate this content when automation misses it.
So those are the three things that come to my mind: broader training for AI, continued human oversight, and user-friendly reporting mechanisms. I would also like to see transparency and constant audits of AI, so we can see how well the automated content systems are performing. And that transparency should be granted to civil society,
so that there are opportunities for third-party reviews of how these models perform. I would just like to plug the white paper that we released from the Oversight Board, which is around content moderation in the era of AI. It draws on our own experiences over the last four years, delving into cases that we have decided and looking into so many cases related to gender and tech-facilitated gender-based violence that users have faced on Meta's platforms. We looked into the tools, we looked into Meta's guidelines and policies, and we gave them really good recommendations.
The white paper is not just for Meta's platforms; it is for all platforms that are using AI to combat harassment. There are so many recommendations that we have given. One of them is basically constant audits of the AI tools that platforms are using, and giving access to third parties, like researchers, in terms of the kind of feedback they can give to Meta. Meta has leverage here: it has a very good initiative, the Trusted Partners initiative, and it can use that ecosystem to get feedback and also to provide support to those who are already addressing gender-based violence.
>> DAVID WRIGHT: Thank you very much, Nighat. Some really great points there. I'm really struck, too, by the point about Westernised data being so extensive in training models; that is a really good point.
Also, I want to recognise the global leadership that you provide in this space and have done for so many years. It is great to have you here, and what an opportunity for everybody to ask questions. Thank you very much. Okay. Just as I sort myself out.
Next, Karuna, I'm going to turn to you with a question. You are a trustee of ours, a very important trustee of ours, thank you very much, and obviously a key advocate for StopNCII, having been the one with the original idea and certainly a great force behind StopNCII.org. What role do you see for AI in protection efforts? What is essential to support victims without compromising user autonomy? Karuna?
>> KARUNA NAIN: Thank you, David.
It is a two-part question, and both are really, really important questions. One thing, just following up from what Nighat was saying: I think there's not been enough transparency from the tech industry, unfortunately, as to how they are leveraging AI. They are not sharing enough of how they are using AI to get ahead of the sharing of intimate images on their platforms. They talked about how they were able to leverage the power of AI in one way, but I'm not sure they back that up with data.
There's also work where victims are not even aware of images being shared: platforms are using AI in these spaces to proactively identify whether an image or video has potentially been non-consensually shared, bump it up to reviewers, and take it down if it is NCII. That's a really great example of how the technology can be used to get ahead of the harm.
Many times the onus and burden is on victims, and reporting and trying to check whether content has been shared online is painful. I talked about this in my opening statement as well. There are ways, particularly, that companies could be leveraging the power of AI to get ahead of the harm, on prevention, if there are signals that we have on the platforms.
If someone has, for example, updated their relationship status to say that they've recently been through a breakup, or expressed some kind of trauma, which could mean they have intimate images at risk, they might want to send them through StopNCII and nip the harm in the bud. If someone is trying to upload such content, can we stop them and tell them it is illegal in many countries? Can we stop them in their tracks and not allow it to be shared in the first place? Then there's supporting victims.
If someone is searching for NCII-related resources on a search engine or platform, could you surface something to them at that point to tell them that services or support options exist? Helplines exist, and many victims don't know; this is the first time they are hearing of this abuse, when they are experiencing it. All three of you, Sophie, Nighat, and Boris, raised very important points about thinking through some of the risks and loopholes.
There are a few things I would love to list: things that we learned when we were building StopNCII.org and working with Sophie and other helplines around the world, about what organisations need to keep in mind when they are building out technologies to support victims. One: keep victims at the centre of the design. You are giving them agency, empowering them, and not making any decisions on their behalf.
Two: no shaming or victim blaming. They are under enough pressure and enough stress. It is not their mistake that intimate images are being shared; that's on the perpetrator. Trusting someone is not a bad thing; it is the perpetrator who has broken that trust, and they are the ones who need to feel ashamed, not the person in the intimate images. Three: I talked about bias, making sure that any technology that is developed takes different contexts into account. I'm not sure that AI is at that stage right now; it needs more training and more support to be able to identify this content with one hundred percent accuracy. Recognising the biases is an important part of it.
Also, accountability and transparency: if companies are using these technologies, and I'm hoping they are, and if non-profits are thinking about how they can use AI in this space, they should be transparent and accountable, and there should be ways for people to report. Nighat talked about how important reporting still is, even in these scenarios. Giving people the ability to reach out to the service or platform is really important.
Of course, I keep harping on prevention. If there are ways this technology can be used to prevent the harm, I think a lot more work should be done there; supporting people once the harm has happened is great, but prevention comes first. There's a lot that I've thrown out there.
>> DAVID WRIGHT: Thank you very much, Karuna. Perhaps we've made an assumption: I've not really introduced StopNCII.org. Can I ask you to do that, just to explain briefly what it is?
>> KARUNA NAIN: Please step in, it is your baby and I'm talking about it. The goal is to support people to stop the abuse in its tracks. The way it works is: if you have intimate images which you are worried will be used without your consent on any of the participating platforms, you use the StopNCII.org platform to create hashes, or digital fingerprints, of those images, and it shares those hashes with the participating platforms.
If anyone tries to upload that photo or video on the participating platforms, they can get an early signal. They can send it to their reviewers or use their technology to determine whether it violates their policies or not, and stop that content from being shared on their services. It is very much a prevention tool: if you are worried it is going to be shared, you can use it to stop that abuse in its tracks. I don't know if I missed anything, if you want to add anything on to that.
>> SOPHIE MORTIMER: The hashes are created on somebody's own device. They don't have to send the images to anyone. That's empowering and a huge step forward in the use of technology. It puts victims and survivors right at the heart of the process.
>> KARUNA NAIN: It is privacy-preserving in the way that the website has been built: just taking the hashes from the victims, minimal data is asked of them. We know this is a harrowing experience, and we don't want to stop them from using the service in any way. I think that's also very important as we are talking about the ethics of building any of this AI technology.
Make sure whatever data is collected is minimal and proportionate to what is needed to run the service; don't use the data for anything other than what you told people you were collecting it for, and don't use it for anything without their consent. It is really, really important. Privacy and data protection should be at the centre of the design of any AI technology that's built in this space.
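To make the hashing workflow described above concrete, here is a minimal illustrative sketch, not the actual StopNCII.org implementation: the real service runs purpose-built perceptual hashing client-side, whereas this toy example uses an ordinary cryptographic hash and a hypothetical file name simply to show the privacy property that only a fingerprint, never the image itself, leaves the device.

```python
# Illustrative sketch only. StopNCII.org uses perceptual hashing run on the user's
# device; SHA-256 here is just a stand-in to demonstrate that images stay local.
import hashlib
from pathlib import Path

def hash_image_locally(image_path: str) -> str:
    """Compute a digital fingerprint of an image on the user's own device."""
    data = Path(image_path).read_bytes()      # image bytes are read locally only
    return hashlib.sha256(data).hexdigest()   # stand-in for a perceptual hash

def build_submission(image_paths: list[str]) -> dict:
    """Build the payload a user would share with participating platforms."""
    # Only hashes are included; the images themselves never leave the device.
    return {"hashes": [hash_image_locally(p) for p in image_paths]}

if __name__ == "__main__":
    demo = Path("example_private_photo.jpg")  # hypothetical file for demonstration
    demo.write_bytes(b"placeholder image bytes")
    print(build_submission([str(demo)]))
```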
>> DAVID WRIGHT: Thank you both. Amazing explanation from the two people leading this. Thank you.
Next we're going to come to Deepali, Director of Safety and Policy at Meta. With your expertise on safety, can you talk about how Meta is thinking about the responsible development of AI? Can you give examples of how Meta is thinking about safety and AI, and the challenges ahead?
>> DEEPALI LIBERHAN: Thanks, David. Everyone has done a great job. I've been with Meta for about a decade, around the time Karuna joined as well. When we talk about safety, we talk about our community standards, which are essentially rules that say clearly what is okay and what is not okay to post on the platforms, including NCII. In the early days we would encourage user reporting; we were dependent on that signal to find violating content, relied more on human review, and had that content reviewed in order to take action. Now we take down the majority of that content proactively, because we know a lot of people will see content and not report it.
Maybe they feel that their peers will reject them for reporting. Proactive detection doesn't remove the need for human reviewers; it makes their work easier and lets us act at scale. As we publish in our community standards enforcement reports, we're able to remove a majority of the content that violates the community standards before it is reported to us. We're also trying to work on understanding how our large language models can help us, and the two ways where we think there's going to be an impact are speed and accuracy.
Will they be able to help us identify this content even faster? And if we're able to identify this content faster, what is the accuracy with which we can take action in an automated way? That also reduces the time that human reviewers need to spend, reserving them for the really important cases, versus the cases where there's clearly very high confidence that the content is violating and it can be taken down. To answer the question in a shorter version, I think there's a lot more scope for the technology, but there continues to be an important role for a combination of automated technology and human review, so that we are taking the right and appropriate actions.
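As a hedged illustration of the confidence-based triage Deepali describes, the sketch below is not Meta's actual pipeline; the thresholds, names, and structure are assumptions made purely to show how high-confidence detections could be actioned automatically while borderline cases go to human reviewers.

```python
# Minimal sketch (not Meta's actual system): routing flagged content by classifier
# confidence so automation handles clear-cut cases and humans handle the hard ones.
from dataclasses import dataclass

AUTO_ACTION_THRESHOLD = 0.98   # assumed value, for illustration only
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed value, for illustration only

@dataclass
class Flag:
    content_id: str
    violation_score: float  # output of an automated classifier, 0.0 to 1.0

def triage(flag: Flag) -> str:
    """Decide what happens to a piece of flagged content."""
    if flag.violation_score >= AUTO_ACTION_THRESHOLD:
        return "remove_automatically"    # very high confidence: act at speed and scale
    if flag.violation_score >= HUMAN_REVIEW_THRESHOLD:
        return "queue_for_human_review"  # reviewers focus on the genuinely hard cases
    return "no_action"

if __name__ == "__main__":
    for flag in [Flag("a", 0.99), Flag("b", 0.75), Flag("c", 0.10)]:
        print(flag.content_id, triage(flag))
```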
Moving to responsible AI, David: Meta has an open-source approach to its large language models, so it's been really important that we have a thoughtful and responsible way of thinking about how we develop AI within the company. I'm going to talk about a couple of the pillars that we consider when we are talking about GenAI. I'm not going to go through all of the pillars, as that would take a lot more time; I'll go through a couple of them.
The first is robustness and safety. It is really important that we do two things when using large language models. The first is stress testing the models. We have teams internally and externally working with the models, stress testing them, essentially making sure that experts are probing the models to find vulnerabilities. We have people stress testing the large language models, and we also open it up to the larger public to be able to stress test.
For example, at one conference over 2,500 hackers were brought in to stress test and inform the development of the models. The second thing is fine tuning the models, essentially so they are able to give more specific responses. To give an example: currently, on Facebook and Instagram, if somebody posts content that suggests mental health issues, and either somebody reports it or we're able to proactively find the content, we're able to send resources to the person and connect them to a helpline. If you are sitting in the UK, you get connected to a UK organisation. This is because, fundamentally, we believe that we're not the experts in providing this kind of informed support, to the point that Sophie made. We don't want the technology itself to provide that support; what we want to do is make sure that we are making the right tools available so that young people can access the right resources.
Coming back to fine tuning our AI models: they are fine-tuned with expert resources. If somebody talks about suicide or self-injury, the response should not be that the AI itself provides guidance; the response should be to surface a list of expert organisations in that particular area. I know I'm repeating myself, but this is a really important way in which we can use these technologies to provide the level of support that we have been able to provide on our other platforms like Facebook and Instagram.
The third thing that I want to talk about, when we're talking about safety and processes, is that a lot of people don't really understand AI, AI tools, and AI features. We're also working with experts to try to build that understanding, for example with resources where parents get tips on how to talk to young people about GenAI, et cetera.
These are just a couple of things, at a high-level overview, that we think about when we think about building AI responsibly. I want to quickly mention the other pillars without going into too much detail. Beyond safety, it is about making sure there's privacy, so we have a robust privacy review, and about transparency and control, as everybody on the panel has said. It is really important to be transparent about what you are doing with your GenAI tools and products. We are working cross-industry to develop standards to identify AI-generated content and make sure that users understand whether content is or is not generated by AI.
The other pillars are good governance, which ties into the transparency we talked about, and fairness. Fairness is really important: it is about ensuring that there is diversity and that the technologies are inclusive. We know that access to these technologies is still an issue.
That is our overall approach to responsible AI. Let me give you one example; I know I've talked about processes in safety. In terms of fairness and inclusivity, we have a model that is able to translate English into over 200 languages, including some lesser-resourced languages. I say this because, in the safety space, a lot of the material that we develop and a lot of the expertise that exists is in English. This is another example, not particular to NCII.
Overall, in the trust and safety space, we can use a lot of these products and tools that have been developed to further enhance safety in the languages that people understand, not just English or Western languages. A key area, for example, is using that translation strength to bring safety content into local languages. I think there are two things: there's a lot more work to be done, and there's a great role for collaboration on both.
How we prevent this, how we address it, and how we collaborate better in supporting the people who are dealing with these issues, in a better way than we've currently been able to. The last thing, because we get asked this a lot: we have community standards which make very clear what kind of content is not allowed on the platform. Irrespective of whether it is organic content or has been developed by GenAI, we will remove that content, and we've updated our community standards to make that very clear.
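As a hedged illustration of the kind of translation capability described above, the sketch below uses the openly released NLLB-200 model through the Hugging Face transformers library; the model checkpoint, language codes, and example text are assumptions chosen for demonstration and do not describe Meta's production setup.

```python
# Hedged sketch: translating a short safety message into a local language with an
# open multilingual translation model. Checkpoint and language codes are illustrative.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",  # assumed open NLLB-200 checkpoint
    src_lang="eng_Latn",                       # source: English (Latin script)
    tgt_lang="urd_Arab",                       # target: Urdu, as one example language
)

safety_text = (
    "If intimate images of you have been shared without your consent, "
    "support services and reporting tools such as StopNCII.org can help."
)
result = translator(safety_text, max_length=200)
print(result[0]["translation_text"])
```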
>> DAVID WRIGHT: Thank you very much. It is great, as well, to hear about the use of those tools; I'm particularly interested in the translation into different languages, which we probably all know is a real challenge. I know from a StopNCII perspective we struggle with that, trying to make the tool and the support as accessible as we possibly can. Thank you for that, and also for Meta's help and support with StopNCII too.
Next, Sophie, I'm going to come to you. We've heard about your work with the Revenge Porn Helpline; we know that particularly well. The question posed is: what ethical dilemmas have you observed with technology used to address NCII abuse, regarding privacy and consent? How do you think AI systems should be designed to respect these sensitive boundaries? Sophie?
>> SOPHIE MORTIMER: Thank you, David.
It is a crucial question. The development of the technology is moving at pace, and I think we could all get quite carried away with what we can achieve with these technologies. It is so important that we put the victim and survivor experience at the centre of them. I could probably talk for quite a while on this, so I'll try to keep it a bit tighter. Crucially, protecting the privacy of victims, who are in a moment of absolute crisis, is really, really key. We can use AI tools to help identify and remove non-consensual content, but that requires access to people's very sensitive images and data.
That can be a huge concern to individuals who might fear giving access to technology, technology that has participated in their abuse. They can fear data breaches or a lack of transparency about how their information is being stored and processed. There's a real dilemma there in balancing the need for intervention with the protection of victims, the preservation of their privacy, and stopping future harm. We can use AI technologies to track the use of someone's images, and this could be enormous; I think Deepali referenced this in terms of using technology to handle the scale and speed at which content can move across platforms. But that brings more complexity: the methods for tracking content can raise concerns for victims around surveillance.
There is a risk of creating systems that monitor individuals more broadly than was intended. How will the images and the data be used in a way that won't impact people's privacy and autonomy? And the use of people's data is always very, very concerning; it is very sensitive, personal information being used to address this harm. We know these systems don't always respond well to people of different cultural or religious backgrounds, which raises the risk of false positives and false negatives.
One area that's often referred to is deepfakes, and the idea that if we can identify something as fake, the harm is lessened. I think the evidence shows that isn't the case. Just labelling something as fake can undermine the experience of individuals. There's a real loss of bodily autonomy and self-worth, and a broader psychological impact. Certainly AI can help with evidence collection and privacy, but the access to and understanding of how these technologies work varies around the world. We can't assume consent.
It is really important that any consent given is truly informed, and we have a lot of work to do there to ensure that. As technology moves forward, perpetrators move forward as well. For all of the safeguards we put in, we have to be aware that perpetrators are working hard to circumvent them, so we need to be flexible in our thinking. The priority for me is keeping the human element. Humans understand humans; they can hopefully foresee some of these issues and find ways to combat them.
But also, to put humanity at the heart of our response to individuals who are humans themselves, to state the obvious, and who don't want to be supported entirely by technology. They want access to humans and to that human understanding.
>> DAVID WRIGHT: Sophie, thank you very much. Perhaps this is a point to talk about the term victim being used. I know we've had this conversation. There's been criticism that we shouldn't be using this terminology, that we shouldn't be using the word victim, that it should really be survivor. We know, particularly for the Revenge Porn Helpline, that we very much do support victims; our job is to make them survivors.
Clearly, anybody is entitled to refer to themselves however they see fit. Whilst the job is to make victims into survivors of particularly tragic circumstances, we are not always successful, which just goes to highlight the enormous and, in many cases, catastrophic impact the abuse has on individuals' lives. I don't know if there's anything that you want to add there.
>> SOPHIE MORTIMER: I think you are right. We tend to take a neutral position when speaking to people; it is not for us as a helpline to apply the label. In practice, we reflect back to people what they say, and I agree: the majority of people coming to us are very much identifying themselves as victims, because we are usually there for them quite early in the journey. It is absolutely our aim to make them a survivor, in the hope they can leave all of the labels behind and put this totally behind them.
>> DAVID WRIGHT: Thank you, Sophie. Nighat?
>> NIGHAT DAD: No, I think this is very interesting. On our helpline too, when we address folks who reach out to us, we are very careful: what do we call them? We leave it to them what they want to call themselves. Many times we call them survivors rather than victims, because they are reaching out and fighting the system. But some say: I haven't received any remedy, I'm still a victim. This is an interesting conversation. It should be entirely up to the person who is facing all of this to call themselves whatever they want. Some say: I'm not that resilient, don't call me survivor, I don't have that much energy to fight back against the platforms or the ecosystem I am dealing with.
>> DAVID WRIGHT: Sophie, I don't know if you want to react to that. I'm thinking about when we are approached by the media and they want to speak to somebody that we've supported; we have a policy on that because of the acute vulnerabilities that the individuals have.
Sophie?
>> SOPHIE MORTIMER: I completely agree with Nighat. It is not our place to apply the label. Certainly, the majority of people who come to us would describe themselves as a victim.
In fact, I'm not sure I can recall anyone who self-identified, without any prompting, as a survivor. That is not how people are feeling in that moment and in that space. The harm feels so out of people's control, because what has happened is out there on the platforms, and we all know that images can move fast. The fear and the loss of control are the overwhelming feelings people have when they come to services like ours. That doesn't make anyone feel like a survivor at the time. That's why we use neutral language in the first instance and reflect back what somebody says to us and how they are feeling. I hope that we provide that reassurance.
When they hang up the phone, they will be feeling better than they did when they picked it up.
>> DAVID WRIGHT: Thank you, Sophie, and also for all of the hard work in the background as well. Okay, finally, I'm going to turn to Boris. I say finally; after Boris has given us his contribution, we will open the floor for your questions, either in reaction to anything that you have heard or on aspects that we haven't covered, both in the room and online. Boris heads partnerships and engagement.
Given your extensive work in online safety, how do you see AI evolving? What ethical frameworks are necessary to avoid potential harm while supporting victims or survivors, whichever term we deem fit? Boris?
>> BORIS RADANOVIC: Thank you very much for that. I just want to say I'm honoured and proud to sit amongst heroes in this space. A quote came to my mind; agree or disagree with me.
Specifically on AI, we know a little bit about everything and a lot about nothing. Even if we understand the complexity, I don't think we understand the possible power of AI, and trying to unpack it might look different for different stakeholders. From the industry perspective, I love the stress testing and using hackers and all of that. But, as we had a conversation about users and victims and children, I would also advocate for the people we maybe don't fully grasp to be among the first movers who test out or stress test those AI models, so we can see a different way of thinking.
Talking about that, I think we need to go back to foundations. The current models may or may not, and in some cases do, consist of data sets containing abusive material. Let's clean up the fundamentals of the tools. You can use hashes to help you clear out known instances of image abuse, but we must go further. We have listened to contributions from every speaker here, and I hope somebody is listening to me and will prove me right: we're missing a global force focused on safeguarding, and for the countries facing these problems we are definitely willing to support. Let me come back to the question about detection and intervention. Those are two important pieces of a much larger picture. We can talk about behaviours on the perpetrator side, and we need to utilise AI tools to help us mitigate some of these issues.
We also need to talk about how we engage with perpetrators after we have detected this. How do we guide people to the right course of action, and what are the consequences of repeated offences, which we know are happening on the platforms, or of individuals taking part in so-called collector culture? The question is: okay, we use AI to detect this, what are we going to do next? On intervention, we are using an innovative approach, using AI tools to help us manage the number of reports through a chatbot function that allows us to provide support without relying on human support alone.
When we talk about intervention, it needs to be user-specific and mental-health informed. I think the question was about frameworks. We are missing a lot, as I said in the introduction: governance, frameworks, and structures. The ethical framework needs to be user-focused and user-centric, and most definitely victim- or survivor-informed. Then there is balancing the threat of having access to the most sensitive pieces of data that exist, records of your own or others' abuse: how it unfolds, to whom, and where. That is extremely sensitive data that we might learn from, research, and use to mitigate risk in the future. I'm not trying to say this is an easy thing to do. I'm saying that we should start combating this now, before we end up in a much, much more difficult space to untangle.
If I'm looking at what we need to do: we have 12 of the biggest platforms participating; we need hundreds and thousands of platforms dedicated to this and advocating for solutions in this space. We need far more investment in NGOs and research across the world. We are at the forefront; we're non-governmental, small, and agile. We are meant to be at the forefront.
As we know, behind every arrowhead there's a long, long shaft that pushes it forward. Absolutely, we need more transparency. Please agree or disagree with me: the first movers and the companies that we see in the AI space, and correct me if I'm wrong, seem to me more interested in safeguarding their intellectual property and finances than in protecting and safeguarding the user. That's a big question. If AI becomes part of every part of daily life, what do we value more? I think many of us sitting here and many of us listening would advocate for privacy and protections, first and foremost.
Then we can build upon the tools. Maybe, to end, I was trying to find a picture that helps me better understand the extremely rapid rise and development of AI. I don't know if you have seen the first films and pictures of the Wright Brothers and their planes: after a couple of metres, it crashed. They spent months getting to metres and then hundreds of metres. We evolved something rather slowly into the plane, something that brought us all here. With AI, we are moving at a light-speed pace of development: we have no idea who is flying, and we have no idea how we're going to land. I fully advocate that we need to fix the foundations and invest more in cleaning the data sets.
Invest in the NGOs around the world battling these issues and trying to find solutions, and help us all understand and use AI better, so hopefully we can land safely and find a better and more powerful use for the benefit of us all. That would be it. Thank you so much.
>> DAVID WRIGHT: Thank you, Boris. That's a point to land and finish on, forgive me. If anyone does have any ideas about how this is going to land, we would very much like to hear them. Okay. Now we're going to turn it over to you, for any particular questions that anyone has. Niels, have we got any questions?
>> NIELS VAN PAMEL: Not yet, but I have a question myself. I'm Niels from Child Focus, which is the Belgian Safer Internet Centre. I definitely agree; almost everything has been said here. I agree with the comment from Sophie that with deepfakes right now, maybe we are focusing too much on showing that something is fake. It doesn't matter for the victim. Take, for example, somebody who is a victim of deepnuding, with fake naked pictures that everybody believes to be real. We did a study last year on deepnuding, looking first of all at what the market looks like, what is happening, and how this is exploding in our faces. We've seen that the impact for the victim is exactly the same as for victims of real NCII, right?
First of all, we need to debunk some myths. I think it was Boris who said that we have to take into account how fast things are changing and moving right now, and be careful about jumping to conclusions. To give an example from the study: we noticed, and it is a study from 2023, that 99% of all victims of deepnuding were women and girls. This year, 50% of the cases that we opened at Child Focus were men who were victims. What we concluded in the early days, in 2023, was that most of the victims were girls because the data sets that were used only worked on girls and women. But right now we are in a world where, in sextortion, perpetrators are also using AI much more on their side, and the technology also works with voice, how to say this, making deepnudes with a voice. If we don't do more research, we might overlook these victims. We need to follow up and do more academic research. That was my comment.
>> NIGHAT DAD: Can I just respond to the point about more men being victims? We thought it was women and girls; in 2015, we started the helpline only for women. But men were reaching out to us from a context and culture where shame is associated with anyone, man or woman, and what we noticed is that young men had nowhere to turn to. There was support for women, psychological support, and our cyber harassment helpline was the unique one; there were other helplines for women, but none for men. So we ended up dealing with their complaints too. Another thing we noticed is that young boys and men were hesitant to go to law enforcement as well.
Again, the culture of shame associated with it.
But also, I think this is more related to privacy. They were really scared, like women, to give their evidence to law enforcement: how will they handle and protect my data when I give it to them as evidence and they work on my case? What they wanted, basically, was to report to the platform first. The first line of reporting was always the helpline and the platform instead of law enforcement. It touches upon the fact that this goes beyond any gender or sex; it is embedded everywhere. Especially in conservative cultures, women still find some space to talk to each other; young boys just suffer in silence.
>> DAVID WRIGHT: You said boys are scared to go to law enforcement with evidence. I guess that's where on-device hashing comes in.
>> BORIS RADANOVIC: Wonderful, thank you so much. Niels, thank you for the comment. It proves the modalities are changing rapidly, and our job is to follow them. I love that way of describing deepfakes: the image may not be real, but the harm is. We need to understand this in a fast-evolving AI visual space, where more and more AI tools are being developed that, from a single prompt, will generate a couple of minutes of video for you.
Unfortunately that use case will extend, and across a far wider region, so that we have fake or digitally altered imagery and now videos that might or might not seem real; but the harm is real. We don't need another reason, and we don't need more evidence: we know the harm perpetrated against victims and users will be real. Thank you so much for that comment.
>> DAVID WRIGHT: Okay, just to carry on the theme. Sophie, any response to that? I'm particularly interested in the increasing call volume and, as Nighat said, the changes in terms of gender.
>> SOPHIE MORTIMER: Yes, it was interesting what Nighat was saying. We've always had a substantial proportion of cases with male victims. AI is now used so that the person a victim and survivor thinks they are talking to can be AI-generated. That, of course, just ramps up the scale of these forms of abuse and, in practice, crimes. The other thing that struck me is that I looked at some cases earlier in the year, and what surprised me slightly at the time was a number of cases that came from women from specific cultural and religious communities in the UK.
They had content created that wasn't necessarily within the definitions of intimate content, but that presented them in situations that would have been very, very harmful to them in their own communities. I know it is always the staple example that we refer to, of a woman pictured without a head scarf, but that is the reality that people are experiencing, and it can cause enormous harm. I think we need to be aware that there are broader definitions of intimacy globally, and we need to be very nuanced in our responses, and aware of how these technologies can be used to cause other forms of harm as well. There are huge challenges here.
>> DAVID WRIGHT: A tenfold increase in case volume in the last four years?
>> SOPHIE MORTIMER: That's about right. Case numbers continue to rise; in the last four years, they have risen exponentially.
>> DAVID WRIGHT: Thank you very much.
>> KARUNA NAIN: David, I don't know if you can see me. I just wanted to check in with Sophie and Nighat on what they are seeing on their helplines. The initial research I've seen suggests it is usually more financially motivated when it relates to men and boys, while with women there are other motivations at play. Is this consistent with what you are seeing on your helplines, or what are you hearing from people who are calling in?
>> NIGHAT DAD: Yeah, I want to respond to that, Karuna. It is changing: for men who are public figures, it is one way of intimidating them into silence, basically. The tactics of the bad actors are also changing.
>> DEEPALI LIBERHAN: Just a point on the role that companies can play in disseminating a lot of education as well as resources; that's really important as well. I know a lot of people mentioned sextortion. We worked to develop messaging that is important for young people, young women and young men, so that they do hear it. That's something where more of us could work in collaboration; at the moment everybody is doing things in isolation, and I think there's really room for collaboration in those spaces.
>> DAVID WRIGHT: Thank you very much. I'm going to ‑‑ okay. Looks like we have a question.
>> AUDIENCE: Thank you. I'm Edmond, and I work in the European region. Before my question, I want to thank all of the panellists for the valuable insights and thoughts. I want to talk about accountability and how we can promote it. The perpetrators on the platforms, for example, do not commit one crime and leave; they will later be posting or using content from someone else, against someone else. Is there anything that those companies do with regard to holding them accountable?
The second question is this: I know there's always a line when we talk about collaboration with courts and judicial authorities, handing over evidence and materials that will be removed, which would help with access to justice. A lot of the time when women seek assistance, some of them want the content stopped or removed; others want justice, they want the perpetrators to be held accountable. Thank you.
>> DAVID WRIGHT: That's a great question. Thank you very much. Panel?
>> BORIS RADANOVIC: It is regional for the technology.
>> DAVID WRIGHT: Okay. One more question from the field.
>> AUDIENCE: I'm a researcher in Germany. I report hate speech and abusive content on a weekly basis, not just for work but on a personal level. The problem is that there have been three possible outcomes.
Only one request of mine was accepted by Meta. The second situation is that there was no response at all, and there was no way for me to challenge or continue my request or send any follow-up. And the third possibility was that my request was not accepted. In the case of wanting to follow up on my own request or challenge Meta's decision, what would you suggest I do? I also want to ask what Meta's take is on punishing the perpetrators behind the images, because I know that so far the highest punishment is to deactivate or delete the account. And my question to the woman from the helpline, I'm sorry, I don't remember your name: in your experience, were there any women or gender-diverse people who complained about sexual abuse? Because I'm also doing research on online gender-based violence.
In my own research, there are a lot of gender-diverse people who face these issues. Also, how would you reach out to people who don't really understand the issues and have no hope of addressing them? Thank you.
>> DAVID WRIGHT: Okay. There's a lot of commonality between the two questions, and a relevant point, particularly the question to do with prosecution. The first question: does anyone wish to respond?
>> DEEPALI LIBERHAN: I can respond from Meta's perspective. We work with law enforcement agencies across the globe. When we get valid and legal requests, we will respond with the data that's required to prosecute; the prosecution itself is the job of the prosecutors. We also disclose, in the transparency reports that we publish, the number of data requests that we've received from authorities and how many of those requests we've complied with. We also have teams within Meta who work directly with law enforcement authorities on these crimes, so there is someone in case they need a point of contact. We have less visibility in terms of prosecution. On the issue of child sexual abuse, we are required to report as a U.S. organisation, and we work with law enforcement to make sure that information is available in the right manner. But we don't really have visibility on how the data is then used to prosecute.
That's an important part of the chain that's missing. One of the things we talk about is that it is a whole chain. Somebody asked what we do in addition to deplatforming. I think that all stakeholders have a role: we can remove the content, we can deplatform, and there needs to be transparency. In many contexts, a lot of these crimes may be reported and not necessarily prosecuted.
That is for a number of reasons, including lack of capacity, lack of understanding, lack of resources, or just the inability to prosecute.
>> NIGHAT DAD: Yeah. Responding to the researcher: not only as a helpline, but also sitting on the Oversight Board, we investigated cases of deepfake images, one from India and one from the U.S.
We actually recommended so many things to Meta around the gaps that we saw. One thing that was clear to us is that Meta's platforms need to create pathways for users to easily report this type of content, and they must act quickly as well. It shouldn't matter whether the victim is a celebrity or a regular person. What we noticed is that the cases we picked up were of celebrities, public people, public persons.
It was when their content went viral that we took up the case. But what exactly the mechanism is at Meta for giving importance to every user's report is a matter of concern. I would also say that what we do as a helpline is raise awareness a lot in different institutions, schools, and colleges, and try to work with the government, although it is not their priority, just to let people know that these kinds of crimes exist and that there are remedies to reach out to. And I think you raised a point around repeat offenders. That is also a point of concern for us: they find a way back, and what platforms do with repeat offenders matters.
>> DAVID WRIGHT: Thank you. Also, to come to Carissa's question: Sophie, I anticipate that you may have a response to this one as well. The question was around whether the existing legal frameworks hold any weight for NCII. I suspect you have a response.
>> SOPHIE MORTIMER: Thank you, David. I'll try not to take too long. Part of it is the evidence. In the legislation we've had around non-consensual sharing of images, the collection of evidence presents challenges: there's no support and there's no consistency. On evidence, we have provided statements to the police just to establish what we've done: facts, dates, the links that we have removed. There's a bit of work to do around that, because it is a massive barrier to people coming forward. We could do some work around what should be accepted by courts, so that individuals don't have to view the content themselves. That would be quite a supportive measure to get people coming forward and supporting prosecutions.
In terms of legal frameworks, ours is nearly ten years old. It wasn't great legislation to start with, but the government responded fairly quickly and it became much more comprehensive. It now focuses on the person affected rather than the intentions or motivations of perpetrators, and that's quite a powerful step forward; that motivation requirement is still common in legislation around the world, so there's more to do. In terms of status, we are campaigning in the UK for non-consensual intimate images to be classified as illegal content and treated in the same way, to give us the same powers to remove it that we have for other illegal content; at the moment, we can't.
There are multiple non-compliant sites whose business model is based on this sharing. They don't comply with us or with regulators, and they are hosted in countries beyond the reach of regulation. It is important to find other ways of leveraging the law to make the content less visible and give people the security to move on with their lives, not fearing that images are two or three clicks away from being viewed by anyone.
>> DAVID WRIGHT: Thank you, Sophie. I also want to give a shout-out to the draft UN Cybercrime Convention that was published in August, and particularly UNODC's global strategy.
The inclusion of NCII, much to our surprise, within the new, or at least the draft, cybercrime convention, which we're anticipating will be ratified next year, would mean all states would have laws to do with NCII. Perhaps in response to the question: we have some today.
There are some states that are still waiting. We've heard from Sophie that there is legislation to some degree, but it can prove quite porous. So there is optimism around a push and a direction in terms of laws that will help in this regard. I'm conscious that we've only got a couple of minutes left. You wanted to make a quick comment, Boris?
>> BORIS RADANOVIC: I'll try. Thank you for the questions as well. Far be it from me, working in an NGO and from that perspective, but all three questions come back to the same thing in my mind: accountability, legal frameworks, and reporting. And it comes back to the fact that this conference is an Internet governance forum. I don't think the scary question is what Meta is going to do; it is what we are going to do. How are we going to define the legal frameworks and governance to make sure the platforms follow them and have accountability on their end? That's a difficult question for us to define.
Absolutely, the legal frameworks around the world need to be more inspired and more forward-looking. We as a society, across all cultures and different nation states, need to define how we approach accountability for abuse in the digital space. How do we hold people accountable? That is a far more diverse question than one stakeholder can answer; we need to discuss it as a society. I'm here for it. If anybody has a good idea or an inspiring legal framework from around the world, share it.
>> DAVID WRIGHT: Which will probably have to be the closing remark. We've run out of time, and the transcription stops.
I hope we've given you some sort of response here. As for the panel, as we've always said, it is a world-leading panel in terms of insights. So I pay tribute to all of your work, and I would invite everyone to show our recognition both for the extraordinary work that these people do and for the panel session as well. Thank you very much.