IGF 2024 - Day 1 - Workshop Room 10 - OF 68 Countering the use of ICT for terrorist purposes

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> JENNIFER BRAMLETTE: Good afternoon, ladies and gentlemen. Just doing a mic check. Good afternoon, it is working, excellent. Yes, mic check, mic check. Everybody can hear? Excellent. Distinguished colleagues, good afternoon and welcome to all here in the room and joining us virtually for this forum on countering the use of ICT for terrorist purposes.

I welcome you on behalf of CTED Executive Director Natalia Gherman. It is a pleasure to hold CTED's event here in Riyadh and an honour to be here with some of CTED's close operational partners: the Parliamentary Assembly of the Mediterranean, or PAM; the Terrorism Prevention Branch of the United Nations Office on Drugs and Crime; and Tech Against Terrorism ‑‑ (audio difficulties) ‑‑ and the Global Internet Forum to Counter-Terrorism joining us virtually.

I would like to begin by explaining the work of CTED. As a special political mission supporting the Security Council's Counter-Terrorism Committee, CTED is mandated to conduct assessments of Member States' implementation of United Nations Security Council resolutions on behalf of the committee. CTED identifies good practices and gaps in implementation, and works with partner organisations and states to facilitate technical assistance to address those gaps. CTED is additionally mandated to identify emerging trends and evolving terrorism threats, including through collaboration with the members of CTED's Global Research Network.
     Terrorist groups and their supporters continue to exploit the Internet, social media, video games and other online spaces, as well as emerging technologies, to engage in a wide range of terrorism-related activities. Developments in artificial intelligence and quantum technologies have the potential to exacerbate the risks of online harms and real-world damage.
     Yet these valuable technologies offer immense benefits to society. When used in a manner consistent with international law, they can be most useful for preventing and countering terrorism. When it comes to countering terrorism and violent extremism conducive to terrorism, the United Nations Security Council has developed robust resolutions and policy documents. The council has adopted 16 counter-terrorism-related resolutions and five policy documents over the past 23 years that specifically address ICT and now emerging technologies.
     Through these, the council has mandated CTED to work on a growing list of increasingly complex and technologically advanced issues relating to countering the use of ICT and other emerging technologies for terrorist purposes. As such, CTED has mainstreamed ICT-related issues, including AI and other emerging technologies, into its work streams.
     In our capacity to identify new trends and emerging threats, CTED draws attention to how exponential leaps in the development and applicability of digital tools and emerging technologies could enhance terrorist capabilities.
     CTED also identifies what legal, policy and operational measures UN Member States could implement, and how they could use new technologies to increase the effectiveness of their counter-terrorism efforts. For example, in 2022 the Delhi Declaration tasked CTED with developing non-binding guiding principles to counter the use of unmanned aircraft systems, or UAS, new financial technologies and ICT for terrorist purposes. The Abu Dhabi guiding principles on threats posed by the use of unmanned aircraft for terrorist purposes were adopted in December 2023.
     The committee is currently negotiating guiding principles on new financial technologies and will then turn its attention to ones for ICT. In carrying out its various activities, CTED holds two main principles at the forefront. Firstly, we draw particular attention to respect for Human Rights, fundamental freedoms and the rule of law in the use of ICT and new technologies by states when countering terrorism. We also promote whole-of-society, whole-of-government and gender-sensitive components of counter-terrorism efforts. We consistently emphasize the need for cooperation, collaboration and partnerships.
     CTED follows an inclusive approach that brings together Member States, international, regional and subregional organisations, the private sector, Civil Society and academia. This is a component of a multistakeholder digital environment, and it is necessary for Member States to develop advanced counter-terrorism regimes. I will further detail CTED's work on ICT in the technical panel. Now it is my great pleasure to welcome the Honourable Mr. Pedro Roque, Vice-President of the Parliamentary Assembly of the Mediterranean and a long-standing partner in the fight against terrorism, to take the floor. Sir, I yield the floor to you.

>> PEDRO ROQUE: ‑‑ you can hear me? I think now it is fine. Thank you so much. Ladies and gentlemen, dear friends, it is an honour and a pleasure to address the opening of this event. PAM, the Parliamentary Assembly of the Mediterranean, values its most fruitful cooperation with CTED, which resulted in the invitation for PAM to join the CTED Global Research Network, as well as a few other significant outcomes I will mention during this intervention. I wish to thank colleagues at the Global Internet Forum to Counter-Terrorism and Tech Against Terrorism for all the work you ‑‑ AI and ‑‑ (audio difficulties) ‑‑ national organisation ‑‑ is okay? PAM is an international organisation which gathers ‑‑ (audio difficulties)

>> PEDRO ROQUE: Sorry, PAM is an international organisation which gathers parliaments from the Mediterranean and Gulf regions. At present, PAM members are committed to fostering dialogue, cooperation and joint initiatives towards the regulation of AI and emerging technologies, supporting the efforts of the United Nations and the international community in this regard.
     If not properly regulated in a timely and effective way, the rapid advancement of AI and emerging technologies could severely harm democratic systems, disrupt societal structures and pose significant risks to security and stability. Concrete action and legislative frameworks for regulating AI and ICT should build on multistakeholder collaboration while ensuring compliance with international Human Rights law and the protection of individuals' fundamental freedoms.
     At the request of the UN Secretary-General, PAM participated in the Summit of the Future held in New York last September. In conjunction with the summit, PAM organised a high-level side event on parliamentary support in re-establishing trust and reputation in multilateral governance. This event was held in cooperation with CTED, Morocco, Italy and the Inter-Parliamentary Union. To achieve this, PAM committed to implementing the Pact for the Future and its annex, the Global Digital Compact, including promoting a scientific understanding of AI and emerging technologies through evidence-based impact assessments, as well as evaluating their immediate and long-term risks and opportunities.
     Dear friends, throughout 2024 PAM experts, supported by our Center for Global Studies, CGS, and in partnership with CTED, devoted a major part of their work to monitoring AI and emerging technologies as they are abused by terrorist and criminal organisations. PAM CGS has produced and recently released a report entitled The Malicious Use of AI and Emerging Technologies by Criminal Groups: Impact on Security and Governance.

This report, produced in partnership with CTED, not only benefitted from contributions by PAM member parliaments but also went through a rigorous peer review process conducted by several PAM strategic partners including, among others, Amazon, Interpol, (?) network of organisations, NATO, the Policy Center for the New South and ODP. The report has led to two concrete outcomes. First, the creation of the PAM permanent global parliamentary observatory on AI and ICT, designed as a platform to monitor, analyse, promote and advocate for effective legislation, principles and criteria. The observatory is based in the Republic of San Marino and supported by PAM CGS. Second, the publication of daily and weekly digests compiled from open sources, providing PAM parliaments and stakeholders with up-to-date news and analyses on trends in AI and emerging technologies.
     The digests cover key areas of interest including governance, security, legislation, defence, intelligence and warfare. In conclusion, I would like to highlight two important resolutions that PAM parliaments adopted during the 18th PAM plenary in May 2024. One resolution focused on digital transformation: the need to bridge the digital divide and promote equal access to digital technologies both across and within PAM countries. It also acknowledges the role of digital transformation in advancing the achievement of the UN Sustainable Development Goals.
     The second resolution addresses artificial intelligence, urging the allocation of resources to advance AI research and development, with an emphasis on fostering innovation while safeguarding Human Rights, fundamental freedoms, privacy protection and non-discrimination. PAM will further explore these issues at its 19th plenary session in February 2025 in Rome and during its new tenure as presidency of the coordination mechanism of parliamentary assemblies on counter-terrorism, including its political dialogue pillar.
     Additionally, I would like to inform you that PAM CGS is currently working on two new reports. One focuses on the resilience of democratic systems in relation to the misuse of AI and new technologies. The other, at the request of CTED, focuses on the use of spyware and its legislation. PAM will continue to cooperate with the United Nations, the Internet Governance Forum, Member States and all stakeholders to shape a safer and more equitable digital world. I thank you for your attention.

>> JENNIFER BRAMLETTE: Thank you, Honourable Mr. Pedro Roque, for providing both the regional perspective and that of key government actors and partners, namely parliamentarians. I don't know if anybody in the room has been able to sit in on any of the parliamentarian track that is happening in the far corner, but the speakers there are phenomenal. The parliamentarians present are so engaged. It is essential to have all of government on board, including the elected officials.
     So, wearing my CTED hat, as mentioned I would like to come back to the technical aspects of the mandate as given by Security Council resolutions. Would somebody be kind enough to shut the door? Not that it will block the microphone from the other event that much. That is great, thank you so much.
     Our ICT mandates are widespread. The specifics include preventing the use of ICT for terrorist purposes, including for recruitment and incitement to commit terrorist acts, as well as for the financing, planning and preparation of terrorist activities. We have a mandate for countering terrorist narratives online and offline, and for the gathering, processing and sharing of digital data and evidence. We also cover cybersecurity, but only in relation to the protection of critical infrastructure, and terrorism financing through payment methods like crowdfunding.

CTED is looking at evolving threats in terrorist use of ICT, including threats and risks relating to advances in AI, the role of algorithmic amplification of harmful and violent content, the misuse of gaming platforms and related spaces, and risks associated with terrorist exploitation of dual-use technologies like 3D printing and advanced robotics.
     As part of its work on Human Rights and fundamental freedoms, CTED addresses the programming behind AI and algorithmic systems to ensure, for example, that it does not include bias. We look at privacy, data protection and the lawful collection, handling and sharing of data, and at transparency and accountability for governments and the tech sector when it comes to content removal practices and data requests.
     Through its many assessment visits, CTED has noted that Member States face a range of challenges when it comes to countering the use of ICT for terrorist purposes. Many stem from the sheer numbers and diversification of users across a multitude of decentralized online spaces and a myriad of digital tools, as well as the rapid increase in the availability and technological capabilities of AI and other emerging technical tools.
     And, of course, the continued social, economic and political drivers of violent extremism and terrorism. The three together make a perfect storm, allowing terrorists to operate sometimes with seeming impunity amid the many challenges that Member States are facing.
     Where some of these challenges really come to bear is in how Member States are incorporating ICT into their own counter-terrorism systems, both in consideration of their existing resources and capabilities and with respect to compliance with their obligations under domestic and international Human Rights law. So, for example, there are Member States who are extremely technologically advanced, who have no trouble bringing new tech in and onboarding it, using virtual reality and augmented reality systems to test strategies and work through contingency plans for training in the event a terrorist attack does happen. Whereas other Member States have trouble getting electricity to their police stations. As technology advances, this gap is widening.
     One of the biggest capacity gaps we note with Member States is a shortage of tech talent and cutting-edge equipment in government entities. How to build that tech talent, attract it into government positions and then retain it when the private sector and other avenues offer greater financial rewards are pressing questions. There are no simple or inexpensive solutions.
     Another common shortfall observed in many Member States is in the criminal justice system: traditional criminal justice systems are just not designed to address crimes committed in online spaces or through cyber means. So where you have countries who are still meeting in courtrooms without video cameras, without screens, without a capacity to handle electronic evidence or do video interviews, it is almost impossible for them to prosecute crimes that are committed online, where you are entirely reliant on the admission of electronic evidence and other digital tools and digital forensics to build a case and for a judge to try it effectively.
     Also, most states don't even have laws on their books to deal with crimes committed through or by artificial intelligence. We've even been asked by authorities: how can we arrest a chatbot? How can we prosecute an AI? Those are really good questions. There are no templated answers, and perhaps Arianna can talk about whether there are plans for UNODC or another entity to build a model law. There are also jurisdictional complexities in Cyberspace. For example, grey-area content could be illegal in one country but not in the countries bordering it.
     So, like the examples outlined by PAM, many states are working to build cross-border consensus and multilateral frameworks to deal with these and many other ICT-related challenges.
     CTED, the Counter-Terrorism Committee and the United Nations Security Council are working through international frameworks. In developing guiding principles for Member States, CTED collaborated with over 100 partner agencies, including law enforcement and security services, the legal and criminal justice sectors, capacity-building entities, the private sector, technology companies, academia and Civil Society organisations, to gather good practices and effective operational measures for ICT and emerging tech. Some of the areas include the conduct of regular risk and readiness assessments. This has been identified as good practice, but not nearly enough Member States do it. They might do it once. They might not do it at all. Very few conduct regular risk and readiness assessments. By readiness assessments I mean a state looking at its own capacities and its own resources, and taking a future look at whether or not what it has ordered through its procurement processes is going to be useful when it finally gets delivered three years down the road.
     Other areas of the guiding principles include the need for updating counter-terrorism laws and regulatory frameworks, and the development of guidelines for strategic communications and counter-messaging; this applies both to states and to the tech companies.
     The creation of content moderation and cross-platform reporting mechanisms, and recommendations for online investigations and how to more effectively and lawfully handle digital evidence. CTED catalogued these effective practices and noted a number of others relating to safety by design, ethical programming and the conduct of security and Human Rights impact assessments for AI and algorithm-driven systems.
     We also captured the positive impact already demonstrated by investment in digital and AI literacy programmes for all levels of society.
     We further developed the guiding principles to ameliorate a range of concerns about the serious adverse effects on Human Rights that the use of new technologies by states, without proper regulation, oversight and accountability, is having.
     I would like to conclude by highlighting that many Security Council resolutions and the Delhi Declaration stress the importance of public-private partnerships. CTED works closely with partners such as Tech Against Terrorism and the Christchurch Call. Up next, I would like to yield the floor to Arianna Lepore of the Terrorism Prevention Branch of the United Nations Office on Drugs and Crime, another partner and dear friend, to discuss their work on ICT and electronic evidence.

>> ARIANNA LEPORE: Thanks, Jennifer. We have a long-standing partnership, also with PAM and colleagues, and it is great to continue our dialogue together. The work of UNODC blends with CTED's: our colleagues in CTED inform our work thanks to their assessments, and thanks to the mandate that UNODC has in the fight against terrorism, UNODC and its Terrorism Prevention Branch put together programmes to support Member States in fighting terrorism. UNODC operates under the international conventions, Security Council resolutions and the Secretary-General's plan.

We have a mandate which is very stringent. For a few years now we have been working very hard, sparing no efforts, on the issue of ICT, and as we go forward we are expanding our strategy to deal with emerging technology. In 2017, after the adoption of resolution 2322, which requested Member States to increase the level of international cooperation, in particular on electronic evidence, we launched the Global Initiative, which I coordinate. This was conceptualized with colleagues at CTED and the International Association of Prosecutors.

Now it is a flagship project of UNODC. The Terrorism Prevention Branch is based in Vienna, but UNODC has regional and country offices, including here in Saudi Arabia. My colleague is head of the office here.
     So we have the capacity to reach the ground level and create very close relationships with the practitioners we work with. The Global Initiative was launched seven years ago. The purpose was exactly that: to foster public-private partnership, thanks to the efforts of CTED, and to work closely with the private sector. The initiative is now a fully-fledged project with a holistic approach, involving the private sector, experts and practitioners in academia.

And we developed different streams of work. The goal is to support law enforcement, prosecutors, judges and central authorities, the competent authorities for international cooperation, in the preservation and production of electronic evidence for criminal cases.
     How did we do that? Through the development of tools, which is our bread and butter, including the development of model legislation, of course. Now I'd like to focus our attention on the main tool of this Global Initiative, the practical guide on electronic evidence. It has been extensive work with colleagues in the tech industry, and it is a guide, technically a manual, that informs criminal justice practitioners, step by step, how to request the preservation of electronic evidence, emergency disclosure and voluntary disclosure.

And, where such requests are not possible depending on the data requested, how to begin the formal legal assistance process. It contains a mapping of what are now more than 100 service providers, at the moment ICT service providers. Nevertheless, last week in Vienna we conducted the very first meeting to include Fintech providers, to create the link between electronic evidence and financial electronic evidence. We heard from practitioners that connecting the two is more and more an emerging need.
     Very soon we will have an annex to the guide containing a mapping of various Fintech providers: how to approach them for preservation, disclosure and so forth, and all the procedures that entails.
     The guide also contains standard forms for requesting information from the private sector. Back then we were hearing complaints, in particular from criminal justice officials, that they would send requests and the sector would never answer. But then we spoke with the private sector, who said the type of requests they would receive were impossible to answer: terabytes of material, ten years of evidence being requested. Impossible.
     So we tried to sit them all around the table and developed forms which contain all the elements that would enable the private sector, the providers, to respond.
     So the guide is the main tool around which all the capacity-building support of the UNODC office is constructed. The guide is global in nature, but as our programme advances we have increasingly customized it, tailored to specific Member States that have requested it; we have customized guides for Pakistan, India and the Maldives. We research the procedural and legislative landscape and design a guide specific for that country.
     In order to make our work sustainable, and this is one of the rare commodities, we developed training modules to embed this guide within training institutions so that the transmission of knowledge is up and running. The model legislation mentioned by Jennifer is fundamental. UNODC sees it in the context of its work on all crime types, but specifically, in 2021 we updated the UN model legislation on mutual legal assistance, which contains provisions on handling, receiving and transmitting electronic evidence. So when countries are updating their MLA law, they can come to us, request assistance and see what type of provisions we have put together.
     All of those tools are available on our platform; we have created an electronic evidence hub. And we have not stopped there. There are two last points I'd like to make. The first, as Jennifer was saying and colleagues will probably also introduce, is the fact that technologies are advancing. Jennifer mentioned some of the challenges that we will face; we are already facing them: artificial intelligence and all those emerging technologies. So we are also expanding our strategy on how to go about this: to counter the misuse of technology, but also to utilize technology to counter terrorists. There is this dual challenge that UNODC will try to address, and you will learn more about our interventions.
     Last but not least, a word on the new convention on cybercrime, which is known to everyone. In the next few days, possibly though not for sure, the new United Nations Convention Against Cybercrime will be adopted by the United Nations General Assembly. There is an important segment in the draft, as it stands now, that speaks about electronic evidence. So obviously that will also inform the work that we are doing. We will monitor closely how the adoption goes and what the next steps will be when it is ready, the articles and so forth. Jennifer, I will stop here. Thank you for this opportunity.

>> JENNIFER BRAMLETTE: Thank you, Arianna. I would now like to turn the floor to Mr. Adam Hadley, CBE, executive director and founder of Tech Against Terrorism. Adam, the floor is yours.

>> ADAM HADLEY: Jennifer, thank you very much. Can you hear me well from there? Yeah, great. Wonderful. Thank you very much for having us today to present the work of Tech Against Terrorism and some of our concerns at the IGF. We certainly consider the IGF a vital forum to discuss important matters such as the terrorist use of the Internet.

I would like to frame our discussion around a paradigm shift in how we view the terrorist use of the Internet. Historically the Internet has been seen as a tactical tool for recruitment and radicalization. Increasingly our concern is that the Internet is becoming a strategic battleground for terrorists and nation states, but mainly terrorists. As well as sharing three critical challenges we see at Tech Against Terrorism, I will outline the positive potential of generative AI and focus on the terrorist use of Internet infrastructure, in particular terrorist-operated websites. Who are we and what is our mission? Our mission is to save lives by disrupting the terrorist use of the Internet.
     We are proud to have been established by UN CTED back in 2017 as a public-private partnership bridging the divide between the private and public sectors. Accordingly, we have been recognized by a number of Security Council resolutions and, as Jennifer mentioned, the Delhi Declaration. We have been referenced in Security Council resolutions for supporting the government of Somalia against Al-Shabaab and for improving connections between companies and governments. We are a small independent NGO based in London; we work across the entire ecosystem and aim to understand where terrorists are using the Internet and what practically can be done about this.
     We are global in approach and have 24/7 coverage. I would also like to recognise the great efforts of many other organisations as well as UN CTED: the Global Internet Forum to Counter-Terrorism, the Christchurch Call to Action, Tech Against Terrorism Europe, funded by the EU, and the gaming network; we are delighted Aaron is joining us.
     We focus on the most egregious examples and focus on those terrorist organisations designated by the UN, U.S., E.U. and other international bodies. This doesn't mean we don't focus on the broader range of activities that terrorists conduct online but rather we believe it is important to focus where there is consensus, recognizing that it is in that focus where we will be able to have the most impact.
     So in terms of the teams at Tech Against Terrorism: we have an intelligence team, we work with governments, we build platforms and capacity, and we use technology to speed up our ability, and that of others, to disrupt the terrorist use of the Internet.
     In doing all this we aim to share resources cost‑free in a collaborative way. We have a number of resources that are available to platforms and governments such as the knowledge‑sharing platform. We also provide hashing services and other technical support services to platforms including a trusted flagger portal.

In terms of the current landscape, we argue we are currently seeing some of the most egregious examples of terrorist use of the Internet in the last decade. Of course, platforms and governments face many threats. Geopolitical instability is now the highest it has been for many decades. Therefore, understandably, platforms and governments have many concerns to focus on. What is certain is that counter-terrorism is no longer the primary concern of many of these stakeholders. Arguably, it should be.
     Since October 2023 we have seen terrorist content online reach unprecedented levels, from terrorist organisations such as the Islamic State, Al Qaeda, Hamas, Hezbollah and more. The terrorist use of the Internet is at such high levels that we are not sure what to do about it alone. Therefore, we call for improved action from the tech sector, from governments and from others to ensure the correct and appropriate level of resources is being brought to bear to tackle this.
     In our view, this threat manifests, of course, offline more than anything else. We know terrorist groups are regrouping. We now see attacks in Africa at very high levels. We know the risks coming from Central Asia with regard to ISKP. Their use of the Internet is commensurate with this increased threat. The question is what to do about it.
     So at Tech Against Terrorism we have technology for content detection, the TCAP, that seeks to identify and verify terrorist content online, but we can't do this on our own, and we commend the efforts of CTED to share capability, resources and know-how. It is great that industry-led initiatives like the Global Internet Forum to Counter-Terrorism are investing so much; we encourage the GIFCT to continue doing this in the future and to continue to be funded by the tech sector.
     At Tech Against Terrorism, we alert more than 140 platforms and work with a range of stakeholders, governments and tech companies. That is how we are funded, in a quite independent and transparent fashion. So we see three key challenges. The first, as alluded to by Jennifer just now, is around strategic communications. Historically the terrorist use of the Internet has been considered in a tactical way, meaning it is viewed purely in terms of radicalization and countering this. We would argue terrorists use it for strategic communications purposes.

Most terrorist organisations are looking to have a political effect. They are looking to promote their domestic popularity or to project international standing. Therefore, we think it is paramount to ensure that the way we counter the terrorist use of the Internet doesn't just consider radicalization, recruitment and incitement but also the political value of that speech.
     If terrorists are able to share their messages on social media, messaging apps on their own websites, this is worth a lot strategically. In the context of hybrid warfare, countering this is of vital importance.
     The second is around infrastructure. We often talk about the tech sector. The tech sector has done an enormous amount, as supported by the GIFCT, but we mustn't forget other sources of terrorist activity online. Terrorists can create their own websites, their own apps, their own technologies. This presents a number of jurisdictional challenges. In particular, the governance level of the Internet.

Can terrorists, and should terrorists, be allowed to run their own websites? Should ISIS or Al Qaeda be able to buy their own web domain name? These are the issues. We are seeing hundreds of such websites being set up.

It is extremely difficult working with Internet providers because of ambiguities around jurisdiction. We are finding terrorists are increasingly entrepreneurial and imaginative in how they use technologies. In many cases they are also going back onto the major platforms and proving quite difficult to dislodge in a number of ways, as they adapt their techniques, potentially hide their content and get better at avoiding automated responses. This is not a criticism but highlights the difficulty of keeping ahead of a sophisticated adversary. But infrastructure is something I want to bring to the attention of the IGF. Surely more needs to be done to establish international frameworks where we have designated terrorist organisations buying domain names and buying hosting for their websites.
     The third challenge is about detection and analysis of terrorist content. There is a large amount of terrorist content online. Somewhat paradoxically, it is hardest to analyse on large platforms, for reasons of data privacy and other perfectly reasonable explanations. Very large platforms are not easy to analyse at scale.
     What this means is analysis of small platforms is easier. Analysis of larger platforms is more difficult. Therefore, we ask for improvements in data access but recognise some of the challenges in terms of data privacy where that is concerned. We commend platforms for doing what they can to share more about their activities and very often comprehensive transparency reports.
     Moving to the end of my intervention, I certainly recognise the expert opinions that have been shared about the risks associated with AI and generative AI. We would argue generative AI also provides a significant opportunity to improve the accuracy and volume of content moderation online and to ensure terrorist content can be detected at scale, accurately. Accuracy is important because everything we do at Tech Against Terrorism, UNODC and CTED has to counter terrorists while ensuring fundamental freedoms and Human Rights are upheld.

I remain hopeful that generative AI will mean more accurate content moderation decisions can be made, and I certainly encourage prudent investment in generative AI to detect obvious examples of content emanating from designated terrorist organisations.
     So looking towards 2025, underlying threats are increasing, both internationally and domestically. Internationally, we have ISIS, Al Qaeda, Al-Shabaab and many other organisations committing acts of violence in person, off-line. We are seeing increased youth involvement in terrorist activities for reasons that are not fully understood.

We are seeing terrorists get better at exploiting grievances regarding geopolitical instability and state failure. The role of the Internet is becoming more and more important in this. Yet geopolitically, there is a risk that consensus about jurisdiction where the Internet is concerned is going to diminish over time.
     There is a very real risk that at the very time we need increased global consensus about Internet governance, this may be difficult because of geopolitical tensions. Our work will continue. We are a small NGO of around ten people. We are hoping that our 24/7 capability will help in responding to major terrorist attacks. We will launch our Trustmark and other services in support of the tech sector and governments.
     So, concluding my remarks, I would like to emphasize that it is also important to talk about the infrastructure, in particular terrorist-operated websites. How can it be right for designated terrorist organisations to be able to register top-level domain names? In fighting this, we would ask for improved clarity about jurisdiction and standardization of responses.

We commend the Somali government for doing such good work in this space, and would encourage others to follow the Somali government's model in taking down activity by Al-Shabaab. The Internet's role has never been more critical. As we face these challenges, we believe responding to terrorist use of the Internet will be vital to ensuring global stability.

The question is not whether we can stop terrorists using the Internet, but what we can do together, in a collaborative way that upholds fundamental freedoms, to push back against terrorist content and activity online.
     Thank you very much for your attention to these critical matters. I will yield the floor to UN-CTED.

>> JENNIFER BRAMLETTE: Adam, as with the intervention from TPB, I will not try to summarize. In the interest of time, I want to make sure Dr. Saltman has a full measure of time to talk about the work of GIFCT. So, Dr. Saltman, the Membership and Programmes Director of the Global Internet Forum to Counter Terrorism, you have the floor.

>> ERIN SALTMAN: Thank you. It is wonderful to not have to repeat points colleagues have made. Thanks for hosting a session on this topic and for allowing us to dial in virtually, for those of us who can't attend in person. I have a bit of FOMO. I would like to talk about us, if you don't know us well, and leave room for questions.

If you don't know about the Global Internet Forum to Counter Terrorism, we are a bit of a unique NGO, a nonprofit, but were, in fact, founded by tech companies to help companies counter terrorism, with multi-stakeholderism built into our governance and programmatic efforts. Just as terrorism is a transnational effort, it is a cross-platform effort. I'll bet few people in the room have just one app on their phone. We should be educating ourselves, looking at normative behaviors, and realising that bad actors, terrorists, are also cross-platform.

We needed a safe space for tech to work together. Our efforts break down into four buckets. One is cross-platform tech solutions; I will speak briefly to that. One is incident response, where increasingly there are off-line, real-world attacks and events taking place in which perpetrators and accomplices are using online aspects to further the harm of their terrorism. We also want to further research and knowledge-sharing, as well as information exchange and capacity-building. That includes work with governments and Civil Society so that knowledge exchange is really holistic, because the signals a tech company is seeing are distinctly different from how law enforcement may be approaching an issue, or how Civil Society is experiencing it on the ground.
     Because we share such sensitive information and provide a platform around such time-sensitive issues, we have membership criteria which are a little unique. You can't just come in the door and work with us; you have to meet a threshold. This was built out in consultation with our independent advisory committee, which includes UN-CTED among other governmental and non-governmental officials and experts. The criteria include making sure tech companies have things like an annual transparency report, have a public commitment to the UN Guiding Principles on Business and Human Rights, have clear terms of service, and allow users to report terrorist content.

We take for granted that on social media you can largely report and flag content, but obviously on other platforms, like terrorist-operated websites, to Adam's point, or certain game spaces, it may not be intuitive how to flag content or report a terrorist signal.

Once you become a member of GIFCT, the cross-platform tech solutions include a scaled hash-sharing database, where GIFCT and member companies can ingest hashed content of terrorist and violent extremist material when it fits our criteria. There were questions in the chat around defining terrorism, a topic on which a million PhDs have been written and a million more are needed.

There is no conclusive agreement. But we talk to our member tech companies, of which we have 30-plus, which include the Microsofts, Amazons, Metas and Googles, but also smaller companies like JustPaste.it, Discord and Twitch, or companies like Zoom, who didn't realise they would need to address this.

When we talk about hashing, we begin with a list-based approach. Companies have consensus where they can look to UN designation lists of groups and entities and find common ground. We realised, in consultation with Human Rights experts and Civil Society, that there is a bias in the post-9/11 framework, and we wanted to take in the neo-Nazi and white supremacist attacks occurring around the world, so we built in behaviour-based buckets.

When you look at online content, a list is not always enough: a group affiliation or card-carrying membership is not always clear, and lone-actor terrorism takes place. Our behaviour-based buckets include hashing attacker manifestos, where that content is justifying a terrorist attack, or things like branded terrorist and violent extremist content, which gets at not only Islamist publications but also white supremacist, neo-Nazi and other forms of publications online.

Every year we have a multi-stakeholder incident response and hash-sharing working group to constantly evaluate and ask: where can we go further? Should we expand this taxonomy, and if we do, would that impede free speech or raise other concerns? This is an iterative and evolving context. When we think of content we often think of an image or video, but there is also the TCAP for flagging URLs, and an attacker manifesto is usually a PDF, so we have had to evolve.
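[Editor's note: the hash-sharing flow described above can be sketched as follows. This is a minimal illustration, not GIFCT's actual implementation; it uses exact SHA-256 digests, whereas production systems rely on perceptual hashes, such as PDQ for images, so that near-duplicate content also matches.]

```python
import hashlib

# Minimal stand-in for a shared hash database: member platforms ingest
# digests of flagged material, and other members check uploads against it.
shared_hash_db: set[str] = set()

def ingest(content: bytes) -> str:
    """Hash flagged content and add the digest to the shared database."""
    digest = hashlib.sha256(content).hexdigest()
    shared_hash_db.add(digest)
    return digest

def matches_known_content(content: bytes) -> bool:
    """Check an upload's digest against the shared database."""
    return hashlib.sha256(content).hexdigest() in shared_hash_db

# Platform A ingests a flagged file; Platform B later checks an upload.
ingest(b"example flagged material")
print(matches_known_content(b"example flagged material"))  # True
print(matches_known_content(b"unrelated upload"))          # False
```

A key design point of this model is that only hashes, not the underlying content, are shared between companies, which limits the exposure of sensitive material while still enabling cross-platform matching.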

Incident response is critical. The Christchurch attack in New Zealand meant tech companies wanted to stop the viral spread of content in and around an attack. Not every event will have a livestream, but we have seen them since the Christchurch event.

There are a number of lone-actor or planned attacks that do have online aspects at play, such as a livestream, the publishing of a manifesto or, in Halle, Germany in 2019, the publishing of a PDF on how to 3D-print a gun. These are assets that are out there, so we want to hash and share them. The framework allows us to increase knowledge-sharing, make contact with affected governments and law enforcement, where appropriate, and share verified information.
     We have mentioned generative AI in the last few comments. There is also a concern about what might happen when you start getting fake incident response content in and around something that might or might not have happened. How do we quickly verify it to stop misleading content? This sort of verification process will be key to future response efforts.
     When we think of adaptation this is where knowledge exchange and active learning and training and capacity‑building between sectors is really critical.

We do fund an academic wing, the Global Network on Extremism and Technology. While this is accessible to everyone, it allows us to support, with micro-grants, academics and experts around the world who have a finger on the pulse of extremist trends.

This can be anything from AI-generated content, with an entire insight series on that, to 3D printing and some of the concerns about how it is assisting and aiding terrorism, to gaming and adjacent platforms and what signals look like there, whether modification of characters or a policy on allowing you to name a player Adolf Hitler or not.

This is what companies are looking for: policy guidance across these sectors. When it comes to knowledge exchange, the smallest trends being shared can really have an amplifier effect, helping tech companies understand what harm and threat might look like on their platforms.
     Along with this, we really want to understand different parts of the world and how violent extremism and terrorism are manifesting. There are some very broad-stroke global trends, but when we look at how extremists and terrorists use coded language, it is very colloquially specific. When we see how memes, icons and imagery are used to evade detection, it is local-context specific.

So on top of the technology, which helps us get to scale and speed, we really do need the context that sits around what you might surface and see as a moderator. Even with a standard agreed-upon entity like Islamic State, if I were to have you surface an image of a guy in the back of a Toyota, it is hard to know if that is a foreign terrorist fighter or just a man in the back of a Toyota.

The same goes for a lot of forms of violent extremist trends. So alongside the technological solutions we will still need that human input and cross-sector knowledge-sharing. We have been grateful, in fundamentally advancing how we think about what terrorist content looks like, to have CTED and others to consult with, and to always communicate what we are aiming for and how we avoid counter-terrorism efforts overstepping into abuse of other human rights, including freedom of expression.

We also have to be on the ground; not everything will be done on Zoom. Fortunately, or unfortunately, we host events in different parts of the world and have made sure we are working with ground-based partners and governments in order to have nuanced dialogues, not just imparting knowledge about trends online but gaining valuable feedback on what these trends look like in specific regions. We hosted a workshop in Brazil, in Latin America, and recently in Sweden at the Nordic Democracy Forum. Next year we will work with the IIJ in Malta and in Sub-Saharan Africa. If you want to come see us, to see the two-way exchange and lessons on both sides, I would love to further that.

Lastly, we pick three to five questions that we know no one government or tech company can answer on their own, around topics of counter-terrorism, and form working groups. People apply to join a working group, meet a few times a year, and we fund the development of outputs that create best practices, improve our own incident response, and build frameworks for understanding terrorist content.

In the last couple of years we have had working groups around our own hash-sharing taxonomy, but also red-teaming the harms of AI-generated content, and blue-teaming how this amazing technology can help with intervention, counter-narratives, redirection, and translation in language areas a lot of moderators are blind to.

So there are risks and opportunities as we advance this conversation. With that, I would love to open it up to more questions. There are so many rabbit holes, technically, philosophically, existentially, when we think of how to advance countering terrorism and violent extremism, but it is only through multi-stakeholder collaborative efforts that we can get at the 360-degree threat and opportunity and decide where to take the next steps. With that I yield back to Jennifer and UN-CTED, thank you.

>> JENNIFER BRAMLETTE: Thank you, every time I sit with you and Adam, I learn something. We genuinely appreciate your time. I would like to thank everybody in the room as well. There are many other opportunities for things to do. Apparently at 6:00 everything closes. So unfortunately, I will not be able to open this floor up for questions, but I think some of us will be willing to stand in the hallway and chat to answer questions you may have.

And in closing, I do wish to thank you for being here and choosing to spend the last hour of a very busy day with us. It was an honour to have all the speakers here, and really, the final word is on partnerships. It is really through partnerships and collaborations, leveraging shared knowledge, lessons and good practices, that we shall be able to proactively overcome these challenges. CTED will continue to pursue our work and assist with the work of our partners as we move forward with the IGF to counter the use of ICT for terrorist purposes. Thank you very much for being here today.