IGF 2024 - Day 1 - Workshop Room 10 - OF 73 The Need for Regulating Autonomous Weapon Systems

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> OSKAR WUSTINGER: Hello, my name is Oskar Wustinger, ambassador here in Saudi Arabia, opening this session on regulating autonomous weapon systems. I'm looking forward to the presentations and hopefully a discussion with our esteemed panelists and participants, so I want to hand over to our moderator.

>> WOLFGANG KLEINWACHTER: Thank you, Ambassador.
     (Audio disturbances)

>> WOLFGANG KLEINWACHTER: We can manage this with the technical people, so I will continue to speak ‑‑ the Ambassador from the Austrian Ministry will give the introduction ‑‑ after that we will hear the panelists, who are here ‑‑ you have the floor.

>> OSKAR WUSTINGER: Good afternoon from Vienna. I would like to start with a few introductory remarks from an Austrian perspective. AI applications in the military domain are developing at high speed, holding the promise of making tasks faster. We need to base them firmly on the principle of digital humanism. As in the civilian sector, (?) are needed in the military sector to ensure a human rights centred, ethical and responsible use of AI. However, this is lagging behind in the military sector.

This is so even though the sector is confronted with some of the most sensitive issues, decisions of life and death. This is what we want to address today. We focus on the issue of autonomous weapon systems that select and engage targets without human intervention. These systems raise profound concerns from legal, ethical and security perspectives, including the need for meaningful human control to ensure proportionality of use, predictability and accountability, and respect for the right to life and human dignity.
     There is also the risk of proliferation and of an autonomous arms race. We will hear more from the panel. The debate has been going on for a decade, particularly in the Group of Governmental Experts and the Human Rights Council. A broad majority agrees that a legal framework is necessary, including prohibitions and regulations. This would not be a total ban on autonomous weapons systems, but would allow the technology to be used in a strictly regulated way. Moving to a negotiation mandate to work out the details has not yet been possible. Political tensions, mistrust among states and potentially flawed confidence in technical solutions have stalled progress despite the exigency of the issue, and the window for preventive regulation is closing soon.
     This is why Austria has been active and hosted the Vienna conference 'Humanity at the Crossroads' on autonomous weapons systems this year, which was supported by a majority of states. The debate should not be limited to diplomats and military experts. The issue has broad implications for human rights, human security and development, and it concerns all regions and all people.
     From an Austrian perspective, a multistakeholder approach to this critical issue is therefore important. We welcome contributions from science and academia, the tech sector and industry, and broader Civil Society. In this way, I hope today's discussion will stimulate a multistakeholder discourse. For Austria there is urgency to move from discussion to negotiations on rules and limits for autonomous weapons systems, and we look forward to the discussion, thank you.

>> WOLFGANG KLEINWACHTER: Thank you, Ambassador, for the opening remarks. Now we have Ambassador Ernst Noorman. And from Argentina, Olga Cavalli. And Jimena Viveros from Mexico, of the Global Commission on Responsible Artificial Intelligence in the Military Domain. And Kevin Whelan, head of the UN Office of Amnesty International. Chris Painter of the GFCE, and Ram Mohan of Identity Digital, a former ICANN Board member. This is a multistakeholder setting. We have experts from government, voices from Civil Society and more.
     We know it is nearly ten years now that the GGE on LAWS has been negotiating, and it has produced only minor results ‑‑ not yet a final document. Ambassador Noorman of the Netherlands, which holds the Chair, will start by giving us an overview. Ambassador, you have the floor. Five minutes.

>> ERNST NOORMAN: Thank you for the floor, and for the important views on this topic. I have structured my intervention around three circles to discuss the risks of AI for peace and security. The first circle represents AI broadly, including civilian issues: a new and still-developing domain that brings opportunities but also confronts the international community with all sorts of new challenges.

Within the large circle there is a second, smaller circle. This circle is about AI in the military domain. Questions related to this circle are more specific: What are the implications of the use of AI for the way militaries operate? What kind of rules or measures do we need to make sure militaries use AI in a responsible way?
     Earlier, the Netherlands and the Republic of Korea successfully introduced a resolution on AI in the military domain in the First Committee. It requests a report from the UN Secretary-General and provides states with a platform to exchange perspectives. The resolution was approved by a massive majority of 161 votes in favour, with only three against and 13 abstentions.
     This resolution will initiate a dialogue independent of the multistakeholder REAIM process, which will continue to serve as an incubator for ideas and perspectives from others. REAIM, on responsible AI in the military domain, was initiated by Korea and the Netherlands.
     These will complement each other, working towards inclusive discussions on AI in the military domain. The third and final circle, contained within the second, is autonomous weapons systems.

Though the issue first came up in the Human Rights Council in 2013, it was referred to the Convention on Certain Conventional Weapons, the CCW, given its relevance to disarmament. The CCW has played a critical role in addressing emerging threats, including prohibitions and regulations on various weapons systems. The CCW then established a Group of Governmental Experts on lethal autonomous weapons systems, the GGE LAWS for short, in 2016.
     The GGE nowadays counts 127 participants, 127 countries. Other countries and NGOs can attend as observers, and they do. Some would say it is a very inclusive process.
     My colleague, our Dutch Ambassador Robert in den Bosch, chairs the GGE on LAWS through 2026. One of the strengths of the GGE is that the large military powers are included. This can make it difficult, but I think when we do agree on regulations, they will be much more effective. As a final point, it remains important to note that the group is increasingly working against time.

What started as a concern about the future is today an urgent, pressing issue, as weapons systems capable of operating with limited or no human intervention are rapidly being developed and deployed on modern battlefields.

   It falls on the international community, on states and other stakeholders, to address these issues. Interest by the community is evident, as shown by regional and international conferences and UN resolutions, which highlight growing global engagement.

     Coming back to the question, is this an Oppenheimer moment? Can we learn something from the nuclear arms race? I'm wary of drawing historical parallels. The challenges are enormous, as these types of weapons systems have the potential to transform modern warfare, but the situations differ. A lot of important work is happening. We must collaborate constructively to address the issue and treat it with the urgency it demands. Thank you very much.

>> WOLFGANG KLEINWACHTER: Thank you. On the discussion about the Oppenheimer moment, my understanding is that it is also a challenge to researchers and academics to be aware of their responsibility for what they are doing. Two days ago we had the Nobel Prize ceremony in Stockholm, where one of the winners also raised concerns about AI safety.

    This can bring a moment where we are really at risk, so we should be careful with historical parallels but aware that sometimes we come back, at a higher level, to a situation we have already been in. I was just informed that Vint Cerf, who is expected to give opening remarks, is now online. I'm happy, Vint, that you were able to make it. I think it is early in the United States ‑‑ you have the floor now, thank you very much.

>> VINT CERF: You are very kind, thank you so much. As it happens, my day began at 1:00 a.m. in Washington D.C., so I have been up for a while. My previous session didn't end timely. I have been thrashed around, so I apologise for my delay.
     Let me add a little to what has already been discussed. First of all, some of you know about an organisation called the Digital Leaf Foundation, a U.S./U.K. organisation. Among its various activities are discussions on important policy issues like this one, including the concern for autonomous weapons.

We spent a day and a half looking at the nuclear deterrent practices and tried to ask whether they had any ‑‑ whether they would inform any of our practices with regard to cybersecurity. The conclusion is they are different, as the previous speaker pointed out.

For one thing, proliferation has already happened. AI is essentially everywhere. To make matters more complicated, AI is not necessarily very reliable. My biggest worry about trying to establish policy with regard to autonomous weapons or other uses of AI is we do not know how to contain artificially intelligent agents to prevent them from executing functions that might turn out to be of considerable hazard.
     So while we can try to establish policy and objectives to achieve that limitation, I think the previous speaker implied that there was a great deal of work to be done in the technical community to establish bounds on the behaviour of these autonomous agents.

So I think we can't really succeed in making policy unless we also have the technology available to enforce it. Therefore there is still a lot of work to be done. That is as much as I think I need to disturb you with this morning, but thank you so much for the opportunity to intervene.

>> WOLFGANG KLEINWACHTER: Thank you, Vint. I hope you can stay and continue in the discussion, because you are an expert in this field.

Our next speaker is a member of various commissions. She is now a Commissioner on the Global Commission on Responsible AI in the Military Domain ‑‑ an initiative that came out of the Netherlands ‑‑ was involved in the HLAB, the United Nations Secretary-General's High-Level Advisory Body on AI, and works as an AI expert. And I am happy we have Jimena Viveros from Mexico. If you could comment on what we have heard already and explain what you are doing in this commission.

>> JIMENA VIVEROS: Hello. I hope you can all hear me. Perfect. Well, first of all, I would like to thank our Austrian and Dutch friends for championing such important initiatives. Also the South Koreans, who are not here, I think, but are also part of this big international effort to put this on the table and of working with the GGE on LAWS.
     For a broader perspective, for those not familiar with it: the Global Commission on Responsible AI in the Military Domain was, as Wolfgang said, created by the Netherlands and South Korea. It has 18 commissioners and around 40 persons in total, I think. We have a mandate to come up with some recommendations on this by the middle or end of next year.
     So also, as Wolfgang mentioned, I was part of the United Nations Secretary-General's High-Level Advisory Body on AI, where we had the issue of whether or not to include the military domain in our recommendations.

For those who read the report, which I hope is everyone, we did include it in the end, but it was a struggle, and that is the reason why it was included. I led the engagements and consultations on peace and security, and I am also leading the work stream on peace and security at REAIM.
     The arguments I raise and the issues are similar, even if they might seem different in context. I always say that these technologies cannot be looked at only through the military lens. That is why I call it the peace and security spectrum. There are so many non-state actors using this, and even state actors which are civilian, like law enforcement or border control. With non-state actors, the immediate thought is terrorism, but we also have organised crime and mercenaries increasing in the political landscape we are looking at. It is the exact same technology that is being used.
     What we need to come up with are guidelines in the development phase to have responsible innovation. We don't want to hinder innovation because there are also good applications that can come out of AI in the peace and security domain, when used responsibly, when developed responsibly. But that is the key.

When we talk about all these governance initiatives we speak in abstract terms ‑‑ responsible AI, ethical AI, safe AI ‑‑ but the problem is, when we bring it down to the operator, the developer, the user, the consumer, no one really knows what obligations derive from that. Those are the translations we need to make, to kind of make it operational.

We have a huge problem, which is going to be implementation, and enforcement is going to be an even bigger one. That is why we absolutely need a binding treaty, as the Secretary-General and the ICRC have called for by 2026, with the two-tier approach based on whether or not the systems can comply with IHL: those that cannot would be prohibited, and those that can would be regulated accordingly. This is extremely necessary. But then we also need a centralized authority that would have the mandate to do the oversight, as we have, for example, with the International Atomic Energy Agency.
     I'm also a little cautious about calling this the Oppenheimer moment, because AI is a very different monster than nuclear. Nuclear, even from its origins, from the splitting of the atom, was immediately weaponized. There was the whole veil of secrecy with the Manhattan Project and everything for years. Then, with the Cold War and the arms race, no one used it because of mutually assured destruction.

With AI we don't have that collective conscience. From its origins it has been used simultaneously in civilian and military applications, weaponized and non-weaponized, which makes it harder to control. You have open source, which makes it harder to control. It is cheaper, and the resources to create it and do harm with it are so much more accessible and less traceable than, say, a uranium plant. That makes it easier for non-state actors or malicious, rogue or nefarious actors to get hold of it and create great harm.
     Also, when you combine it with weapons of mass destruction, nuclear, chemical or bio, or with swarm drones, it could have the potential of being a weapon of mass destruction in itself. That is something we should also keep in mind. Added to cyberspace and all the different attacks on critical infrastructure, the whole destabilization effect that AI has in the military and peace and security domains is enormous.
     A big problem we definitely need to address, and keep in mind in every single forum, is the disproportionate impact this will have on the global south. These weapons are not going to be used in the global north against the global north. These are normally weapons that will be affecting the global south. The problem is, there is no capacity as of yet to respond to and counter these types of threats. This is a big, big, big issue we should all be mindful of.
     So all of these initiatives ‑‑ I mean, even the civilian ones, such as the OECD's, where we only look at civilian domains ‑‑ include a monitoring of incidents, which I think could also be useful for the peace and security domain. Because the lack of data is also a risk. We also know that civilian data, or data collected by civilian sources, is then being used by military or security agencies.
     So that is also a very big problem we should all be mindful of. So that is basically the landscape of risks and threats I see as most urgent. I will leave it there to keep mindful of time.

>> WOLFGANG KLEINWACHTER: Thank you, Jimena. You make a good point: the nuclear bombs were, in the end, not used, but AI weapons are already being produced and used. This needs more awareness. All the discussions are taking place, more or less, in small expert circles, so the level of public awareness is low, and we need much more of it. One objective of this discussion is to raise the level of awareness. Awareness leads to education.

We have here Olga Cavalli, the Dean of the National Defence School at the Ministry of Defence in Argentina. My question to Olga is: how do you prepare the soldiers and generals of tomorrow for this new situation? Thank you, Olga.

>> OLGA CAVALLI: Thank you, Wolfgang. Thank you for inviting me. This is a very interesting question. I like very much the perspective that our colleague from Mexico brought: what happens with the global south? So for our developing economies, or the global south, I can bring perspectives from Latin America. Latin American countries are engaged in different discussions and negotiations related to autonomous weapons. We have been active for more than ten years in different spaces, saying it is a concern for our countries, for our region. The challenge for developing economies is always how we approach this technology.

In general, we don't produce this technology. We use it. It is expensive to buy. And imagine, from a capacity-building perspective, how can we train our soldiers and our civilians? I like very much your perspective: it is not only about military issues; it is about other uses, legal or illegal, of these weapons. How do you approach a technology that is developed far away and barely reachable from an affordability perspective? It is extremely expensive. You don't develop it. It is extremely hard to buy. How do you approach this training?
     So we have been working from our university in different collaborations with universities from developed economies, from the United States, Europe and other countries. We think that through collaboration between different teaching spaces, our countries can approach and learn about these technologies. To give you an idea, the Minister called me to this position because of my training in technology. And we opened a new degree programme in cyberdefence. We had more than 1,000 applications in one month.
     So what the authorities were expressing this morning about the need for training in cybersecurity and cyberdefence applies in other countries as well; these new programmes are in high demand. Our challenge is, how are we going to train these people, for example, on things like autonomous weapons? It is a huge challenge for us.
     So I think the way forward is cooperation with other universities and other governments. We are working on that. Our President is very keen on going abroad and reaching these agreements, so I think this is the way. At the same time, in general, we think that a global treaty could be a very useful tool. Because what usually happens is that these regulations are developed in different spaces and with different focuses, and the global south is usually following up but perhaps not so much involved in the development of the regulations. So a global agreement could be ideal. As usual, it is difficult to achieve. So I will stop here and will continue contributing, thank you.

>> WOLFGANG KLEINWACHTER: Thank you, Olga. We have a goal for 2026, so let's hope for the best. But ‑‑ (audio disturbance)

>> WOLFGANG KLEINWACHTER: Don't know why it happens, goes in and out.

>> Like this. Use it like this.

>> WOLFGANG KLEINWACHTER: Probably it is my mouse. I have no idea.

>> You need to hold it like this.

>> WOLFGANG KLEINWACHTER: Capacity-building was the responsibility of Chris Painter for many years. He is well known in this community and was the first U.S. cyber ambassador. I hope he is online. Can you hear us? You have the floor.

>> CHRIS PAINTER: I can hear you, hopefully you can hear me. You hear me?

>> WOLFGANG KLEINWACHTER: Yes, we can hear you.

>> CHRIS PAINTER: It is great to be here, sadly only virtually; I wish I was there in person. This debate is not new, but it is made more urgent by the reality of AI. Folks that know me well know I'm a devotee of cyber movies. Back in 1970 came the first movie where computers took over the world, "Colossus: The Forbin Project". Society thought it would be more rational to take the emotions out, so they put a computer in charge, and the Soviets did the same. The two computers talked, became self-aware and took away civil liberties to protect humankind. So this is not a new issue. It has been dramatized, certainly in "Terminator" and other places, and it has been made real by the emergence of AI. It is unsettling where the technology is and where it is going, and Wolfgang, as you said, there is urgency because it is fast-evolving.
     I draw parallels to cyber and cybersecurity. As many folks know, there has been a debate for many years now in the cyber community about moving cyber capabilities, offensive and defensive, to an autonomous level, taking the human out of the loop. The argument for that has long been that cyber, quote, moves at the speed of light.

If your attackers can hit you now, then with artificial intelligence they can hit you more often and with such lightning quickness and adaptability that, the argument goes, you need an autonomous system to respond. Generally, though, it is not that clear. As others have said, AI spans the entire landscape, from cyber tools to drones to physical weapons.
     But with cyber tools, and more generally, it is still not really clear how these potential capabilities can be used and how they will work, and the escalation paths are a concern. If you have AI working against AI, then you have an even greater chance of an escalation path that gets out of control, as some on the panel have mentioned. So that is a real concern. Even with that, I don't think we have seen a lot of progress toward automated responses in cyber actually happening.

You know, that is still a live debate within countries and between countries, and I don't think we have seen a huge amount of progress. Of course, that is likely to be less lethal ‑‑ though there may be lethal cases ‑‑ than using autonomy in the more physical domains we have been talking about today.
     That is one concern. Another is the cybersecurity implications of attacks on these autonomous weapons systems. Like everything else, if they are connected, they are essentially insecure at some level; and if they are not, there are still ways to get into them. Even leaving aside all the uncertainties about how AI works, and the fact that it is not really as secure or as unbiased as people think it is, the cybersecurity implications ‑‑ and this has been true for weapons systems more generally ‑‑ have been a huge concern, because you could have an adversary breaking into these weapons and changing the artificial intelligence parameters or when they are used, which creates huge risks to peace and security.
     Finally, as pointed out by others, AI is not an unbiased system that is just out there; it depends on training, on how you educate it and what the parameters are. So the thought that it could be unbiased is itself a problem. What that leads to, I think, is the question of what the solutions are here.
     As was pointed out, the GGE on these topics has been long-standing but has not made huge amounts of progress in moving toward what many people think we need, which is a treaty. I guess I'm less optimistic a treaty can be reached. I base that on what I have seen in other areas, where the geopolitical differences are such that I think agreement is unlikely.
     The other issue is that, because this is such a quickly evolving field, as pointed out by Olga and others, we don't know the implications of how AI can be used or not, or how you can contain it, as Vint said, through technical requirements. So reaching a treaty in two years, I think, is going to be very difficult without a basic understanding of where the technology is going, and that technology is continuing to move fast.
     Then the question is, what kind of things might we do? I think education is critically important, and bringing in stakeholders, as in this discussion, is important. I think addressing the AI divide, as Olga put it, with a lot of the global south, and making sure there is more capacity-building, not just in this area but in attendant areas too, like cybersecurity and AI awareness. Calling out use cases, where you say where we have seen these technologies used in autonomous weapons and what the implications are, so it is made more real, is really important.
     Ultimately, before you get to a treaty, calling out what is good, what is bad and what the norms of behaviour are, like in cyberspace, but building differently, building toward a treaty. I wish we could move quicker, but I have a feeling that, because of the technological uncertainty plus the geopolitical issues, that is going to be difficult in the short term. And I think that is exacerbated by the oversight issues one of the speakers raised, which I think are difficult too.

I expect we have to move more incrementally, and part of that is the education of both the general populace and the people who work within the UN and governments about what the implications are more generally. With that, I will stop.

>> WOLFGANG KLEINWACHTER: Thank you, Chris. Thank you also for putting a little bit of water into the wine. Sometimes it is good to be more realistic than too optimistic. Anyhow, as you said, all the stakeholders have to be involved in the development of the framework for the future. You need technical experts.

Ram Mohan, who grew up in India, has for many years been a technical expert in the ICANN community and an ICANN Board member, and is now CSO of Identity Digital. He also represents the private sector, and there will be no autonomous weapons without the private sector. Ram, what is your approach?

>> RAM MOHAN: Thank you. This is where you need a village to use the microphones. I wanted to focus on objective information and data as a basis for policy making. I hear discussions about how to solve problems, and I hear ideas such as guaranteeing human control, with the way to achieve it being legal means. I want to introduce some of the risks and threats that come with the evolution of software engineering, because I think we have to understand the software and engineering basics before we get to the legal and policy areas. AI's own evolution means that currently known methods in software engineering ‑‑ testing, quality assurance and validation ‑‑ are either incomplete or insufficient.

Many weapons systems in the conventional area demand a model of zero defects; there is a zero-defect model that is expected. Now, while the concept of a zero-defect AI system is appealing, it is important to recognise some of the inherent limitations that exist there.
     If you look at some of the key challenges, one is data quality and bias. As Chris Painter was saying, AI systems learn from the data that they are trained on, but we also know that all data is biased and all data is inherently inaccurate. That will strongly influence the outputs of the AI systems.
     The second piece is algorithm limitations. We know current AI algorithms struggle with complex or ambiguous situations. When it comes to weapons systems, that is almost the definition: all situations there are complex and ambiguous, with a lot of changing parameters.

The third component is unforeseen circumstances. AI systems are likely to struggle with unexpected inputs or situations that deviate from their training data.
     And what we have been saying is that, in those cases, let's make sure there is human oversight. But when there is no understanding of how the AI system arrived at the conclusion it did, human oversight defaults to mere intuition. That may not be sufficient when we are talking about human-scale problems rather than just technology issues.
     So when you are talking about high-consequence decisions driven by AI, we also understand that AI systems learning from prior datasets can produce novel behaviours that are neither predictable nor foreseeable.
     And this is exacerbated in the edge cases. One of the interesting and evolving characteristics I have been studying is the relative ease with which you can jailbreak AI-based systems. Jailbreaking is often a matter of expert prompt engineering. For those who don't know what prompt engineering is, it is the science ‑‑ some call it an art, but I think it is more of a science ‑‑ of creating effective prompts that guide the AI model to generate desired outputs.
     So you may be able to programme guidelines, laws and treaties into an AI model and say, you must conform to all these guardrails. But I think that smart prompt engineering will likely be able to overcome those kinds of guardrails.
     There is a great deal of evolution happening in that area. Good prompt engineering can perhaps help the AI system learn to build guardrails itself, but it seems that this kind of prompt engineering can result not only in unintended consequences, but in consequences that become part of the training dataset for the next cycle of the LLM.
     When that is not documented or not understandable, you are, I think, going to have a system that compounds the original deviation from the norm.
     I therefore have some concerns about a discussion that starts from the premise that human control is a good way to solve what is evolving here. You can establish strong ethical guidelines, create international regulations and build robust safety measures, but if you look at the software engineering underneath these systems ‑‑ the state of data validation, the fact that with today's systems it is very hard to create a zero-defect model, combined with the enormous capability of smart prompt engineering to jailbreak these systems ‑‑ it makes me think we have to spend quite a bit more time and research understanding how these systems work, and run a lot of simulations of those kinds of systems first. Then we can start to build some global frameworks and norms of what safety should be, before we think about a treaty or international agreement that makes sense. Because when the foundational principles are not fully characterized and you start to work on laws or treaties, you may find that the unintended consequences are far greater than the good that was intended.

>> WOLFGANG KLEINWACHTER: I think this is a very interesting additional aspect. If I understand ‑‑ I don't know.

>> Yes.

>> WOLFGANG KLEINWACHTER: Yeah, better? Okay. Good. If I understand you, the problem is that even if you have human control, the underlying technology overstretches the capacity of human control and wisdom. So it exists on paper, but reality could move in a different direction. This is an issue for Civil Society, and we have an NGO called the Campaign to Stop Killer Robots, which is active.

Kevin, you represent Amnesty International, which has discussed this. Having heard all these experts from diplomacy, technology and business, what is the Civil Society perspective? Then we will have time enough for two or three questions from the floor. Please prepare your questions.

>> KEVIN WHELAN: Thank you. Good afternoon, everyone. It is a pleasure to be here and to speak on behalf of Amnesty International. It is a challenge to be the ninth or tenth speaker on a panel right after lunch, so I will try to be as concise as possible. It is great because I think it gives me an opportunity to respond to some of the things the panelists have already said.
     I speak on behalf of Amnesty International, which is part of a coalition of Civil Society groups. From our perspective, we view the challenges and risks that come from autonomous weapons systems as imminent and significant. It is for that reason we believe the international community should clarify and strengthen existing international humanitarian and human rights law through a legally binding instrument ‑‑ an instrument that would do at least three things. One, prohibit the development, production, use and trade of systems which, by their nature, cannot be used with meaningful human control over the use of force. I hear what Ram is saying; from our perspective we are positing this as a legal standard, not necessarily a technical standard, but perhaps we can discuss that in more detail.
     The prohibition would extend to systems that are designed to be triggered by the presence of humans, or that use human characteristics for target profiles ‑‑ the so-called anti-personnel autonomous weapons systems. Two, a regulation of the use of all other autonomous weapons systems. And three, a positive obligation to maintain meaningful human control over the use of force.
     Now, as some of the speakers have already mentioned, the use of autonomous weapons systems in armed conflict has been at the center of the debate, much of which has taken place in the CCW. As Jimena, Olga and others have said, this has dimensions broader than armed conflict and broader than the CCW: it is not just an issue of IHL or weapons law, but also of human rights. So I wanted to use a bit of time to focus on the dangers in the law enforcement context, where the use of force is governed by a different threshold from that which applies in armed conflict.
     So from our perspective, the use of autonomous weapons systems in this context would be inherently unlawful, as the international law and standards governing the use of force in policing rely on nuanced and iterative human judgement. As Ram said, there are challenges in dealing with complexity. We are talking about an exceedingly complex decision that should not be delegated.

A law enforcement officer must continually assess a given situation in order to, if possible, avoid or minimise the use of force.
     I'm not saying legal determinations in the context of armed conflict are simple; what I am saying is that legal determinations in the law enforcement context are exceedingly complex. If such decisions were to be delegated to a system, given the complexity of the issues we need to address, the system would have to be so complex as to fall outside of meaningful human control. In other words, a machine sophisticated enough to attempt to adapt to subtle environmental cues would be inherently unpredictable, so we come back to the question of how you evaluate that with something other than intuition.
     You know, this becomes a significant issue in terms of accountability, because it would blur the lines of responsibility and accountability and would undermine the right to remedy. The last thing I wanted to point out is that the use of autonomous weapons systems in law enforcement would be dehumanizing: it would violate the right to dignity and undermine the principles of human rights compliant policing.

I think one of the panelists already addressed the issue of bias in algorithms and systems. There are risks of systematic errors and bias in the algorithms in autonomous systems. We have documented that complex systems can produce biased results based on biased data. Facial recognition can lead to profiling on race, ethnicity, gender, origin and other characteristics, which are bases of discrimination. Imagine adding lethality as a component to such a system.
     This is one of the reasons, stepping back, we see value in the process at the General Assembly, because it has a scope broader than the CCW context, thank you.

>> WOLFGANG KLEINWACHTER: Thank you very much. I think we have time for one or two questions. So you need a microphone to ask the question?

>> Yeah, I hope you can hear me. So thank you for this wonderful panel. I think this is a very important issue. My name is Hiram from Indco Justice, part of the Campaign To Stop Killer Robots. We had a member go to the meeting in Geneva, and it was notable to see only two data scientists, only two people from the technical community. It seems the relevant technical issues are overlooked, with these systems expected to be reliable and predictable, which is kind of a gamble. I think the question is: what are the bottlenecks in understanding for diplomats or government bodies to work towards an international treaty banning autonomous weapons systems?

>> WOLFGANG KLEINWACHTER: Ambassador, can you take the questions?

>> It is an organisation within the Stop Killer Robots campaign.

>> Thank you. My ambition as Chair is to include as many voices at the table as possible. That is why the Chair has been actively encouraging the involvement of stakeholders and other organisations: not only the signatory and observer countries, but also academics, NGOs like Amnesty International, and the ICRC, to get a full picture and involve everyone. We are trying as well to reach some agreement amongst countries. I understand the limitations you describe, but at the same time we feel an urgent need to be ambitious. We have been ambitious with REAIM and in tabling this resolution. I understand from the contributions that it will be difficult to reach agreement ‑‑ actually, any agreement in this area. But without ambition, you won't reach anything.

>> WOLFGANG KLEINWACHTER: Thank you very much. We have two questions online, then we have another here in the room. So could we hear the first question online.

>> Yes, hello, can you hear me?

>> WOLFGANG KLEINWACHTER: Yes, we can hear you.

>> Yes, hi. This is Milton Mueller from Georgia Tech. On the Oppenheimer problem: one of the main problems facing AI governance is the belief among several AI developers that they have put us on the path to an autonomous super-intelligence so capable it could result in the destruction of humanity. Two and a half or so years ago we had this massive panic, and the Future of Life Institute said we should stop all development of AI. It was those people who believed they had passed an Oppenheimer moment, that they had discovered a power so awesome it was comparable to Oppenheimer's weaponization of atomic fission.
     Those of us who have investigated this problem now know this is a myth. The idea of a super-intelligence that is imminent and has the power to destroy all of human civilization is not realistic. I think your discussion of the lethalisation of autonomous weapons has been more grounded in reality. I want to know whether we are not headed towards a sort of revival of the myth of a super-intelligence that is autonomous and capable of destroying humanity.

>> WOLFGANG KLEINWACHTER: Let us first take the second online question, and then reply to Milton.

>> Hello.

>> Yes, we can hear you.

>> So these concerns about AI are very much shared by business leaders. There was a recent point of view that another region is in the race to develop AI, and that if we slow down or withdraw from the race, they will win. So they will stay in the race and continue developing without safeguards; after we win the race, we will worry about the safeguards. Shouldn't instead the governments and all the actors get into the same room and try to achieve a solution, either at the UN or at a conference centre like Potsdam or some other historical place?

>> WOLFGANG KLEINWACHTER: Thank you. My proposal is, we give the room the possibility to ask questions, then we have a final round. You need a microphone. One, two, three, four. Then we close the queue, have a final round among the panelists, and the Ambassador will make a final remark.

>> Thank you. I had a lightning session where a military drone which cost $500 was able to take out a $10 million tank. This is technology which is now actually being used, and there are many attempts, including successful ones, to implement AI on the battlefield, from swarm drones to a mothership connecting to HQ, or a Starlink antenna glued to a drone flying high in the sky.
     What I have to say is, I have been thinking a lot about how we can protect our future from AI going rogue in some way. It is then not a battle between humans and humans, but between humans and robots, basically.
     I think we are going the wrong way in the design of our attempts to regulate AI, because you cannot regulate the development of AI. It is super-rapid, and I actually agree with you and hear you out on that. What we could regulate is giving it weapons. The problem of AI going rogue is the problem of AI, intentionally and on its own, pulling the trigger ‑‑ pulling the digital trigger of a pistol or of an intercontinental ballistic missile.
     If AI is limited by an inability to access a weapon equipped with a digital trigger that AI can use, you can protect yourself. It may sound weird, but a human should only be killed by God or another human. There should not be any robot doing this trigger-pulling, thank you.

>> WOLFGANG KLEINWACHTER: Okay. Thank you very much. You need a mic, take this one.

>> Can you hear me? Ottom Prij. I was on a panel related to public and private sector cooperation, and this was the subject there. There has been an ongoing theme about the fact that legislation is consistently falling further and further behind and finding it difficult to keep up.
     How would you comment on the fact that, while we are still here trying to discuss conceptual ideas about how to control these systems, there are private sector companies such as Helsing or Anduril already deploying them in live conflicts? They are, in a way, superseding the discussion by the sheer fact that they are already using these systems. What do you see as a solution to these problems?

>> WOLFGANG KLEINWACHTER: Okay, thank you.

>> All right, thank you very much. My name is Kunli Adari, President of a UN Society chapter and a researcher. Interestingly, I recently wrote a book, published on the (?) platform, on this subject matter, that is, generative artificial intelligence and terrorism. That is what drew me to this session, because I really want to know more about what has been discussed. I listened to one of our panelists,

    I mean, the perspective they drew was so interesting to me, because in my own paper I was looking at utilitarianism (?): okay, fine, let's look at how we can consider the use of AI from a moral perspective. But then, when I was working on my paper ‑‑ and of course I set up a focus group of experts who speak to those issues ‑‑ I discovered that this is going to be a bit tricky, because I have to go to the extent of defining what is moral, which of course all of us will add to.

     In utilitarianism, looking at maximum effective use in terms of good use: I might consider something a good use, and another person would say no, that is not a good use. So I discovered there are so many perspectives. Then I heard the perspective of one of our panelists who said we now need to look at the issue of data, that all data is inherently inaccurate. That connected to the utilitarianism and to the deontology. I was thinking it is time to start thinking about these issues, because they have arrived and nothing has been done about them. The best thing is to take this to the next level. The issue of a treaty will definitely come, but I think we need to consider how we can ‑‑ the IGF is just a forum where we discuss these issues and elicit ideas, but there is no binding treaty.
     So I think we should look at how to take this to the next level, maybe to a plenipotentiary, where there are development arms that discuss the issues, so that something comes out of it that is binding and in force. And within a plenipotentiary, is there a forum where we can discuss the issue of standardization when it comes to AI? Thank you very much.

>> WOLFGANG KLEINWACHTER: Thank you. We have a final question here. Then we have a final round around the table. We start with (?). Not too long, not too much. Can you introduce yourself and ask your question?

>> Hi, I'm Lida Lindsey, a local digital policy expert. My question was mostly covered, but we are seeing the deployment of autonomous decision-making in war. A lot of it is piloted and then demonstrated as best practice around the world by these private companies. So I wonder what the short-term solution is ‑‑ something that we can do today, that we can campaign for today ‑‑ just to make sure that we limit the impact of autonomous decision-making in war.

>> A lot of good questions. I propose that each of you picks what you want to address from your field of expertise. Kevin, then Jimena, and then we go around the table.

>> KEVIN WHELAN: Great, thank you. Maybe a couple of points about the complexity of the technology and the challenge of fully understanding it. I'm not a technology expert, but I don't think you or any of us need to understand the technology fully to understand what is at stake. I am not saying that you can necessarily create a system that is subject to meaningful human control. What I am saying is that if you cannot have meaningful control over a weapons system, that is a system that should not be deployed.
     Another point I wanted to address, picked up in a number of questions, is how to reconcile the argument that these are complex systems and we need to wait to see how they develop with the fact that these systems are already being deployed in multiple conflicts. That is absolutely why we believe there is urgency. What can we do? We fully support the call of the Secretary-General and the ICRC to negotiate a binding treaty by 2026.
     So I think what you can do is campaign on that behalf. Make your voices heard, talk about the urgency of the situation, thank you.

>> Okay, thank you.

>> JIMENA VIVEROS: Hi. I will touch on a little of everything ‑‑ can you hear me? Okay. As I said, AI is a new monster, and in the security domain it is an even bigger monster, so we need to reimagine what governance looks like. The traditional models of governance we have seen so far have proven not to be the most adequate. We need multidisciplinary approaches and engagement with industry, of course, to promote and to kind of guarantee that there is going to be transparency and some type of cooperation for enforcement. Otherwise we are just drafting dead paper, as we would say. We definitely need capacity-building.

As I say, a capacity to respond, especially in the global south. In order to make that happen, I think everyone, from wherever we are standing in our trenches, can speak to our policy makers and demand that this becomes binding. Otherwise we will be stuck in the same place. I do believe it is very important to talk about standards, which were raised, because that is the only way we can actually verify, in a measurable way, the type of guardrails and how not to override them. So this is very critical for the way forward. That is why I say we need to reimagine the way governance for this technology needs to happen, and we need to do it very fast and very agilely, because we are way behind where we should be.
     So it is terrible that these systems are already being field-tested live. There is no other phase in between; they are just deployed, and we are seeing the consequences all around the world. Again, the global south is the one bearing the worst part of it.

>> WOLFGANG KLEINWACHTER: Thank you. We are being pushed out of the room now, so Ram, you have one minute to make a final comment. And if Chris wants to say something, fine.

>> RAM MOHAN: Thank you, Wolfgang. I will be brief. We should recognise there are no unbiased and accurate AI decisions. We need to recognise there are dependencies, and I think the important thing here is to build risk management frameworks that mitigate both the known and the unknown risks that are accelerated by machine learning systems.

>> WOLFGANG KLEINWACHTER: Mr. Ambassador.

>> I understand we are behind. That is a big concern for us all, but it does not excuse us from working hard towards an agreement. So we are fully committed, as Chair of the GGE, to work hard. We are happy with the informal forum in New York. As Chair we will be briefing other countries and the newer community on the development and work of the GGE, and we will really keep on working and trying to achieve a result in 2026, the task given to us, and we feel responsible for that. So we are working towards a legally binding instrument to prohibit those autonomous weapons systems that cannot be used in accordance with international law and to regulate other autonomous weapons, a concept supported broadly by many states. It is my hope we can ultimately enshrine it through a new protocol to the CCW, thank you.

>> So Wolfgang, just ‑‑

>> OLGA CAVALLI: Especially what Ram said poses big challenges for universities and everywhere: to have a multidisciplinary perspective. This is challenging for universities because each faculty is very much focused on its own field. Hearing you, I think we really have to have a broad understanding of the technology. Thank you for inviting me.

>> CHRIS PAINTER: So finally, on Milton's point, what gives me hope is that we are talking about use cases, not just the specter of AI or some giant monster, but looking at how it applies to autonomous weapons. I agree with the comment made about focusing on several levels, including risk management frameworks. Autonomous devices are not new; we have been talking about this for 30 years, and AI adds complexity. A lot of people use AI as a talisman that is supposed to mean something, so getting down to brass tacks on use cases is important. I don't think we are in the same loop we were in before. And on locking people in a room and hoping they come up with an agreement: I agree with Ernst that if you don't have ambition, you won't achieve anything, but I think it will not produce results in the short term.

On capacity-building, as Olga said, I think that is critical to awareness, and critical not just for the global south but more generally. I would note that the Global Forum on Cyber Expertise, as a capacity-building platform, has created a working group on emerging technologies and AI, applying them in a cybersecurity context, but it also covers aspects we talked about today. So I think capacity-building is another practical thing we can do while we are talking about what the constraints, the treaties and the norms are as the technology develops. So also thank you for having me here.

>> WOLFGANG KLEINWACHTER: Thank you, Chris. Final word comes from Ambassador Gregor Schusterschitz.

>> GREGOR SCHUSTERSCHITZ: Thank you. A few sentences to summarize the discussion we had today. I think it was very good to have various experts from various fields show the risks and consequences that unregulated autonomous weapons would have. This time pressure is why this is being called the Oppenheimer moment: we need to keep up with developments and find regulation. I think that was clear to everyone. But we need regulation that is very smart and targeted and keeps pace with the development. This is not the first area where we have rapid technological development and need to regulate to a certain extent. Of course, we require a multistakeholder approach here. We cannot rely only on the diplomats and military experts in the room that is trying to regulate; we need scientists, software engineers and Civil Society to find a way to regulate autonomous weapons that is also flexible for future developments, thank you.

>> WOLFGANG KLEINWACHTER: That is the start of the beginning, and we will see you at the next round of informal consultations in New York.