IGF 2024 - Day 3 - Press Room - [Judiciary Engagement Session] AI in the Judiciary Usage, Regulation and Ethical Concerns

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR:  Very good afternoon, ladies and gentlemen.  Welcome to this parliamentary track session that has been organized by the IGF.  We are going to have a very interesting discussion, so I will first introduce my speakers here and then give you an update.  I'll explain, in the form of storytelling, what this session is all about.  Imagine that you move to a new town and buy a new house.  When you move in, you realize that it does not fit your preferences.  The window is too small, the nails are too old, the chairs and the furniture don't work.  This is what we are doing, having moved from the physical world to the digital world.  In the judiciary, we have to do quite a lot of renovating.  We must change some of our furniture, we must change some of our values.  However, we cannot do this by throwing away what is essential.

So, in the light of the principles of judicial conduct, we're going to discuss the effects of artificial intelligence on the core values of judges.

Before I tell you what those core values are, I will introduce the team.  These are the engineers telling us which furniture to throw away, which window to board up, which outdated glass to take out.  So, by the time judges sit down and use artificial intelligence, they know the work that has been done by these fantastic engineers with me.  We have Alexander, a lawyer and practicing ‑‑ why don't I give each one a minute to introduce themselves.  Please.

>> Hello, dear audience, nice to meet you.  Thank you for inviting me to this session.  My name is Rita and I work in Russia, in dispute resolution and communication in digital technology ‑‑

>> So, I come from Serbia.  I'm a president and University professor, formally at the University.  I cover sovereignty in artificial intelligence, so I will make some notes about it during this session.  Thank you.

>> Can you hear me?  I'm sorry.  Hello.  My name is Fatel, Chairman and CEO of the MLI Group.  Most of you may not recall where my journey began, but I was around in the early days of the Internet.  In 2004, when the Secretary‑General wanted to start a conversation on something called Internet governance, I was one of the experts who was invited, and it led to the creation of the IGF.  Plus, I'm a book author; I've had a couple of international best sellers.  And when people ask what I do, I also tell them I'm a babysitter.

>> Hi, everyone.  Good to meet you.  I'm Ada, a lawyer by training.  I passed the bar in New York State, so hopefully I can bring the common law perspective to this.  Now I'm at the Center for European Analysis, a DC‑based think tank.

>> MODERATOR:  Thank you very much to my engineers who are going to assist us.  I will also throw the ball to the general public to assist us in getting to know each other.  At the top of the list is my colleague Roman from the IGF.  This really is an engineering work, and now you can get to know what those core values are.

If you Google the Bangalore Principles, you will come across a very thin document whose full title is the Bangalore Principles of Judicial Conduct.  The United Nations Office on Drugs and Crime authored this book, and in July 2006 the United Nations Economic and Social Council adopted a resolution that the Bangalore Principles would become part of the UN documents and would complement the Basic Principles on the Independence of the Judiciary that were enacted a few years earlier.

So, every single judge in the world, everywhere, must abide by the six values of judicial conduct.  We will go through them.  The first one is independence.  Every judge, every member of a tribunal, must act impartially ‑‑ I'm sorry, must be independent ‑‑ and countries must ensure that judiciaries are independent.

The second value is impartiality.  The judge must be completely impartial.  It is good to hear from the engineers whether, when you use AI, there is algorithmic justice that serves that impartiality.  The third value is integrity.  A lawyer, a judge, must exhibit integrity at all levels in the office.

The fourth value is propriety.  Propriety means everything that you do must be proper.  You should not speak in a very loud voice to a client; you should not dress like you are going to a music theatre when you are a judge.  You should not become drunk ‑‑ and whether AI takes that out of the equation, because your workload will be reduced, remains to be seen.

Another value is equality.  You should treat everybody equally, and you should defend all equally.

The last one, which everyone runs to because it can easily be explained with AI, is competence.  Many people think you become more competent when you use AI, but these values must be upheld cumulatively.  Now I will start with Alexander, who is a practicing lawyer and also works in the alternative dispute resolution area, to shed some light on the relationship between artificial intelligence and judicial independence.

>> Thank you very much.  So, I think that independence is also about avoiding dependence on technologies.  Let's begin with the Russian perspective.  When AI was first introduced, we were very hopeful that AI could help lawyers draft some simple documents and maybe give advice on simple issues, but the reality was not as bright as we hoped, and we realized recently that AI can lead to misinformation about legal material.  So, I will divide the doubts about using AI in the judiciary in particular, and in law in general, into two groups.  First of all, technical.  The technical doubts are that, as you know, artificial intelligence is based on datasets, and these datasets are the information on the basis of which artificial intelligence learns and develops.

And we as lawyers in telecommunication and technology realize that nowadays there are no datasets that include all the relevant information about legal issues: written or unwritten laws, judgments, or the actual drafts of documents.

And that's ‑‑ that refers us to the problem that if you ask, for example, GPT ‑‑ Chat GPT, you refer to a judge's work, yes, you cannot get a clear or truthful answer of information because the basis of information is so narrow.  It's so narrow.

The other group of doubts or problems is subjective problems, because lawyers practice a confidential and conservative profession ‑‑ and that is the problem ‑‑ and at the same time, we provide for our clients a very confidential environment for their problems.  Nowadays, global IT corporations cannot provide such confidentiality in their products for lawyers to get help from them.

And in terms of independence, I think we cannot trust AI to solve legal problems or to help judges interpret norms, because no one can provide a 100% guarantee that such a product won't use confidential information and sell it, for example, to the other party in the dispute.

>> MODERATOR:  Thank you very much for that intervention.  We go to another lawyer who will give us a common law perspective.  Please.

>> Thank you so much.  Very interesting to hear from my fellow panelist.  From a common law perspective, perhaps one thing I would underline is how much the common law system relies on precedent, unlike the civil jurisdictions, in which precedent sometimes plays much less of a role.  So, if you have a system that relies so much on precedent, and those precedents can in the future be created with AI, then you start to embed in it a lot of the inequalities and a lot of the problems, if we don't address them at this stage.  And speaking about engineering, we have to think very carefully about how much both the lawyers and the judges can rely on those systems.  There are cases in the U.S. where lawyers used generative AI to generate their opinions and court submissions, and the AI just hallucinated cases.  It looks very convincing, but in fact it's just an AI hallucination.  That is the first point I want to draw your attention to.

The second one would be that using AI in the judiciary helps if you already have a digitized system.  Think about Estonia, where everything happens remotely and it's natural to be in front of the judge online.  I come from Romania, where the court archives are very much physical: you have to go there and scan things.  So the two go hand in hand.  Countries that are not so digitized and don't have their archives ready have to do that work first, and make sure they do it properly so they don't leave out important parts of their system.  I think a lot of civil jurisdictions in developing countries could relate to that.  Thank you.

>> MODERATOR:  Thank you for that.  Now I move to the second value, and here I invite Milos, the President of Open Link, to tell us: from your professional perspective as a global citizen, how do you think AI will impact impartiality in terms of gender, race, and income?  Do you think AI will equalize or exacerbate inequalities?

>> Thank you very much for the introduction.  So, it's my honor to speak on this panel on artificial intelligence, especially in the judiciary system.

We are noticing that artificial intelligence is transforming our societies like never before, and this topic, specifically the judiciary system, is extremely important for us, because it defines a leap, definitely.  The judiciary system is a pillar of justice and democracy, and everything is changing right now.  So, I would like to describe one example to illustrate.  A judge in Colombia made headlines when he used ChatGPT when deciding a case.  The chatbot provided case law, reasoning, and explanations, ultimately contributing to a decision that the plaintiff found fair.  But what if the outcome had been different?  Would public trust in such a decision be as strong?

So, this example brings us to a crossroads.  On one path, we see much potential for efficiency and fairness; on the other, we face serious ethical and practical challenges.  And when we discuss the judiciary system, we must discuss the differences in every country.

Artificial intelligence's most significant promise lies in its ability to process vast amounts of legal data swiftly and correctly ‑‑ a task that could take human judges days or weeks ‑‑ because it's all about analytics and how we analyze the data.  So, on the one hand, AI can automate tasks like screening cases or assisting with research, saving time and resources; it can also help judges avoid human errors by providing alternative perspectives or even finding arguments.

So, China offers a real‑world example.  Judges across the country use machine learning‑based systems to support their rulings by flagging potential errors.  And if a judge disagrees with the AI, they are required to justify their decision in writing.  This is one example of how AI is entering the judicial system.

So, this highlights an important point: AI can be a tool to support human judgment, not replace it.  It can promote consistency in legal rulings and make justice systems more accessible and efficient, especially where they are overloaded.  For example, I come from Serbia, a country in Central Europe if you don't know it, and we have a real problem: in many cases, our justices cannot complete every case.  So this is a really good example of how technical systems, technology generally, and especially AI, could help us change everyday practice.

Speaking of challenges, we have significant ones.  First, algorithmic bias remains a concern.  Systems based only on algorithms can be very challenging for society, and AI systems are trained on historic data.  When we speak about AI and machine learning, it is all based on fundamental building blocks: algorithms, and different algorithms.

So, this may reflect inequalities in the social system.  In fact, AI could amplify existing biases, leading to discriminatory outcomes in sentencing, case evaluations, or legal rulings.

So, can we allow such system to determine justice without understanding the principles of humanism and equality?

Secondly, there are serious ethical and regulatory dilemmas.  Who holds the accountability for an AI‑driven decision gone wrong?  Who will be responsible?

>> MODERATOR:  Thank you very much.

>> Third, cybersecurity shouldn't be ignored.  Requires highly sensitive and legal data, and so these prompt the ultimate question of to navigate ‑‑ to this territory and I will ask this question and please allow me time, how can we be sure that AI systems remain ethical and transparent.  What regulatory frameworks must be developed to count on AI, to balance AI in the judiciary, should AI remaining supporting to or is there a risk that could replace the human judge entirely?  And who will benefit AI, judges, governments or institutions?  Today's sessions offer us a chance to reflect on the opportunities and challenges, but I think we should go into the details and we shouldn't allow AI systems without completely, you know, understanding and analyzing every advantages or disadvantages of this approach.  Thank you very much.

>> MODERATOR:  Thank you very much for that.  Among my four engineers here, one of them has a long story with the digital space, and I invite you later to talk to him about his book and how he has put a footprint on the IGF and maybe the fora at the UN level.  Now, the ball on integrity.  What do you think will be the impact of deploying AI in the courtroom and the judiciary in general?  We also have with us today an advocate, Siad Omar, from Pakistan.  The judiciary doesn't include judges only.  It includes lawyers, advocates, everyone working in the judiciary.  What do you think will be the impact on integrity, in terms of how they disclose their sources, how they cite the right cases, et cetera?

>> First of all, thank you for the introduction.  Let me qualify and disqualify myself: I'm not going to speak as an engineer, because I'm not an engineer.  But I have worked with engineers, and I understand the rationale of the question.

I think what is necessary to address the topic of this session is some reflection.  The session title has the keyword judiciary.  As I understand it, judiciary implies law.  Law includes international law.  And international law means delivering justice.  So, to put it in perspective, ladies and gentlemen and my colleagues here, let me quote Secretary‑General Guterres, who talked about how AI needs to serve humanity and not serve itself.

So, this is where I'm going to start by exposing how old some of you are.  Raise your hand if you remember the story of Dolly the sheep.  Anybody?  For those who don't know Dolly the sheep, and you'll see why this is relevant.  Dolly the sheep was a cloned sheep from another sheep in 1996.  Back then it was cutting edge.  Nobody had been able to do it.  And what's more fundamental to that was that less than a handful of institutes or governments could have done that.  It was very much restricted to a certain, small expertise of who could do this.

Next, I take you to the development of nuclear power ‑‑ the nuclear bomb.  How many countries today possess the nuclear bomb?  I can count them on two hands.  To actually develop nuclear power and a nuclear bomb, you need tremendous resources which are not at the disposal of the majority of governments around the world; and this is why those who possess nuclear power today are very attentive not to allow others to possess it.  So, you probably get an idea of where I'm going with this.  Then you jump, ladies and gentlemen, to artificial intelligence and quantum computing.  Earlier on we mentioned my book ‑‑ this is not a plug for the book.  The title is Survivability, and in that context quantum computing and AI are a significant component of what I talk about.  Quantum computing, which can now be accelerated through the use of AI, is caught in the fallacy of a supremacy race.

So let me put it into perspective.  Allow me a bit more time.  Few countries and few companies around the world can actually crack the nut of quantum computing.  Imagine the first company or the first country that cracks that nut: you know the other countries are supremely exposed.  The fallacy means that the moment one of them wins, all humanity loses.  No one is going to allow themselves to be compromised.  Therefore, it becomes imperative for us to come up with a treaty of some sort to control the development of quantum computing, and we're not there yet.

And now I jump to AI, ladies and gentlemen, and the perspective.  This may sound philosophical, but it is practical.  AI is also cutting edge.  It's Dolly the sheep, it's nuclear, it's quantum.  But what's more significant about AI, unlike any other technology advancement we've seen in history, is that it's at the fingertips of anybody around the world.  Therefore ‑‑ and I'm speaking now from expertise in cyber and geopolitics ‑‑ if you did not know this, a 15‑year‑old could shut down a country with just click, click, click, click.

So, what I'm bringing back to the point that I want to start concluding here with is the following.  Before we start talking about how AI can facilitate the judiciary or legal or marketing or influencing, we need as the Secretary‑General Guterrez said, we need to ensure AI advancement serves humanity and not serves primarily profitability, and therefore we end up having the wild, wild west and everybody is competing, and guess what?  Who pays the price?  We do.

I'm going to close by quoting something, and this is where you'll see it's putting it all in perspective.  In 2020, the COVID‑19 pandemic exposed western democracies systemic failure to prepare for a precedented threat.  This cost too many lives that could have been saved.  Moreover, it exposed great misfunctions in the global world order, legislative and democratic models and institutions.  It also crystallized cybersecurity defenses and designs accepted as gospel are normal fit for the purpose of defending a nation, business, and protecting the citizen.

In conclusion, we must create a regulatory framework to manage and control how the vision of the Secretary‑General is achieved ‑‑ may I just finish my breath?  Thank you ‑‑ so that at least we can ensure that the development of AI by all nations is for the benefit of humanity.

And there is a way forward, by the way, and we can talk about this at another time.  But remember: AI has the ability to create fantastic opportunities, but its ability to create devastation would be realized far earlier, and that needs to be controlled.

That's my conclusion and I'm sorry speaking of an engineer.

>> MODERATOR:  Thank you.  You can see he has a lot to share, especially from the geopolitical perspective.

>> I took the liberty of asking the person whom I quoted for permission to read it.  It is in my book; it's a back‑cover blurb.  I'm not making things up.  This is not new.

>> MODERATOR:  Thank you.

>> Yes.  Just very briefly, what my colleague said is absolutely the most important, because when we speak about technology, we all know that much technology starts as a military project, going back to 1916, so we can use technological advances to benefit humankind or to destroy everything.  It's all about critical technologies, and we know about the quantum (?) in Moscow two years ago and what Russia did about quantum ‑‑ it's the core.

When we speak about quantum, about AI, about AI transformation, and about judicial systems, I want to point to just one thing, because it is a good example of how AI can be harmful for some communities and even countries.  Hold on one second, please, just one minute more.  I'm from Serbia, and we have some challenges speaking about our country ‑‑ maybe people don't know, but some countries recognize it and some do not, and so on.  And when we ask a ChatGPT model what the highest mountain peak of Serbia is, ChatGPT will answer with a peak on a mountain which is not part of Serbia, according to U.S. and some western views.  So, this is how AI controls some narratives, and this is very important when speaking about history and about different aspects.

>> MODERATOR:  Thank you.  Thank you very much.

>> Yes.  It becomes about history.  So, when we want to use AI in the judicial system, this can also be very tricky if it does not have the truth ‑‑ what we think is the truth, and what accords with our values and traditions.  So, this is all about how we feed it and what data we give it.

>> MODERATOR:  Thank you very much.  This is the ‑‑

>> Why do you keep saying engineers?  We're not engineers here.

>> MODERATOR:  It's just likely ‑‑ please, switch off.  Okay.  Thank you very much.  By way of moderating is usually to give power to the people, and so we cannot monopolize everything as the panel and we are also required to hear from the people.  I will start by throwing a ball to a very honorable member of parliament from the neighboring country in Kenya who also happens to be an advocate from Kenya, and I will ask him to give his side of the story, Honorable because we're neighbors, Tanzania and Kenya are neighboring countries; and also, I know after his politics, he will ‑‑ meeting.  So please.

>> Thank you very much, moderator, and esteemed panelists.  I'm a Senator from the Republic of Kenya in East Africa, where, as the moderator said, we are neighbors.  I heard one of the panelists say that if you want to use AI in the judiciary, you must digitize the judicial system.

For example, in Kenya, as in your countries, the judiciary still has a very traditional, conservative way of doing things.

So, my question is, how do we develop?  Because I've heard my brother Milos talking about the reasoning, about precedent‑based thinking, where the judicial outcome was totally different.  How do we develop ‑‑ talking about machine learning and China ‑‑ how do we move from a traditional background to one where we have judicial independence?  There is what you call judicial independence, judicial activism, and also judicial criticism.  In my country, for example, people take it with a pinch of salt if you engage in judicial critique.  How do we do judicial critique and judicial independence vis‑a‑vis the use of AI?  I'm speaking about this because I was the former Senate Chair of Justice, Legal Affairs and Human Rights in the last session, because we serve for five years in Kenya.  Those are challenges where it is hard to use technology in the judiciary, and my brother the Judge, who is the moderator, will tell you that in Tanzania too the judges still use shorthand for taking pleadings and giving judgments and rulings; you find the judge entering their chambers to write in shorthand.  How do we make the transition to that technology without affecting precedent?  Because precedent refers to prior judicial decisions.

So, coming from a traditional or more conservative profession, how do we make this transition to technology and artificial intelligence?

Finally, maybe finally, with all due respect: as parliamentarians, how should we legislate, given that yesterday's discussion was about AI having no boundaries?  How do we legislate in terms of the judiciary and technology, and also protect correct narratives, because people are writing history?  Thank you, moderator.

>> MODERATOR:  Thank you.  Can we get a few more questions before I invite my panelists?  Yes, the gentlemen.  And then after that the Dr.

>> Hi, I'm from New Delhi in the academic research center and recently in partnership with UNESCO we're speaking to a lot of judicial stakeholders across South Asia, and it feels talking to them, there is some gap in the conversation that's happening in Civil Society and academia, and the consult maybe from judiciary because we're still talking about judges use of AI, judge's understanding of AI, which is fine, which needs to be talked about, but the problem that is very soon going to be facing or already facing that they are, when it comes to institutional deployment of AI systems, whether it is for digitalization purposes, whether it is for case sorting or classification, any sort of core assistance, judges are ultimately not going to be building those systems.  They are going to be evaluating systems that are built to them, sold to them, and ultimately they're going to take a call on what systems to buy, what systems to procure, so it seems that much more than investing on judges, understanding AI, which is ultimately more of an individual pursuit, and we perhaps need to invest in judiciary as an institution's ability to evaluate AI systems to understand what works for them, what doesn't work for them.  Why I feel, when talking to judges, that seems to be much more of their concern.  They do understand the risk associated with it.  It's not ‑‑ you know, they're used to dealing with issues of violence, they're used to dealing with ‑‑ so when it comes to algorithmic violence, it's about technical upskilling, they get that.  But ultimately the fear is that if Civil Society and academia doesn't step in to do it and if it's left to corporates, then five years down the line we see a situation where there will be a Microsoft of legal tech, where there might be interlock monopoly in the judicial market, so now maybe is the time to address that.  That was a short intervention.  Thank you.

>> MODERATOR:  Please.

>> Thank you so much, Honorable Judge, Moderator, for this session.  I wanted to share my contribution.  It's not a question, really; it's an intervention.  I wanted to talk about the Bangalore Principles that you spoke about ‑‑ principle number 3 on integrity, and numbers 4 and 5 on equality.  I just want to juxtapose that with the contribution and question from the Senator, as an advocate of the High Court in Kenya, in terms of their traditions.

I understand, you know, the judiciary has some customs and traditions that have been upheld for, you know, centuries.

But the issue of judges understanding and using AI does not compromise, for example, the issue of integrity.  Integrity does not come from the papers.  It comes from the heart.  It comes from the mind.  So how can artificial intelligence enter into a judge and erode integrity?  It is my humble submission that there are traditions that need to be done away with, because if you still have judges taking shorthand and retreating to their chambers, they are not upholding the principle of transparency.  If you are doing everything out in the open ‑‑ you are typing your notes and they are appearing on the screen in the courtroom ‑‑ people will know that their case is being treated fairly, and you build trust.

So, it is my humble submission that there are some traditions that need to be done away with.  I would like to comment a little on the impact of Digital Tanzania and what it has done to the judiciary in Tanzania.  The traditions we used to have in Tanzania included the need to file paperwork physically in a court.  Nowadays you can just sit at your laptop and file your case online.

>> MODERATOR:  Thank you very much.

>> Yes.  Thank you.

>> MODERATOR:  I will give the floor to my colleagues.  I will come back to Siad from Pakistan for a short intervention before we take another round.  We have a few minutes left to conclude this interesting session, but like I said when I was opening, this is a work in progress.  I just used the example of an engineer, but anybody in the field knows there are quite a few things to change; like was said, there are some old traditions that have been with us, and there are those which are very, very important to us.

Do you need to intervene?

>> Yes.  Actually, I thought the questions were very, very relevant.  Let me address the first question.  Apologies, I didn't catch your name.  My hearing is ‑‑ old age.

If I recall correctly, you're talking about how to ‑‑ the processes of utilizing AI in the process.  There are two things that I will answer you.  One, there needs to be at the national level a regulatory framework of compliance when it comes to an AI framework.  We are in an advanced stage of working on that where I am, and I'm not at liberty to share it and talk about it here, but question talk sideline after the session.

The other component is how judges use it, or staff, or courts, et cetera.  Just like in any legal format: if, say, Milos is my friend, and I put something out in my name saying he's a bad person, and I'm actually incorrect in labeling him that, I am accountable for it.

So, the point that I'm making here is any utility, any usage of AI to streamline, to become more efficient in any process, legal, marketing, whatever, must be reviewed by human beings, and human beings need to put their name on it.  This is a recommended process.

So, in other words, if a ruling needs to come out ‑‑ and I'm not a lawyer, but I understand the process ‑‑ if a judgment is going to come out and it utilized any form of AI, then before it becomes a judgment, becomes public, and bears the name of the judge or court, it must be reviewed by human beings to validate that what went into it does not contain any hallucinations.  Because otherwise you're creating precedent with hallucinations.

And we already heard that AI is reformatting what history is.  So, if you go and search for certain things ‑‑ and I'm not a politician here ‑‑ all of a sudden you discover that AI, and the powers that be that are actually controlling it, want you to believe that the earth is flat.  And guess what?  You allow this for the next few years, and all of a sudden there will be research papers referencing it, just like a lot of people reference Wikipedia, where a lot of the information is unauthenticated or untrue.  That's my two cents.

>> MODERATOR:  Thank you very much, colleagues.  I go to Ada to give us her perspective.  You alluded to real cases before, counsel; I was very impressed that you gave some real examples of cases that have happened.  Please, welcome.

>> Thank you so much.  Happy to be on such an active panel; I hear great thoughts from the audience as well.  Maybe to answer directly, first, on justice systems that struggle with going digital ‑‑ that is very much the case in my country ‑‑ perhaps it can be done sector by sector.  Something that could be ripe for that is a small claims court.  Perhaps you want to look into Estonia: they put that in place for small claims, but it has to be done with human oversight.

I wanted to talk about the external companies, which is a concern for every sovereign nation, but it doesn't have to be.  Here I will promote a bit the work of my colleagues from the People Foundation ‑‑ perhaps you attended their event ‑‑ who gave real examples of how you can build your own model.  It comes back to the data that you own, the data that you feed the algorithm, and there are Open Source models that you can use and perhaps implement, even in the justice system.  That could be a project done together with the justices, so they can get involved, take ownership of the process, and talk with the engineers who build it.  This does not require so many resources, and you could build for yourself something you can rely on and take ownership of ‑‑ not only the data, but also the knowledge that is in the system.  Perhaps you can get in touch with them and learn more about it.  Thank you.

>> MODERATOR:  Thank you very, very, very much.  Milos, please.

>> Thank you.  I just want to add one thing.  What Americans did great ‑‑

>> MODERATOR:  Please?

>> Okay.  I just want to add one thing about what the Americans did great in standardizing technology, especially what they did coming out of the IT revolution, TCP/IP, and we discussed this previously.  When we speak about artificial intelligence, they also standardize models, as mentioned.  So, if you want to build your own model, you can do it, but it's not only about the model, it's about critical infrastructure, it's about data, it's about processing the data, who controls your data, and in your country, and so on.  So, when we ask some artificial intelligence system in China some question, probably the answer will be different than with a U.S. system or a Russian system, if we ask about certain sensitive aspects, you know.  So, it's all about countries, and we came to technological sovereignty, and this is a really important topic: technological sovereignty, data sovereignty as part of technological sovereignty, and generally speaking critical infrastructure.  When we visited IGF two years ago, we discussed what the problems are in Africa: connectivity, infrastructure.  So, if we want to use technology, especially emerging technologies and critical technologies like AI, quantum, post-quantum cryptography, everything that we mention, the buzzwords of today, we should think about ecosystems.  Without ecosystems, without proper infrastructure as a foundation, we can't do so much.

So this is what we should think about, and I would underline that when we speak about technology, we have different aspects, but we shouldn't forget technology itself.  This all started in the United States, so speaking about technology from that perspective, I like how they think about models, about standardization, and so on, but it's up to us if we want to use our own models and our own approach, thinking about models of AI, and about different approaches on hardware, like China did, or some aspects of how Russia protects its own infrastructure and the data of citizens, and so on.  So, this is my view: if we want to do digital transformation, and if we want to transform our societies to be more digital and more human at the same time, we should think about it overall: critical infrastructure, data, and how we deal with the challenges.  Thank you very much.

>> MODERATOR:  Thank you very much for that.  Very broad.  Alexander, please, very quickly.

>> Okay, so to address the last question about quality and integrity, I think that we should divide the types of legal work that we can trust AI to decide.  Because, as I have seen in practice as a lawyer in Russia, when a judge interprets a norm, written law or unwritten law, he usually bases his decisions not only on his experience as a lawyer but on his experience as a person.  And that is very important, and I don't believe AI in its current state can reproduce this experience as a person, because it's based on a system.  It is not human, and we need to understand that.

So, my suggestion is to use AI for research work, for simple tasks in the judiciary, but not to trust AI to decide on human lives.  For example, in criminal law, yes, and in civil law for citizens, not only for businesses, I believe that such questions still need to be decided by a person, with his experience as a lawyer and as a person.

>> MODERATOR:  Thank you.

>> I just want to confirm one thing.  Recently the Presidents of the United States and China discussed nuclear weapons and AI, and they agreed on one thing: that they will not allow AI systems to control military facilities and nuclear weapons.  So, when we think that the world is crazy, there are some good decisions, you know, and people do think about critical infrastructure, and that's what I support.

>> MODERATOR:  Thank you very much.  Milos is speaking in terms of the global big picture.  Thank you very much for that.

Now, we are finalizing.  In 10 seconds, I want to add to what was said: not to allow AI to affect decisions that affect people's lives, and then you also add lives and livelihoods, because some people might understand it as only a case of, let's say, capital punishment, where we're talking about lives and livelihoods.  Even in a domestic case, AI must not be utilized in any court-rendered judgment.  That's one way to put it, if you agree.  And now we need to be clear in the context so that people can use this as a guideline and not fall into that trap.  Sometimes people are looking for efficiencies, as a balance between efficiency versus effectiveness, and due process.

>> I'm going to the last quality or principle, which is competence.  We all need very competent judges.  You need to walk into the courtroom knowing that the judge hearing my case knows his stuff.  It's like walking into the operating room: you don't want the doctor who is doing surgery on your brain not to know his stuff very well.

For many, many years, thousands of years, there has been a very strict way of getting judges who know their stuff, have information at their fingertips, and weigh the evidence that is brought to them.

And lawyers or advocates are like the litmus test.  They're the ones that say whether this judge is competent or not, even if they don't say it loudly.  Siad from Pakistan, I throw the ball to you.  Do you think AI will make us more competent or less competent, and why?

>> Yeah.  This is very important.  I am from Pakistan.  The thing is that it depends how you utilize things.  AI at the same time can be good and at the same time can be bad, like the way it is affecting higher education; it is the same in Pakistan.  The research world in Pakistan is affected by AI.  When it comes to the judges in Pakistan, as in other developing countries, our issue is not having AI in the courts; it is having an independent judiciary, against political intervention in the judiciary and in the appointment of judges.  It depends on the areas, the places: if we think about Europe, they utilize it in a very good way.  When we think about Asia, Africa, it depends.

So, I believe, yes, it can help in terms of the judgments, the speeding up of cases.  If you write a judgment and you use the help of AI, you can make it do wonderful things.  I believe, yes, it can help.  But at the same time, it may take away the creativity of the judges.  As you saw in the last session, a judge was removed from his job just because of the use of some non-legal material in a judgment.  I think it depends on the areas as well.

>> MODERATOR:  Thank you very much, advocate Omar, for that intervention.  Ladies and gentlemen, I was told that this is ‑‑ so unfortunately, we will not have another round of questions because I was told that we must end at 1:00 p.m.  But my panelists, could each one give just one sentence?  One sentence to finalize?

>> I think we should talk to each other.  This is what we learned in the session.  We can continue talking and learning and implementing the best practices.

>> MODERATOR:  Thank you very much.  Colleague?

>> And that will die.  You said one sentence.  And that will die.

>> MODERATOR:  Okay.

>> Foundation and give a chance for discussion and collaboration.

>> MODERATOR:  Thank you.  Alexander?

>> Dear audience, of course we need to use technologies more, but we need to be more conscious of what we use and where we use it.  So, please be mindful of using AI in the judiciary if you're lawyers or judges, and be creative, and be present.

>> MODERATOR:  I like that.  I will take that and remember it myself.

Finally, to conclude ‑‑

>> The view from Africa.

>> MODERATOR:  ‑‑ yeah, the view from Africa is to continue to be human.  Many people come to my court, as a judge, not because they really, really want a solution, but because they want a human being to talk to and to tell their story.  So, when I just listen carefully, somebody walks out healed.  That's what Alexander concluded with: to be human.  So, I hope that we will embrace AI, but we will still be human beings.  We will not be too strict on our laws where there are other ways of doing justice.  A big round of applause to my panelists here.

Again, thank you, Roman from IGF, and thank you IGF for organizing this.  I'm sure next year, if you happen to come, I will have judges from around the world, and we will tell them the names of each of these engineers who have helped us to make our house better.  God bless you.

(Applause).