IGF 2024 - Day 3 - Workshop Room 5 - WS162 Overregulation: Balance Policy and Innovation in Technology

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR FIUMARELLI: We invite everyone to sit at the main table if you want, so you can be more engaged in the session.

Yes, we have one speaker who is stuck in traffic, in an Uber, but we will start the session and he will join later.

So okay, good morning and good afternoon, everyone, including those online. It’s a great pleasure to welcome you all to this workshop called Overregulation: Balancing Policy and Innovation in Technology, under the sub-theme of Harnessing Innovation and Balancing Risk in the Digital Space.

My name is Nicolas Fiumarelli and I will be moderating the session. I represent the Latin American and Caribbean Group from the technical community and it’s a privilege to be among such a distinguished group of panelists and participants. I am very glad that we have this quantity of people in the room.

I think the session title is very interesting for you. You know, we are in an era when we need to decide whether to regulate or not to regulate, so this is a hot topic nowadays. You know, technology and innovation have always been drivers of societal progress.

However, the fast-paced evolution of digital technologies, especially artificial intelligence, presents unique challenges. So how can we foster innovation without stifling it through overregulation? How do we ensure safety and ethical standards while allowing technology to reach its full potential?

These are some of the critical questions we are going to address today, but this requires collective deliberation, so you are all invited to share your ideas, and today we aim to address them. The session will be conducted, as you know, in this roundtable format to encourage equal participation and interaction among our esteemed panelists and the audience. To set the stage, I will just briefly introduce our panelists.

Following this, each of the panelists will take a moment to introduce themselves and share their motivation for participating in this session. Afterwards, we will dive into the core discussion, addressing some of the key policy questions. Toward the end, we will open the floor for questions from the audience, both online and on-site here, moderated by our colleague, Jose. So let’s meet our panelists.

First, Natalie, Natalie Tercova. She’s the Chair of the IGF in the Czech Republic, also a member of ICANN, and Vice Facilitator on the Board of the ISOC Youth Standing Group. You know, the ISOC Youth Standing Group, together with the Youth Coalition on Internet Governance, organizes youth-led sessions every year to bring young voices to the Internet Governance Forum. She is also a PhD candidate, focusing on the digital skills of children and adolescents, their online opportunities and risks, with an emphasis on online safety, online privacy, and AI. She has recently contributed to a report on AI usage in the medical sphere, exploring the challenges of deploying AI technologies in health care. Additionally, her work includes critical research on the role of AI in addressing child sexual abuse material, known as CSAM. So Natalie, will you please introduce yourself further and share your motivation for joining this session?

>> NATALIE TERCOVA: Thank you so much, Nicolas. Can anyone hear me well? Perfect. So, as you said, and thank you for summing it up so perfectly, I am representing the academia stakeholder group. I am a researcher; it’s my day job. I have recently been very much focused on AI and how it can impact the critical topics in my research, which on one side is the health system — for instance, finding health information online and how people trust health-oriented information provided by AI-driven chatbots and so forth. On the other side, I’m also invested in the topic of CSAM, as you mentioned: harmful content targeting children and the abuse of materials depicting children in intimate scenarios, where AI is perceived more as a double-edged sword. So I hope to tell you more about this during the session, as I feel this is a crucial topic. Thank you for having me.

>> MODERATOR FIUMARELLI: Thank you so much, Natalie. This is, as I said at the beginning, a hot topic, right? We are here to discuss whether it is good to regulate or not. There are several factors that push us to think about regulation, but on the other hand, you know, this could undermine human rights, like access to information, among others. And there are several ways to regulate. So we look forward to deep diving into these things so that we may arrive at good outcomes and some key takeaways on how policymakers can actually find solutions for these kinds of issues. Now I will introduce Paola, Paola Galvez. She is Peruvian and is a tech policy consultant dedicated to advancing ethical AI and human-centric digital regulation globally. She holds a Master of Public Policy from the University of Oxford and serves as the founding director of IDON AI Lab, UNESCO’s lead AI national expert in Peru, and a team leader at the Center for AI and Digital Policy. Paola brings a unique perspective from her work at the OECD and UNESCO on international AI governance. You must all be hearing about UNESCO these days, because countries all over the world are working on national AI strategies, and UNESCO has the RAM, the Readiness Assessment Methodology. Drawing on her experience with the UNESCO AI RAM in Peru, she will provide some insights into balancing regulatory safeguards and fostering innovation on a global scale. So Paola, could you introduce yourself and tell us about your motivation for this workshop?

>> PAOLA GALVEZ: Good morning, everyone. Thanks so much, Nicolas, and thank you all for joining this session. I think this is really a critical discussion to be having, but I will put a different opinion here: I don’t think the question is any longer whether to regulate or not to regulate. We are past that, in my opinion. What we are working on now is how to regulate, right? Let me go one step back, because you asked me to introduce myself a bit, and you did it so well. Thanks so much, Nicolas. Just to give a bit of an overview, my perspective here comes from a background starting in the private sector. I used to work at Microsoft for almost five years, and it was me saying: innovation must come; this does not prevent innovation in my country, a developing one.

I was advocating for lighter regulation. That was back in 2013.

When artificial intelligence was just starting in my country — in other countries it was far more developed — the topic at the moment was cloud computing.

So just to mention, that was my first perspective. Then I worked in government: I advised the Secretary of Information of Peru. That was a meaningful role; I contributed to the strategy and led the digital -- strategy. I understood what it was like to work in government and what the challenges are inside. I’m not saying whether that is good or bad, just that it happens.

Then I paused my career and went to Oxford to study. At the moment I am an independent consultant, contributing to my country, because I just finished the UNESCO methodology there; I can tell you more about it later. And also, in Spanish, IDON AI Lab.

I am trying to make AI benefit everybody through evidence-based regulation and capacity building directed at women.

Now, to go to the topic: what motivated me. When we last had this discussion, from moment zero I thought, yes — and now that I have a global perspective, as I live in Paris at the moment, I see this question is not only happening in developing countries. I speak with a lot of startups in Paris, and even they don’t know how to implement the EU AI Act; there are a lot of doubts.

I would like to start, and leave it here, by giving three key questions to frame this conversation.

I think first we need to ask: what is the public policy problem?

What we want to regulate comes from that first question. Regulation is needed to address problems, so let’s find the most adequate solution by first identifying the problem. That’s the first step.

Second, when to regulate. Map the regulatory instruments you already have: most of us have consumer protection codes or laws; not everyone has a data protection law, but that’s a good start, right? And intellectual property laws are in place. So let’s see what we have.

And third, how to regulate — with specificity in adapting these instruments.

Bringing a law to congress — and here I remember someone working in parliament — takes a lot of time. So if we can start by enforcing the laws we already have, that’s a good start.

There are several approaches to regulating AI at the moment, and there is no single best one; we need to find the right one according to our context. Thank you.

>> MODERATOR FIUMARELLI: Thank you, Paola. First, what is the public policy problem; then it’s about when to regulate; and at the end, how to regulate.

So now I will introduce Ananda, here on my left. He was stuck in traffic, but he made it. Thank you, Ananda, for being with us. He is an internet governance and global AI governance expert with extensive insight into global regulatory impacts. Please share more about your work and what you bring to our discussion.

>> ANANDA GAUTAM: Thank you. I have the hiccups. We work with young people on bringing them into global internet governance — not only global, but also capacity building at regional and national levels. That’s my major focus right now.

I am also working on different AI policies. Paola and I have worked together; I think that was part of our cohort.

We have been learning about how the developing AI landscape is affecting us. My perspective is more concerned with developing nations, because I come from the Global South — from Nepal — where there are many challenges and outdated legislation. Take the EU AI Act: the so-called Brussels effect is affecting legislation worldwide. Countries like Nepal are trying to build their own AI policies, but how do we build the capacity of those developing nations so they can build comprehensive AI policies that actually leverage the power of AI for development? And of course, how do we build the capacity of young people to engage in AI governance processes?

Another thing: when we talk about the digital divide, we have been seeing an AI divide — people having access to AI and people not having access to AI.

My focus would be on how we build the capacity of all stakeholders so we can eliminate this divide. I will get to other things in the second round. Thank you, Nicolas.

>> MODERATOR FIUMARELLI: Thank you. There are difficult areas, as you mentioned. Not every country is prepared, and each has different challenges with legislation. Legislation is different in every country.

In the light of the EU AI Act, that is a mandatory thing, and the internet is without frontiers. IP addresses don’t map neatly onto countries, so it is not easy to see how legislation from one country can regulate this, because it is a new frontier.

As His Excellency from Saudi Arabia said in the opening ceremony, we are in the AI divide now. This is a new concept that we need to take on.

We need to see how to leverage the power of AI, as you say, Ananda — and young people are using AI a lot, right?

Now I'm going to introduce James, who is our online speaker.

If the technicians can put James on the screen, that would be great. He comes from the private sector in Africa, where he focuses on innovation and the impact of regulatory practices.

James is going to share examples of how African innovation navigates regulatory challenges in the face of pressure, right?

So James, are you with us there? Please introduce yourself for the people here on-site — we have a full room — and share your interest in this discussion, please.

>> JAMES AMATTEY: Thank you, Nicolas, for that introduction. I'm from Ghana, and I come from product development and social development, where we have had to create enterprise products for education, insurance, and also banking. That doesn't happen in a vacuum. My friend Ananda was stuck in traffic, so talking about Uber: Uber is one of those innovations built on the advancement of technology, and one of those advancements is something we call GPS tracing, or GPS tracking. Without GPS tracking embedded in phones, we wouldn't have something called Uber. And without the internet, a platform like Uber wouldn't be able to locate our friend and get him to the site.

These are some of the things we tend to lose sight of sometimes.

When we try to build software and pursue digital transformation, we think it's all about doing customer research and market research.

But there are other societal and regulatory things that need to happen to trigger innovation.

So, for example, I just gave the example of Uber. Now there are certain examples around regulations. When we look at the internet, it was the breakup of AT&T that led to widespread infrastructure development and brought forth the internet, right?

So without that we would not have things like broadband.

It's not directly correlated, but the regulatory action that broke up AT&T, especially in the United States, is what sort of brought forth the advancements and the development of the internet as we know it today.

Now there are certain times when policy tries to get in the way of innovation.

I will use one of our local laws as an example: a very notorious regulation that came to bring a tax on, should I say, financial transactions online. Right?

Now the problem with that is that Ghana was at the beginning of digital financial literacy, so a lot of people were just beginning to transact online. The law itself wasn't bad, but the timing and implementation of it didn't have a lot of, should I say, public approval.

According to reports, there were times when the government of Ghana missed its revenue targets from the tax, and utilization of mobile money also reduced after the tariff.

Sometimes a regulation may be a good idea, but it's the implementation of the regulation that makes it hard for innovation to come forth.

I think, as much as regulation is important, we also have to look at the timing of the regulation, especially in Africa, where we are mostly still catching up to a lot of innovation. We do not have a lot of home-grown solutions, so most of the solutions we use are imported. So we have to take our time with regulation and make sure there's enough understanding, enough appreciation, and enough, should I say, uses — or use cases — for the technology before we try to regulate it.

Now I do understand sometimes the timing of regulation is important.

As much as we may not want any regulations, we also do not want a situation where technology runs so far ahead of society that it's very difficult for us to catch up.

So thank you very much for the floor. Over to you, Nicolas.

>> MODERATOR FIUMARELLI: Thank you so much, James.

You also touched on this idea of regulatory frameworks: regulation needs to be balanced, and we need public approval. There are different contexts in each country. You mentioned that the tariff reduced the use of digital money, and it could be difficult for someone who doesn't know how to use technology to transact with that money online. So the implementation of regulations is sometimes a challenge, especially for people without digital financial literacy, as you say.

Finally, we have Osei, who is our Online Moderator, taking questions from the chat. He will relay those questions and direct people here on-site to take the mic.

Osei, to ensure participation from our virtual audience as well as on-site, please tell us about your role in internet governance and in supporting this session.

>> OSEI MANU KAGYAH: Yes, hello. This is the session "Overregulation: Balance Policy and Innovation in Technology".

I will be your Online Moderator. If you have a question, please raise your hand or put it in the chat box. This topic is very, very interesting to me.

As you put it, how do we go about regulation? The issue of whether to regulate, I think we have moved past. Is regulation a silver bullet? And if we are going to regulate, how do we approach it? These are the nuances we hope to tease out.

We are very excited about this conversation; bring all your inputs and we will dive deeper. Thank you. Over to you, Nicolas.

>> MODERATOR FIUMARELLI: Thank you, Osei. We will begin the discussion of today's workshop.

Each speaker will have about 3-5 minutes to respond to the questions posed to them. Question number one: which regulations hinder AI, and how could they be reformed while still ensuring safety? Natalie, this is specifically for you, given your recent work, which also focuses on AI in the creation of this material, right? Can you explain what CSAM is and whether you see a need for regulation there?

>> NATALIE TERCOVA: Thank you, Nicolas. Some of you may have joined my lightning talk on Day 1; I hope I will not repeat myself here. Continuing the discussion on CSAM — survivors, society, and the well-being of those involved — we are now at the next step when we talk about AI and how it comes into play. It is deepening the issue we are focusing on.

First, let me start by saying what it is. Usually we talk about child pornography; these types of terms are better known to us. Here we are talking about CSAM: child sexual abuse material. It is broader, and it is something that can be manipulated. For instance, with AI models, in the Czech Republic we saw images created partially from already-existing materials that were not harmful, spread online — sometimes by the child themselves, sometimes by a caregiver, or by a teacher who captured moments somewhere on school property.

However, AI stepped in — or rather someone using an AI tool, to make the person appear naked, for instance — and suddenly such already-existing material was abused and transformed into CSAM. That's why we are focusing on AI in relation to CSAM. Now, with AI, we have had lots of discussions on how this could be a potential for harm, unfortunately. However, some people also see the potential to use AI for detection.

And there is this clash: can AI tools and newly emerging technologies be something that helps us tackle this issue, or are they going to make everything much worse? This is what I want to bring to this debate.

Just to give you an idea of how prevalent CSAM is, I have a few stats. In 2023, over 250,000 websites hosting CSAM were detected and taken down.

However, we also have to be mindful that this is just the tip of the iceberg. There is so much we don't know about — it could be in closed forums, on the deep web, and so forth. This is just the tip of the iceberg, and it is already so alarming.

If you look at specific pieces of material — pictures, videos — around 85 million were reported globally in the year 2021. This is a very alarming number.

We have to talk about deepfake technologies and how advanced editing tools make it easier for perpetrators to manipulate images and videos into CSAM. This is not only about those who use it for their own gratification; there is also big money involved. Perpetrators know there are people who are willing to buy such materials.

We are now trying to find a balance: ensuring that people can still use technology that has the potential to help us tackle not just CSAM, while also protecting privacy — above all the privacy of the most vulnerable, in this case, children.

Right now we are talking about potential software that can detect when CSAM is present. This also goes back to grooming — the act in which a perpetrator slowly manipulates a child. This usually happens through text, but detection would mean the software reads that text, and there is the big clash between privacy and safety.

So I'm excited to hear your opinions on this issue: where we can find the sweet spot — the balance between the right to use these technologies to our advantage, with all the benefits that AI and emerging technologies can bring us, while also minimizing the risks involved. Thank you.

>> MODERATOR FIUMARELLI: Thank you. As you mentioned, money is involved here; there are people who want to buy this kind of content. There is also the question of how to ensure this balance. AI brings a lot of innovation and creativity — if you see the advancements these days, it's useful in a lot of ways. And if you look at the policies on artificial intelligence and job replacement, something that is happening worldwide, the digital divide is increasing because of this — the AI divide.

But the question is how to protect privacy, as you said.

And how to avoid these kinds of practices. In my opinion, as well, some countries have an approach where, maybe, they can install software on mobile phones, or work with the mobile companies — of which there are only two or three, not so many.

Maybe it's easy for them to deploy this technology. But on the other side, we come back to privacy and human rights.

It's difficult to find the balance on these critical issues, such as child pornography, et cetera.

It's not a new thing; we have been talking about these issues for years. There are different views pulling in opposite directions. Thank you for introducing this topic.

Let's go to another area: policy question #2.

How can the policymakers we have — policymakers here — design more flexible regulations that allow advancement, as we were saying, but without compromising ethical standards or public safety? Here we are touching on ethics, right? We can mention bias, or copyright — different issues that are not on the same page as privacy, but related to emerging technologies. Paola, in your opinion, what regulatory approaches to internet governance have you seen?

Is there a particular model — the UNESCO framework or other documents out there — that is conducive to encouraging this innovation?

>> PAOLA GALVEZ: Thank you, Nicolas. There are different approaches, but they don't have to be unified into one idea; there could be a mix when policymakers decide.

I would like to describe five of them. This is not an exhaustive list — there could be more — but we have seen risk-based, human rights-based, principles-based, rules-based, and outcome-based approaches.

What I'm about to explain, you can read more about in the report on generative AI that was published this year.

The risk-based approach is the most common one; the European Union adopted it. It optimizes resources because it focuses efforts on areas with higher risks and minimizes burdens in low-risk areas. Among its advantages, it allows regulatory frameworks to be flexible and adaptable. But the challenge is that risk assessments are complex. At the moment we see the EU AI Office developing guidelines; there is no single model for how a risk assessment should be done.

I've seen the market developing different ones. But how can we be sure a given risk assessment is the correct one, right?

We are still in that process.

Second is the human rights-based approach, which, in my opinion, should be the best one. Why?

Because this technology brings, as Natalie and Nicolas mentioned, bias and deepening inequalities, plus various other challenges.

However, we cannot afford not to be tech optimists. AI is a reality, and it holds tremendous promise. I believe that when it is used wisely, it has the potential to help achieve the SDGs and help us be more efficient — and no, we will not be replaced, at least from what I've seen so far.

But human rights are at stake.

The human rights-based approach means being grounded in human rights law. The advantage is that the scope is not limited.

In fact, the AI system and its whole life cycle must be under this regulation: developed and deployed in a way that respects and upholds human rights. There is no doubt there. What are the challenges of this approach? Complexity and capacity — we know these systems are called black boxes.

It also comes with the complexity that human rights protections are sometimes broadly worded and hard to interpret, so we need lawyers specializing in international human rights law. That is what is lacking; in my opinion, we don't really have these people at the table. And there is an example of this in hard law — the instrument on human rights and the rule of law, the one putting human rights at the centre — but that is not mandatory. We have seen different processes under way; let's see how it goes. It sets basic principles and global standards in terms of what we want in AI.

The principles-based approach is the one most countries are adopting — the U.S., with its executive order on safe, secure, and trustworthy AI.

Singapore as well. What is it? Fundamental principles, right? Fairness, accountability.

It is intended to foster innovation. To your question, the principles-based approach could be the one that avoids stifling innovation while still, in a sense, protecting human rights through these principles.

Fairness, right? Do no harm. But it's not complete. Then the rules-based approach is the Chinese approach, with its generative AI regulation as an example. It's very rigid, with high compliance costs, but it lays out detailed rules, so it really doesn't leave much space for interpretation. That's what they apply in China at the moment.

Finally, the outcome-based approach: how can you measure the outcomes? It could go very badly, right? There's also a risk of a Brussels effect: doing a copy/paste is not the solution. We can take best practices where they are already in place, but it's hard to really see the results of the implementation of the EU AI Act; we can't tell at the moment. Also, I would say if you take one thing away, it must be public participation — meaningful participation, with all the stakeholders at the table, discussing their needs and finding a solution.

The UNESCO methodology has this public consultation process, and it brings to the table the opinion of the public and the citizens. Thank you, Nicolas.

>> MODERATOR FIUMARELLI: Thank you, Paola, for those five different approaches. I took some notes there, including some notable comments you made on each of them.

Along the same line, I will now ask Ananda: what are global examples of flexible AI governance that can inspire policymakers nowadays?

>> ANANDA GAUTAM: On the title, "Overregulation: Balance Policy and Innovation in Technology" — let's go back to the 70s. If the internet had been regulated at the beginning, in the first three decades before WSIS started, we wouldn't have the internet like we have now, or be talking and discussing internet governance. So regulation is not always the best way of doing what we call governance. Nicolas asked me what the best example is: the UNESCO framework is one of the greatest examples, which has been endorsed by, I think, more than 90 countries — 100 now, I think.

WSIS has been working on this concept too; this is how principles can bind people rather than legislation.

If you ask me, legislation or policy? I would go for a policy-based approach that would harness the power of AI, rather than regulating it into what we call overregulation.

Policy would be something that could promote business without undermining human rights. I reflect on what Natalie said: scammers are manipulating people with AI.

At the same time, cybersecurity tools are being developed that use AI technologies to detect attack patterns faster; sometimes automated countermeasures are applied. Many leading cybersecurity services deploy AI.

So these are some of those things. I think we are at a very premature stage of development. We only saw the power of generative AI when ChatGPT exploded; there are now so many tools available, and we have still only seen the power of generative AI.

One thing is that while people are using AI, we should be very clear, in terms of legislation or policy, about how it will be used by the public. Today, school children are using AI like ChatGPT to add to their knowledge. Will it give them the right knowledge or not? That is very crucial.

This is very important and needs to be considered; it is also covered by the different frameworks that are being developed.

But according to the national context, we need to work out how people will leverage these things.

If something is generated by AI, how do people distinguish it? Maybe we call this AI literacy. People need to know what they are using and what the consequences are. They need to be able to distinguish between what is generated by AI and what is not. I think those are the baselines we need to focus on. I will stop here.

>> MODERATOR FIUMARELLI: I like your ideas, Ananda. Principles lack enforceable mechanisms, and we have problems with each approach — as Paola said, with their adoption. Together, though, these approaches highlight multidisciplinary collaboration and tailored strategies.

So going now to our online speaker — James? We would like to see your face. From your experience, how has regulatory flexibility impacted African innovation, in your opinion?

>> JAMES AMATTEY: Yes, thank you very much.

I think your question is very interesting. COVID really catalyzed, or highlighted, the need for innovation over regulation. During COVID there was little regulation and a lot of innovation, and with that innovation we were able to control the spread.

And, should I say, manage the lifestyle change that came with COVID.

So we don't want a case where it's only emergencies that allow us to be flexible with laws. We want to adopt a habit of having that flexibility while keeping watch — like having a security man. You hope no one attacks you, but he is there when the thief comes.

I like the idea of policies over regulations — frameworks, you know, constructive ways of doing things that guide people on how to do things properly, rather than prohibiting what they can and cannot do. Of course, there are certain times when you have to do that. But as we are currently in the experimental phase of innovation, especially with AI, it is very important that we allow it to spread its wings so we can know what is possible and what is not.

In the African context, during COVID, for example, we had autonomous drones delivering COVID shots; they were delivering PPE. We had trackers that were used to identify COVID hotspots so that responses could be designed for them.

We had issues of flooding in Ghana and used AI to identify roads to help relief reach victims of the dam spillage in 2023.

We have done a lot of work around public health and health-focused mobile apps. All of these things have been possible through innovation. I think innovation and regulation should be teammates rather than competitors over who is right and who is superior.

I think we should collaborate more. Innovation should not be an afterthought, and regulation shouldn't be an afterthought either; rather, we can build these frameworks into innovation pathways and into our regulatory pipelines. Thank you very much.

>> MODERATOR FIUMARELLI: Thank you, James. Due to time constraints we are reaching the end of our session, so you each have one minute to answer. How can successful examples of AI applications, and the international frameworks we were talking about, inform a balanced strategy between innovation and employment — given societal impacts such as job displacement and critical needs like healthcare and industry automation?

So may we start with Paola, from your experience with UNESCO? Yes?

>> PAOLA GALVEZ: The question was very long, I will do a wrap-up.

First, think of local needs: what regulations do we have in place, and how can we complement them? Sometimes — and I think this is a personal opinion — we need an umbrella regulation.

National AI guidance must be mandatory. Why? Because the country needs to have a position: what the country wants AI to do and to be for its citizens. What is its policy on lethal autonomous weapons — prohibition or not? Surveillance: are we using AI for safety and security? Let's be mindful that it can target immigrants, minorities, or other vulnerable communities.

So it is very important how we are using it, and that means taking a position. That means regulation; that is law.

Whatever the position, please, let's invest in capacity development. It is key in terms of using AI: we will never be able to leverage the technology if we don't help our citizens understand it and use it well. Thank you — this is very condensed; happy to speak more later.

>> MODERATOR FIUMARELLI: Thank you. Natalie, if you want to make a one-minute contribution from the healthcare side — you are the expert.

>> NATALIE TERCOVA: Of course, I will try to be brief. It depends on the specific case. We sometimes have discussions about what we should do in healthcare, but this is such a broad concept.

Patient privacy — data protection for patients — and minimizing bias in algorithms when it comes to treatment and healthcare are, from my perspective, non-negotiable. We have to take this into consideration when we talk about healthcare and diagnostic tools that can assist doctors with critical conditions. These carry a much higher risk than AI tools or other technologies used for administrative scheduling systems, for instance — how we set a timeline, plan operations, and so on.

Again, it is so broad, and we have to take into consideration the level of risk involved. In light of this, I believe high-risk applications should undergo more rigorous review before they come into practice, while low-risk applications can proceed under lighter regulatory requirements; then we can really grow and focus more on innovation, moving faster and more effectively. It's about balance. I don't want to dive into more detail, but of course I'm happy to talk about it more; we recently conducted robust research on AI.

For example, whether people use it for their own health questions — if they ask ChatGPT, "oh, I have this issue, this is bothering me" — and whether they trust what the AI is telling them. One dimension is usage: people can just be experimenting with the tool, overall excited about these opportunities, but mindful that what is recommended to them is sometimes not the best. So we have some very interesting insights, and I'm happy to talk more about this — also over coffee. Thank you.

>> MODERATOR FIUMARELLI: Thank you, Natalie.

So yes, we only have a one-hour session, so you can reach Natalie over coffee and continue that conversation. Going to Osei: do we have any questions online? Maybe we have one question from on-site — the first to raise their hand can take it.

>> It's not really a question; it's a suggestion from someone online about human rights being at the core of the conversation. And I have a point for all of us to mull over: I think the central policy question starts from the lack of trust between the various multistakeholders. I agree with that. One example, argument A: a secretary of state for science and technology argued that tech companies should be treated like states because of the scale of their investments.

Argument B: another member argued that governance should be strengthened — the privacy of governance — and asked how to make sure these services are given their proper role within rule-of-law-based systems, and how we ensure stakeholders' privacy. So the common thread I have noticed in all these policy conversations is the lack of trust. Now, if you have any questions on-site, please raise your hand.

>> Yes, thank you so much. My name is (?) from Argentina. I have a quick question.

So, sorry — when regulating AI or technology, we have different layers or aspects: we have the users, we have the developers, and we have the training of the AI. In that sense, I see that users are already punished by the law in the physical world. But on the other side, should developers be responsible for what they develop? And should those who train the models also be responsible for the effects this has, or not?

>> MODERATOR FIUMARELLI: Maybe some of the panelists want to answer the question. We will also take the last question here to the right.

Okay.

Who wants to answer the question? Or we can go to the next.

>> Thank you, moderator.

Mine is on the topic of this discussion: have we really reached the stage of overregulation now with emerging technologies? We have seen regulation playing catch-up; it's usually technology -- I wonder if we have reached the stage of overregulation yet.

>> MODERATOR FIUMARELLI: Do you have a question as well? That will be the last one.

>> JAMES AMATTEY: I think I can answer this.

>> MODERATOR FIUMARELLI: Okay, James, answer and then drop off.

>> JAMES AMATTEY: There's a risk of innovation running ahead of regulation, but it all boils down to AI literacy — you know, truly coming to an understanding of what AI is and what it is not. For example, if you ask people what AI is, most will answer ChatGPT. But that's just one use case of AI.

It's not AI in and of itself, right?

We need to be able to build literacy programs for regulators, for developers, for users.

Then we can have an understanding of what the intersecting interests are, look at those intersecting interests, and be able to tailor the solutions to our, to -- the work for next year is literally AI literacy: building literacy programs that proliferate the knowledge of what AI is and is not, and what it should be allowed to do. Thank you very much. Happy to connect online.

My name is James. You can find me on LinkedIn.

>> MODERATOR FIUMARELLI: Thank you for your time, James, and for your valuable contribution. Thank you, everyone, for the engaging discussion. Sorry to those still waiting in the queue; we are six minutes over time. Today we explored the critical balance between innovation and regulation.

A special thanks to our panelists for their valuable contributions, and also to the audience for your active participation. Thank you, and enjoy the rest of IGF.

>> OSEI MANU KAGYAH: Thank you, online audience.

>> MODERATOR FIUMARELLI: We might take a photo in the front. Come here, everybody.