The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> MODERATOR: Welcome, everyone, to Regulating AI and Emerging Risks for Children's Rights. As part of this work, we are on the 2025 steering committee for a children's rights-respecting environment. We have worked very closely with a number of governments around the world to develop new policy and regulatory frameworks, in particular the Age Appropriate Design Code, which, if you haven't heard about it, we can tell you more about later.
And the reason we are doing this is that we have seen in our work with kids that there is basically a global problem: tech has developed ignoring children, ignoring the established rights that many people fought very, very hard for in the last century, and suddenly we have a new world order in which those rights are trampled upon. It is a problem because kids everywhere are using the same technology, living similar experiences and facing similar risks and similar harms. Luckily, a global problem can have a global solution.
So global problems and global solutions. We are working towards global norms for tech design, as I said, to ensure that those established rights are taken into account in the digital environment.
So today, AI. What we see is something which is not entirely new, but which is supercharging some of the harms and systemic problems we have already been addressing.
And we are looking, as I say, for global standards and a way of addressing this. Luckily there is clearly a rising understanding of and convergence on what we need to do, and political will to address these issues, in particular for children, based on their established rights.
How are we going to do it? Well, I am very, very pleased to be joined today by a very distinguished panel of experts and also young people who can help us define some of the things we need to take forward, in particular as this event is an official preparatory event for the Paris AI Action Summit, so we are looking for tangible, practical solutions to the issues we face.
With that, I am going to hand over for some opening words to our first speaker, Nidhi. Nidhi is a 5Rights youth ambassador, 16 years old, from Malaysia. There she is. Hello, Nidhi. Nidhi is a passionate advocate for children's rights and has represented children and their rights around the world, including recently at the IGF.
So this is your second time at an IGF. Nidhi is also an author, has hosted her own podcast, and was the first host of the 5Rights youth voice podcast. Check that out. So, Nidhi, over to you: tell us what is happening and what needs to be done.
>> NIDHI RAMESH: Hello, everyone, and thank you, Leanda. My name is Nidhi Ramesh. I'm so happy to be here and to be able to share my views and opinions on how AI impacts children all over the world, and what we can do to ensure its responsible use.
Beginning with my own experience, I think it's key to highlight that AI is on every platform and mobile application that we all use. When someone says AI, the first thing one might think of is GenAI and apps like Copilot and Replika and many others. We live in a world where AI is everywhere, but most of us can't even tell when we are interacting with it.
Whether it's through social media algorithms, voice assistants or personalized learning tools, AI often works in the background, shaping our decisions and experiences.
Many children don't realize that most of their interactions with the online world might actually be through various AI algorithms, making choices and recommendations and even decisions for them.
What is even more concerning is the misuse of AI by tech companies that put profit above children's safety and privacy, from harmful and addictive content to collecting data without consent. Many AI systems operate without safeguards for young people.
Most children don't even know they are being exposed to these algorithms, let alone how to protect themselves from virtual harm. That being said, I don't paint AI as a villain; built into platforms and used responsibly, it has incredible potential.
In education it's transforming how we learn. AI tools can provide resources, making learning accessible and inclusive. For children with disabilities or in remote areas this is a real game changer. But while the benefits are real, so are the risks, and we can't afford to ignore them.
One major concern is the erosion of creativity, as AI replaces self-generated content. As someone who has had a podcast, I know how hard it is to create something original.
My podcast is on children's awareness and on topics that are important to me. This also means I spend a lot of time writing scripts and recording audio and editing and more in order to publish a full episode.
It's a process that takes time and effort. Yet in this day and age, with the click of a button, one can easily find AI tools that can generate a script on any topic you give them, with a compelling description, AI speech and more, and use clips of your voice to perfectly imitate what you sound like reading this AI-generated script.
So, in a way, these AI programs can do in seconds what I spend hours and days working on. While that may seem convenient, it undermines the value of the hard work and creativity we put in. This isn't just about me; it's about every artist, writer and musician who is at risk of being replaced by similar AI agents.
I've also written and published books, like Leanda said, but nowadays it's so easy for AI to write something similar from a simple command, undermining so many creators out there who want to share their work with people. Perhaps this is more relevant to me as an individual, but the risks of AI need to be addressed, especially at the start. So what do we do about this? How do we make sure AI works for children and not against them? Tech companies and policymakers have a role to play.
First, AI governance, especially around social media platforms like Instagram, Facebook and TikTok. These platforms need to respect privacy, especially for children. Children need to know what information is being used and what data is collected. Transparency isn't optional; it's essential.
We also need AI systems with children's well-being at their core. This means algorithms that prioritize safety and health rather than exploiting young users for profit. Tech companies must be held accountable for the impact their systems have on young minds. 5Rights is working on design standards and bringing about more regulation on this, which I'm sure will be discussed as the panel continues. So I will leave it there. Thank you.
>> MODERATOR: Thank you very much, Nidhi. I should have let you go from the start, because you say it so much better than I ever could. I hope you will stay with us; I'm sure the audience online and in the room will have questions and want to interact with you afterwards. But we will move on to Dr. Jun Zhao, joining us from Oxford. Hi, Jun. Great to see you.
So Dr. Jun Zhao is a senior researcher at the University of Oxford. Her research investigates the impact of algorithms on our everyday lives, especially when it comes to families and young children. She takes a human-centred approach, understanding users and their needs in order to design technologies that can make a real impact.
Jun currently leads an AI design lab and a major research grant examining the challenges of supporting children's digital agency in the age of AI. So, Jun, what is the research telling us about the impact of AI on children?
>> DR. JUN ZHAO: Well, thank you so much for the introduction, Leanda. Can I confirm everyone can hear me all right? That sounds fine. I have some slides prepared, and I'm also happy to talk about the research.
Thank you very much for inviting me to be here. I wish I could be there in person, but I'm very much with all of you in spirit.
It's shocking to hear Nidhi's presentation and how much it resonates with our research evidence. As Nidhi said, AI is everywhere in children's lives, from the moment they are born to the education systems they use at home or at school.
And we see similarly wide adoption of these technologies if you look at surveys in almost any country. Also, as Nidhi said, with the rise of AI, children are rapidly embracing these new technologies. Our recent survey in the U.K. shows children are twice as likely as adults to adopt new AI technologies.
And an earlier survey by Internet Matters shows that a huge proportion of children in the U.K. are using GenAI technologies to help with their schoolwork.
So it's really exciting, and as Nidhi said there is a lot of great, exciting potential, especially where AI can be used to support children with their learning, including children with special educational needs who need extra support with their social and emotional management.
We also see some really exciting examples of how AI could help children by providing them with better health opportunities or early diagnosis of autism, which is an issue in many countries around the world. But we must also be cautious about whether all of these technologies have been designed with children's best interests in mind.
This slide shows research from last year, where we did a systematic review of about 200 pieces of work from the human-computer interaction research community, a community that prides itself on putting humans at the heart of the design process.
We tried to analyse how AI has been used in different application domains for children. It was quite interesting to see that education and health care have been the most dominant application areas, as well as, interestingly, keeping children safe online.
We then looked more closely into the range of personal data that is being used to feed all of these algorithms. And we were quite surprised to see that a diverse range of really sensitive data, like genetic data and behavioural data, can be routinely used by these AI systems, not necessarily with consent or assent from children, and not even necessarily needed for the function of those applications.
It was also interesting when we did a review of all the current AI principles out there last year and tried to map these recommended ethical principles to actual implementations. As everyone can see in this diagram, it's a very sparsely populated table. Even basic principles like privacy, safety and meeting children's developmental needs are rarely considered comprehensively in these applications that are designed for children, often in very critical areas.
So it is quite concerning to see how these principles are applied in experimental settings, and it's even more concerning when we see the practices taking place in real-world commercial settings.
This is a report from 2017. With the rise of smart home devices and smart toys, researchers very quickly identified serious implications and safety risks associated with cute cuddly bears.
Seven years on, much legislation has been adopted since then, but so has the variety of devices and smart home devices grown. A recent study by us, as well as many other recent studies, has shown that children's data can be collected by all of these devices, whether they are online or offline.
As researchers at a recent privacy conference conceded, individual children probably won't experience consequences due to toys creating profiles about them, but nobody really knows that for sure.
Here is another example: similar to the biases adults are subject to, children can also be exposed to biased decisions made about them by AI systems simply due to their race or socio-economic status, but often with much more lasting effects in critical situations such as criminal justice decision-making.
The rapid development of AI is often associated with rapid deployment, but ironically not always with sufficient safeguards in the design process.
For example, when chatbot-like technologies were deployed by Snapchat last year, serious risks were immediately reported, exposing children to inappropriate content and contact, even when they had declared an age of only 13 or 15.
Another thing that is quite interesting in our research is that although risks like exploitation have long been part of the safety conversation, privacy has rarely been discussed. When we began our work on privacy, we looked at the tracking behaviours of over 1 million apps from the Google Play store.
One of the most shocking discoveries from this study is the prevalence of data trackers embedded in cute apps used by children, often very, very young children: apps teaching them how to begin their handwriting, or how to pop a balloon so they can develop their fine motor skills. This is such a direct violation of children's basic rights and an exploitation of their vulnerabilities.
Seven years on from our initial research, what has happened? GDPR happened. We repeated our study, and it was quite interesting to see that tracking behaviour did not change immediately upon enforcement of the legislation. What did happen is that the app store made it extremely difficult for us to repeat and continue our data analysis. But we haven't stopped asking why there is all this tracking of children's data and how we can better protect them.
It is interesting to see a recent study published earlier this year that provides even firmer evidence of the exploitation of children's data and of the proportion of social media platforms' advertising revenue that relies on children.
So, just as Nidhi said, these systems are not designed with children's best interests at heart but with the companies' market gains in mind.
Several recent studies have made similar findings, showing that recommendation systems can actively amplify harmful content and direct children towards it. For example, this study showed that children identified with mental health issues could be more likely to be exposed to posts leading to further mental health risks.
Harmful content is prioritized because it is more attention-grabbing, invoking stronger emotions and prolonging children's engagement. Many of these studies are actually conducted through simulations, because researchers do not have access to the APIs or the code of the algorithms. But what happens when we talk to children directly and ask them about their experiences?
This is one of the studies we conducted last year, and the message is consistent with many other research studies out there. Children found the experience very passive and disrespectful, and many of them found it unfair that the systems can do this with their data and manipulate their experiences.
While such feelings of being exploited and disrespected can be hard to quantify, we must not neglect how these practices are fundamentally disrespectful of children's rights in many ways, and how the same algorithmic practices could cause different degrees of harm for children at different developmental stages or with different vulnerabilities.
I will leave the evidence discussion here for now and hand over to the other speakers, because we have lots of evidence of the fundamental phenomenon. It would be interesting to hear how the recent AI Act could or could not provide the protection that we need for children of this generation. Thank you very much, Leanda.
>> MODERATOR: Thank you so much, Jun. I'm going to step over here so I'm in frame. Thank you so much for that presentation of some of the overwhelming evidence.
And I think if you had a little bit longer you could have said an awful lot more. I would also like to point people to some of the research done by 5Rights: the Pathways research, using avatars, shows really clearly how algorithms drive children to very specific harms. And of course there is plenty of evidence from a number of court cases as well of children who
(Audio Difficulties)
I'm pleased to welcome a Member of the European Parliament, Mr. Brando Benifei, who was co-rapporteur of the AI Act and is co-chair of the children's rights intergroup in the Parliament. We are absolutely delighted to have you here.
>> BRANDO BENIFEI: I'm really happy I can be here for this session. I'm sorry that, due to an overlap with another meeting I have to attend because of the parliamentary programme (we have a good delegation here), I will have to leave soon after my intervention.
Maybe there is one question I can answer, but otherwise I will let the panel continue. I want to thank the 5Rights Foundation for the useful contributions it gave during the work on the AI Act. The original text from the European Commission was unfortunately completely lacking the dimension of child protection; it was not there at all.
So we had to bring it in with amendments from the European Parliament, with our drafting work and the negotiations that followed. So we have some specific dimensions of child protection inside the AI Act, not as much as we wanted, but there are also more general provisions that can be applied effectively, if we want to apply them effectively, to the cases we have just heard about.
That's why it's important that the Parliament, in the new mandate that started just a few months ago, has confirmed the existence of the child intergroup, the children's rights intergroup. I will be the chair, and it starts its work now. It's very important because it brings together MEPs from across the Parliament, with different perspectives, who want to work on children's rights.
Now that the AI Act is approved, we will be going step by step towards its application. This is crucial, because some of the issues you have been discussing with the previous speakers need to be checked in the way the rules are applied. For example, already in February 2025 we will see the full application of the prohibitions, which is a very important aspect of the AI Act. Among the prohibited uses there is also emotion recognition in the place of study. We wanted to avoid that becoming a form of pressure and intrusion on children in schools. So this is one aspect, for example.
Predictive policing, which can target minors from certain minorities, will be prohibited, and we also prohibit the indiscriminate use of AI-powered biometric cameras in real time, in a way that will prevent forms of surveillance that can infringe the privacy and protection of children.
We have also prohibited, for example, the scraping of facial images from the internet, which is something used to prepare generative AI or chatbots to commit some of the abuses that we have seen.
So we are trying to protect this data. But apart from the prohibitions that will kick in soon, we also have transparency provisions that will be quite important, in particular for generative AI.
For example, we demand specific protocols from generative AI developers to counter the capability of these systems to have the kind of inappropriate conversations that were exemplified earlier, and the production of inappropriate content that can be harmful to children. This is something that needs to be entrenched in the way the system is trained and deployed, and it needs to be checked periodically.
We also want to label AI-generated content. This is crucial to fight another issue that has not been touched on very much so far in this discussion, which I think is very important: cyberbullying and the cyber mistreatment of children, which is a major source of mental disorders, of attacks on mental health.
With generative AI you can have a totally new level of extremely dangerous cyberbullying of all kinds. This is something we need to tackle by preventing the production of such content, but when the content is already out there, at the very least it needs to be clear that it is not true, that it is fake, so people cannot be induced to think a person is doing or saying things that will make them feel ashamed and cause mental health problems.
Finally, I could give more examples of how the AI Act interacts with other legislation, but I want to stress that it interacts with the Digital Services Act and with the child-protection legislation that the European Union has been developing, and we think that this ecosystem of rules needs to work together.
As I said, I am the specialist on the AI Act, but if you put it together with the Digital Services Act and with the child sexual abuse legislation, you can build a proper framework of protection.
In fact, we want to continue a global dialogue. We are working on that; I am doing so with different governments and parliaments so that we build a framework of action.
That's why it's very important to have civil society organisations that can build links, not only between governments but within civil society, and parliamentarians have to do their part. It's important that we keep on track but also delve into these discussions about the topic, and we need to continue developing in this direction.
We hope we can give some
(Audio Difficulties)
But clearly we need to keep learning from each other and building together a body of acts and legislation, soft and hard laws, that can protect our children online. Thank you very much.
>> MODERATOR: Thank you very much, Brando. I know you have to leave. Do you have time for a quick question from the room? Okay, does anyone have a burning question for Mr. Brando Benifei? I have lots of questions myself, but I will have to keep them.
>> QUESTION: I am from Nigeria. I want to talk about the child. The family is the foundation for that child --
(Audio Difficulties)
When it comes to the internet. So don't you think there should be awareness, firstly to educate the families as to what they should do, to limit children at some steps before going to the next level, before the tech giants that develop these AI tools --
(Audio Difficulties)
So what do you think the family can do? They are children.
(Audio Difficulties)
Before you go outside to find the solution. This is my question.
>> BRANDO BENIFEI: Is it working? Can you hear? Just to answer on this: I think it's a very important topic, because we need families to be ready to do their part in this education. It's not only about laws, obviously; I concentrated on those, but it's also about building a culture. This means you need to give adults the instruments to be able to have an informed conversation with their children.
I don't think we will solve everything by giving instruments to the adult population. We need schools; we need formal education targeted at children through the institutions. But obviously, if we have a more conscious population among the older generations too, those who are not digital natives and need to be trained, then they can also transmit to their children some basic foundational elements of how to be healthy, protected and conscious, and not manipulated, while using new technologies, AI and the internet.
So, yes, we also need the families to be on board. We cannot solve everything with that, but at the same time, without also investing in families I think we are missing an important piece. I completely agree with you. Thank you.
>> MODERATOR: Thank you very much, Brando. We wish you the best of luck overseeing the AI Act implementation, and we will be telling you later about what comes from this panel that is relevant to that. Our friend has left the room, but I would like to say that the European Parents Association was involved in a lot of this work; they have been big drivers of it. Now over to our next speaker, Dr. Ansgar Koene, AI ethics and regulatory lead at EY. I probably made your title longer.
So, Ansgar, we are delighted to have you; you are vice-chair of our board and an absolute expert in AI, and you work a lot on the technical standards that are needed, among other things for the implementation and enforcement of the AI Act.
So, Ansgar, we would like to learn from you about the status quo: what do we have in terms of this kind of regulation, including things like the AI convention and the framework that came from the UN a few months ago? There are a few instruments, the AI Act being one of the most powerful. So a few words from you, please: what is the status quo in terms of actually making this real? What is missing? Where do we go from here?
>> ANSGAR KOENE: Sure, same check as everyone else: can you hear me? So, yes, we are definitely in a very interesting period, with the introduction of new and mandatory safeguards for AI, or regulators putting on the table the expectation that organisations follow voluntary codes around responsible use of AI.
And certainly we have seen, if we look at the types of organisations that we work with, that the introduction of these types of regulations has pushed forward the level of engagement and the level of resources that are provided within organisations, be they public or private sector organisations, to
(Audio Difficulties)
If we think especially about the way in which these types of regulations apply and have --
(Audio Difficulties)
With AI, there is a large challenge for a lot of organisations. Similar to what we see in the platform space, organisations are often not quite aware of the extent to which what they are doing actually impacts children.
It is similar to what we have seen with social media platforms and other online platforms: when they created the space, they were not building it with children in mind but with adults in mind. Even though in reality we know that a huge proportion of the users of these platforms are children, they did not even conceive that this is something they needed to be building for.
A similar challenge exists in the AI space, especially as it moves to a model where the creators of the core AI models, LLMs being a prime example, are separate from the deployers who then integrate those models into their systems. There is a disconnection between those who have the capacity to actually understand and do something about compliance, including compliance with regard to aspects concerning children, and those who are directly facing the users.
And often even those that are directly facing the users are not sufficiently tracking or aware of who exactly their users are.
So if the AI Act asks for, say, the prohibition of subliminal manipulation of vulnerable groups, but the deployers, and especially the providers, are not even aware that young people, as a vulnerable group, are users of the platform, then of course they struggle to know whether the AI is subliminally manipulating them or having a negative impact on them, and they don't understand what a negative impact on young people even means.
This is reflected in some of the work happening in the standardization space. Last year the IEEE published a standard which 5Rights was a prime contributor to, the 2089 standard on age-appropriate design. One of the things that standard asks for is that organisations, as they engage in the design and development of AI systems, have people within the development process who are subject-matter experts on the impacts that these systems can have on children.
So there is at least someone involved in the process who thinks about how this kind of system could impact children: what are the potential challenges, the potential negative impacts that could arise if someone under 18, let alone someone under 10, is using these kinds of systems?
However, the standardization space is still very immature. It's a space that is still very much in flux and in development. If we think, for instance, about the standards that are meant to provide clear operational guidance on how to become compliant with the AI Act, all of those standards are still in development.
The European standards bodies are rushing to try to meet the deadline that the AI Act has set for providing these standards. And because they are rushing, they are focusing at the high, horizontal level. There are very few contributors who can add to the understanding of how to address the concerns regarding children.
5Rights is participating in the process, but there are multiple standards, and it is highly challenging to contribute to all of these at the same time: to make sure the risk management standard considers the risks to children, and at the same time that the trustworthiness standard considers what accuracy and robustness actually mean for children who might use an AI system.
So the technical space around how we move from a high-level intended outcome, which the regulations have specified, to what an organisation needs to do on the ground to make sure the systems meet those requirements is still a space that needs a lot of support. It needs a lot of work.
And as I said, there is even the core challenge that organisations need to be aware that they need to consider how children might be impacted by these systems at all: as they deploy something like a new chatbot, as they deploy AI as part of a system for targeted advertising, or as they use AI as part of something like a fitness app.
They are genuinely not building it with children in mind. So it is a dynamic space, and it is moving in the right direction to the extent that these concerns have been integrated into the AI Act, for instance, as Brando mentioned.
However, because there are so many new things, new compliance activities, new thinking about what responsible AI actually means, all happening at the same time, while there is also a huge rush to find ways to actually get a return on investment, there is a huge risk that the particular concerns around children will fall between the cracks if we do not raise enough awareness about this.
(Zoom attack)
>> MODERATOR: Thank you very much. We will have our last intervention and then engage in discussion, if that is okay. This one is not working? Is it working?
Okay. Great. Thank you very much. We need to get down into the weeds to get the technical frameworks in place, because otherwise companies can say things like, oh, we didn't know we were exploiting children's vulnerabilities; we didn't know children were there; we are not designing for children.
Why do we need to cater for children? The reality, of course, and I'm being a little provocative here, is that many of the biggest companies at the very least are quite aware that children are a massive market.
They are targeting children as their current market and their future market. So of course it's a bit disingenuous, but until we get all of that detail in place, that is the game we will be playing, so this work is absolutely critical. Thank you so much for that.
We have our last intervention online from Baroness Kidron from the U.K., the architect of the Age Appropriate Design Code. She has been a long-term advocate for children's rights and is currently working on an AI code, which hopefully will feed into some of the things we have been speaking about. So, Baroness Kidron, over to you.
(Audio Difficulties)
(Audio Difficulties)
(Zoom attack)
>> I regret not being there in person at the Internet Governance Forum today.
This session is a preparatory event for the AI Action Summit, and I'm delighted it will feed into the conference that will take place in Paris in February. As Nidhi and Jun will have shared, it's crucial we develop artificial intelligence and automated systems with one eye on how
(Audio Difficulties)
children. The possibilities are infinite. I was very moved when I saw an AI system that could monitor a preterm baby's heartbeat in real time without having to stick heavy instruments on their paper-thin chest.
A fellow children's rights NGO recently launched an AI-powered chatbot to support children's access to justice. And a few months ago, I met a wonderful group of 14-year-old girls who taught sign language to classmates so they could all communicate with their deaf peers. There is no doubt that AI holds immense potential.
But like any technology, AI must be designed with children in mind. And I do want to emphasize that it is a design choice if recommender systems feed children alarming content promoting eating disorders or self-harm.
It is a design choice if AI-powered chatbots encourage emotional attachments, which in some cases may have led to children taking their own lives. It is a design choice if, cynically, some of those chatbots revive deceased children through the creation of AI bots imitating their personalities, retraumatising their families and friends and creating a loop in which self-harm or suicide can arise. As children point out to us repeatedly --
(Audio Difficulties).
>> Your host, I don't know if you can hear me, but if you allow me to share my screen again, I can restart the playback.
>> MODERATOR: I'm so sorry. It seems to be a choice between the people online being able to see and hear and us in the room being able to see and hear. Why don't we give that a moment. I think there were some questions, at least from the room.
I'm not sure about the ones online at the moment. But let's come back to that and see if we can get to the end of this intervention in a minute. You had a question?
>> QUESTION: Together with youth, I have had the honour to work with 5Rights. And I really appreciate what we heard from Beeban and Ansgar. But I think I'm disillusioned, because I think it was ten or twelve years ago when we really talked to tech companies about safety by design. Although we had artificial intelligence at that point in time, it was not in the hands of children in a real way.
So I would have expected that this principle would be in the standards and in the hands of developers, to be adhered to, taking into consideration that children would be there. It was obvious throughout all these developments. I would say that when the internet came up, it was not designed for children, so maybe we were running behind it and saying, okay, now we have this idea of safety by design.
Have in mind that children will probably be users of the services and the devices and so on. And now we end up, several years later, with AI and the same situation we had before with other digital technology. Ansgar, maybe you have an answer to that, one that does not leave me disillusioned?
>> ANSGAR KOENE: I fear my answer will not remove your disillusionment. The practice we are seeing is a rush to compete to bring things to the market. And in that rush, it remains the case that the so-called functional requirements, that is to say the things that need to be there in order to produce the kind of output they want to create, get the prominence and the investment, while the so-called non-functional requirements, a terrible term for requirements such as making sure that there will not be negative consequences, especially for specific groups, are marginalised in the design process, unless there is a significant financial incentive behind them, such as the risk of a huge fine.
That is why, even though we have seen the discussions around responsible AI principles for many years, there was always a lack of investment to actually get them implemented. It was often the case that technologists within a company said this is something we should be doing, but they were not given the resources to actually do it. Now that there is something like the AI Act, where if you don't do it you are going to face fines, suddenly there is investment in doing it.
>> MODERATOR: Is this okay? It was your voice. Okay. Maybe we will get Baroness Kidron back. Wonderful, we can hear. I hope that is from the current point, not from the beginning when we couldn't hear. But in the meantime, do we have any questions or comments from the room?
Otherwise I will go to the ones online. And I have to say I totally agree with you: it's taking far too long, and as I said before, it is a little bit disingenuous, because we do know what the issues are, and we have known for a long time. Let's go over to Lena.
>> Thank you very much, and such good work that you are all doing --
>> Good morning, I regret not being able to be with you in person at the Internet Governance Forum today. This is a preparatory event for the AI Action Summit, and I'm delighted it will feed into it, because it will take place in Paris in February. As Nidhi and Jun will have shared, it's crucial we develop artificial intelligence and other automated systems with one eye on how it will impact children...
(video playing - inaudible online)
>> QUESTION: Okay. I have also known the issues for a long time, and the only time the tech companies have changed is when they face fines, penalties or litigation. And I'm just curious, because there are people from other countries in the room as well, and I know 5Rights has been doing some work globally: are we seeing the same conclusions in terms of the ability of other countries --
(Video playing - inaudible online)
Good morning, I regret not being able to be with you...
>> QUESTION: I was just curious to hear if there were others in the room who are also finding similar issues. Is there a sense from other countries that they also need to get in line and have some really robust regulation, like we heard from the experience in Europe?
It's just an invitation for others online or in the room. The work that I do is also aligned with 5Rights, through the Council on Tech and Social Cohesion; it's about trying to regulate the things that lead to these kinds of harms and also to polarization.
>> MODERATOR: Thank you very much. I'm so sorry, because I was only half listening, and I think you will have to tell me that again because I want to know. Does anyone else --
We also have online our speakers Jun and Nidhi. If you would like to come back on anything you have heard, you can put something in the chat and I will see it and bring you in. Otherwise, is there anyone else in the room?
I have a question online from Dorothy Gordon from UNESCO, who is asking: how involved are consumer rights organisations in working to get major tech companies to stop abusing children's rights in this way? She believes we need consumers to avoid using dangerous products. So that's public awareness and almost boycotting; consumer organisations also do other things like submitting complaints. Ansgar, do you want to take that?
>> ANSGAR KOENE: I'm afraid the only part of what consumer rights organisations are doing in this space that I can really speak to is this: at least in Europe, when it comes to standardization for the AI Act, we do have participation by consumer rights organisations in helping to make sure that consumer concerns are taken into consideration as the standards are being developed.
And this is a very valuable contribution by a nonindustry player.
(Audio Difficulties)
As for the activities being done to educate users about the impact of this type of technology and help them make an informed choice as to whether they will use these tools, there are NGOs working on these types of things.
Before Christmas, for instance, there were some campaigns around which tools may be spying on you, et cetera. But these only reach a particular subsection of the population who are generally already interested in this space, so I imagine this is a space where we also need support from the private sector to
(Audio Difficulties)
To help people better understand this, to have the resources to reach the whole population as opposed to
(Audio Difficulties)
People already looking for this type of information. I think that is going to remain a question.
>> MODERATOR: If I can just take --
(Audio Difficulties)
People online are going to answer.
(Audio Difficulties)
I'm going to jump into --
(Audio Difficulties)
Automated systems. So AI is not too complicated to understand, and it is certainly not too complicated to regulate.
(Audio Difficulties)
AI is not different from previous technology,
(Audio Difficulties)
AI is not different from previous technologies; the same will happen if we do not act speedily.
This is why, over the past year, building on global consensus and working hand in hand with global experts in the field, we have developed an AI code for children. Launching in the coming months, the code will provide a clear and practical path forward for designing, deploying and governing AI systems in ways that take into account children's rights and needs. It is an important and necessary correction to the persistent failure to consider children, and a viable blueprint for delivering on commitments to children in the Global Digital Compact and regulatory advances such as the AI Act.
We need to determine from the outset how to build rights and duties into the governance of AI systems. The code, launching from Paris hopefully, will be for anyone who develops or deploys AI systems used by children. It is adaptable to all kinds of AI systems, it identifies and raises questions about gaps and risks, it leaves autonomy over mitigation measures, and it is intended to support existing measures and provide a standard for those jurisdictions that are considering introducing new legislation or regulation.
In the Global Digital Compact, "all governments agree on the opportunities and risks of artificial intelligence systems for the rights of individuals." That's a quote. Children represent one third of users, are early adopters of technology and have rights, and they must be at the centre of our discussions.
>> MODERATOR: I hope they didn't misquote that, but you can agree with me, because I agree with all of it.
And speaking of putting children at the centre of the conversation, in the last few minutes I would like to go back to Nidhi, if that's okay. Nidhi, are you still with us?
>> NIDHI RAMESH: Yes, I am.
>> MODERATOR: Could you bring Nidhi up, please? I don't think our tech people are listening to me again. Nidhi, if you are with us, I would love to hear your reflections. You talk not only to your peers and colleagues all the time, but also to our big group of child ambassadors within 5Rights.
What conclusions do you draw from this, and what do you think your colleagues would like to tell us going forward?
>> NIDHI RAMESH: Thank you, Leanda. That's such an interesting question. So as 5Rights youth ambassadors --
>> MODERATOR: I can't hear you yet.
>> NIDHI RAMESH: Sorry. My mic should be on. Alright.
>> MODERATOR: Try again.
>> NIDHI RAMESH: Hello. Can you hear me now?
>> MODERATOR: Still not. AI will one day solve all of these problems, I am sure. Nidhi, can you hear me now?
>> NIDHI RAMESH: Yes. Can you hear me now?
>> MODERATOR: Yes.
>> NIDHI RAMESH: I will start again. Thank you very much, Leanda, that's an interesting question. As 5Rights youth ambassadors, we often discuss the risks AI poses, especially for children and young people.
We see its potential, but obviously there are key concerns that stand out to us. One major issue is education. AI can make tasks like research and homework quicker, but it risks taking away essential learning skills. As one of my peers put it, it's making homework easier, but at what cost to our learning? And we worry a lot about losing creativity and critical thinking, skills we will need later on in life.
Another significant concern is privacy. AI systems can analyze so much about us, even from just a photo or a message. One ambassador shared how AI is amazing and how it can help us, but also how scary it is how much it knows about us.
Many of us feel uncomfortable with how much information we are unknowingly sharing, especially when we are not informed about how it is being used, like I mentioned earlier during my first intervention.
We also talked about the psychological risks of AI. Systems designed for companionship, for example, might seem helpful but can have a lot of consequences in the long term. As one of our ambassadors said, it's about more than just technology; it's about our values. Relying on machines that mimic empathy can affect our real-world social skills, especially for vulnerable young people.
And of course there is the growing threat of deepfakes. As Marco, one of our youth ambassadors, says, AI tools are developing and deepfakes are becoming scarier. So while there are opportunities, it's the educational, ethical and privacy-related risks that concern us the most. And it's crucial that AI systems are designed to protect young people, with safeguards that prioritize our rights and well-being.
>> MODERATOR: Thank you so much.
>> NIDHI RAMESH: Thank you.
>> MODERATOR: Thank you so much, Nidhi. It's always wonderful to hear from you and from your fellow youth ambassadors. I hope the code will serve you, and we will get your direct feedback on it, of course, very soon.
We really hope that you will find elements there that address the things you have brought up. We have two minutes to go, and we have had a very eventful session, I would say. Jun, if you are online, would you like to come back with any closing words?
>> DR. JUN ZHAO: Hi, Leanda, can you still hear me alright?
>> MODERATOR: We can.
>> DR. JUN ZHAO: Fabulous. What a fabulous session. I tried to come in a few times; I think it was hard at the points when we were trying to manage the video intervention. There were two things I wanted to come in on: safety by design, and the role of parents in safeguarding children.
I agree with Ansgar's point: we are definitely moving in the right, positive direction, but it's a really challenging domain.
I know there are a lot of GenAI companies that are embracing safety by design and trying to integrate it really actively into their design and development processes now, which is really encouraging to see, especially when they take that perspective from children.
But it's very complex, because the risks are quite diverse. I agree with what Ansgar said: some of the companies may not be aware of some of the risks for children, but I think some of them are.
At the same time, there is a challenge because, among these diverse risks, some may not seem to have direct impacts or immediate safety consequences for children. Some of the risks that Nidhi raised, like exploitation and manipulation, may not seem to cause immediate harm, but they are harms nevertheless.
So it will be quite interesting to see, in the next couple of years when the EU AI Act and many other acts come into force, how all of this understanding about the various forms of risk and harm plays out in legislative enforcement, and how we can all work to facilitate better awareness and better translation from policies into practical guidance, so we can create a better AI world for our children and our society as a whole.
And I think that's all I have to say, Leanda. I hope that way we can finish on a positive note, with something exciting to look forward to in 2025.
>> MODERATOR: Indeed. There remain outstanding questions, but as you have said, there is plenty going on, and in 2025 there are lots of things we can deliver on.
And I would just like to reference at the end that in the UN framework, and we are here under the UN's umbrella of governing AI for humanity, there was a very, very clear point: we must not be experimenting on children. AI might be experimental in some ways, in some aspects.
We might be using it in new and novel ways, and we can innovate all we want, but this is an area where we know our children are too precious. They grow up too fast, and as you said, Nidhi, it is even impacting your education; we are talking about the generations of the future.
We must not be experimenting on children. And this is what we will take to the Paris summit, with all of this input. We hope that all of you, online and in the room, will come behind us, have a look at this code and see how it can be amended and improved, so that it can deliver on these issues for kids.
Thank you so much. I would like you all to join me in thanking our panelists for this very rich discussion. I'm very grateful for your patience in particular. Thank you so much.
(Applause).
>> NIDHI RAMESH: Thank you so much for such an amazing session, Leanda.
>> DR. JUN ZHAO: Thank you, everyone.