The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> YULIA MORENETS: Good afternoon, we will start our session shortly. But I need a sound check from the room so you can hear me and we can start. Good afternoon, we need to start our session. We were delayed because the room was still occupied by the previous session. I'm just waiting for participants to take their seats. Good afternoon, I'm Yuliya Morenets, the founder of the Global Youth Movement, and I am moderating the session online, because I'm not on site today, together with people on site who will help make this happen in the best way.
So this session will explore one of the most pressing challenges of our times: protecting young people online while safeguarding rights and freedoms. Obviously, with the European Union Digital Services Act and the priorities outlined ‑‑ I think there is a problem with the sound.
With the Digital Services Act and the priorities outlined in the mission letter to Executive Vice-President Henna Virkkunen to advance digital governance policies, this discussion takes place at a moment of critical importance. Drawing on insights from developments in Australia and ongoing discussion in Canada, our idea today is to discuss whether we need different regulation, and whether the existing regulation can address issues like cyberbullying, mental health, (?) and platform accountability.
Once again, I am Yuliya Morenets, and I will be moderating. We have guests who will be discussing: members of the European Parliament Delegation, whom we would like to thank for accepting this invitation and being with us. We have Tsvetelina Penkova, Head of Delegation. I don't know if you would like to ‑‑ yes: Eszter Lakos, Fulvio Martusciello, Silvia Sardone, Tobiasz Bocheński and Ivars Ijabs, and I hope Brando Benifei will be present. We have Pearse O'Donohue from DG CONNECT.
We have a number of young leaders who will help us work through three cases we prepared for the discussion. We are supposed to have Denna Absaro from the Greece IGF. By the way, we have a solid number of people online and following. We have a presence from the youth community on site: Peter Kinquy (?), Fatasar (?), Levi from Zambia and a number of others we hope will participate.
As we know, Australia passed the world's first law banning under-16s from social media. On top of that, at the beginning of this year Canada introduced the Online Harms Act to enhance protection on social media services. Probably the very first question I would like to bring to the members of the European Parliament, on behalf of the youth community, is this: we do know that the European Union is quite often seen as the Silicon Valley of regulation, right? Will the European Union follow the example of Australia, or will it take another route and lead in this area of protecting kids and children online, while at the same time balancing digital rights?
I would like to give the floor to the Head of the European Parliament Delegation. The floor is yours.
>> TSVETELINA PENKOVA: Perfect, I hope you can hear me. Thank you, Yuliya, for the introduction. We have a packed room in Riyadh. We want to hear the young people's perspective in the room. Around me I have my colleagues from the European Parliament, and I will give them the floor to introduce themselves briefly, because we all come from different political groups and from different member states. Of course, the national and political perspective does matter when we are having those debates, because, as you know, a lot of the legislation we pass at the European Parliament is adopted on a consensus basis. But still, it is important to hear who we are exactly.
And we also have with us the European Commission. If, as you say, the Silicon Valley of regulation is the European Union, the rules usually start from the European Commission. That is why they will also have an active role in this debate.
So before we jump into the topic, because you have already presented it very specifically, I would just ask my colleagues to introduce themselves. And also a remark, because I know you referred specifically to Brando: he is joining us in a bit. He is just participating in another debate on AI, because AI seems to be one of the main and key topics of this IGF. But as one of the main experts in the EU, he will join us, so don't worry, you will not miss the originator of the legislation.
Now, starting from my left, the introductions from the European Parliament.
>> IVARS IJABS: Thank you, I'm Ivars Ijabs, a Member of the European Parliament.
>> ESZTER LAKOS: I'm Eszter Lakos from the EPP. I sit on the Committee on Industry, Research and Energy and also deal with foreign policy.
>> FULVIO MARTUSCIELLO: I'm Fulvio Martusciello from Italy, a Member of the European Parliament since 2014. I am now a full member of the Committee on Economic and Monetary Affairs and head of the Italian delegation in the EPP Group.
>> PEARSE O'DONOHUE: Good afternoon. I'm not a Member of the European Parliament; I'm an official of the European Commission, Pearse O'Donohue, responsible for Future Networks, and my involvement is with the Next Generation Internet, particularly, in this case, the governance of the Internet, which is why I'm here at the IGF, thank you.
>> YULIA MORENETS: Perfect. I hope this mic is working. Would you like to say a few words to introduce yourselves?
>> Sure, hello, good evening. Thank you for being here on site as well as online. My name is Vlad (?), an Internet Society Youth Ambassador. During this session I hope to present my personal opinion on the legislation that affects children and the youth population. I will pass the mic to my colleagues.
>> Good evening, everyone. Happy to be here with all of you. My name is Fatasar (?), an Ambassador, an engineer in energy, and one of the members of the Africa IGF, thank you.
>> Hello, my name is Dana Kramer, an Internet Society Youth Ambassador and coordinator of the Youth IGF Canada. We have been collaborating a lot on the Online Harms Act with our parliamentarians; it was a keynote speech topic at our Canada Youth IGF in September. I have been fortunate to have had lots of communication on that specific piece of legislation.
>> Okay, thank you so much. My name is Peter Kinquy, representative of a youth group in Liberia. We have been involved in the Youth IGF space in four countries: Liberia, Sierra Leone, Guinea and (?).
>> TSVETELINA PENKOVA: Thank you. It is important to know who we are speaking with. On the question posed initially, I will start with some remarks and then ask my colleagues from the Parliament and the Commission to jump in. After you have heard the legislators' stance on all those matters, we would like to hear your feedback: how do you see those topics developing or evolving, and what do you expect more from us?
I know you are the people who have the most impactful and important insights into what is working and what is not. As you know, in a lot of the digital policies and regulations we are working on, one of the main challenges we face at the European Parliament and the Commission is balancing protection with digital rights. We want to foster innovation, but we don't want to limit it too much. How to achieve this is going to be one of the most challenging aspects.
In a speech in July this year, the President of the European Commission, Madam von der Leyen, underlined that we need to work to tackle social media addiction and cyberbullying. So in a lot of the legislation we have passed in the last few years, and in the upcoming work of the European Parliament and the Commission, we will take this into account.
I'm going to briefly present three specific pieces of legislation, either already finalized or in the process of being finalized, to set the ground for some specific conversations here with the audience and with all of you.
The first one is the EU proposal on child sexual abuse and exploitation. This was proposed by the European Commission as a regulation two, already two and a half, years ago, in the spring of 2022. It provides some very specific proposals and mandates when it comes to the detection of online sexual abuse related to minors and children. At the moment this regulation is under discussion between the European Parliament and the Council, so we still have a lot to say on that matter.
But as I said, some of the subjects are very sensitive, which is why they are taking a while. So this is the first one we put on the floor. Brando has just joined us; everyone was expecting you, in relation to AI, another topic we will discuss later on. The next file I am going to emphasize is the Digital Services Act. I'm sure a lot of you have heard of it: it places specific obligations on the platforms to respect users' rights. I'm not going to list them all, but the right to freedom of expression is there, as well as the right of protection for children, the right not to face discrimination, the right to protection of personal data, and the best interests of the child as a principle.
We are trying to build a regulatory framework that specifically protects the rights of all users, with special emphasis on minors and children. Last but not least, there is age verification without disclosure, which is important so as not to violate the rights of children and minors. We are trying to touch upon every aspect that could help us prevent harmful behaviour against people who might not be that well informed. I will stop here because I might take too much time, and ask my colleagues if they want to jump in.
>> PEARSE O'DONOHUE: I thought I wasn't going to speak first, because I thought the members of Parliament would. I don't intend to speak to the child abuse proposal, only because it is in the hands of the co-legislators. It is for the parliamentarians to discuss with the Council, and the Commission takes a back-seat role at that stage.
But thank you for your very quick run-through of some of the key issues. I would like to focus a little on the Digital Services Act. We do like to learn from others, and we want to hear from the youth community, but hopefully we can also share some experiences in addressing this issue of protecting minors while creating an environment in which younger people can operate effectively online, an environment which will be theirs after I'm long gone. Protection of minors is a key enforcement priority for the Commission, the European Parliament and of course for the ‑‑ the mic doesn't work. I have been in this room several times and it cuts out regularly, so we will just keep going.
But with that consensus, however, there was quite a discussion about how to do things. The chair has already given you a quick list of what the DSA does in terms of a high level of privacy and bans, what platforms should and should not do, responsibilities with regard to flagging illegal content, and banning dark patterns.
We have created a situation where online platforms have to take responsibility and assess the risks that arise from the design and use of their services. That is increasingly what we will see across the whole range of new technologies, and also new platform services as they come online. That is a way of avoiding overregulation: through principles that are agreed and, in many cases, corrected and drafted by the European Parliament, and which, if they are not respected, trigger enforcement action, a mandated reaction that steps in.
In this case in particular, they have to look carefully at risks that affect children and the physical and mental well-being of users. That includes foreseeable negative effects, not only those that have already occurred, so it is forward-looking on the mental and physical well-being of this particularly vulnerable group, which we have a duty to protect.
When those risks are identified, the platforms have to put in place effective mitigation measures. We have already started implementation. I've got a list of measures so long it would take a quarter of an hour to read in its entirety, but I won't. We have started four investigations, which, once they reach a certain maturity, are then published. The early stages are not, because of the confidentiality of the investigation, but it is our intention under the DSA to move as quickly as possible to publication and transparency of those proceedings. In some cases it is in the interest of the platforms if they can show they have rapidly addressed a problem, but it also signals to the community at large that a function or feature has been identified as simply not acceptable, and is therefore part of their forward-looking work.
If one of your competitors has been told to stop a practice, you should make sure it is not occurring on your platform. That is another way of reinforcing it. We have open cases: two against TikTok, one against Facebook, one against Instagram. We have expressed our doubts about the way the platforms assure themselves of the age of the user, so age verification, to which I will come back in a moment, is essential. We have gone to further proceedings against TikTok, for example, on what is called the TikTok Lite Rewards Programme, which could aggravate the addictive character of their service. TikTok has committed to permanently withdrawing the programme from the EU, commitments we have made legally binding.
I will just open a parenthesis here, because I would like to hear from our youth representatives: is it acceptable that it is only stopped in the European Union? Is there a different threshold for the protection of children in other regions? I wouldn't think so. We don't want to impose anything on anyone, but this is where cooperation, exchange and learning lessons from one another are very important.
Maybe I'll stop there, but there is one other area I will touch on briefly: the (?) provisions with regard to age-inappropriate content, in particular pornographic content, which can have a very significant effect. I'm not talking about child pornography, which is quite simply illegal; there is no question there. What we are talking about here is the ease with which any user, without the proper safeguards, can access what is considered to be legal content, which is available to, and perhaps appropriate for, adults, but is certainly, in our view, not appropriate for minors and children. That is therefore another element which is addressed in the DSA.
So I'll just stop here. We are moving on to guidelines on the protection of minors under Article 28 of the DSA, one of many areas where we introduce these and have discussions with the European Parliament. If they are not working, we will have to move to stronger measures. I will stop there; sorry if I've been too long.
>> YULIA MORENETS: No, perfect. Thanks for outlining a few more parts of the legislation and what is in the pipeline. I would also be curious to go back to our youth panel and hear what they think about the legislation we have in place: is it understandable, usable? Did you want to take the floor?
>> BRANDO BENIFEI: Yeah, I can add one topic; I don't want to take too much time. I arrived late because I was in another session talking about the same topic, so it is clearly under the spotlight, this discussion on the protection of minors and the empowerment of children's presence in the digital space. One thing on the AI Act, because it was already touched upon how it interacts with child sexual abuse, et cetera. I want to highlight one point and put it on the table, which is the fight against cyberbullying, which is crucial. That is where the AI Act can give further support. For sure, we can use the Digital Services Act to act against material that can provoke instances of cyberbullying, but we also have the issue of material that is more difficult to identify as offensive or violent, which can nonetheless provoke mental health issues and cyberbullying in a more subtle way: by showing people doing or saying things they could be ashamed of, in a way that is very specific to that situation.
That is difficult to catch with the existing norms, so that is why I think it is important to underline that the AI Act will support this broadly. I see different political groups here that supported our work on AI, because with it we have introduced more transparency that can be used to prevent cyberbullying.
I will give you an example and then stop. If someone produces a deepfake that shows a person, a child, a minor, doing or saying things that can be mentally damaging for them, because they are not actually doing or saying these things but are shown to be, and this material is sent around, this can be dealt with under the Digital Services Act, but not necessarily; it depends.
So it is very important that we have basic transparency for a deepfake of this kind. We discourage generative AI systems from developing the kinds of material that can be offensive, but we also want such material to be labelled, the so-called watermarking, so people can say: okay, this is fake, this is not real. Then it can be removed, it can be treated.
But we can also say from the beginning: this is not real. That can be helpful in avoiding forms of mental health problems, of cyberbullying, of offensive material in different ways. So this is another safeguard the AI Act puts in place, which I think is important to underline in the context of generative AI, which has created new challenges we need to tackle, thank you.
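To make the labelling and watermarking idea concrete, here is a minimal sketch of how a provider might attach a signed "AI-generated" label to a piece of media, so a platform can check the label was neither forged nor separated from the content. This is an illustration only, not the mechanism the AI Act prescribes: real provenance schemes (for example C2PA manifests, or statistical watermarks embedded in the media itself) are far more robust, and the key and function names here are invented.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the AI provider; a real deployment
# would use public-key signatures so verifiers cannot mint labels.
SECRET_KEY = b"provider-signing-key"

def label_content(media: bytes, model_id: str) -> dict:
    """Create a signed 'AI-generated' label bound to this exact media."""
    payload = {
        "ai_generated": True,
        "model": model_id,
        "sha256": hashlib.sha256(media).hexdigest(),
    }
    msg = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return payload

def verify_label(media: bytes, label: dict) -> bool:
    """Check the signature and that the label matches the media."""
    claimed = {k: v for k, v in label.items() if k != "signature"}
    msg = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, label.get("signature", ""))
            and claimed["sha256"] == hashlib.sha256(media).hexdigest())

image = b"...generated image bytes..."
label = label_content(image, "example-model-v1")
print(verify_label(image, label))      # True: intact label verifies
print(verify_label(b"edited", label))  # False: tampered media fails
```

A platform receiving labelled media of this kind could then surface a "this is not real" notice to users, which is the transparency effect the speaker describes.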
>> DANA KRAMER: Hello, Dana Kramer. Thank you for including youth in this discussion, especially youth from areas outside the EU's jurisdiction. With the Brussels Effect, that is a real power you hold, and it requires global consultative efforts, so I want to extend that thanks.
That said, some of these policies you are putting forward and deliberating on require immense reflection on how they would impact the rest of the world. For instance, take access to pornographic material by minors. In Canada we actually had a bill, proposed in our Senate and then moved between chambers, that would limit adolescents' access to pornography. However, there were concerns that, due to age verification, it would have privacy implications around establishing somebody's age. As well, where certain business models are built, from an infrastructure perspective, on caching content at exchanges or delivery points, fear of regulation around that caching could result in businesses pulling out of our market. Netflix is a really good example of this.
So if, for example, the Brussels Effect hit us in Canada and we had to create corresponding legislation, we could legitimately see our digital economy shrink because of poor implementation. A global effort is important to ensure such issues do not arise for third countries, if only to maintain positive business and economic relations with Europe.
I also want to touch on the issue of child sexual exploitation online. In Canada, of course, we have had the Online Harms Act that was mentioned in the opening. Just in the past week it has actually been separated into two acts for political reasons. The act was specifically designed to have both a child safety element and a hate speech element, because the online harms work went through a four-year consultative effort, with lots of engagement, and it was seen that we needed to apply in Canada what we call a GBA-plus analysis. The "plus" was added to recognise groups with intersectionalities, broadly understood across lines of stratification: gender, racism, ableism, sexuality, LGBTQ+ identity and others. At those intersections, a youth who experiences some type of exploitation online could have that harm amplified, so hate had to be included. I wanted to raise this point about intersectionality and how legislation can impact a young person.
For example, a young white man is going to have a very different experience on the Internet than a Black Muslim girl, thank you.
>> IVARS IJABS: Thank you. It is a pleasure to have you all here; this is extremely important. First of all, there is a problematic dimension in dealing with youth and children as a single identity group, because the generation growing up now is, of course, digitally much more skilful, much more native. We actually expect the next generations, as digital natives, to be more skilful and to do things my generation -- I'm 52 -- is just not used to. That is why we should look at this issue of regulation from the perspective of learning. And this applies not just to the digital sphere but also to the physical sphere, because there are strong parallels: in all digital legislation, the things that are prohibited in real life, like sexual exploitation, bullying and so on, should also be banned or prohibited in digital life.
If we look at the learning dimension, I think it is really important to create a safe learning environment for our younger generation and children, because, of course, we learn by making mistakes, but we prevent children from making very big ones. That is why in real, physical life we create safe learning environments for our children, and why, when we are thinking about how to regulate the digital sphere, the possibility to learn must be there. The EU always runs the risk of over-regulating things, and this in many ways hampers our digital development compared with some other actors.
In that sense, I think we must keep the learning dimension safe for youngsters, who will be much more advanced in the digital sphere, because they are already the next generation when it comes to dealing with AI and all the possible things like the Internet of Things, and so on and so forth. But at the very base, we have to really solve those issues already mentioned as the basic norms: cyberbullying and online violence, exposure to hate speech, violent content, and self-harm or suicide, which we all know is a big issue in many countries, and extremism, terrorism, things like that. At the same time, we have to keep in mind that we expect the next generations to be digitally much more skilful and advanced than we are. Thank you very much.
>> We have a lot of questions online. My apologies for the headset. We were not expecting to be ‑‑
>> I didn't hear you quite well. I will give you the floor.
>> YULIA MORENETS: Apologies. We have solid participation online, 30-plus people. One of the questions that just came in is about the DSA, since that was the discussion just now, and about age verification: it is hardly ever checked, which raises the question. Maybe Katrine would like to take the floor remotely and ask her question herself. Can we give the mic to the online room and allow Katrine to ask her question? If it doesn't work, the question is: should platforms be forced by the DSA and other legislation to have mandatory initial age verification? That is, I think, open to all participants.
>> PEARSE O'DONOHUE: Thank you. I can go into more detail on the inputs heard before, but I will try to be brief. Some providers of pornographic content have been designated as very large online platforms. We have now begun an inquiry into the specific measures they take to diligently assess, but also effectively mitigate, risks relating to the protection of minors. That obviously starts with age verification, because we are talking in many cases about age-inappropriate content. We were, in particular, interested in the details of their age verification. We have their responses and are looking to take effective enforcement action. We haven't had the legal means until now, but we are coming to that point, because we don't want to exclude children from the positives of the Internet, but we do want to protect them from this age-inappropriate content. So, specifically, age verification is a critical component, and yes, we have powers under the DSA to impose it as a protection.
What we are doing, as well as insisting on the very large online platforms enforcing it, is ourselves coming forward, with the member states, with a temporary solution, which we will then finalize once the European Union digital identity is fully functioning. In the meantime, we will have a privacy-preserving, interoperable solution for age verification. I'm not talking about principles or a piece of paper, but a functioning piece of software, which will be obligatory if platforms do not have their own mechanisms of equal effectiveness. So that is the immediate way forward for age verification. As I said, in the long run we have the European Union digital identity wallet, a way of ensuring, on the basis of approved, independently certified systems, that a person is the age they say they are and need to be in order to access these platforms, thank you.
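As a rough illustration of what "age verification without disclosure" can mean in practice, the sketch below has a trusted issuer check a birthdate privately and sign only a yes/no claim, so the platform never sees a name or date of birth. This is a simplification under invented names, not the Commission's actual software or the EU digital identity wallet: real systems rely on certified issuers, public-key credentials and often zero-knowledge proofs.

```python
import hashlib
import hmac
from dataclasses import dataclass

# Hypothetical issuer key; a real scheme would use public-key
# signatures so platforms can verify tokens without being able
# to issue them.
ISSUER_KEY = b"issuer-secret"

@dataclass
class AgeToken:
    over_18: bool   # the ONLY personal fact ever disclosed
    nonce: str      # binds the token to one session, preventing replay
    signature: str

def issue_token(birth_year: int, current_year: int, nonce: str) -> AgeToken:
    """The issuer checks the birthdate privately and signs only the result."""
    over_18 = (current_year - birth_year) >= 18
    msg = f"{over_18}|{nonce}".encode()
    sig = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return AgeToken(over_18, nonce, sig)

def platform_verify(token: AgeToken, expected_nonce: str) -> bool:
    """The platform sees no identity data, only a signed yes/no claim."""
    msg = f"{token.over_18}|{token.nonce}".encode()
    expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, token.signature)
            and token.nonce == expected_nonce
            and token.over_18)

token = issue_token(birth_year=2000, current_year=2024, nonce="session-42")
print(platform_verify(token, "session-42"))  # True, no birthdate shared
```

The design point is the separation of roles: the party that knows the user's identity never learns which platform is asking, and the platform only learns a minimal, signed attribute.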
>> YULIA MORENETS: Thank you. Julie, I pass to you, and to the audience if they want to pose questions to the panel. I don't think we can hear you. You are on mute.
>> Sorry, yes, Yuliya, thank you for that. I was saying you never know how it goes online -- we prepared three cases for you to discuss. The cases came from the youth community, and I think we will just take one. I would like to turn the situation around: we spoke a lot about how to protect minors online. Now consider the following case, which is about the balance between freedom of expression and harm online. We have a young blogger. He reports the news daily on a social media platform; it is his regular channel for reporting news. He always tries to be as neutral as possible, and he always checks his sources and reviews his reporting for misinformation. However, the platform tends to block him regularly.
So the question is: blocking is actually a right and an obligation given to the platforms by the regulation, right? But at the same time, it breaches his freedom of expression. I would like to turn now to Vlad, who is in the room. What is your opinion? Vlad is from the youth community. Then we will open the floor to the members of the European Parliament.
>> VLAD: Yeah, thank you. Overall, I would say that I find any kind of regulation framed as child protection quite disturbing. I have experience of this: I'm from Russia. All the regulations against civil society, or against any member of the community, start with regulation in this child protection field, I would say, and finish with restrictions against all members of society. And my question here is: why do we no longer believe in the self-regulation that already exists on the platforms? Many of them, as in the example you brought of marking AI-created messages, have community notes: misleading information on the platform can be marked by the platform itself, or community members can add notes to it, so you will notice that it is misleading.
But I think it is a very easy decision to simply prevent platforms from certain activities, to (?) them specific rules. That is one way of solving the problem: just restricting them from doing something. I think there are many other approaches that can be used: for example, dialogue with, and influence on, the platforms to behave in a more meaningful and respectful way, supporting youth and their activities, and answering the question that was raised.
Of course, platforms should not prevent younger people from posting anything on the platform if it does not violate the platform's rules. If it does, the general rules should apply to these bloggers as well.
>> DANA KRAMER: If I could add something on regulation versus self-regulation: at the Canadian IGF, our NRI, there was actually a comment about whether we could have regulations for harmful AI content under which platforms have to develop AI blockers. Similar to ad blockers, what if there could be an invention of an AI blocker, so that we give people the personal capacity to make decisions about what to view online and preserve that freedom of expression?
But that would still need to be invented. I know AI is still very much in its beginning phases, at least the generative approach, given how fast it is expanding. But I wanted to add that, because it was an excellent point from our NRI that I think would be helpful to bring into international conversations: AI blockers as a potential tool that regulates platforms while preserving the personal freedom of expression users can have in the space.
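A minimal sketch of the "AI blocker" idea as floated here, assuming posts carry an AI-generated label (for instance from provenance checks like those discussed earlier): a client-side filter, analogous to an ad blocker, that leaves the choice of whether to see AI content with the user rather than with a regulator or platform. All the type and field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    ai_generated: bool  # hypothetically set by platform labelling/provenance

@dataclass
class UserPreferences:
    hide_ai_content: bool = False  # the user's own choice, set on their device

def render_feed(posts: list[Post], prefs: UserPreferences) -> list[Post]:
    """Filter the feed client-side according to the user's own settings."""
    if not prefs.hide_ai_content:
        return posts
    return [p for p in posts if not p.ai_generated]

feed = [
    Post("alice", "Morning news roundup", ai_generated=False),
    Post("bot42", "Synthetic clip of a politician", ai_generated=True),
]
visible = render_feed(feed, UserPreferences(hide_ai_content=True))
print([p.author for p in visible])  # ['alice']
```

Nothing is removed upstream in this model: the content still exists and remains visible to users who choose to see it, which is why it is framed as an instrument of freedom rather than censorship.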
>> BRANDO BENIFEI: Can I give an example, however. You mentioned AI blockers; yes, that is a good idea, and I think we should work on it, as we did with ad blockers. But we also rely on rules: in Europe we now have prohibitions regarding paid advertising online reaching children, and on certain political advertising, with implementation of the regulation on these ongoing. Just to say that we can combine the two dimensions; I don't see a contraposition.
You can have AI blockers; I like the idea, it is an instrument of freedom. But even with ad blockers already existing, we still regulate the advertising space, so I think we can do both. In fact, just to comment on what you were saying, because there was a lot of debate on this when working on the AI Act: should we just let the platforms do their best and encourage them, et cetera?
This is what is already happening, and that is good: the ethical dimension of developing AI tools in this space. But we do not want -- at least this is the thinking in Europe -- everything to rest on the goodwill of a few powerful individuals. Either something is merely recommended, and they can ignore it, or it is law. We think that on some aspects we need the law; on others we leave it to soft regulation. If you look at the AI Act, a lot of applications are almost not regulated by it at all, because the AI Act concentrates on high-risk applications and on transparency for generative AI.
A lot of AI systems have almost nothing applied to them by the AI Act, only the general principles, the idea of an ethical approach to AI that the legislation pushes through. But when we deal with more sensitive areas, we do not want to wait for the CEOs to be good on the basis of democratic values. This issue was centred on the AI Act, but also on the Digital Services Act, et cetera.
We think we need to balance that. If you only go for voluntary actions, you are in an uneven space, where probably the most powerful players will do some things, also for reputational reasons, and others can have problems. But anyway, this is a contribution to that effect.
>> PETER KINQUY: Just quickly, I don't know if I have time to speak, but for us in Africa, in the global context -- for the record, my name is Peter Kinquy from Liberia. I feel that we as youth around the world will be looking for youth-centric Internet regulation. That means regulation that looks at issues in a youth-friendly way, because tomorrow it is the youth who will lead on Internet governance, cybersecurity and data protection.
A fantastic example is also ensuring there is a balance between regulation and innovation. That speaks to the fact that regulation should be set at a level that does not harm the youth, because youth is not just one more name on the list of stakeholders; it is a stakeholder in the process.
So we want to see regulation that is youth-centric: youth-friendly regulation that ensures the rights of youth, as in the case of the blogger who is trying to present issues in his content, in his country context, and is being blocked.
What does it profit the world when some groups are being denied? What do we achieve if we cannot take a holistic approach that considers every community member needed in the space? Those are my thoughts.
And I feel we -- when I say we, I'm speaking on behalf of youth; I'm part of the youth system -- want to see something that involves all, not only certain countries or policies. In some of our countries in Africa we do not have policies; our policies are still in draft. That is a key issue. So if they are to be completed, they need to have youth voices in the drafting as well as the validation process.
I'm sure in your case most policies are already finished or being amended, and separate ones have been brought out to ensure direct controls. That is my point.
>> Perfect, thank you.
>> Do you want to take the floor? Because we are running out of time, I would insist on giving the floor to the audience if they want to pose one question. Think of your questions. I'm giving the floor now, thank you.
>> Thank you, I just wanted to add something about the blocking of some blogs, and also about abusive content on some social media with regard to children.
(Captioning services will end in three minutes)
>> Recently that happened to me on one of my pages on LinkedIn, where I was combatting disinformation on education, and the AI blocked it, saying it went against the community guidelines when it did not. So I think that while we are designing policy and regulation, we should not rely solely on AI to combat harmful content on the Internet. Sometimes you see a piece of content and you know it goes against the guidelines, yet after reporting it you receive a message saying: we have received your report, but it doesn't go against the community guidelines. Maybe the language used in that content was not English or French --
(Captioning services will end in two minutes)
>> -- so I think that for all content, on all social media and all blogs, when people report that something goes against the community guidelines, the AI says it does not, and the person appeals that decision, it should be mandatory for a person, a human being, to review that content and decide whether it goes against the guidelines or not. We should not rely fully on AI, and that will add more transparency to content regulation, thank you.
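As a sketch of the appeal flow the speaker is asking for, the snippet below makes human review mandatory whenever an automated verdict is appealed, so the AI's decision is never final. The classifier stub and all names are invented; this is one possible shape for such a rule, not any platform's actual pipeline.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    VIOLATES = "violates_guidelines"
    OK = "no_violation"

@dataclass
class ModerationCase:
    content: str
    verdict: Verdict = Verdict.OK
    appealed: bool = False
    needs_human: bool = False

def ai_classify(content: str) -> Verdict:
    # Stand-in for a real classifier, which is imperfect, especially
    # on under-resourced languages, as the speaker notes.
    return Verdict.OK

def process_report(content: str) -> ModerationCase:
    """First pass: the automated system assesses the reported content."""
    return ModerationCase(content, verdict=ai_classify(content))

def process_appeal(case: ModerationCase) -> ModerationCase:
    """Mandatory escalation: an appealed AI verdict always goes to a human."""
    case.appealed = True
    case.needs_human = True
    return case

case = process_report("reported post")
if case.verdict == Verdict.OK:   # the reporter disagrees with the AI
    case = process_appeal(case)
assert case.needs_human          # a human being must now decide
```

The key property is that the appeal path cannot be closed by the same automated system that made the first decision, which also produces an auditable record of how often the AI is overruled.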
>> Perfect. Another insightful comment here from the room. Now I'm looking at our audience: if anyone wants to ask a question or give a comment on what we have heard -- perfect, thank you. May I have the mics, at least one of them?
>> Good morning, Walter here. Thank you for hosting this panel. I think it is nice to be involved as youth, even though I would not consider myself youth; apparently I count as youth here until 35. I think it is important to consider the effect of legislation on youth in that sense.