The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> MELODY MUSONI: Hi, Dr. Sabine, everyone.
A very good morning, everyone. Can you hear me?
Okay. Good morning, everyone. Welcome to our panel discussion, where we are going to be talking about a Global South perspective on AI governance.
My name is Melody Musoni. I'm a digital policy officer at the European Centre for Development Policy Management, and I am going to be moderating this session together with my co‑moderator, Dr. Sabine Witting, a German lawyer and assistant professor in the Netherlands. She's also the co‑founder of TechLegality, a firm specializing in human rights and digital technologies. The purpose of our panel discussion is basically to have a conversation around global approaches to AI governance, and especially the emerging approaches that we see from the Global South.
And I think this conversation and this discussion is quite critical, as we are at a point where we are setting international standards on AI governance. These standards tend to be set by Global North countries, but, as of late, we have seen Global South countries pushing back and demanding fair and equal representation of their cultural and social norms and values in the development of AI frameworks.
So there's also a question of whether it's possible to even have an international framework on AI governance that is representative and reflective of all the different diverse cultures that we have and the different political and social contexts of different countries.
I guess one of the outcomes that came from the Summit of the Future was countries making a commitment to enhance the international governance of AI for the benefit of humanity. I will quote from the Global Digital Compact: it calls for a balanced and risk‑based approach to the governance of AI, with the full representation of all countries, especially developing countries, and the meaningful participation of all stakeholders.
So I think that the future of AI governance is also being shaped by geopolitics and the geopolitical superpowers that we have, and we also have to start thinking about the implications of that and how Global South countries approach the question of AI governance. Of course, the governance of AI will likely be shaped by aspirations of AI sovereignty or tech sovereignty, and very little by the promotion and protection of human rights.
So in this panel discussion, we're going to try to answer three policy questions. The first one is: What regulatory approaches to AI are being adopted by the Global South, and do these approaches advance the protection of human rights?
And the second policy question is: What challenges are being faced by the Global South in developing their AI governance framework?
And the last question is: What are the implications of different regulatory approaches on AI development and deployment in the Global South?
That being said, I am joined on this panel by a panel of brilliant experts who will be representing different regions. We have colleagues representing Africa, China, Asia Pacific, and Europe.
I will start with those sitting with us today. We have Jenny Domino, Private Sector, Asia Pacific Group, a nonresident fellow at the Digital Forensic Research Lab and a case and policy officer at the Oversight Board at Meta, where she oversees content policy development concerning elections, protests, and democratic processes. She's also completing her Ph.D. at Harvard Law School on the global governance of technology.
Welcome, Jenny.
>> JENNY DOMINO: Thank you.
>> MELODY MUSONI: And then online, we're joined by Lufuno Tshikalange, Private Sector, African Group, an expert in cyberlaw and consulting, with a firm specializing in data privacy and content management in South Africa. She has over 10 years of multidisciplinary experience in the (?) sector and is one of the top 10 women in (?).
We're joined by Gianclaudio Malgieri, Civil Society, Western European and Others Group. He's also the co‑director of (?) and the managing (?).
Welcome, panellists. I'm going to jump into our discussion, and I'm going to start with Advocate Lufuno. What do you see as some of the challenges that African countries are experiencing when they are developing their AI frameworks?
Again, go ahead.
>> LUFUNO TSHIKALANGE: Thank you for having us today.
In Africa, we now have the Continental Artificial Intelligence Strategy, which was developed this year, 2024, and adopted by the African Union Executive Council in August 2024. Currently, we are participating in and contributing to the strategy's implementation plan for 2030.
So this strategy is seeking to ensure that we have the appropriate establishment of AI governance and regulation, and that we accelerate AI adoption in key sectors like agriculture and health, building on the Digital Transformation Strategy that we adopted earlier.
It also promotes the development of African AI systems and deals with issues of skills and talent development, because it is not just an AI skills shortage; we have a shortage of ICT skills across the board. And it promotes grassroots research and innovation, particularly in AI, which is important for us as a way to participate meaningfully in the digital economy and, more importantly, in the AI economy.
The important part is that the implementation of this strategy is going to be based on ethical principles that respect human rights and diversity, while also ensuring that we have appropriate technical standards for AI safety and security.
So this AI strategy takes a multi‑pronged approach, emphasising that the regulatory framework should be flexible, agile, adaptable to context, and risk‑based, because it is important that we do not do a copy and paste. The development of this strategy did learn from the EU, the OECD, and other regions, but the intention is to make sure that the strategy responds to our own unique challenges, importantly inequality, poverty, and unemployment.
The approach is also meant to be collaborative.
The strategy's goal is to promote a human‑centric approach, which is very important: the technology should be developed as a tool to assist our developmental goals, not to exploit people or violate their human rights. At the same time, it is also very strong on Security by Design, ensuring there is accountability and transparency in both the design and deployment of AI systems.
The strategy highlights the need to make sure that mechanisms throughout the AI lifecycle mitigate potential harms from AI technologies while we foster responsible development and use of AI across Africa.
It is important to note, as I mentioned earlier, that the AI strategy is not a standalone document; it is part of the broader Digital Transformation Strategy. Even the priorities in the strategy are derived from the Digital Transformation Strategy, which is in turn informed by Agenda 2063. So the point is not to have an AI strategy for the sake of it, but to have an AI strategy that responds to our unique challenges, stimulates our economy, promotes the integration that is a target of Agenda 2063, and ensures that we generate inclusive economic growth and stimulate job creation. That is the holistic approach to a digital revolution for socio‑economic development that meets the needs of Africans.
Other important landmark policy frameworks include the AU Convention on Cyber Security and Personal Data Protection, which is commonly known as the Malabo Convention. The current challenge we have is that the Malabo Convention came into effect in June of last year but is yet to be implemented.
We also have the AU Data Policy Framework, which expands on how effective data governance can be achieved even as we implement emerging technologies like AI.
So we have priorities and guiding principles that are provided by the strategy, which is people‑centred and human rights (?). It's important that as we ‑‑
(Speaker video is freezing)
>> LUFUNO TSHIKALANGE: -- and different constitutions in states in Africa. Peace and prosperity is also one of the principles that will govern this strategy. Inclusion and diversity, as we said: we are struggling with triple challenges, one of which is inequality, exacerbated by a lot of divides, the digital divide, the information divide, the infrastructure divide, and a number of other divides that make technological development (?) when we compare ourselves with other regions. And it has (?) as one of the principles: cooperation, because emerging technologies cannot be governed in silos. We have to do it in collaboration with each other.
So we say that the AI governance conversation is important to us because of the seven aspirations that we need to achieve according to Agenda 2063, and the SDGs that correspond with them.
Our key concern, however, is the misuse of intellectual property and the misuse of information threatening our societal democracy -- (audio is distorted) -- and we also consider barriers that may make AI adoption a bit of a challenge, including inadequate Internet access, insufficient data for training models and/or structured data, limited computing resources, and our not having sufficient skills.
The risks that we have identified, which will be addressed fully in the implementation plan, include developmental risks; system‑level risks, like the bias that may come with AI systems; structural risks, like automation, which might increase the disparities and inequalities that we have; and cultural and societal risks as we advance into the world of technologies. So we have strategies that will help us ensure that (?) our triple challenge and other challenges identified in Agenda 2063, and that we are not creating more challenges for ourselves.
We have examples that we'll be looking at that can help us accelerate the implementation and adoption of AI within the African region (?) ‑‑
(Audio is cutting in and out)
>> LUFUNO TSHIKALANGE: ‑‑ in the space having their own (?) To identify objectives within their own countries.
(Speaker video is freezing)
>> MELODY MUSONI: Should we continue?
Okay. It seems like we have lost Advocate Lufuno. From what she was sharing, she gave an overview of what is happening in Africa, particularly with the continental AI strategy and how it connects with Agenda 2063 and the Digital Transformation Strategy, as well as drawing examples from some of the African countries that are actually leading on developing their own national AI frameworks.
So I guess I am going to come to you, Jenny, to talk more about the case of Asia.
I remember last year, at the IGF, we were in Kyoto, and Japan was sharing its role in the shaping of the international frameworks on AI through the Hiroshima AI Process and how it shaped the G7 principles.
And now we're here in Saudi Arabia, which has also developed the Riyadh Charter on AI Principles for the Islamic World, aimed at aligning AI frameworks with Islamic values and principles.
As someone who has been doing work on AI governance, do you see different approaches to AI governance emerging? And do you see these approaches promoting or upholding human rights, from your perspective?
>> JENNY DOMINO: Can you hear me? Yeah?
First of all, good morning, everyone. I'm happy to be here. Just to qualify that I'm speaking in my personal capacity, and my views don't necessarily reflect the views of the organisations I represent.
To answer your question, I echo what Advocate Lufuno already mentioned and discussed.
In Asia, by and large, we see more of a wait‑and‑see approach, so we don't have anything that comes close to the EU AI Act in terms of hard law and regulation.
So there are draft legislation initiatives.
There's the ASEAN guidance, so it is more of a best‑practice or soft‑law approach, rather than hard law, that we see so far in Asia, with draft laws in Japan, Korea, Thailand, and other countries.
And what I see as a common thread among these various initiatives is that they overlap with the EU AI Act in terms of risk tiers, from minimal risk and limited risk up to high risk, but there's a lack of centring of human rights. Many human rights groups have actually called this out, including in the EU AI Act: the focus on risk tends to eclipse the focus on rights.
And I think there's something here we can learn from platform governance, from governing social media platforms.
We have seen this with emerging technology before. When social media emerged more than a decade ago, there was a lot of excitement, a lot of hype, and the lack of centring of human rights actually led to a lot of human rights violations. The UN fact‑finding mission on Myanmar, in 2018, identified Facebook as being used by the military and officials to incite violence against an ethnic minority.
This is just to give you an example of how, if we don't centre human rights when we talk about AI legislation and policy, the technology can be misused.
We've seen this happen in the use of Generative AI in conflict settings.
As you said, Melody, there is a lot of geopolitical tensions happening around the world. A lot of armed conflict situations are happening around the world.
One of the things we need to think about, also, is how AI technology can be used.
Another thing I want to emphasise is that, when we speak about AI, it encompasses a broad range of uses and purposes. There are more traditional uses in content moderation on social media platforms: they use automated technology to enforce their rules, taking down things like misinformation or hate speech. So there is that.
But there's also more emerging technology, such as Generative AI, like ChatGPT, and many of the issues that involve these technologies Advocate Lufuno already mentioned, so I won't repeat them here, except to emphasise that there is a divide in terms of basic infrastructure on the one hand, and inequality on the other.
So one of the things I'm worried about is that the hype surrounding AI tends to overlook many infrastructure problems that we still see in Asia and in developing countries. We can't really think about AI uptake in these countries until we solve basic issues like access to education and digital education.
And then, on the other hand, we also have existing human rights issues in many countries in the world, including in Asia. We see Internet shutdowns and website shutdowns. If these things are not addressed, we can't really talk about AI: it's a shiny new thing, but we have not addressed long‑standing issues concerning the Internet. So that's something I want to highlight in this conversation.
As Advocate Lufuno already mentioned, we need to centre stakeholders, but actually within the human rights framework. It's about rights: who are the communities affected by the technology? Who are we leaving out? Who must we include in the conversations?
I will stop here.
>> MELODY MUSONI: Thank you so much for your contribution. You have actually raised something I was talking about earlier, coming here: that we are now just talking about AI but forgetting the important issues, the foundation of AI, issues of digital infrastructure, for example, accessibility of the Internet and things like that. So AI is coming in and deepening the divide, and the focus is now on AI governance, overlooking some of the existing problems that we already face.
I'm going to come to Gianclaudio to talk more about the developments that are happening in Europe. I'm sure everyone has heard about the EU AI Act, but what exactly is this act about?
Perhaps you can also talk about the Council of Europe and what is happening there on developing a framework on AI, and make that distinction, because we tend to be confused about what the EU AI Act is versus the Council of Europe framework on AI governance.
Over to you, Gian.
>> GIANCLAUDIO MALGIERI: Thank you very much. I hope you hear me well.
Indeed, the AI Act was everywhere in the discussion because all other countries might refer to the AI Act as a model or as a non‑model.
So I would like to start, maybe, from the history of the AI Act, very briefly. Then we can see the mission of the law and its structure, a comparison with the other laws we have in Europe also regulating AI‑related aspects, and maybe, indeed, a focus on the AI Convention and the interplay between the Act and the Convention.
Now, doing everything in seven minutes is impossible, but we'll try to do some highlights.
The AI Act is the first comprehensive AI regulation we have in the world. Of course, we had other laws in many countries in Europe, but the AI Act has the ambition to regulate artificial intelligence as a whole. It took more than three years to reach final approval, from April 2021 to August 2024, and we have different timings of application for different companies and different kinds of entities.
At the same time, I was hearing that, for example, the Asian approach is a wait‑and‑see approach. In Europe, we don't have that approach, and I think the main point is that sometimes wait‑and‑see is too late. There's this pattern with big tech, et cetera: first let innovation go, and then regulate.
The European Union took a different position. Sometimes, if we let companies go, et cetera, the consequence might be that harm is already produced, harm to human rights and fundamental rights.
Sometimes this technology creates dependence in society, so society may come to depend on these technologies, and it may become the case that a company is too big to sanction, too big to fail. This was happening, for example, with Generative AI systems, for example, OpenAI and some chatbots.
Looking at the AI Act, I would like to say that there's a difficult balance between risks and rights. It had a particular development because it was initially conceived as a product safety tool, something more related to consumer protection, or product safety, which is also consumer protection.
And, indeed, it is interesting to notice that, within the European Commission, it was proposed by DG Connect and not by DG Justice, which is usually the part of the European Commission responsible for rights, liberties, et cetera.
It was developed, really, as a technological safety tool.
And then, through the European Parliament and also the Council during these three years, the Act became something else. It became a fundamental rights tool. So we still see the structure and the DNA of a product safety regulation, but with a lot of fundamental rights in it.
But it's not a tool for individual rights; it's a tool for protecting human rights, and this is important. I think this is one of the main merits of the AI Act, because the AI Act makes some choices. Okay? It's not leaving everything to the decisions of companies or the decisions of member states. There are some political choices about risks to fundamental rights that the AI Act itself takes. Now we see other laws, like the Brazilian bill -- I don't want to talk about other regions -- but it's the first time that we have a list of prohibited practices.
And this brave choice was a political choice. This was part of the most diplomatic section of the negotiations.
So what was the choice in terms of prohibited practices of AI?
For example, social scoring by AI is prohibited. Manipulation, or, let's say, the exploitation and manipulation of especially vulnerable --
(Speaker video is freezing)
(Captioner has lost Zoom feed)
(Please stand by)
>> GIANCLAUDIO MALGIERI: -- there's a whole chapter about how AI innovation should be considered. I think one of the main aspects of the innovation part is that there are regulatory sandboxes and support measures for SMEs, small and medium enterprises; but, also, there are specific provisions for the training of AI, with exceptions to the data protection rules if it is for public interest purposes.
Of course, there are many things I did not say.
The AI Act is not the first piece of European legislation regulating AI‑related aspects.
The Digital Services Act, from 2022, regulates recommender systems and online behavioural advertising.
The Digital Markets Act regulates large platforms in digital markets.
Then we have the AI Liability Directive, which has been proposed. As you know, the European Union is a process, and the Council, in particular France, is against this liability directive, because now there's this wave of saying we need more simplification.
Macron was concerned about adding new rules.
The liability directive is the final element connected to the AI Act: it's about providing tools for liability when the Act is violated.
But just to say that this is a very, very broad picture.
Then, the Council of Europe this year, actually a few weeks ago, finally adopted the AI Convention. It's interesting to notice that this is not just a regional act: many signatories are not part of Europe or the European Union. For example, the United States is a signatory of the AI Convention, and we have other countries as well; Georgia is a signatory, et cetera.
In terms of the definition of AI, they both take the OECD definition, but the scope is different. The AI Convention regulates only the public sector use of AI, though member states can opt in to cover private actors providing and deploying AI systems.
The AI Convention, though, regulates the whole lifecycle of AI, up to its final elements. So there's this difference.
I think these are all some of the main, let's say, differences that they have.
Also, the AI Convention doesn't have a list of prohibitions.
Then the question is whether the AI Act can be the European Union's implementation of the AI Convention.
The Convention should be implemented by member states, and the AI Act is law, part of the European system, approved before the AI Convention, so it may be a vehicle for its implementation in Europe. This is an open question that they are still working on.
Thank you.
>> MELODY MUSONI: Thank you so much, Gianclaudio, for your contribution.
I guess the points you raise, especially on the roadmap and the discussions leading to the EU AI Act, are something I find relevant, especially in the African context, where there is a push to regulate without taking our time to actually have these negotiations and understand what our approach is. I always give the EU process as an example: it took over three years for you to get to where you are with the EU AI Act. And there are the issues you raised about it not being just the EU AI Act that regulates AI; we have the Digital Services Act and the GDPR, which also regulate it. That is important for understanding our approaches to AI governance and regulation, and, of course, the distinction between the Council of Europe framework and the AI Act, which is also important for me, as someone who is learning more about the EU processes and what is happening with the Council of Europe.
I'm going to change things a bit because one of our speakers was not able to join. Before we go on, I wanted to find out from the audience, online and here, whether you have any questions for our panellists based on what they have already contributed, before we move on to the next segment of our discussion.
Online?
>> SABINE WITTING: Melody, if I may: maybe one of the things I always ask myself, and I know I should know the answer to this, but I don't, and it may also be interesting for others. This risk‑based approach, Gianclaudio, that you were talking about: risk to what? Because I think the EU AI Act started as a product safety framework, and later on, we infused it with human rights, or we tried, at least.
So with the EU AI Act, what was the determination around these risk categories? How was risk assessed? Maybe you can tell us more about it. And Jenny also said that Asian countries are looking at this kind of tiered approach to risk, so maybe we can then hear from Jenny a little bit more about what the discussions are on how to determine risk and how to think about these different categories.
So maybe Gianclaudio and Jenny can follow.
And then maybe the AU perspective.
>> GIANCLAUDIO MALGIERI: Already in the GDPR, we had this problem: it was the risk to fundamental rights. What are the risks to rights? Risk comes from business management doctrine, let's say, while fundamental rights come from human rights and fundamental rights analysis. And they're based on very, very different intellectual frameworks.
Risks can be measured and controlled. Fundamental rights are difficult to conceive of as a measurable element; they're usually not measurable. They're politically assessable, if we can say that, but not quantifiable.
So this is a big problem we had already in 2016 with the GDPR.
Then there were similar problems for risk assessment. And now the AI Act.
So I would say that the short answer is: risk to fundamental rights. But how can we measure it? How can we analyse it? I think one of the ambiguities of the law is in Article 1, which says that the AI Act is to protect fundamental rights, health, and safety. I'm always a bit skeptical, critical, when you put safety as separate from fundamental rights; it's only a part of a bigger understanding of fundamental rights.
Democracy is not really part of the fundamental rights chapter either. If you look for democracy in the Act, you won't find much. But we know they're all connected, right? They're all under the same approach and umbrella.
So there are, of course, political choices to be made. When you start from fundamental rights, we cannot leave everything to just private accountability.
So I think the important thing is, take mental integrity: what does mental integrity mean? Think of manipulation, just as an example.
We did some research on how one can really measure risk to fundamental rights, and now we are working with an NGO, ECNL, to try to understand how to really do that. We published an article this year exploring how the subjective views of marginalised groups, for example, can inform the discussion of how to measure the severity of risks to fundamental rights, and how other elements, like adverse effects or the violation of laws, come into play. We usually have these big discussions in Europe. Some people say that risks to fundamental rights amount to a violation of law, but violation is black and white, yes or no, and it's difficult to measure.
Other people say we should measure the adverse effect. But to measure adverse effects, you would need something quantifiable, like property or health, and then every violation of fundamental rights, like a violation of privacy, et cetera, is reduced to what a psychologist can measure, for example.
This is a bigger issue than just AI.
>> JENNY DOMINO: Yes. Thank you. So in the Asia Pacific region, what I see in terms of risk is sort of similar to the EU AI Act. There are uses that are enumerated, for example, in the ASEAN Guide on AI Governance and Ethics. There is not a lot of articulation of risks to whom, to answer your question, Sabine. And when it comes to human rights, there's not a lot of mention of it, if at all, nor of stakeholder engagement.
Before I delve further into this, I want to make a note about what the earlier panellists said about the race to regulate. What's interesting to me about that framing is: in this race, who is competing against whom, and who is left out?
If we're thinking about the Global South, right, developing countries, I agree that we need to hold tech companies accountable. At the same time, we also need to think about the geopolitical tensions, right? How are we looking at all these competing regulatory initiatives? And who is left out of this race? Why is it even framed as a race? A race means there's going to be a winner, or a leader, in terms of regulation.
But we have to be thinking about this from a global perspective, because the Internet is global. There will be overlaps. We don't want fragmentation. And my worry is that, if we don't even have a unifying framework and we're thinking about this as a race to regulate, then there will be people at the bottom and people on top.
So I just want to push back a little bit on the framing because it's interesting to me, as somebody coming from the Philippines.
And so, back to the human rights discussion and corporate accountability: I do agree that we need to hold corporations accountable, tech companies accountable, but I see the discussion here as mirroring the history of the UN Guiding Principles on Business and Human Rights. That's the whole reason the Guiding Principles were formed. Companies, decades ago, were operating in developing countries, and what do you do when the human rights violators sometimes are government actors? Right? That's why I really do believe that government regulation is warranted; it provides the baseline. But over and beyond that, the question is what we want governments to regulate under human rights law.
(Audio is distorted)
>> JENNY DOMINO: I'm using what I'm most familiar with. For example, Article 19 of the International Covenant on Civil and Political Rights guarantees freedom of expression. Under Article 19 of the ICCPR, there are grounds on which you can limit freedom of expression, right? But this whole treaty, this Article 19, was designed to hold state actors accountable. All treaties are state‑based, right? We need to guarantee the fundamental rights of persons against their government.
And so there are exceptions, right? There are so many issues that we call lawful but awful, things like (?) false content. There were so many human rights groups that criticized the fake news laws that were coming out in different countries around the world several years ago. Right? Or hate speech: in certain countries, there are hate speech laws; in other countries, there are none. Regardless, right?
So what I'm saying is that there are issues. That's what I was talking about earlier when I said AI is all‑encompassing: it affects so many different rights, so many human rights.
So we should be thinking about: what do we want our governments to regulate, and what do we want corporate actors to do above and beyond regulation? Right?
Under human rights law, there are so many areas of human rights that we wouldn't want governments to regulate. One example of that is speech, right?
How do you regulate Gen AI companies?
We want them to go beyond what regulation requires. I see the human rights framework as complementary; that's how I see it doing its job. It's not meant to replace government regulation, nor should it.
At the same time, there are many areas, in many countries around the world, where we need companies to step up, because we can't just rely on governments to do their job. So that's how I want to nuance the discussion a little bit about regulation.
>> MELODY MUSONI: Before I forget, Lufuno, do you want to come in?
>> LUFUNO TSHIKALANGE: In terms of the risk identification, from our vantage point, the approach of the African Union in the development of the continental Artificial Intelligence Strategy, and in how we are doing our own policy in South Africa, is more human‑centric. Throughout the continental strategy, you see a lot of references to human‑centric development and to respect for and protection of human rights, which is also one of the eight principles of our strategy.
So I believe this approach came about because the strategy is based on the human rights charter and a number of constitutions around the African region.
So the risk is to the people. If we do not have ethical behaviour, we end up with people for technology instead of technology for the people. Technology should be a tool for advancing our lives and helping us move out of poverty, inequality, and unemployment, not something that turns people into a commodity. That is what the strategy is always trying to avoid: the technology is here to enhance, not to abuse or take advantage of anyone.
So the rights to privacy, environmental rights, issues of climate change: these were identified because they impact human rights at an individual level. Though there are other issues related to the economy, the important part was that the risk would be to the consumers of the technologies, and they need to be protected from any risk or harm that may come out of it.
So I believe that was the approach: to make sure that this strategy is human rights‑centric and that whatever comes out of the technology advances human rights rather than violating them.
I do hope that makes sense. Agenda 2063 also says that our development must be human‑centric, so everything that we have, from a policy perspective, puts human rights at the centre of everything.
Thank you.
>> I think in a lot of tech companies, there's a compliance team that's different from the human rights team, and the compliance team is thinking about business risks.
So we have to map the risks to the business of having an adverse impact on human rights. Is there an accountability mechanism? Will there be regulatory risk? Is there some consequence? Is there going to be a requirement in legislation or in regulations to provide a remedy to victims of adverse human rights impacts? Because without that, there may be a nice human rights department that produces human rights impact assessments, but not any real consequence. And unless there's a real consequence to the business, unless it's a business risk, I don't think we're going to see a lot of change.
>> Thank you for the information about the different jurisdictions.
I would like to draw attention to some technical realities. If we look at them, we currently see around 140 large language models across the globe. More than 120 of them come from the U.S., then you have around 20 from China, and the rest are shared between a tiny number of jurisdictions.
In practice, and beyond any regulatory approaches, that means that a huge number of countries, including the European Union, are dependent on large language models from the U.S. and from China. If we speak about fundamental rights, and how to gain access to this technology while safeguarding fundamental rights when using AI, I think one important aspect is digital sovereignty: to what extent are countries in a position to train AI models, the basis of every AI system, themselves? The fact is that even in Europe, we do not have the data centre and computing power capacities to train European models, which means we're dependent on large language models from other regions.
For developing countries, in light of their access to AI, it may be even more challenging to build out the data centre and computing power infrastructure needed to be sovereign and to train AI models themselves.
I think that's an important factor for the discussion. A unified AI regulatory framework would be an important aspect, but it won't work if we do not have the same conditions for training AI models across the regions of the globe.
Thank you.
>> MELODY MUSONI: Questions?
>> SABINE WITTING: I have a question: how do we ensure the same quality provisions across all jurisdictions, in line with human rights protocols?
I guess it goes to accountability across borders.
Jenny, you spoke about that. Maybe anything you want to add to that?
>> JENNY DOMINO: Yes, of course, thank you.
Maybe I will quickly comment on all the questions and comments.
So, on remedy, I completely agree, though I'm afraid my answer is more of a question, which is: what would constitute sufficient remedy? If there's an adverse human rights impact, what would be the remedy in this regard?
The UN Guiding Principles were constructed at a time when they were contemplating a different kind of industry, not the tech industry, not a cross‑border tech company that may not be in the regulatory jurisdiction of the country of concern.
I guess that's just an addition. What would be a good remedy?
Again, with respect to platform governance in Myanmar, regarding the incitement of violence: there were groups in the camps in Bangladesh that said, help us, because your platform was partly used to do this, to cause what is happening to us.
And Facebook said no, saying they're not a charity organisation. Right? And I think that raises, at the very least, interesting questions about what would constitute remedy. From a legal point of view, how do you attribute causation? Again, if we're talking about liability, that's a different framework.
I think all of this is really interesting, as a matter of law. I don't have easy answers for that as well, but just to complicate the discussion further.
On the second comment, I completely agree. What I want to add there is the training, the labeling, the labour aspect of this. I think this is something that we have not discussed yet. How are developing countries involved in the training of this data, in the labeling? Right?
Because cheap labour, again, is in the developing countries. We see this in content moderation, and we see this in AI as well. And the question there is: what can these governments in developing countries do to regulate labour, when there are no labour protections in many parts of the world?
I forgot about the last question. Quality assurance, right?
Yeah, so I think that's why it's still very relevant to try to articulate more detailed guidance building on the UN Guiding Principles on Business and Human Rights. There are no definitive answers yet, but I know there's a group at the UN, and civil society groups, working to ensure quality and create guidance.
>> MELODY MUSONI: Thank you so much, Jenny.
I guess, for the sake of time, we need to go to the second round of questions. I see there are still more questions online. We'll try to attend to them at the end of our session.
So the next question I'm going to ask, I will direct to Gianclaudio. One of the policy questions that has been coming up, especially within the African context after the EU adopted the EU AI Act, is the externalisation of the framework. It comes from the experience of the GDPR: when it was adopted, most African countries felt that they were being pushed to adopt a similar framework within their regions so as to comply with the EU and be able to do business with the EU.
I was wondering, in the context of the EU AI Act and the provisions we have: does it have the same effect as the GDPR? Do you see many countries, for example, emulating the EU AI Act and adopting its framework in their own national legislative processes, or will it be different because the Act is different from the GDPR?
>> GIANCLAUDIO MALGIERI: Yeah, sure. Great question. And I think it's connected to some other points and questions that were raised in this round of questioning. So it also gives me an opportunity to connect them.
So I think there are two separate aspects. One is the real scope, the potential extraterritorial scope, in particular of the EU AI Act; the other is the potential Brussels effect: whether the emulation by other countries and systems that we saw with the GDPR might happen also for the AI Act. These are two separate questions, but connected, and also connected to some of the elements or questions that were raised before, for example, about how we can have a normal governance of AI if we have countries that cannot have data centres or access to AI.
This connects because, if we just regulate Generative AI and it's developed mostly in the U.S., we need to have some rules for its application, let's say the extraterritorial application of the law.
If a system is commercialized in Europe but produced somewhere else, then you have to follow the rules of the AI Act.
The GDPR went even broader: even if you just monitor the behaviour of people in the European Union, or if you offer services for free to people who are in the European Union, the GDPR was applicable.
And I think the two parts are connected, right? The broader the scope you have in the law, the more you're pushing other countries to adopt similar systems in order to be adequate. "Adequate" is a term used by the European Commission to analyse the compatibility of other legal systems with the European data protection system.
So, just to say, I think the scope, the particular scope, of the AI Act tries to go beyond what is in the (?), but, of course, this is not easy.
About the Brussels effect, this is a political analysis. So what do we expect? Do we expect other countries to follow the AI Act, as happened with the GDPR? And this is also connected to the comment about a race between the first and the last and what we do.
So I think, of course, it's not a race to be the best country or the best region. I think this is very important: there's a big risk of legal colonialism on Europe's part. Of course this is a risk, and this is not something that we want.
I don't think we should push for Europe to dictate the legal agenda of other countries. This is not what we want. At the same time, if these laws are pro‑human‑rights and pro‑fundamental‑rights, then it's also, I think, a good thing if the Brussels effect takes hold.
So if other countries copy the European Union, prohibiting the exploitation of migrants, (?) a full approach to bias and discrimination online and oversight, then it's good if other countries maybe copy the European Union AI Act.
We see it happening. I was mentioning the Brazilian AI Act. It was approved by the first chamber two weeks ago, and it really reflects the European Union AI Act: in its design, governance, and prohibitions, it reflects a lot. And it reminds me of how the data protection law in Brazil reflects the GDPR.
I'm just saying this is possible, but there's a big caveat, a big difference: it's difficult to export the AI Act from one region to another. The AI Act is highly rooted in political considerations that cannot be easily exported because, for example, even the prohibition of social scoring was written having in mind examples from China, et cetera. So there are political considerations about what human rights means, and the same understanding of fundamental rights is not applicable all over the world.
So why could the GDPR be exported? I'm using the word "exported," but let's say: could the framework that we have be reproduced in other regions? The GDPR was based on principles that are not so difficult to reproduce, for example, technical principles. Right? The AI Act, by contrast, is rooted in fundamental rights, and that fundamental rights framework (?) just applies to the European Union.
So this is maybe one of the major obstacles to the Brussels effect.
But we can wait and see. When we were preparing this panel, I had a look at the Brazilian Act, and it's similar to the AI Act.
So even though exporting principles is difficult in practice, the Brussels effect, I see, may already be taking hold.
>> MELODY MUSONI: Thank you for that, Gianclaudio. Earlier, we were talking about geopolitics and how it shapes the approaches we adopt to AI governance, for example.
I think what you mentioned, especially on social scoring and the position that the EU AI Act takes, is just one example.
I'm going to go to Advocate Lufuno again, looking at the whole geopolitics and how it's actually shaping our approaches to AI governance.
You are sitting in South Africa, and South Africa has taken over the G20 presidency, and expectations are quite high for the continent. A lot of people are seeing it as an opportunity for Africa and South Africa to promote and to advocate for inclusive digital --
(Captioner lost room audio)
>> MELODY MUSONI: Can you hear me now?
>> LUFUNO TSHIKALANGE: Yes, I can hear you. The G20, promoting Africa and --
>> MELODY MUSONI: Yes. So I was saying that there are big expectations that, with South Africa now taking over the G20 presidency and the AU being a member, we should see more and more conversations and discussions around digital development and AI governance.
South Africa is also a member of the (?) economic bloc, which is also shaping AI policies.
What I wanted to ask you is: do you think Global South countries are able to develop an alternative to what we have in the EU AI Act? Do you see Global South countries developing an alternative approach to AI governance? If so, what do you think that would actually look like?
>> LUFUNO TSHIKALANGE: Thank you. Yes. South Africa has taken on the G20 presidency for 2025; the theme is sustainability, equality, and solidarity. I believe that we need to take advantage of the G20, which I am personally already doing, looking at the issue of proposing the establishment of (?) diplomacy for the G20. As I said, collaboration is very important, so this G20 presidency will help us to collaborate better and align ourselves on the principles that we want to see informing the conversation on AI governance. I don't necessarily see us cutting and pasting the EU AI Act, but I believe that the EU has really set an example that we can study and see what lessons can be learned moving forward.
And I believe, if I'm not mistaken, the European Commission is also part of the G20, alongside the African Union. So I am seeing an opportunity where we are not competing as to who can do better than whom, but bringing our resources together to make sure the world becomes a better place for everyone.
And I believe that if we form appropriate collaborative relationships during our presidency, we can have opportunities for skills exchange programmes, where some of the African talent can be sent to the Global North to learn the skills (?) I believe that there are a lot of opportunities that can be derived out of this. We're not doing well in our cybersecurity space; our cybersecurity posture in Africa is not something to brag about, and some of the strong countries within the G20 are doing well in that space. So I am seeing that, if we realise this opportunity, there are a lot of advantages from where I am seated. And the expectations are big, for sure: we also brought a panel to the South African Science Forum, which happened in the first week of December, and there are a lot of expectations that the President and the Department of Science and Technology set out regarding the presidency.
I don't know if I'm answering your questions, but with a lot of the challenges that we have, if we can muster the collaborative approach, I believe this one year of presidency will (?).
Thank you.
>> MELODY MUSONI: Okay. Thank you, Advocate.
I'm going to come to you, Jenny. I think you already touched on this in a way.
(Overlapping speakers)
(No discernible speaker)
>> MELODY MUSONI: I'm going to come to Jenny and go back to the discussion on AI governance. You were mentioning why we are competing and that, if we are competing, there will be winners and losers.
Technical team, can you help us with the audio for the online participants?
The way I see it -- and I'm moving from law into policy and public policy -- I see a lot of influence coming from different political actors on how we approach AI governance. Just to give the example Gianclaudio gave with the EU AI Act: social scoring is considered an unacceptable risk and is definitely not allowed, in terms of regulation, in Europe, but we see areas in China where they still have social scoring. And I think those political standpoints are going to shape things.
One of the declarations and commitments that China made is that we need to support each other in discussions on AI governance, for example, in global AI governance discussions. And, already, that suggests we are going to a place where we are not going to have one uniform regulatory framework on AI governance.
My question is: how do you think we can achieve a more universal approach to AI governance or AI frameworks? And where do you actually see us moving from here? What should we start prioritising? What kind of discussions do we need to start having where we are able to say: this is our baseline; these are the minimum standards we expect to see in different frameworks, whether adopted by the African Union or in Latin America?
What kind of conversations should we start having? And what kind of principles should we start to see regarding AI governance?
>> JENNY DOMINO: I feel like, if I knew the answer to this, we could all go home, and there wouldn't be an Internet Governance Forum next year because we wouldn't have a need for it.
So my short answer to this question is that I think human rights really provides a common language. The UN Special Rapporteur has described human rights as offering a universal language that everybody can understand: regardless of where you are in the world, regardless of the political situation in that country, the people in that country understand the rights framework. And that's why I think human rights groups have been criticizing the more risk‑oriented approach to AI governance, as opposed to a rights‑centric framework.
I know that answer can also be seen, or perceived, by some as naive or too idealistic, but I actually think it is pragmatic because, first of all, it's something that is understandable to everyone. Civil society and underrepresented regions can use that kind of language when talking about the risks and the harms posed by AI technology.
So I think that's something that we should aim for to bring back human rights front and centre in AI governance.
>> MELODY MUSONI: Okay. Thank you so much. We need to prioritise human rights protections in our frameworks on AI governance.
I see we have four minutes left. Do we have questions from the audience, online and here?
>> Thank you for the wonderful thought‑provoking conversation. I wanted to ask ‑‑ I only attended half of the session, so if this is repeated, you don't need to answer it.
I want to understand: when the Global South regulates or writes policies for the Internet and digital technologies, they are writing policies to govern companies and organisations that exist outside of their jurisdiction, most of the time, whereas what we see in the EU, and generally in China and the U.S., is that they are writing policies and regulations for their own leverage points, their own companies, and their own markets.
So when the Global South comes and writes its own policies, even when it comes to Africa, which has a huge, young population consuming a lot of these technologies, they don't have as many leverage points, and perhaps the IGF is a great space to prioritise some of the things that the Global South can add.
So what are these leverage points that the Global South needs to capitalise on and bring to the conversation, where we can contribute to the conversation with the EU experts? I think one of the things that really makes the EU a stronger market is that they have a lot of experts who are very capable, in terms of collaboration and in terms of having the organisations and the funding that enable them to make huge leapfrogs, in terms of knowing what can serve the political organisations and also civil society organisations and so on.
So what do we need to do in the Global South? What do we need to leverage, and what should we prioritise, and what should be the things that we focus on in order to help us have a productive conversation with the Global North counterparts?
Thanks.
>> MELODY MUSONI: May I take this one? You raise an important question. It's one of the questions that we always raise when we are talking about regulation of AI, especially in Africa: what are we regulating, and do we even have the institutional capacity to go after big tech? In some of the conversations we have, the issue is that these companies don't have any legal presence in a lot of countries. For example, Meta has offices in South Africa, Kenya, and Nigeria. So if there is unlawful processing of data in another country, one that doesn't have data protection laws, people are not protected.
And just to give you an example on data protection: one approach was that perhaps it makes sense, from a regional perspective, to collaborate as a continent and have one regulatory board that represents the interests of everyone else.
I remember, in 2017, with the Cambridge Analytica crisis, they were able to take action against Meta, Facebook at the time, because they had a regulator.
I think there were discussions with Kenya as well. So learning from that example to say for countries that already have this institutional capacity ‑‑
(Captioner lost room audio)
>> MELODY MUSONI: -- yes, at the moment, with AI, it's very difficult. We always have these conversations, and my position is: rather, let's start with the low‑hanging fruit, the data protection laws that we have, and try to see what laws we can extend, for example, from a criminal law perspective. We are talking about misinformation and hate speech; we already have laws, to some extent, that cover issues where AI is being used for misinformation.
So extend the law.
So there are different ways of regulating AI without necessarily having a specific law because, at the moment, we definitely don't have the capacity and the framework to go after big tech if they are not in our countries.
I don't know. Jenny, Advocate Lufuno, do you have comments?
I will start with Jenny and then Lufuno.
Can you hear us, Lufuno?
>> LUFUNO TSHIKALANGE: Yes, I can hear you. I thought you said Jenny was going first.
Thank you.
Yeah, I believe from the Global South perspective ‑‑
>> MELODY MUSONI: Just to remind you, one minute. Wrap up in one minute.
>> LUFUNO TSHIKALANGE: From the Global South perspective, I believe that we do have enough skills and case studies that we can collaboratively use to combine our resources and come out of the consumer status that we have been in for so long. We need to work towards coming to the table not as the subject matter of discussion but as contributors.
Thank you.
>> MELODY MUSONI: Okay. Thank you so much.
I see we have run out of time. I would like to take this opportunity to thank our speakers, both online and here with us. Thank you, audience, for participating in our discussion. Feel free to come and engage with the speakers who are here if you have additional questions.
Let's give a round of applause to our brilliant speakers.
Thank you.