Session
Organizer 1: Paola Galvez, IdonIA Lab
Organizer 2: Ananda Gautam, Youth IGF Nepal
Organizer 3: Matilda Mashauri, University of Dar es Salaam
Organizer 4: Aaron Promise Mbah, Tlit Innovation Lab
Speaker 1: Paola Galvez, Civil Society, Latin American and Caribbean Group (GRULAC)
Speaker 2: Yonah Welker, Civil Society, Eastern European Group
Speaker 3: Abeer Alsumait, Government, Asia-Pacific Group
Speaker 4: Monica Lopez, Private Sector, Western European and Others Group (WEOG)
Ananda Gautam, Civil Society, Asia-Pacific Group
Matilda Mashauri, Government, African Group
Aaron Promise Mbah, Private Sector, African Group
Roundtable
Duration (minutes): 90
Format description: A round-table format encourages active participation and dialogue among speakers and participants. The setting fosters a collaborative exchange of ideas, and 90 minutes is enough time to cover key topics and engage in substantive discussion while allowing the flexibility to adapt to the flow of conversation and address emerging issues as they arise. It allows for a balance between the panel discussion, which will take 50 minutes, and an interactive 40-minute Q&A session.
1. How can we ensure algorithmic decision-making processes are transparent and accountable, particularly in relation to their impact on marginalized communities?
2. What measures can be implemented to foster the development and deployment of disability-centered algorithms that prioritize accessibility and inclusion for persons with disabilities?
3. In what ways can stakeholders collaborate to address biases and discrimination embedded within algorithms, while promoting diversity and equity in digital spaces?
What will participants gain from attending this session? Participants will gain a deep understanding of how algorithms impact human rights and inclusion in the digital age, learning about the ways in which algorithmic decision-making processes can perpetuate social exclusion, discrimination, and inequalities. They will become aware of the risks associated with algorithmic bias and exclusion and the importance of addressing these issues to uphold human rights and create an equitable digital environment. Participants will learn best practices for promoting algorithmic transparency, accountability, and inclusivity, as well as approaches for designing algorithms that prioritize human rights and equity. Additionally, they will have networking opportunities to connect with colleagues who share an interest in advancing human rights and inclusion in the digital age. Ultimately, participants will come away with actionable recommendations for promoting digital inclusion and addressing algorithmic bias in policy development, industry practices, and civil society initiatives.
Description:
The session addresses the problems of algorithmic bias and exclusion and their impact on human rights in the digital age. As algorithms play an increasingly pivotal role in shaping our lives, from employment opportunities to access to information, there is growing concern that they may perpetuate and exacerbate social inequalities and discrimination. At the heart of this issue lies algorithmic bias: the systematic and unfair treatment of certain groups or individuals based on race, gender, socioeconomic status, disability, or other protected characteristics. Algorithmic bias can manifest in many forms, including unequal access to healthcare or financial services, disparities in search engine results, and discriminatory targeting in advertising.

In this context, the session recognizes the importance of a human-centered approach to algorithmic development and deployment, one that prioritizes human rights, equity, and inclusion. The session argues that by placing human rights at the centre of algorithmic design and implementation, it is possible to mitigate the risks of bias and discrimination and to ensure that algorithms serve the needs of all individuals, regardless of their background or circumstances. Many algorithms operate as "black boxes" whose inner workings are hidden from scrutiny, making it difficult to identify and rectify instances of bias or exclusion. Without transparency, individuals affected by algorithmic decisions may be left without recourse or any understanding of why they were treated unfairly.

Drawing on the speakers' extensive experience, the session therefore explores concrete strategies and best practices for advancing human rights and inclusion in the digital age through algorithmic transparency, accountability, and inclusivity. It fosters dialogue among stakeholders from diverse backgrounds and generates actionable recommendations for advancing human-centered algorithms and promoting digital equity and inclusion.
1. Increased awareness among stakeholders about the ethical and social implications of algorithmic decision-making.
2. Identification of concrete strategies and best practices for promoting human rights and inclusion through algorithmic transparency and accountability.
3. Development of actionable recommendations for designing and implementing disability-centered algorithms to enhance digital accessibility and inclusion.
4. Establishment of a network of stakeholders committed to advancing human-centered algorithms and promoting digital equity and inclusion.
5. Creation of a roadmap for ongoing collaboration and dialogue to address emerging challenges and opportunities in the field of algorithmic governance.
Hybrid Format: To ensure seamless interaction between onsite and online participants, we will leverage a combination of technology and facilitation techniques. The onsite moderator will ensure that onsite and online participants have equal opportunities to participate and ask questions. We will incorporate live polling facilitated by our online moderator to encourage active participation from both onsite and online participants. Online attendees can submit questions and participate in polls in real time, while onsite participants can engage with these interactive elements using their mobile devices. Leveraging our speakers' active social media presence (e.g., Yonah Welker has 28K followers on LinkedIn), we plan to create a session-specific hashtag such as #AI4HRATIGF, with which participants can share insights and connect with one another before, during, and after the session.
Report
AI systems reflect and amplify societal biases, further deepening existing inequalities.
There are various approaches to AI governance—such as those centered on human rights, risk, principles, outcomes, or values—but these are not mutually exclusive. Governments should assess their local needs and priorities to determine the best combination of approaches for fostering a robust and sustainable AI ecosystem.
AI systems, despite their potential to enhance accessibility, carry heightened risks of errors and inaccuracies that affect persons with disabilities.
To address inequalities produced by algorithms, we must take a holistic approach—improving the quality and diversity of the data used to train AI, fostering inclusive and representative development teams, and ensuring equitable infrastructure to support these systems.
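To make the data-quality point concrete, here is a minimal sketch in Python of an automated representation check on training data. The sample records, the "group" column, and the 20% floor are hypothetical illustrations, not recommended values; a real check would use the project's actual dataset and context-appropriate thresholds.

    # Minimal sketch: checking demographic representation in a training set.
    # The data, the "group" column, and the 20% floor are hypothetical.
    import pandas as pd

    # Stand-in for a real training set that records a protected attribute.
    train = pd.DataFrame({
        "group": ["A", "A", "A", "A", "A", "A", "B", "B", "C", "A"],
        "label": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    })

    # Share of each demographic group in the training data.
    shares = train["group"].value_counts(normalize=True)

    # Flag groups that fall below an illustrative representation floor.
    FLOOR = 0.20
    for group, share in shares.items():
        status = "under-represented" if share < FLOOR else "ok"
        print(f"group {group}: {share:.0%} of training data ({status})")

Checks like this can run automatically before any model is trained, turning the abstract call for diverse data into a verifiable step in the development pipeline.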
Algorithmic inclusion requires capacity building and diverse actors across the whole life cycle of an AI system. Promoting algorithmic inclusion starts with educating users to recognize AI's impact and empowering them to respond critically. In addition, diverse teams and rigorous audits are essential to create equitable algorithms that address systemic disparities.
Ensuring Human Rights and Inclusion: An Algorithmic Strategy
Introduction
Algorithms, the invisible architects of modern society, influence every aspect of our lives, from social media to marketing and decision-making systems. While AI holds great promise, its implementation often reflects and amplifies societal biases, deepening existing inequalities. This workshop explored the critical need for algorithmic strategies that prioritize human rights, inclusion, and equitable development. Participants discussed how to address the systemic challenges posed by biased algorithms and identified actionable pathways to ensure technology serves as a tool for inclusion.
Key Takeaways
1. Algorithms Reflect and Amplify Societal Biases
Algorithms are not neutral processes: they mirror societal values and biases, exacerbating systemic inequalities in critical areas such as education, employment, and accessibility.
2. Diverse Approaches to AI Governance
Governance frameworks focused on human rights, risks, principles, outcomes, or values are all essential. Governments need to assess their local priorities and adopt a mix of strategies to foster robust and sustainable AI ecosystems.
3. Assistive Technologies for Persons with Disabilities Require Vigilance
AI-powered assistive technologies can enhance accessibility but also pose risks of errors and exclusion. This underscores the importance of inclusive design and rigorous testing.
Recommendations/Call to Action
1. Adopt Diversity and Inclusion at Every Stage of the AI Lifecycle
- Foster inclusive and representative teams that integrate diverse perspectives into algorithm design.
- Ensure the data used in training AI systems reflects the diversity of society to avoid amplifying existing biases.
- Establish rigorous audit mechanisms to ensure fairness, accountability, and transparency in AI systems (a minimal audit sketch follows these recommendations).
2. Advance Algorithmic Literacy and Public Engagement
- Educate users about the societal impacts of AI, equipping them with the knowledge to engage critically with algorithm-driven systems.
- Promote public participation and ownership in AI policy discussions, ensuring that governance reflects the voices of marginalized communities.
- Support youth and civil society organizations in driving awareness campaigns on algorithmic inclusion.
3. Strengthen Legal and Regulatory Frameworks for Accountability
- Develop comprehensive laws and policies that mandate algorithmic accountability, with clear standards and penalties for non-compliance.
- Invest in infrastructure to test, evaluate, and refine assistive AI technologies, ensuring they maintain their reliability and inclusivity.
- Create international collaborations to share best practices and domesticate global AI principles into local actions.
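To illustrate the audit mechanisms called for in recommendation 1, the sketch below computes per-group selection rates and a disparate-impact ratio over a model's logged binary decisions. The audit sample, the "group" attribute, and the 0.8 threshold (echoing the informal four-fifths rule from US employment practice) are assumptions for demonstration, not a definitive audit methodology.

    # Minimal sketch: a post-hoc fairness audit of binary model decisions.
    # The sample, "group" attribute, and 0.8 threshold are assumptions.
    import pandas as pd

    # Stand-in for logged model decisions (1 = favorable outcome).
    audit = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "decision": [1,   1,   1,   0,   1,   0,   0,   0],
    })

    # Selection rate (share of favorable outcomes) for each group.
    rates = audit.groupby("group")["decision"].mean()
    print(rates.to_string())

    # Disparate-impact ratio: lowest selection rate over the highest.
    ratio = rates.min() / rates.max()
    verdict = "review for potential bias" if ratio < 0.8 else "within heuristic"
    print(f"disparate-impact ratio: {ratio:.2f} ({verdict})")

A production audit would go further (statistical significance, intersectional subgroups, error-rate disparities), but even a check this small makes the accountability recommendation operational and repeatable.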
Conclusion
The workshop emphasized that while efforts across governments, civil society, and the private sector are underway, there is still an urgent need to integrate human rights and inclusion into the development and governance of AI systems. Algorithms, while powerful tools, must be designed and deployed with a focus on equity, diversity, and justice. Addressing algorithmic biases is a shared responsibility that requires collaboration among governments, civil society, private sector actors, and individuals. By adopting holistic approaches, fostering capacity building, and engaging diverse voices, we can ensure that AI serves as a force for inclusion and empowerment in society.
References
Disability-centered AI:
https://yonah.org/un_igf_ability-centered_ai_policy.pdf
Repositories of assistive technologies (contributed, over 120 technologies), OECD:
https://www.oecd.org/social/using-ai-to-support-people-with-disability-in-the-labour-market-008b32b7-en.htm
Programs and MOOCs (authored), OECD MOOC:
https://oecd.ai/en/catalogue/tools/disability-centered-ai-and-ethics-mooc
Human-Centered AI and Ethics (funded by the EU Commission):
https://learn.shop4cf.eu/courses/course-v1:SHOP4CF+S4CF01+2021/about
Publications, letters, and policy suggestions:
https://oecd.ai/en/wonk/eu-ai-act-disabilities (EU AI Act)
https://oecd.ai/en/wonk/disabilities-designated-groups-digital-services-market-acts (Digital Services Act)
https://www.weforum.org/agenda/2023/08/sovereign-funds-future-assistive-technology-disability-ai/
https://www.weforum.org/agenda/2023/11/generative-ai-holds-potential-disabilities/