IGF 2023 Town Hall #87 ChatGPT Bias: Addressing Arabic and Gender Inequities

    Issue(s)

    ChatGPT, Generative AI, and Machine Learning

    Birds of a Feather - 60 Min

    Description

    Proposal:
    This proposal seeks to present a session at the Internet Governance Forum that sheds light on the biases present in language models, focusing specifically on language bias and misogynistic bias and their implications for violence against vulnerable communities, including women and refugees. The session will examine the underlying causes of bias in AI language models such as ChatGPT and discuss strategies to mitigate and combat these biases. Through an open and inclusive dialogue, the session aims to raise awareness, foster understanding, and encourage collective action to ensure AI technologies promote equality, inclusivity, and respect for all.

    Objective: This session aims to:
    Expose biases in ChatGPT: Uncover and analyze the misogynistic bias embedded within ChatGPT and its potential impact on perpetuating violence against vulnerable communities.
    Educate on the consequences: Provide insights into the societal implications of biased AI language models, specifically focusing on the ways in which they can reinforce and amplify discrimination, violence, and marginalization.
    Foster collaboration: Encourage collaboration between AI researchers, developers, policymakers, civil society organizations, and affected communities to develop strategies that mitigate biases and promote ethical, fair, and inclusive AI technologies.
    Propose solutions: Generate actionable recommendations and best practices for the design, development, and deployment of AI language models to address biases and promote equality, respect, and safety.
    Expected Outcomes:
    Increased awareness and understanding of biases in AI language models, with a specific focus on misogynistic bias and violence against vulnerable communities.
    Identification of strategies and best practices for bias mitigation and ethical AI development.
    Enhanced collaboration between AI researchers, policymakers, civil society organizations, and affected communities to address biases and promote inclusive AI technologies.
    Actionable recommendations for policymakers, industry leaders, and AI developers to ensure the responsible and ethical deployment of AI language models.
    Empowerment of affected communities to actively participate in shaping AI policies, practices, and frameworks that foster equality, respect, and safety.
    Greater attention to the importance of supporting the expansion of AI-driven initiatives that address discrimination and bias.


    The session will be held in person; however, it can be live-streamed. We can adjust the content prior to the session to keep it light and friendly for online attendees.

    Organizers

    Social Media Exchange (SMEX)
    Najah Itani, SMEX, civil society covering WANA.
    Zeinab Ismail, SMEX, civil society covering WANA.

    Speakers

    Najah Itani, SMEX, civil society covering WANA.
    Zeinab Ismail, SMEX, civil society covering WANA.

    Onsite Moderator

    Zeinab Ismail, SMEX, civil society covering WANA.

    Online Moderator

    The session will be held in person.

    Rapporteur

    Najah Itani, SMEX, civil society covering WANA.

    SDGs

    4. Quality Education
    4.6
    10. Reduced Inequalities
    10.2
    10.a
    16. Peace, Justice and Strong Institutions
    16.a
    16.b
    17. Partnerships for the Goals
    17.6
    17.7
    17.8
    17.9
    17.13
    17.14
    17.15


    Targets: The proposal is aligned with the following SDG targets:

    SDG 4 (Quality Education): By addressing biases in language models and promoting awareness, the proposal contributes to providing inclusive and quality education opportunities. It emphasizes the importance of understanding the implications of biases in AI technologies, fostering knowledge, and promoting equal access to information.

    SDG 10 (Reduced Inequalities): The proposal directly relates to reducing inequalities by focusing on biases that disproportionately affect vulnerable communities, including women and refugees. By discussing strategies to mitigate biases, the session aims to promote equality and inclusivity, reducing the disparities caused by language and misogynistic biases.

    SDG 16 (Peace, Justice, and Strong Institutions): The proposal highlights the implications of biases in language models for violence against vulnerable communities. By addressing these biases and fostering understanding through an inclusive dialogue, the session promotes justice, peace, and strong institutions by advocating for fairness, respect, and equality in AI technologies.

    SDG 17 (Partnerships for the Goals): The proposal aligns with SDG 17 by emphasizing the need for collective action. It encourages collaboration among stakeholders at the Internet Governance Forum to mitigate biases in language models and ensure that AI technologies promote human rights, equality, and inclusivity. It recognizes the importance of partnerships and cooperation in addressing the challenges associated with biases in AI technologies.