Session
Organizer 1: Cecilia Alemany, Development Alternatives with Women for a New Era
Organizer 2: Linnet Taylor, Data Justice Project, Tilburg Law School, Tilburg Institute for Law, Technology and Society
Speaker 1: Sofia Scasserra, Civil Society, Latin American and Caribbean Group (GRULAC)
Speaker 2: Maya Ganesh, Technical Community, Asia-Pacific Group
Speaker 3: Anita Gurumurthy, Civil Society, Asia-Pacific Group
Speaker 4: Linnet Taylor, Civil Society, Western European and Others Group (WEOG)
Format: Break-out Group Discussions - 90 Min
The first 20 minutes will be used for four lightning talks from a panel of speakers, which will set the stage for the workshop. The specific views and perspectives that the speakers will bring are explained below:
- Sofía B. Scasserra, Principal Assistant on Economic and International Trade Issues, Secretariat of International Affairs, FAECyS (World Labour Institute): Sofía will reflect on how recent developments in AI technologies (e.g. smart home technologies, natural language processing, digital assistants) end up reinforcing prevailing structures of gender power.
- Maya Ganesh, Center for Digital Culture, Leuphana University, Lüneburg, Germany: Maya will reflect on how the development of standards and guidelines for the technical community needs to address gender bias in AI technologies. She will bring reflections from her research on algorithmic discrimination and her engagement with the development of IEEE’s proposed standard P7003 on addressing cultural bias in algorithms.
- Anita Gurumurthy, Digital Justice project of Development Alternatives with Women for a New Era and IT for Change: Anita will reflect on new issues and challenges in putting in place accountability arrangements to prevent AI from amplifying and reinforcing gendered exclusion, bringing in insights from research undertaken as part of the Digital Justice project.
- Linnet Taylor, Data Justice Project, Tilburg Law School, Tilburg Institute for Law, Technology and Society: Linnet will reflect on how techno-material and policy architecture choices need to be in sync with one another if the gender equality agenda is to be realised in times of automated decision-making. She will bring in insights from her work in the area of ethics and data technologies.
The primary organizer, Development Alternatives with Women for a New Era (DAWN), is submitting an IGF workshop proposal for the first time. DAWN has been extensively engaged in research and advocacy around economic and gender justice and sustainable and democratic development, and seeks to bring these perspectives into its engagement with digital rights and technology governance debates at the UN IGF.
The moderator and speakers hail from different geographies, with representation from Latin America, the Asia-Pacific and Europe. We have also ensured diversity in stakeholder representation when bringing together speakers. Sofía brings a trade union and labour movement perspective; Maya brings a technical community perspective; Linnet is from academia; Anita brings a practitioner standpoint at the intersections of gender and development and digital rights; and Cecilia brings the perspective of the Southern feminist movement.
The session will follow the 90-minute break-out discussion format. Cecilia Alemany from the DAWN network will be Session Facilitator.
a. 0-20 minutes:
The first 20 minutes will be used for 5-minute lightning talks from a panel of four speakers, which will set the stage for the workshop. (Please see Q IX for details of the specific views, expertise and perspectives that the four speakers will bring.)
b. 20-65 minutes:
At the end of the lightning talks, the moderator will divide the participants into two or three break-out groups, depending on attendance. Each break-out group will discuss the three policy questions named in Q VII above, nominating its own moderator and rapporteur. Groups will have 45 minutes for the discussion, during which they will maintain digital notes and also create a presentation of their key points for the report-back. We will also provide the option of a break-out group for remote participants, with an online moderator who will take responsibility for collating key points and preparing notes for the report-back to the larger group.
c. 65-85 minutes:
These 20 minutes will be used for the report-back of key insights from the break-out discussions by each group, including the remote participant group, and a Q&A round.
d. 85-90 minutes:
The Session Facilitator will sum up and capture the key highlights of the workshop in her closing remarks.
As explained in Q VIII, the first 20 minutes of the workshop are reserved for interventions from the speakers. The remaining 70 minutes are for group work and report-back, in keeping with the selected break-out discussion format. Please refer to Q VIII for details of how exactly we plan to ensure rich and vibrant audience participation.
Mainstream debates on gender and AI systems have pointed, inter alia, to:
- how the cultural biases of developers and ordinary Internet users normalise sexist (and racist) overtones in interactions with autonomous chatbots and digital assistants.
- how algorithmic decision-making processes lead to gender-discriminatory outcomes due to statistical bias, whether in predicting recidivism rates for offenders, identifying high-risk borrowers in credit systems, or targeting job ads on social network platforms.
‘AI patriarchy’ is intrinsic to the re-organisation of societies in digital times. Taking this recognition as a starting point to connect with participants, the workshop will move on to tackle concrete policy questions:
1. Do the guidelines and standards on algorithmic bias and ethical AI developed, or being developed, by the technical community have a gender perspective? What issues need more attention with respect to overcoming unintended and unjustifiable gender bias in the development of automated systems?
2. How does existing legislation on algorithmic accountability fare in addressing this issue?
3. What accountability arrangements do we need to put in place to ensure that the use of automated decision-making in public systems does not reinforce existing patterns of gendered exclusion?
In addition to listening to the lightning talks, remote participants can be part of the break-out group discussions. There will be a single remote break-out group, facilitated by the online moderator, who will collate the key highlights of its discussion into a presentation for report-back to the other groups physically present at the workshop.