Title: Gender Justice through Ethical Algorithms: An AI-driven digital support system for tackling online misogyny in India

Author(s): Hameeda Syed
Type: Final project
Year of publication: 2024
Specialisation: Gender, Cyberspace and Artificial Intelligence (AI)
Supervisors: María Rún Bjarnadóttir

Abstract

Misogyny is cultural and finds new ways to manifest online, where harassment against women has increased in recent years. Women and gender minorities face backlash and violence online, leading them to withdraw from these spaces. The public is dissatisfied with how platforms address the issue, with 79% believing they are not doing enough. AI-driven moderation is commonly employed, but these models are trained on data from English-speaking cultures, i.e., the Global North, leaving a significant gap in recognising misogyny in contexts like India, where the language and cultural expression of misogyny differ. Due to technical constraints and limited resources, many of these AI models are adopted unilaterally by resource-poor countries. This removes agency from survivors of Online Gender-Based Violence (OGBV), as they are excluded from the processes of classifying harassment and determining the actions subsequently taken.

To counter misogyny propagating at the rate of millions of posts per second in this context, our project, the Digital Support Ecosystem (DSE), seeks to address the need for datasets and AI models capable of recognising OGBV in India by creating a thick-big data platform that integrates digital anthropology and data science to ensure a safer internet for women and gender minorities in India. Managed by Dignity in Difference, this interdisciplinary framework incorporates survivor knowledge and cultural insights into the technological process. It allows survivors to report OGBV incidents while recognising the flaws of the classification process, enhancing its efficiency with cultural nuances. It will lead community co-creation and support mechanisms through collaboration with high-influence stakeholders such as policymakers, journalists, lawyers, researchers, data scientists and law enforcement. The DSE aims to develop a culturally trained algorithm capable of providing an early-warning system and resistance mechanisms against online misogyny, leading to improved policy advocacy and interventions.

Documents and links