Non-portability of Algorithmic Fairness in India
- URL: http://arxiv.org/abs/2012.03659v2
- Date: Tue, 8 Dec 2020 20:10:12 GMT
- Title: Non-portability of Algorithmic Fairness in India
- Authors: Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Vinodkumar
Prabhakaran
- Abstract summary: We argue that a mere translation of technical fairness work to Indian subgroups may serve only as window dressing. We instead call for a collective re-imagining of Fair-ML by re-contextualising data and models, empowering oppressed communities, and, more importantly, enabling ecosystems.
- Score: 9.8164690355257
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Conventional algorithmic fairness is Western in its sub-groups, values, and
optimizations. In this paper, we ask how portable the assumptions of this
largely Western take on algorithmic fairness are to a different geo-cultural
context such as India. Based on 36 expert interviews with Indian scholars, and
an analysis of emerging algorithmic deployments in India, we identify three
clusters of challenges that engulf the large distance between machine learning
models and oppressed communities in India. We argue that a mere translation of
technical fairness work to Indian subgroups may serve only as a window
dressing, and instead, call for a collective re-imagining of Fair-ML, by
re-contextualising data and models, empowering oppressed communities, and more
importantly, enabling ecosystems.
Related papers
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs)
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Are Models Trained on Indian Legal Data Fair? [20.162205920441895]
We present an initial investigation of fairness from the Indian perspective in the legal domain.
We show that a decision tree model trained for the bail prediction task has an overall fairness disparity of 0.237 between input features associated with Hindus and Muslims (a minimal disparity-computation sketch follows this list).
arXiv Detail & Related papers (2023-03-13T16:20:33Z)
- Cultural Re-contextualization of Fairness Research in Language Technologies in India [9.919007681131804]
Recent research has revealed undesirable biases in NLP data and models.
We re-contextualize fairness research for India, accounting for the Indian societal context.
We also summarize findings from an empirical study on various social biases along different axes of disparities relevant to India.
arXiv Detail & Related papers (2022-11-21T06:37:45Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Re-contextualizing Fairness in NLP: The Case of India [9.919007681131804]
We focus on NLP fairness in the context of India.
We build resources for fairness evaluation in the Indian context.
We then delve deeper into social stereotypes for Region and Religion, demonstrating their prevalence in corpora and models.
arXiv Detail & Related papers (2022-09-25T13:56:13Z)
- Decoding Demographic un-fairness from Indian Names [4.402336973466853]
Demographic classification is essential in fairness assessment in recommender systems or in measuring unintended bias in online networks and voting systems.
We collect three publicly available datasets to train state-of-the-art classifiers in the domain of gender and caste classification.
We perform cross-testing (training and testing on different datasets) to understand the efficacy of the above models; a minimal cross-testing sketch follows this list.
arXiv Detail & Related papers (2022-09-07T11:54:49Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Utilizing Wordnets for Cognate Detection among Indian Languages [50.83320088758705]
We detect cognate word pairs between ten Indian languages and Hindi.
We use deep learning methodologies to predict whether a word pair is cognate or not (a toy word-pair sketch follows this list).
We report improved performance of up to 26%.
arXiv Detail & Related papers (2021-12-30T16:46:28Z) - Re-imagining Algorithmic Fairness in India and Beyond [9.667710168953239]
We de-center algorithmic fairness and analyse AI power in India.
We find that data is not always reliable due to socio-economic factors.
We provide a roadmap to re-contextualise data and models, empower oppressed communities, and enable Fair-ML ecosystems.
arXiv Detail & Related papers (2021-01-25T10:20:57Z)
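As flagged in the bail-prediction entry above, the following is a minimal sketch of how a group fairness disparity of that kind might be computed for a decision tree. The metric choice (demographic parity difference), the synthetic features, and the binary group indicator are illustrative assumptions, not that paper's actual data, features, or disparity definition.

```python
# Hedged sketch: a demographic-parity-style disparity for a decision-tree
# "bail" classifier. All data below is synthetic and the metric choice is an
# assumption; the cited paper's exact setup is not reproduced here.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000

# Synthetic case features plus a binary group indicator standing in for the
# religion-associated input features discussed in the abstract.
X = rng.normal(size=(n, 3))
group = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * group
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
clf.fit(np.column_stack([X, group]), y)
pred = clf.predict(np.column_stack([X, group]))

# Demographic parity difference: gap in favourable-outcome rates by group.
disparity = abs(pred[group == 0].mean() - pred[group == 1].mean())
print(f"disparity = {disparity:.3f}")
```

Because the synthetic labels depend on the group indicator, the tree picks up the correlation and the printed gap is non-zero, mirroring the kind of disparity the paper reports.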
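The cross-testing protocol mentioned in the Indian-names entry reduces to a train-on-one, test-on-every-other loop. The synthetic datasets, the simulated distribution shift, and the logistic-regression model below are placeholder assumptions standing in for the paper's three name corpora and its classifiers.

```python
# Hedged sketch of cross-testing: train on each dataset, evaluate on all
# datasets, and compare in-domain vs. cross-test accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_dataset(shift):
    """Synthetic (features, labels) with a dataset-specific shift."""
    X = rng.normal(loc=shift, size=(500, 10))
    y = (X.sum(axis=1) > shift * 10).astype(int)
    return X, y

datasets = {name: make_dataset(s)
            for name, s in [("A", 0.0), ("B", 0.3), ("C", 0.6)]}

for train_name, (X_tr, y_tr) in datasets.items():
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    for test_name, (X_te, y_te) in datasets.items():
        acc = accuracy_score(y_te, model.predict(X_te))
        tag = "in-domain" if train_name == test_name else "cross-test"
        print(f"train {train_name} -> test {test_name}: {acc:.2f} ({tag})")
```

The drop from in-domain to cross-test accuracy is exactly what such a protocol is designed to expose.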
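For the cognate-detection entry, the toy sketch below substitutes a character-bigram overlap feature and a small MLP for the paper's wordnet-based deep learning models; the romanized word pairs and their cognate labels are invented for illustration only.

```python
# Hedged sketch: classifying word pairs as cognate / non-cognate from a crude
# surface-similarity feature. Not the cited paper's architecture or data.
from sklearn.neural_network import MLPClassifier

def bigram_overlap(w1, w2):
    """Jaccard overlap of character bigrams, a rough cognate signal."""
    b1 = {w1[i:i + 2] for i in range(len(w1) - 1)}
    b2 = {w2[i:i + 2] for i in range(len(w2) - 1)}
    return len(b1 & b2) / max(len(b1 | b2), 1)

# Invented romanized pairs: (word_1, word_2, is_cognate).
pairs = [("pani", "paani", 1), ("agni", "agni", 1), ("naam", "naam", 1),
         ("kitab", "pustak", 0), ("mata", "amma", 0), ("ghar", "veedu", 0)]
X = [[bigram_overlap(a, b), abs(len(a) - len(b))] for a, b, _ in pairs]
y = [label for _, _, label in pairs]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)
print(clf.predict([[bigram_overlap("jal", "jal"), 0]]))  # high-overlap pair
```

A real system would use learned subword or phonetic representations rather than hand-built overlap features.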
This list is automatically generated from the titles and abstracts of the papers on this site.