Re-imagining Algorithmic Fairness in India and Beyond
- URL: http://arxiv.org/abs/2101.09995v2
- Date: Wed, 27 Jan 2021 02:30:20 GMT
- Title: Re-imagining Algorithmic Fairness in India and Beyond
- Authors: Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi,
Vinodkumar Prabhakaran
- Abstract summary: We de-center algorithmic fairness and analyse AI power in India.
We find that data is not always reliable due to socio-economic factors.
We provide a roadmap to re-contextualise data and models, empower oppressed communities, and enable Fair-ML ecosystems.
- Score: 9.667710168953239
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional algorithmic fairness is West-centric, as seen in its sub-groups,
values, and methods. In this paper, we de-center algorithmic fairness and
analyse AI power in India. Based on 36 qualitative interviews and a discourse
analysis of algorithmic deployments in India, we find that several assumptions
of algorithmic fairness are challenged. We find that in India, data is not
always reliable due to socio-economic factors, ML makers appear to follow
double standards, and AI evokes unquestioning aspiration. We contend that
localising model fairness alone can be window dressing in India, where the
distance between models and oppressed communities is large. Instead, we
re-imagine algorithmic fairness in India and provide a roadmap to
re-contextualise data and models, empower oppressed communities, and enable
Fair-ML ecosystems.
Related papers
- (Unfair) Norms in Fairness Research: A Meta-Analysis [6.395584220342517]
We conduct a meta-analysis of algorithmic fairness papers from two leading conferences on AI fairness and ethics.
Our investigation reveals two concerning trends: first, a US-centric perspective dominates fairness research.
Second, fairness studies exhibit a widespread reliance on binary codifications of human identity.
arXiv Detail & Related papers (2024-06-17T17:14:47Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
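As a rough sketch of what jointly optimising two fairness criteria can look like in practice, the snippet below adds a statistical-parity-style group penalty and a counterfactual-consistency penalty to a task loss. The penalty forms, weights, and PyTorch setup are illustrative assumptions, not DualFair's actual contrastive objective.

```python
import torch

def group_fairness_penalty(scores, groups):
    # Gap between the two groups' mean scores -- a statistical-parity-style
    # surrogate (an assumption, not DualFair's exact criterion).
    return (scores[groups == 0].mean() - scores[groups == 1].mean()).abs()

def counterfactual_penalty(scores, scores_cf):
    # Penalise score changes when the sensitive attribute is
    # counterfactually flipped for the same individual.
    return (scores - scores_cf).abs().mean()

def joint_fairness_loss(task_loss, scores, scores_cf, groups,
                        lam_group=1.0, lam_cf=1.0):
    # Task loss plus both fairness penalties, optimised jointly.
    return (task_loss
            + lam_group * group_fairness_penalty(scores, groups)
            + lam_cf * counterfactual_penalty(scores, scores_cf))

# Toy usage: random scores for 8 individuals and their counterfactuals.
scores = torch.rand(8, requires_grad=True)
scores_cf = scores + 0.05 * torch.randn(8)
groups = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
loss = joint_fairness_loss(torch.tensor(0.7), scores, scores_cf, groups)
print(float(loss))
```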
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Are Models Trained on Indian Legal Data Fair? [20.162205920441895]
We present an initial investigation of fairness from the Indian perspective in the legal domain.
We show that a decision tree model trained for the bail prediction task has an overall fairness disparity of 0.237 between input features associated with Hindus and Muslims.
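One plausible reading of such a disparity number is a demographic-parity gap in predicted outcomes. The sketch below computes that gap for a decision tree on synthetic stand-in data; the features, labels, and metric choice are assumptions, since the paper's exact dataset and disparity definition are not reproduced here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 1000 cases with 5 case features plus a
# binary group indicator (0/1); not the paper's real bail dataset.
X = rng.normal(size=(1000, 5))
group = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.3 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(
    np.column_stack([X, group]), y)
pred = clf.predict(np.column_stack([X, group]))

# Demographic-parity gap: difference in favourable-outcome rates
# between the two groups (one possible reading of "disparity").
disparity = abs(pred[group == 0].mean() - pred[group == 1].mean())
print(f"fairness disparity: {disparity:.3f}")
```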
arXiv Detail & Related papers (2023-03-13T16:20:33Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Re-contextualizing Fairness in NLP: The Case of India [9.919007681131804]
We focus on NLP fairness in the context of India.
We build resources for fairness evaluation in the Indian context.
We then delve deeper into social stereotypes for Region and Religion, demonstrating their prevalence in corpora and models.
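A minimal way to probe stereotype prevalence in a corpus is to count co-occurrences of identity terms with attribute terms. The sketch below does this with illustrative placeholder term lists and a toy corpus, not the paper's curated Indian-context resources.

```python
from collections import Counter
from itertools import product

# Placeholder term lists -- hypothetical, for illustration only.
identity_terms = ["north_indian", "south_indian"]
attribute_terms = ["hardworking", "lazy"]

corpus = [
    "the north_indian worker was hardworking and punctual",
    "people called the south_indian clerk lazy without reason",
    "a hardworking south_indian engineer led the project",
]

# Count how often each identity term co-occurs with each attribute
# term within a sentence; skewed counts hint at a corpus stereotype.
counts = Counter()
for sentence in corpus:
    tokens = set(sentence.split())
    for ident, attr in product(identity_terms, attribute_terms):
        if ident in tokens and attr in tokens:
            counts[(ident, attr)] += 1

for pair, n in counts.items():
    print(pair, n)
```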
arXiv Detail & Related papers (2022-09-25T13:56:13Z)
- Decoding Demographic un-fairness from Indian Names [4.402336973466853]
Demographic classification is essential in fairness assessment in recommender systems or in measuring unintended bias in online networks and voting systems.
We collect three publicly available datasets to train state-of-the-art classifiers in the domain of gender and caste classification.
We perform cross-testing (training and testing on different datasets) to understand the efficacy of the above models.
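Cross-testing in this sense just means fitting on one corpus and scoring on another. The sketch below shows the pattern for a character n-gram name classifier; the tiny name lists stand in for the three public datasets, which are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy illustrative datasets -- placeholders for the real corpora.
train_names = ["asha", "ramesh", "sunita", "vikram"]
train_labels = ["f", "m", "f", "m"]
test_names = ["anita", "suresh"]
test_labels = ["f", "m"]

# Character n-grams are a common choice for name classification.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_names, train_labels)

# Cross-testing: evaluate on a dataset the model never saw in training.
print("cross-test accuracy:", model.score(test_names, test_labels))
```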
arXiv Detail & Related papers (2022-09-07T11:54:49Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
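One way to picture this edge-deletion step is with a toy linear structural causal model: regenerate the affected variable from its remaining parents after the biased edge is removed. The sketch below illustrates the idea under that assumption; it is not D-BIAS's actual simulation method.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Toy linear SCM: group -> income is the "biased" causal edge a user
# might delete in the causal network (all coefficients are made up).
group = rng.integers(0, 2, size=n)
education = 0.5 * group + rng.normal(size=n)                 # group -> education
income = 1.0 * education + 0.8 * group + rng.normal(size=n)  # biased edge

# "Deleting" the biased edge: regenerate income from its remaining
# parents only, simulating a new (debiased) dataset.
income_debiased = 1.0 * education + rng.normal(size=n)

print("gap before:", income[group == 1].mean() - income[group == 0].mean())
print("gap after: ", income_debiased[group == 1].mean()
      - income_debiased[group == 0].mean())
```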
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
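The underlying idea, mapping each group's distribution onto one shared base distribution through an invertible transform, can be previewed with the simplest possible "flow": a per-group affine map to a standard Gaussian. Real normalizing flows stack many learned invertible layers, so treat this purely as a sketch of the intuition, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D features for two groups with different distributions.
x_a = rng.normal(loc=2.0, scale=1.5, size=500)   # group A
x_b = rng.normal(loc=-1.0, scale=0.5, size=500)  # group B

def fit_affine_flow(x):
    # Per-group affine map to a shared standard Gaussian -- the
    # simplest invertible transform, standing in for a learned flow.
    mu, sigma = x.mean(), x.std()
    return lambda v: (v - mu) / sigma

flow_a, flow_b = fit_affine_flow(x_a), fit_affine_flow(x_b)

# Both groups now land in one shared representation space.
z = np.concatenate([flow_a(x_a), flow_b(x_b)])
print("shared-space mean/std:", z.mean().round(3), z.std().round(3))
```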
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Representative & Fair Synthetic Data [68.8204255655161]
We present a framework to incorporate fairness constraints into the self-supervised learning process.
We generate a representative as well as fair version of the UCI Adult census data set.
We consider representative & fair synthetic data a promising future building block to teach algorithms not on historic worlds, but rather on the worlds that we strive to live in.
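One elementary form of a fairness constraint in synthetic generation is to sample labels at a common target rate for all groups. The sketch below shows that idea on toy data; it is far simpler than the paper's self-supervised generator for the UCI Adult data and is only meant to make the constraint concrete.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "historic" data: group 1 has a much higher positive rate.
group = rng.integers(0, 2, size=2000)
label = (rng.random(2000) < np.where(group == 1, 0.6, 0.2)).astype(int)

# Fairness constraint for the synthetic labels: both groups share a
# single target positive rate instead of inheriting the historic skew.
target_rate = label.mean()
synthetic_label = (rng.random(2000) < target_rate).astype(int)

for g in (0, 1):
    print(f"group {g}: historic {label[group == g].mean():.2f} "
          f"-> synthetic {synthetic_label[group == g].mean():.2f}")
```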
arXiv Detail & Related papers (2021-04-07T09:19:46Z)
- Non-portability of Algorithmic Fairness in India [9.8164690355257]
We argue that a mere translation of technical fairness work to Indian subgroups may serve only as window dressing.
We call for a collective re-imagining of Fair-ML by re-contextualising data and models, empowering oppressed communities, and, more importantly, enabling Fair-ML ecosystems.
arXiv Detail & Related papers (2020-12-03T23:14:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.