Interventions for Ranking in the Presence of Implicit Bias
- URL: http://arxiv.org/abs/2001.08767v1
- Date: Thu, 23 Jan 2020 19:11:31 GMT
- Title: Interventions for Ranking in the Presence of Implicit Bias
- Authors: L. Elisa Celis and Anay Mehrotra and Nisheeth K. Vishnoi
- Abstract summary: Implicit bias is the unconscious attribution of particular qualities (or lack thereof) to a member of a particular social group.
The Rooney Rule is a constraint that improves the utility of the outcome in certain cases of the subset selection problem.
We present a family of simple and interpretable constraints and show that they can optimally mitigate implicit bias.
- Score: 34.23230188778088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit bias is the unconscious attribution of particular qualities (or lack
thereof) to a member of a particular social group (e.g., defined by gender or
race). Studies on implicit bias have shown that these unconscious stereotypes
can have adverse outcomes in various social contexts, such as job screening,
teaching, or policing. Recently, Kleinberg and Raghavan (2018) considered a
mathematical model for implicit bias and showed the effectiveness of the Rooney
Rule as a constraint to improve the utility of the outcome for certain cases of
the subset selection problem. Here we study the problem of designing
interventions for the generalization of subset selection -- ranking -- which
requires outputting an ordered set and is a central primitive in various social
and computational contexts. We present a family of simple and interpretable
constraints and show that they can optimally mitigate implicit bias for a
generalization of the model studied by Kleinberg and Raghavan (2018).
Subsequently, we prove that under natural distributional assumptions on the
utilities of items, simple Rooney Rule-like constraints can, surprisingly,
recover almost all of the utility lost to implicit bias. Finally, we augment
our theoretical results with empirical findings on real-world distributions
from the IIT-JEE (2009) dataset and the Semantic Scholar Research corpus.
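To make the abstract's setup concrete, below is a minimal Python sketch of a Kleinberg-Raghavan-style implicit-bias model with a Rooney-Rule-like prefix constraint on rankings. The bias factor BETA, the proportional lower-bound schedule, and the greedy filling rule are illustrative assumptions, not the paper's exact construction.

```python
import math
import random

BETA = 3.0  # implicit-bias factor: evaluator perceives a group-B item's utility as u / BETA

def observed_utility(u, group):
    return u / BETA if group == "B" else u

def latent_utility(ranking):
    # Position-discounted sum of *true* utilities (DCG-style discount).
    return sum(u / math.log2(i + 2) for i, (u, _) in enumerate(ranking))

def constrained_ranking(items, lower_bounds):
    """Fill positions greedily by observed (biased) utility, but force the
    best remaining group-B item whenever the top-k lower bound would
    otherwise be violated."""
    pool = sorted(items, key=lambda it: observed_utility(*it), reverse=True)
    ranking = []
    for need in lower_bounds:  # lower_bounds[k-1] = min group-B items in top k
        placed_b = sum(1 for _, g in ranking if g == "B")
        b_pool = [it for it in pool if it[1] == "B"]
        pick = b_pool[0] if (placed_b < need and b_pool) else pool[0]
        pool.remove(pick)
        ranking.append(pick)
    return ranking

random.seed(0)
n = 20
items = [(random.random(), random.choice("AB")) for _ in range(n)]

# Require roughly proportional group-B representation in every prefix.
frac_b = sum(1 for _, g in items if g == "B") / n
bounds = [math.floor(frac_b * k) for k in range(1, n + 1)]

unconstrained = constrained_ranking(items, [0] * n)  # no intervention
constrained = constrained_ranking(items, bounds)     # with intervention

print(f"latent utility, unconstrained: {latent_utility(unconstrained):.3f}")
print(f"latent utility, constrained:   {latent_utility(constrained):.3f}")
```

Because both groups' true utilities are drawn from the same distribution, the unconstrained ranking demotes group-B items by the bias factor, while the prefix constraints recover much of the position-discounted true utility; this is the qualitative effect the paper formalizes.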
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines in debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- The Devil is in the Neurons: Interpreting and Mitigating Social Biases in Pre-trained Language Models [78.69526166193236]
Pre-trained Language Models (PLMs) have been acknowledged to contain harmful information, such as social biases.
We propose Social Bias Neurons to accurately pinpoint the units (i.e., neurons) in a language model that can be attributed to undesirable behavior, such as social bias.
As measured by prior metrics from StereoSet, our model achieves a higher degree of fairness while maintaining language modeling ability at low cost.
arXiv Detail & Related papers (2024-06-14T15:41:06Z)
- A Principled Approach for a New Bias Measure [7.352247786388098]
We propose Uniform Bias (UB), the first bias measure with a clear and simple interpretation across the full range of bias values.
Our results are validated experimentally on nine publicly available datasets and analyzed theoretically, providing novel insights into the problem.
Based on our approach, we also design a bias mitigation model that might be useful to policymakers.
arXiv Detail & Related papers (2024-05-20T18:14:33Z)
- Causality and Independence Enhancement for Biased Node Classification [56.38828085943763]
We propose a novel Causality and Independence Enhancement (CIE) framework, applicable to various graph neural networks (GNNs).
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
Our approach CIE not only significantly enhances the performance of GNNs but also outperforms state-of-the-art debiased node classification methods.
arXiv Detail & Related papers (2023-10-14T13:56:24Z)
- Race Bias Analysis of Bona Fide Errors in face anti-spoofing [0.0]
We present a systematic study of race bias in face anti-spoofing with three key characteristics.
The focus is on analysing potential bias in the bona fide errors, where significant ethical and legal issues lie.
We demonstrate the proposed bias analysis process on a VQ-VAE based face anti-spoofing algorithm.
arXiv Detail & Related papers (2022-10-11T11:49:24Z)
- Ensembling over Classifiers: a Bias-Variance Perspective [13.006468721874372]
We build on Pfau's (2013) extension of the bias-variance decomposition to gain crucial insights into the behavior of ensembles of classifiers.
We show that conditional estimates necessarily incur an irreducible error.
Empirically, standard ensembling reduces the bias, leading us to hypothesize that ensembles of classifiers may perform well in part because of this unexpected reduction.
arXiv Detail & Related papers (2022-06-21T17:46:35Z)
- Toward Understanding Bias Correlations for Mitigation in NLP [34.956581421295]
This work aims to provide a first systematic study toward understanding bias correlations in mitigation.
We examine bias mitigation in two common NLP tasks -- toxicity detection and word embeddings.
Our findings suggest that biases are correlated, and we present scenarios in which independent debiasing approaches may be insufficient.
arXiv Detail & Related papers (2022-05-24T22:48:47Z)
- The SAME score: Improved cosine based bias score for word embeddings [49.75878234192369]
We introduce SAME, a novel bias score for semantic bias in embeddings.
We show that SAME is capable of measuring semantic bias and identify potential causes of social bias in downstream tasks (a generic cosine-score illustration appears after this list).
arXiv Detail & Related papers (2022-03-28T09:28:13Z)
- Supercharging Imbalanced Data Learning With Energy-based Contrastive Representation Transfer [72.5190560787569]
In computer vision, learning from long tailed datasets is a recurring theme, especially for natural image datasets.
Our proposal posits a meta-distributional scenario, where the data generating mechanism is invariant across the label-conditional feature distributions.
This allows us to leverage a causal data inflation procedure to enlarge the representation of minority classes.
arXiv Detail & Related papers (2020-11-25T00:13:11Z)
- Mitigating Gender Bias Amplification in Distribution by Posterior Regularization [75.3529537096899]
We investigate the gender bias amplification issue from the distribution perspective.
We propose a bias mitigation approach based on posterior regularization.
Our study sheds light on bias amplification.
arXiv Detail & Related papers (2020-05-13T11:07:10Z)
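The SAME entry above refers to a cosine-based bias score for word embeddings; since its exact definition is not given here, the sketch below shows only a generic WEAT-style cosine association score for illustration. The function cosine_bias_score, the attribute/target word sets, and the random embedding table are all hypothetical.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def cosine_bias_score(vecs, targets, attrs_a, attrs_b):
    """Generic cosine-based association score (illustrative; NOT SAME's
    definition): the mean difference in cosine similarity between each
    target word and the two attribute sets."""
    def assoc(word):
        v = vecs[word]
        return (np.mean([cosine(v, vecs[a]) for a in attrs_a])
                - np.mean([cosine(v, vecs[b]) for b in attrs_b]))
    return float(np.mean([assoc(t) for t in targets]))

# Tiny synthetic embedding table (hypothetical data, for shape only).
rng = np.random.default_rng(0)
vecs = {w: rng.normal(size=50) for w in
        ["doctor", "nurse", "he", "she", "man", "woman"]}
print(cosine_bias_score(vecs, ["doctor", "nurse"],
                        ["he", "man"], ["she", "woman"]))
```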
This list is automatically generated from the titles and abstracts of the papers on this site.