Interventions for Ranking in the Presence of Implicit Bias
- URL: http://arxiv.org/abs/2001.08767v1
- Date: Thu, 23 Jan 2020 19:11:31 GMT
- Title: Interventions for Ranking in the Presence of Implicit Bias
- Authors: L. Elisa Celis and Anay Mehrotra and Nisheeth K. Vishnoi
- Abstract summary: Implicit bias is the unconscious attribution of particular qualities (or lack thereof) to members of a particular social group.
The Rooney Rule is a constraint that improves the utility of the outcome for certain cases of the subset selection problem.
We present a family of simple and interpretable constraints and show that they can optimally mitigate implicit bias.
- Score: 34.23230188778088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit bias is the unconscious attribution of particular qualities (or lack
thereof) to a member of a particular social group (e.g., defined by gender or
race). Studies on implicit bias have shown that these unconscious stereotypes
can have adverse outcomes in various social contexts, such as job screening,
teaching, or policing. Recently, Kleinberg and Raghavan (2018) considered a
mathematical model for implicit bias and showed the effectiveness of the Rooney
Rule as a constraint to improve the utility of the outcome for certain cases of
the subset selection problem. Here we study the problem of designing
interventions for the generalization of subset selection -- ranking -- which
requires outputting an ordered set and is a central primitive in various social
and computational contexts. We present a family of simple and interpretable
constraints and show that they can optimally mitigate implicit bias for a
generalization of the model studied by Kleinberg and Raghavan (2018).
Subsequently, we prove that under natural distributional assumptions on the
utilities of items, simple Rooney-Rule-like constraints can, surprisingly,
also recover almost all of the utility lost due to implicit biases. Finally, we augment
our theoretical results with empirical findings on real-world distributions
from the IIT-JEE (2009) dataset and the Semantic Scholar Research corpus.
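For intuition, the biased-utility model and a Rooney-Rule-like prefix constraint can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's exact algorithm: the parameter names (`beta` for the bias factor, `alpha` for the required representation fraction) and the greedy selection rule are assumptions of the sketch.

```python
# Minimal sketch (not the paper's exact algorithm) of ranking under
# implicit bias with a prefix representation constraint. Assumptions:
# candidates in group "B" have their true utility divided by a bias
# factor beta > 1 when observed (in the style of Kleinberg and
# Raghavan, 2018), and every ranking prefix of length k must contain
# at least floor(alpha * k) group-B candidates.

def observed_utility(candidate, beta):
    """Utility as seen by a biased evaluator."""
    u = candidate["utility"]
    return u / beta if candidate["group"] == "B" else u

def constrained_ranking(candidates, beta, alpha):
    """Greedily rank by observed utility while enforcing the
    prefix representation constraint."""
    remaining = sorted(candidates,
                       key=lambda c: observed_utility(c, beta),
                       reverse=True)
    ranking, b_count = [], 0
    for k in range(1, len(candidates) + 1):
        # If the next prefix would violate the constraint, the slot
        # must go to the best remaining group-B candidate.
        if int(alpha * k) > b_count:
            pool = [c for c in remaining if c["group"] == "B"] or remaining
        else:
            pool = remaining
        pick = pool[0]  # remaining is sorted, so pool[0] is best observed
        remaining.remove(pick)
        b_count += pick["group"] == "B"
        ranking.append(pick)
    return ranking

# Example: with beta = 2, every group-B candidate is observed below
# every group-A candidate, yet the alpha = 0.5 constraint happens to
# restore the true-utility ordering in this instance.
cands = ([{"utility": u, "group": "A"} for u in (10, 9, 8, 7)]
         + [{"utility": u, "group": "B"} for u in (9.5, 8.5, 7.5, 6.5)])
print([c["utility"] for c in constrained_ranking(cands, beta=2.0, alpha=0.5)])
```

In this toy instance the constrained ranking equals the ranking by true utility, illustrating the paper's point that simple representation constraints can recover utility lost to implicit bias.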
Related papers
- The Devil is in the Neurons: Interpreting and Mitigating Social Biases in Pre-trained Language Models [78.69526166193236]
Pre-trained Language models (PLMs) have been acknowledged to contain harmful information, such as social biases.
We propose Social Bias Neurons to accurately pinpoint units (i.e., neurons) in a language model that can be attributed to undesirable behavior, such as social bias.
As measured by prior metrics from StereoSet, our model achieves a higher degree of fairness while maintaining language modeling ability with low cost.
arXiv Detail & Related papers (2024-06-14T15:41:06Z) - A Principled Approach for a New Bias Measure [5.128782192362636]
We develop an algorithmic framework for defining and efficiently quantifying the bias level of a dataset with respect to a protected group.
We also derive a bias mitigation algorithm that might be useful to policymakers.
arXiv Detail & Related papers (2024-05-20T18:14:33Z) - Causality and Independence Enhancement for Biased Node Classification [56.38828085943763]
We propose a novel Causality and Independence Enhancement (CIE) framework, applicable to various graph neural networks (GNNs)
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
Our approach CIE not only significantly enhances the performance of GNNs but outperforms state-of-the-art debiased node classification methods.
arXiv Detail & Related papers (2023-10-14T13:56:24Z) - Race Bias Analysis of Bona Fide Errors in face anti-spoofing [0.0]
We present a systematic study of race bias in face anti-spoofing with three key characteristics.
The focus is on analysing potential bias in the bona fide errors, where significant ethical and legal issues lie.
We demonstrate the proposed bias analysis process on a VQ-VAE based face anti-spoofing algorithm.
arXiv Detail & Related papers (2022-10-11T11:49:24Z) - Ensembling over Classifiers: a Bias-Variance Perspective [13.006468721874372]
We build upon the extension to the bias-variance decomposition by Pfau (2013) in order to gain crucial insights into the behavior of ensembles of classifiers.
We show that conditional estimates necessarily incur an irreducible error.
Empirically, standard ensembling reduces the bias, leading us to hypothesize that ensembles of classifiers may perform well in part because of this unexpected reduction.
arXiv Detail & Related papers (2022-06-21T17:46:35Z) - Toward Understanding Bias Correlations for Mitigation in NLP [34.956581421295]
This work aims to provide a first systematic study toward understanding bias correlations in mitigation.
We examine bias mitigation in two common NLP tasks -- toxicity detection and word embeddings.
Our findings suggest that biases are correlated and present scenarios in which independent debiasing approaches may be insufficient.
arXiv Detail & Related papers (2022-05-24T22:48:47Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective debiasing technique in an unsupervised manner.
We perform clustering on the feature embedding space and identify pseudo-attributes by taking advantage of the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representation.
arXiv Detail & Related papers (2021-08-06T05:20:46Z) - Supercharging Imbalanced Data Learning With Energy-based Contrastive Representation Transfer [72.5190560787569]
In computer vision, learning from long tailed datasets is a recurring theme, especially for natural image datasets.
Our proposal posits a meta-distributional scenario, where the data generating mechanism is invariant across the label-conditional feature distributions.
This allows us to leverage a causal data inflation procedure to enlarge the representation of minority classes.
arXiv Detail & Related papers (2020-11-25T00:13:11Z) - Mitigating Gender Bias Amplification in Distribution by Posterior Regularization [75.3529537096899]
We investigate the gender bias amplification issue from the distribution perspective.
We propose a bias mitigation approach based on posterior regularization.
Our study sheds light on understanding bias amplification.
arXiv Detail & Related papers (2020-05-13T11:07:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.