Selection in the Presence of Implicit Bias: The Advantage of
Intersectional Constraints
- URL: http://arxiv.org/abs/2202.01661v2
- Date: Tue, 7 Jun 2022 05:40:21 GMT
- Title: Selection in the Presence of Implicit Bias: The Advantage of
Intersectional Constraints
- Authors: Anay Mehrotra, Bary S. R. Pradelski, Nisheeth K. Vishnoi
- Abstract summary: In selection processes such as hiring, promotion, and college admissions, implicit bias toward socially-salient attributes is known to produce persistent inequality.
We show that, in the intersectional case, the aforementioned non-intersectional constraints can only recover part of the total utility achievable in the absence of implicit bias.
- Score: 21.230595548980574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In selection processes such as hiring, promotion, and college admissions,
implicit bias toward socially-salient attributes such as race, gender, or
sexual orientation of candidates is known to produce persistent inequality and
reduce aggregate utility for the decision maker. Interventions such as the
Rooney Rule and its generalizations, which require the decision maker to select
at least a specified number of individuals from each affected group, have been
proposed to mitigate the adverse effects of implicit bias in selection. Recent
works have established that such lower-bound constraints can be very effective
in improving aggregate utility in the case when each individual belongs to at
most one affected group. However, in several settings, individuals may belong
to multiple affected groups and, consequently, face more extreme implicit bias
due to this intersectionality. We consider independently drawn utilities and
show that, in the intersectional case, the aforementioned non-intersectional
constraints can only recover part of the total utility achievable in the
absence of implicit bias. On the other hand, we show that if one includes
appropriate lower-bound constraints on the intersections, almost all the
utility achievable in the absence of implicit bias can be recovered. Thus,
intersectional constraints can offer a significant advantage over a
reductionist dimension-by-dimension non-intersectional approach to reducing
inequality.
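The mechanism described above can be illustrated with a small simulation. This is a minimal sketch, not the paper's construction: it assumes a multiplicative implicit-bias model (observed utility = true utility times a per-group discount factor), two binary attributes giving four intersectional groups, i.i.d. exponential true utilities, and a simple quota-fill heuristic for the lower-bound constraints; the bias factors and quota sizes are illustrative assumptions.

```python
import random

random.seed(0)

# Assumed multiplicative bias factors per intersectional group (hypothetical,
# not from the paper): candidates in both affected groups face compounded bias.
BIAS = {(0, 0): 1.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.25}

def make_pool(n):
    """Draw n candidates with i.i.d. true utilities; observed utility is
    the true utility discounted by the candidate's group bias factor."""
    pool = []
    for _ in range(n):
        g = (random.randint(0, 1), random.randint(0, 1))
        true_u = random.expovariate(1.0)
        pool.append((g, true_u, true_u * BIAS[g]))  # (group, true, observed)
    return pool

def select(pool, k, lower_bounds):
    """Pick k candidates by observed utility subject to per-group lower
    bounds. Since the intersectional groups partition the pool, a simple
    heuristic works: fill each quota with that group's best-observed
    candidates, then fill any remaining slots greedily."""
    ranked = sorted(pool, key=lambda c: -c[2])
    chosen = []
    for g, lb in lower_bounds.items():
        chosen += [c for c in ranked if c[0] == g][:lb]
    chosen += [c for c in ranked if c not in chosen][: k - len(chosen)]
    return chosen

pool = make_pool(2000)
k = 100
unconstrained = select(pool, k, {})
# Intersectional lower bounds: 25 slots per group (an assumed quota choice).
intersectional = select(pool, k, {g: 25 for g in BIAS})
```

Within a group the bias factor is constant, so the observed ordering matches the true ordering; the intersectional quota therefore admits each affected group's genuinely strongest candidates, which the unconstrained selection tends to pass over.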
Related papers
- The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models [58.130894823145205]
We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias.
Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning.
We conclude with recommendations tailored to DPO and broader alignment practices.
arXiv Detail & Related papers (2024-11-06T06:50:50Z) - Harm Ratio: A Novel and Versatile Fairness Criterion [27.18270261374462]
Envy-freeness has become the cornerstone of fair division research.
We propose a novel fairness criterion, individual harm ratio, inspired by envy-freeness.
Our criterion is powerful enough to differentiate between prominent decision-making algorithms.
arXiv Detail & Related papers (2024-10-03T20:36:05Z) - Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - Federated Fairness without Access to Sensitive Groups [12.888927461513472]
Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training.
We propose a new approach to guarantee group fairness that does not rely on any predefined definition of sensitive groups or additional labels.
arXiv Detail & Related papers (2024-02-22T19:24:59Z) - Social Diversity Reduces the Complexity and Cost of Fostering Fairness [63.70639083665108]
We investigate the effects of interference mechanisms which assume incomplete information and flexible standards of fairness.
We quantify the role of diversity and show how it reduces the need for information gathering.
Our results indicate that diversity opens up novel mechanisms available to institutions wishing to promote fairness.
arXiv Detail & Related papers (2022-11-18T21:58:35Z) - Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective debiasing technique in an unsupervised manner.
We perform clustering on the feature embedding space and identify pseudo-attributes by taking advantage of the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representations.
arXiv Detail & Related papers (2021-08-06T05:20:46Z) - Deconfounding Scores: Feature Representations for Causal Effect
Estimation with Weak Overlap [140.98628848491146]
We introduce deconfounding scores, which induce better overlap without biasing the target of estimation.
We show that deconfounding scores satisfy a zero-covariance condition that is identifiable in observed data.
In particular, we show that this technique could be an attractive alternative to standard regularizations.
arXiv Detail & Related papers (2021-04-12T18:50:11Z) - On Lower Bounds for Standard and Robust Gaussian Process Bandit
Optimization [55.937424268654645]
We consider algorithm-independent lower bounds for the problem of black-box optimization of functions having a bounded norm.
We provide a novel proof technique for deriving lower bounds on the regret, with benefits including simplicity, versatility, and an improved dependence on the error probability.
arXiv Detail & Related papers (2020-08-20T03:48:14Z) - Intersectional Affirmative Action Policies for Top-k Candidates
Selection [3.4961413413444817]
We study the problem of selecting the top-k candidates from a pool of applicants, where each candidate is associated with a score indicating their aptitude.
We consider a situation in which some groups of candidates experience historical and present disadvantage that makes their chances of being accepted much lower than other groups.
We propose two algorithms to solve this problem, analyze them, and evaluate them experimentally using a dataset of university application scores and admissions to bachelor degrees in an OECD country.
arXiv Detail & Related papers (2020-07-29T12:27:18Z) - Quota-based debiasing can decrease representation of already
underrepresented groups [5.1135133995376085]
We show that quota-based debiasing based on a single attribute can worsen the representation of already underrepresented groups and decrease overall fairness of selection.
Our results demonstrate the importance of including all relevant attributes in debiasing procedures and that more efforts need to be put into eliminating the root causes of inequalities.
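The claim above can be reproduced in a deterministic toy example. This is a hypothetical pool (ids, attributes, and scores are illustrative, not from the paper): a quota on a single attribute (gender) displaces the only candidate from an already-underrepresented minority group, because that group's competitive members happen to be men.

```python
# Hypothetical candidates: (id, gender, origin, observed_score)
pool = [
    ("a", "m", "majority", 10),
    ("b", "m", "minority", 9),
    ("c", "f", "majority", 8),
    ("d", "m", "majority", 7),
    ("e", "f", "majority", 6),
    ("f", "f", "minority", 3),
]

def select(pool, k, min_women=0):
    """Top-k by score, optionally enforcing a single-attribute quota:
    reserve min_women slots for the highest-scoring women, then fill
    the remaining slots greedily from the rest of the ranking."""
    ranked = sorted(pool, key=lambda c: -c[3])
    chosen = [c for c in ranked if c[1] == "f"][:min_women]
    chosen += [c for c in ranked if c not in chosen][: k - len(chosen)]
    return chosen

def minority_count(selection):
    return sum(1 for c in selection if c[2] == "minority")

no_quota = select(pool, 3)                    # picks a, b, c
gender_quota = select(pool, 3, min_women=2)   # picks c, e, a
```

Here the unconstrained top-3 contains one minority-group member (b), while the gender quota yields none: debiasing on one attribute in isolation worsened representation along the other, which is exactly why the intersectional constraints in the main paper operate on the intersections themselves.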
arXiv Detail & Related papers (2020-06-13T14:26:42Z) - Interventions for Ranking in the Presence of Implicit Bias [34.23230188778088]
Implicit bias is the unconscious attribution of particular qualities (or lack thereof) to a member from a particular social group.
The Rooney Rule is a constraint that improves the utility of the outcome in certain cases of the subset selection problem.
We present a family of simple and interpretable constraints and show that they can optimally mitigate implicit bias.
arXiv Detail & Related papers (2020-01-23T19:11:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.