On Fair Selection in the Presence of Implicit Variance
- URL: http://arxiv.org/abs/2006.13699v1
- Date: Wed, 24 Jun 2020 13:08:31 GMT
- Title: On Fair Selection in the Presence of Implicit Variance
- Authors: Vitalii Emelianov, Nicolas Gast, Krishna P. Gummadi and Patrick
Loiseau
- Abstract summary: We argue that even in the absence of implicit bias, the estimates of candidates' quality from different groups may differ in another fundamental way, namely, in their variance.
We propose a simple model in which candidates have a true latent quality that is drawn from a group-independent normal distribution.
We show that the demographic parity mechanism always increases the selection utility, while any $\gamma$-rule weakly increases it.
- Score: 17.517529275692322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quota-based fairness mechanisms like the so-called Rooney rule or four-fifths
rule are used in selection problems such as hiring or college admission to
reduce inequalities based on sensitive demographic attributes. These mechanisms
are often viewed as introducing a trade-off between selection fairness and
utility. In recent work, however, Kleinberg and Raghavan showed that, in the
presence of implicit bias in estimating candidates' quality, the Rooney rule
can increase the utility of the selection process.
We argue that even in the absence of implicit bias, the estimates of
candidates' quality from different groups may differ in another fundamental
way, namely, in their variance. We term this phenomenon implicit variance and
we ask: can fairness mechanisms be beneficial to the utility of a selection
process in the presence of implicit variance (even in the absence of implicit
bias)? To answer this question, we propose a simple model in which candidates
have a true latent quality that is drawn from a group-independent normal
distribution. To make the selection, a decision maker receives an unbiased
estimate of the quality of each candidate, with normal noise, but whose
variance depends on the candidate's group. We then compare the utility obtained
by imposing a fairness mechanism that we term $\gamma$-rule (it includes
demographic parity and the four-fifths rule as special cases), to that of a
group-oblivious selection algorithm that picks the candidates with the highest
estimated quality independently of their group. Our main result shows that the
demographic parity mechanism always increases the selection utility, while any
$\gamma$-rule weakly increases it. We extend our model to a two-stage selection
process where the true quality is observed at the second stage. We discuss
multiple extensions of our results, in particular to different distributions of
the true latent quality.
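The model described in the abstract lends itself to a quick Monte-Carlo check. The sketch below is an illustration, not the paper's analysis: the parameter values (noise standard deviations 0.5 and 1.5, a 10% selection rate, equal-sized groups) are assumptions chosen for the example, and "utility" is taken to be the average true latent quality of the selected candidates.

```python
import random
import statistics

def simulate(n=2000, k=200, sigma_a=0.5, sigma_b=1.5, trials=50, seed=0):
    """Monte-Carlo sketch of the selection model.

    Candidates in groups A and B share the same latent quality
    distribution N(0, 1). The decision maker observes an unbiased
    noisy estimate whose noise variance depends on the group
    (implicit variance). Returns the mean utility (average true
    quality of the k selected candidates) for the group-oblivious
    rule and for demographic parity.
    """
    rng = random.Random(seed)
    util_oblivious, util_parity = [], []
    for _ in range(trials):
        candidates = []
        for i in range(n):
            group = 'A' if i < n // 2 else 'B'
            quality = rng.gauss(0, 1)                    # true latent quality
            sigma = sigma_a if group == 'A' else sigma_b
            estimate = quality + rng.gauss(0, sigma)     # unbiased noisy estimate
            candidates.append((group, quality, estimate))
        # Group-oblivious: top k by estimated quality, ignoring groups.
        top = sorted(candidates, key=lambda c: c[2], reverse=True)[:k]
        util_oblivious.append(sum(c[1] for c in top) / k)
        # Demographic parity: top k/2 by estimated quality within each group.
        selected = []
        for g in ('A', 'B'):
            pool = sorted((c for c in candidates if c[0] == g),
                          key=lambda c: c[2], reverse=True)
            selected += pool[:k // 2]
        util_parity.append(sum(c[1] for c in selected) / k)
    return statistics.mean(util_oblivious), statistics.mean(util_parity)

u_oblivious, u_parity = simulate()
print(f"group-oblivious utility:   {u_oblivious:.3f}")
print(f"demographic-parity utility: {u_parity:.3f}")
```

With these illustrative parameters the parity utility comes out higher than the group-oblivious one, consistent with the paper's main result: the group-oblivious rule over-selects from the high-variance group, whose high estimates are shrunk more toward the prior mean.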
Related papers
- Differentiating Choices via Commonality for Multiple-Choice Question Answering [54.04315943420376]
In multiple-choice question answering, the other choices can provide valuable clues for choosing the right answer.
Existing models often rank each choice separately, overlooking the context provided by other choices.
We propose a novel model, called DCQA, that differentiates choices by identifying and eliminating their commonality.
arXiv Detail & Related papers (2024-08-21T12:05:21Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fairly Allocating Utility in Constrained Multiwinner Elections [0.0]
A common denominator to ensure fairness across all such contexts is the use of constraints.
Across these contexts, the candidates selected to satisfy the given constraints may systematically lead to unfair outcomes for historically disadvantaged voter populations.
We develop a model to select candidates that satisfy the constraints fairly across voter populations.
arXiv Detail & Related papers (2022-11-23T10:04:26Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Group Meritocratic Fairness in Linear Contextual Bandits [32.15680917495674]
We study the linear contextual bandit problem where an agent has to select one candidate from a pool and each candidate belongs to a sensitive group.
We propose a notion of fairness that states that the agent's policy is fair when it selects the candidate with the highest relative rank.
arXiv Detail & Related papers (2022-06-07T09:54:38Z)
- Fairness in Selection Problems with Strategic Candidates [9.4148805532663]
We study how the strategic aspect affects fairness in selection problems.
A population of rational candidates competes by choosing effort levels to increase their quality.
We characterize the (unique) equilibrium of this game in the different parameter regimes.
arXiv Detail & Related papers (2022-05-24T17:03:32Z)
- On Fair Selection in the Presence of Implicit and Differential Variance [22.897402186120434]
We study a model where the decision maker receives a noisy estimate of each candidate's quality, whose variance depends on the candidate's group.
We show that both baseline decision makers yield discrimination, although in opposite directions.
arXiv Detail & Related papers (2021-12-10T16:04:13Z)
- Fair Sequential Selection Using Supervised Learning Models [11.577534539649374]
We consider a selection problem where sequentially arriving applicants apply for a limited number of positions/jobs.
We show that even with a pre-trained model that satisfies the common fairness notions, the selection outcomes may still be biased against certain demographic groups.
We introduce a new fairness notion, "Equal Selection (ES)," suitable for sequential selection problems, and propose a post-processing approach to satisfy it.
arXiv Detail & Related papers (2021-10-26T19:45:26Z)
- Selective Classification Can Magnify Disparities Across Groups [89.14499988774985]
We find that while selective classification can improve average accuracies, it can simultaneously magnify existing accuracy disparities.
Increasing abstentions can even decrease accuracies on some groups.
We train distributionally-robust models that achieve similar full-coverage accuracies across groups and show that selective classification uniformly improves each group.
arXiv Detail & Related papers (2020-10-27T08:51:30Z)
- MS-Ranker: Accumulating Evidence from Potentially Correct Candidates for Answer Selection [59.95429407899612]
We propose a novel reinforcement learning based multi-step ranking model, named MS-Ranker.
We explicitly consider the potential correctness of candidates and update the evidence with a gating mechanism.
Our model significantly outperforms existing methods that do not rely on external resources.
arXiv Detail & Related papers (2020-10-10T10:36:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.