Unintended Selection: Persistent Qualification Rate Disparities and
Interventions
- URL: http://arxiv.org/abs/2111.01201v1
- Date: Mon, 1 Nov 2021 18:53:54 GMT
- Title: Unintended Selection: Persistent Qualification Rate Disparities and
Interventions
- Authors: Reilly Raab, Yang Liu
- Abstract summary: We study the dynamics of group-level disparities in machine learning.
In particular, we desire models that do not suppose inherent differences between artificial groups of people.
We show that differences in qualification rates between subpopulations can persist indefinitely for a set of non-trivial equilibrium states.
- Score: 6.006936459950188
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Realistically -- and equitably -- modeling the dynamics of group-level
disparities in machine learning remains an open problem. In particular, we
desire models that do not suppose inherent differences between artificial
groups of people -- but rather endogenize disparities by appeal to unequal
initial conditions of insular subpopulations. In this paper, agents each have a
real-valued feature $X$ (e.g., credit score) informed by a "true" binary label
$Y$ representing qualification (e.g., for a loan). Each agent alternately (1)
receives a binary classification label $\hat{Y}$ (e.g., loan approval) from a
Bayes-optimal machine learning classifier observing $X$ and (2) may update
their qualification $Y$ by imitating successful strategies (e.g., seek a raise)
within an isolated group $G$ of agents to which they belong. We consider the
disparity of qualification rates $\Pr(Y=1)$ between different groups and how
this disparity changes subject to a sequence of Bayes-optimal classifiers
repeatedly retrained on the global population. We model the evolving
qualification rates of each subpopulation (group) using the replicator
equation, which derives from a class of imitation processes. We show that
differences in qualification rates between subpopulations can persist
indefinitely for a set of non-trivial equilibrium states due to uninformed
classifier deployments, even when groups are identical in all aspects except
initial qualification densities. We next simulate the effects of commonly
proposed fairness interventions on this dynamical system along with a new
feedback control mechanism capable of permanently eliminating group-level
qualification rate disparities. We conclude by discussing the limitations of
our model and findings and by outlining potential future work.
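To make the setup concrete, the dynamics described above can be sketched in a few lines. This is a minimal illustration, not the paper's code: it assumes unit-variance Gaussian class-conditionals shared by both groups, a fixed (made-up) cost of qualification, and the replicator update $\dot{q}_G = q_G(1-q_G)(u_1 - u_0)$, where $u_y$ is the payoff of holding qualification state $y$ under the current classifier. All numeric constants are arbitrary choices for demonstration.

```python
import math
from statistics import NormalDist

# Illustrative parameters (not from the paper): X | Y=y ~ N(MU_y, SIGMA^2),
# identical for every group; COST is an assumed net cost of staying qualified.
MU0, MU1, SIGMA = 0.0, 2.0, 1.0
COST = 0.6
DT, STEPS = 0.2, 5000   # Euler step size and horizon for the replicator update

_std = NormalDist()

def threshold(q):
    """Bayes-optimal accept threshold (accept iff Pr(Y=1 | X=x) >= 1/2)
    for unit-variance Gaussian class-conditionals and prior Pr(Y=1) = q."""
    return (MU0 + MU1) / 2 + SIGMA**2 / (MU1 - MU0) * math.log((1 - q) / q)

def accept_prob(mu, t):
    """Probability that an agent whose feature mean is mu clears threshold t."""
    return 1 - _std.cdf((t - mu) / SIGMA)

def step(q_groups, weights):
    """One round: retrain the (group-blind) classifier on the pooled
    population, then let each isolated group update its qualification rate
    by the replicator equation dq/dt = q (1 - q) (u1 - u0)."""
    q_global = sum(w * q for w, q in zip(weights, q_groups))
    t = threshold(q_global)            # single classifier for everyone
    u1 = accept_prob(MU1, t) - COST    # payoff of being qualified
    u0 = accept_prob(MU0, t)           # payoff of being unqualified
    return [q + DT * q * (1 - q) * (u1 - u0) for q in q_groups]

# Two groups identical in every respect except initial qualification rate.
q = [0.2, 0.8]
for _ in range(STEPS):
    q = step(q, [0.5, 0.5])
print(q)  # the gap narrows but does not close
```

With these constants the pooled classifier drives both rates to an interior point where the payoff gap $u_1 - u_0$ vanishes, freezing an unequal pair of qualification rates; this is the flavor of persistent, endogenously generated disparity the paper analyzes.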
Related papers
- Deep Imbalanced Regression via Hierarchical Classification Adjustment [50.19438850112964]
Regression tasks in computer vision are often formulated into classification by quantizing the target space into classes.
The majority of training samples lie in a head range of target values, while a minority of samples span a usually larger tail range.
We propose to construct hierarchical classifiers for solving imbalanced regression tasks.
Our novel hierarchical classification adjustment (HCA) for imbalanced regression shows superior results on three diverse tasks.
arXiv Detail & Related papers (2023-10-26T04:54:39Z)
- Delta-AI: Local objectives for amortized inference in sparse graphical models [64.5938437823851]
We present a new algorithm for amortized inference in sparse probabilistic graphical models (PGMs).
Our approach is based on the observation that when the sampling of variables in a PGM is seen as a sequence of actions taken by an agent, sparsity of the PGM enables local credit assignment in the agent's policy learning objective.
We illustrate $\Delta$-AI's effectiveness for sampling from synthetic PGMs and training latent variable models with sparse factor structure.
arXiv Detail & Related papers (2023-10-03T20:37:03Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- False membership rate control in mixture models [1.387448620257867]
A clustering task consists in partitioning elements of a sample into homogeneous groups.
In the supervised setting, this approach is well known and referred to as classification with an abstention option.
In this paper, the approach is revisited in an unsupervised mixture-model framework, with the aim of developing a method that guarantees the false membership rate does not exceed a pre-defined nominal level.
arXiv Detail & Related papers (2022-03-04T22:37:59Z)
- KL Divergence Estimation with Multi-group Attribution [25.7757954754825]
Estimating the Kullback-Leibler (KL) divergence between two distributions is well-studied in machine learning and information theory.
Motivated by considerations of multi-group fairness, we seek KL divergence estimates that accurately reflect the contributions of sub-populations.
arXiv Detail & Related papers (2022-02-28T06:54:10Z)
- Equity-Directed Bootstrapping: Examples and Analysis [3.007949058551534]
We show how an equity-directed bootstrap can bring test set sensitivities and specificities closer to satisfying the equal odds criterion.
In the context of naïve Bayes and logistic regression, we analyze the equity-directed bootstrap, demonstrating that it works by bringing odds ratios close to one.
arXiv Detail & Related papers (2021-08-14T22:09:27Z)
- Model Transferability With Responsive Decision Subjects [11.07759054787023]
We formalize the discussion of a model's transferability by studying how the performance of the model trained on the available source distribution translates to performance on its induced domain.
We provide both upper bounds for the performance gap due to the induced domain shift, as well as lower bounds for the trade-offs that a classifier has to suffer on either the source training distribution or the induced target distribution.
arXiv Detail & Related papers (2021-07-13T08:21:37Z)
- Binary Classification: Counterbalancing Class Imbalance by Applying Regression Models in Combination with One-Sided Label Shifts [0.4970364068620607]
We introduce a novel method that addresses class imbalance.
We generate a set of negative and positive target labels, such that the corresponding regression task becomes balanced.
We evaluate our approach on a number of publicly available data sets and compare our proposed method to one of the most popular oversampling techniques.
arXiv Detail & Related papers (2020-11-30T13:24:47Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- Selective Classification Can Magnify Disparities Across Groups [89.14499988774985]
We find that while selective classification can improve average accuracies, it can simultaneously magnify existing accuracy disparities.
Increasing abstentions can even decrease accuracies on some groups.
We train distributionally-robust models that achieve similar full-coverage accuracies across groups and show that selective classification uniformly improves each group.
arXiv Detail & Related papers (2020-10-27T08:51:30Z)
- L2R2: Leveraging Ranking for Abductive Reasoning [65.40375542988416]
The abductive natural language inference task ($\alpha$NLI) is proposed to evaluate the abductive reasoning ability of a learning system.
A novel L2R2 approach is proposed under the learning-to-rank framework.
Experiments on the ART dataset achieve state-of-the-art results on the public leaderboard.
arXiv Detail & Related papers (2020-05-22T15:01:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.