ABCinML: Anticipatory Bias Correction in Machine Learning Applications
- URL: http://arxiv.org/abs/2206.06960v1
- Date: Tue, 14 Jun 2022 16:26:10 GMT
- Title: ABCinML: Anticipatory Bias Correction in Machine Learning Applications
- Authors: Abdulaziz A. Almuzaini, Chidansh A. Bhatt, David M. Pennock, Vivek K.
Singh
- Abstract summary: We propose an anticipatory dynamic learning approach for correcting the algorithm to mitigate bias before it occurs.
Results from experiments over multiple real-world datasets suggest that this approach has promise for anticipatory bias correction.
- Score: 9.978142416219294
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The idealization of a static machine-learned model, trained once and deployed
forever, is not practical. As input distributions change over time, not only
will the model lose accuracy, but any constraints to reduce bias against a
protected class may also fail to work as intended. Thus, researchers have begun
to explore ways to maintain algorithmic fairness over time. One line of work
focuses on dynamic learning, i.e., retraining after each batch; the other on
robust learning, which tries to make algorithms robust against all possible
future changes.
Dynamic learning seeks to reduce biases soon after they have occurred and
robust learning often yields (overly) conservative models. We propose an
anticipatory dynamic learning approach for correcting the algorithm to mitigate
bias before it occurs. Specifically, we make use of anticipations regarding the
relative distributions of population subgroups (e.g., relative ratios of male
and female applicants) in the next cycle to identify the right parameters for
an importance weighting fairness approach. Results from experiments over
multiple real-world datasets suggest that this approach has promise for
anticipatory bias correction.
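As a rough illustration of the idea, the sketch below reweights the current training batch so that subgroup proportions match an anticipated distribution for the next cycle. The toy data, the 50/50 forecast, and the helper `anticipatory_weights` are all hypothetical and not taken from the paper.

```python
# Illustrative sketch (not the authors' released code): reweight the current
# training batch so that subgroup proportions match the *anticipated*
# proportions of the next deployment cycle, then fit with those weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

def anticipatory_weights(groups, anticipated_ratios):
    """groups: subgroup label per training example.
    anticipated_ratios: {group: expected share in the next cycle} (a forecast)."""
    weights = np.ones(len(groups), dtype=float)
    for grp, target_share in anticipated_ratios.items():
        mask = groups == grp
        current_share = mask.mean()
        if current_share > 0:
            # Up- or down-weight group `grp` toward its anticipated prevalence.
            weights[mask] = target_share / current_share
    return weights

# Toy batch: features X, labels y, subgroup labels g (e.g., 0 = male, 1 = female).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)
g = rng.choice([0, 1], size=200, p=[0.7, 0.3])

# Hypothetical anticipation: the next cycle is expected to be closer to 50/50.
w = anticipatory_weights(g, {0: 0.5, 1: 0.5})
clf = LogisticRegression().fit(X, y, sample_weight=w)
```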
Related papers
- Improving Fairness in Credit Lending Models using Subgroup Threshold Optimization [0.0]
We introduce a new fairness technique called Subgroup Threshold Optimization (STO).
STO works by optimizing the classification thresholds for individual subgroups in order to minimize the overall discrimination score between them.
Our experiments on a real-world credit lending dataset show that STO can reduce gender discrimination by over 90%.
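A naive sketch of what per-subgroup threshold tuning can look like: a toy grid search that only minimizes the demographic gap (ignoring accuracy), not the paper's STO implementation.

```python
# Naive illustration of per-subgroup thresholds: grid-search a threshold for
# each group so that the positive-prediction rates are as close as possible.
# (Only the gap is minimized here; a real system would also track accuracy.)
import numpy as np
from itertools import product

def tune_thresholds(scores, groups, grid=np.linspace(0.1, 0.9, 17)):
    best, best_gap = None, np.inf
    for t0, t1 in product(grid, grid):
        rate0 = (scores[groups == 0] >= t0).mean()
        rate1 = (scores[groups == 1] >= t1).mean()
        gap = abs(rate0 - rate1)  # discrimination score between the two groups
        if gap < best_gap:
            best, best_gap = (t0, t1), gap
    return best, best_gap

rng = np.random.default_rng(1)
scores = rng.uniform(size=300)          # toy model scores
groups = rng.choice([0, 1], size=300)   # toy group labels
(thr0, thr1), gap = tune_thresholds(scores, groups)
```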
arXiv Detail & Related papers (2024-03-15T19:36:56Z)
- Ask Your Distribution Shift if Pre-Training is Right for You [74.18516460467019]
In practice, fine-tuning a pre-trained model improves robustness significantly in some cases but not at all in others.
We focus on two possible failure modes of models under distribution shift: poor extrapolation and biases in the training data.
Our study suggests that, as a rule of thumb, pre-training can help mitigate poor extrapolation but not dataset biases.
arXiv Detail & Related papers (2024-02-29T23:46:28Z)
- Debiasing Machine Learning Models by Using Weakly Supervised Learning [3.3298048942057523]
We tackle the problem of bias mitigation of algorithmic decisions in a setting where both the output of the algorithm and the sensitive variable are continuous.
Typical examples are unfair decisions made with respect to age or financial status.
Our bias mitigation strategy is a weakly supervised learning method which requires that a small portion of the data can be measured in a fair manner.
arXiv Detail & Related papers (2024-02-23T18:11:32Z)
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm with strong predictive performance and better generalizability is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
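One way to picture an adaptive reweighing loop, as an illustrative guess at the general mechanism rather than the specific scheme proposed in the paper: upweight samples near the current decision boundary within each subgroup and refit.

```python
# Illustrative guess, not the paper's algorithm: a generic reweigh-and-refit
# loop that gives priority to samples near the decision boundary within each
# subgroup, so the refitted model is less sensitive to small shifts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 4))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)
g = rng.choice([0, 1], size=400)        # toy subgroup labels

w = np.ones(len(y))
for _ in range(5):                      # a few reweigh-and-refit rounds
    clf = LogisticRegression().fit(X, y, sample_weight=w)
    margin = np.abs(clf.predict_proba(X)[:, 1] - 0.5)
    for grp in (0, 1):
        m = g == grp
        w[m] = 1.0 - margin[m] + 1e-3   # near-boundary samples get priority
        w[m] *= m.sum() / w[m].sum()    # keep each subgroup's total weight fixed
```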
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
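A bare-bones sketch of the two ingredients combined: abstaining on low-confidence target points while nominating the least-confident ones for labeling. The shifted target data and the 0.8 confidence threshold are illustrative assumptions, not ASPEST itself.

```python
# Bare-bones sketch: abstain on low-confidence target points (selective
# prediction) and nominate the least-confident ones for labeling (active
# querying). The shifted target data and the 0.8 threshold are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X_src = rng.normal(size=(300, 4))
y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = rng.normal(loc=0.8, size=(100, 4))    # shifted target domain (toy)

clf = LogisticRegression().fit(X_src, y_src)
conf = clf.predict_proba(X_tgt).max(axis=1)

abstain = conf < 0.8                          # abstain from predicting here
query_idx = np.argsort(conf)[:10]             # 10 most informative points to label
```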
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Neural Active Learning on Heteroskedastic Distributions [29.01776999862397]
We demonstrate the catastrophic failure of active learning algorithms on heteroskedastic datasets.
We propose a new algorithm that incorporates a model difference scoring function for each data point to filter out the noisy examples and sample clean examples.
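One plausible reading of a "model difference" score, shown as a sketch under assumed details (two models trained on disjoint splits, with their probability disagreement used to flag noisy points); the paper's actual scoring function may differ.

```python
# One plausible reading of a "model difference" score (assumed form): train two
# models on disjoint splits and treat their prediction disagreement as a noise
# score, keeping low-disagreement points as candidate clean examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 4))
y = (X[:, 0] > 0).astype(int)
y[rng.choice(500, size=50, replace=False)] ^= 1   # inject some label noise

m1 = LogisticRegression().fit(X[:250], y[:250])
m2 = LogisticRegression().fit(X[250:], y[250:])

diff = np.abs(m1.predict_proba(X)[:, 1] - m2.predict_proba(X)[:, 1])
clean_idx = np.where(diff < 0.2)[0]               # low-disagreement examples
```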
arXiv Detail & Related papers (2022-11-02T07:30:19Z)
- Uncertainty Estimation for Language Reward Models [5.33024001730262]
Language models can learn a range of capabilities from unsupervised training on text corpora.
It is often easier for humans to choose between options than to provide labeled data, and prior work has achieved state-of-the-art performance by training a reward model from such preference comparisons.
We seek to address these problems via uncertainty estimation, which can improve sample efficiency and robustness using active learning and risk-averse reinforcement learning.
arXiv Detail & Related papers (2022-03-14T20:13:21Z)
- Agree to Disagree: Diversity through Disagreement for Better Transferability [54.308327969778155]
We propose D-BAT (Diversity-By-disAgreement Training), which enforces agreement among the models on the training data but encourages disagreement on out-of-distribution data.
We show how D-BAT naturally emerges from the notion of generalized discrepancy.
arXiv Detail & Related papers (2022-02-09T12:03:02Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
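A much-simplified sketch of the failure-based intuition: examples that a first, bias-prone model finds hard are upweighted when training a second model. The paper's actual scheme trains both networks simultaneously with generalized cross-entropy and relative-difficulty weights; the toy version below only conveys the core idea.

```python
# Much-simplified sketch of the failure-based intuition: examples that a first,
# bias-prone model finds hard are upweighted when training a second model.
# (The paper trains both networks simultaneously with generalized cross-entropy
# and relative-difficulty weights; this toy version only conveys the idea.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 4))
y = (X[:, 0] > 0).astype(int)

biased = LogisticRegression().fit(X, y)
p_true = biased.predict_proba(X)[np.arange(len(y)), y]  # prob of the true label

w = 1.0 - p_true + 1e-3          # high weight where the first model struggles
debiased = LogisticRegression().fit(X, y, sample_weight=w)
```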
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
- A survey of bias in Machine Learning through the prism of Statistical Parity for the Adult Data Set [5.277804553312449]
We show the importance of understanding how a bias can be introduced into automatic decisions.
We first present a mathematical framework for the fair learning problem, specifically in the binary classification setting.
We then propose to quantify the presence of bias by using the standard Disparate Impact index on the real and well-known Adult income data set.
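The Disparate Impact index used there is standard: the ratio of positive-outcome rates between the protected and the reference group, with values below roughly 0.8 commonly flagged. The quick computation below uses made-up predictions, not the Adult data set.

```python
# Disparate Impact, as commonly defined:
#   DI = P(Y_hat = 1 | S = protected) / P(Y_hat = 1 | S = reference)
# Values near 1 indicate parity; the usual "80% rule" flags values below 0.8.
# The predictions and group labels below are made up for illustration.
import numpy as np

rng = np.random.default_rng(6)
y_hat = rng.choice([0, 1], size=1000, p=[0.6, 0.4])   # toy model decisions
s = rng.choice([0, 1], size=1000, p=[0.7, 0.3])       # 1 = protected group

di = y_hat[s == 1].mean() / y_hat[s == 0].mean()
print(f"Disparate Impact: {di:.2f}")
```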
arXiv Detail & Related papers (2020-03-31T14:48:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.