Ensuring Fairness Beyond the Training Data
- URL: http://arxiv.org/abs/2007.06029v2
- Date: Wed, 4 Nov 2020 15:52:43 GMT
- Title: Ensuring Fairness Beyond the Training Data
- Authors: Debmalya Mandal, Samuel Deng, Suman Jana, Jeannette M. Wing, and
Daniel Hsu
- Abstract summary: We develop classifiers that are fair with respect to both the training distribution and a class of weighted perturbations of it.
Based on an online learning algorithm, we develop an iterative algorithm that converges to a fair and robust solution.
Our experiments show that there is an inherent trade-off between fairness robustness and accuracy of such classifiers.
- Score: 22.284777913437182
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We initiate the study of fair classifiers that are robust to perturbations in
the training distribution. Despite recent progress, the literature on fairness
has largely ignored the design of fair and robust classifiers. In this work, we
develop classifiers that are fair not only with respect to the training
distribution, but also for a class of distributions that are weighted
perturbations of the training samples. We formulate a min-max objective
function whose goal is to minimize a distributionally robust training loss, and
at the same time, find a classifier that is fair with respect to a class of
distributions. We first reduce this problem to finding a fair classifier that
is robust with respect to the class of distributions. Based on an online
learning algorithm, we develop an iterative algorithm that provably converges
to such a
fair and robust solution. Experiments on standard machine learning fairness
datasets suggest that, compared to the state-of-the-art fair classifiers, our
classifier retains fairness guarantees and test accuracy for a large class of
perturbations on the test set. Furthermore, our experiments show that there is
an inherent trade-off between fairness robustness and accuracy of such
classifiers.
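A minimal sketch may help make the min-max reduction concrete. The code below is illustrative, not the authors' exact algorithm: an adversary maintains multiplicative weights over training samples (a weighted perturbation of the empirical distribution, capped to stay within a bounded class), and the learner best-responds with a logistic classifier carrying a demographic-parity penalty. All names, the penalty choice, and hyperparameters are assumptions for the sketch.
```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
a = (rng.random(n) < 0.5).astype(float)       # binary sensitive attribute
y = (X[:, 0] + 0.5 * a + 0.3 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_weighted(X, y, a, w, lam=1.0, lr=0.1, steps=200):
    """Weighted logistic regression with a demographic-parity penalty."""
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ theta)
        grad = X.T @ (w * (p - y))
        gap = np.average(p, weights=w * a) - np.average(p, weights=w * (1 - a))
        dgap = (X.T @ (w * a * p * (1 - p)) / (w * a).sum()
                - X.T @ (w * (1 - a) * p * (1 - p)) / (w * (1 - a)).sum())
        theta -= lr * (grad + lam * np.sign(gap) * dgap)
    return theta

# Outer min-max loop: adversary reweights samples, learner best-responds.
w = np.ones(n) / n
eta = 0.5
for t in range(20):
    theta = fit_weighted(X, y, a, w)
    p = sigmoid(X @ theta)
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    w = w * np.exp(eta * loss)            # up-weight high-loss samples
    w = np.minimum(w / w.sum(), 2.0 / n)  # cap: stay near the empirical distribution
    w = w / w.sum()
```
The cap on the weights plays the role of the bounded perturbation class: tightening it recovers ordinary fair training, while loosening it buys robustness at the accuracy cost the experiments describe.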
Related papers
- Fair Bilevel Neural Network (FairBiNN): On Balancing fairness and accuracy via Stackelberg Equilibrium [0.3350491650545292]
Current methods for mitigating bias often result in information loss and an inadequate balance between accuracy and fairness.
We propose a novel methodology grounded in bilevel optimization principles.
Our deep learning-based approach concurrently optimizes for both accuracy and fairness objectives; a brief sketch of the bilevel loop follows.
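A hedged sketch of a Stackelberg-style bilevel loop (assumed mechanics, not the FairBiNN architecture): here the leader owns the accuracy parameters and the follower owns a single group-aware offset that it re-optimizes after every leader step.
```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 4))
a = (rng.random(400) < 0.5).astype(float)
y = (X[:, 0] + 0.4 * a > 0).astype(float)

sigmoid = lambda z: 1 / (1 + np.exp(-z))
theta_acc = np.zeros(4)   # leader: accuracy-focused parameters
b_fair = 0.0              # follower: a single fairness parameter

for _ in range(100):
    # Leader step: gradient descent on the accuracy (log) loss.
    p = sigmoid(X @ theta_acc + b_fair * (a - a.mean()))
    theta_acc -= 0.1 * X.T @ (p - y) / len(y)
    # Follower steps: re-optimize the fairness offset given the leader.
    for _ in range(5):
        p = sigmoid(X @ theta_acc + b_fair * (a - a.mean()))
        gap = p[a == 1].mean() - p[a == 0].mean()
        b_fair -= 0.5 * gap   # nudge the offset to shrink the parity gap
```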
arXiv Detail & Related papers (2024-10-21T18:53:39Z) - Fair Class-Incremental Learning using Sample Weighting [27.82760149957115]
We show that naively using all the samples of the current task for training results in unfair catastrophic forgetting for certain sensitive groups, including classes.
We propose FSW, a fair class-incremental learning framework that adjusts the training weights of current-task samples to change the direction of the average gradient vector; a brief sketch of the reweighting idea follows.
Experiments show that FSW achieves a better accuracy-fairness trade-off than state-of-the-art approaches on real datasets.
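A toy rendering of the reweighting idea (illustrative: the paper selects weights by optimization, while this sketch simply up-weights the group suffering more forgetting to rotate the average gradient).
```python
import numpy as np

def reweight(grads, group, alpha=2.0):
    """grads: (n, d) per-sample gradients; group: boolean mask of the
    sensitive group suffering more forgetting. Up-weighting its samples
    rotates the weighted average gradient toward that group's direction."""
    w = np.ones(len(grads))
    w[group] *= alpha          # hypothetical fixed multiplier
    w /= w.sum()
    return w, (w[:, None] * grads).sum(axis=0)

grads = np.random.default_rng(2).normal(size=(8, 3))   # toy gradients
group = np.array([True, False] * 4)
w, avg_grad = reweight(grads, group)
```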
arXiv Detail & Related papers (2024-10-02T08:32:21Z) - Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
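A hedged sketch of the distribution-modeling step (ProCo fits a von Mises-Fisher distribution on the normalized feature sphere; a Gaussian is substituted here for brevity), used to sample extra features for rare classes.
```python
import numpy as np

def fit_class_gaussians(feats, labels):
    """Estimate a mean and (regularized) covariance per class."""
    stats = {}
    for c in np.unique(labels):
        f = feats[labels == c]
        stats[c] = (f.mean(axis=0),
                    np.cov(f, rowvar=False) + 1e-4 * np.eye(f.shape[1]))
    return stats

def sample_virtual(stats, c, k, rng):
    """Draw k virtual features for class c, e.g. extra contrastive positives."""
    mu, cov = stats[c]
    return rng.multivariate_normal(mu, cov, size=k)

rng = np.random.default_rng(3)
feats = rng.normal(size=(100, 8))          # toy backbone features
labels = rng.integers(0, 5, size=100)
virtual = sample_virtual(fit_class_gaussians(feats, labels), 0, 16, rng)
```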
arXiv Detail & Related papers (2024-03-11T13:44:49Z) - SimPro: A Simple Probabilistic Framework Towards Realistic Long-Tailed Semi-Supervised Learning [49.94607673097326]
We propose a highly adaptable framework, designated as SimPro, which does not rely on any predefined assumptions about the distribution of unlabeled data.
Our framework, grounded in a probabilistic model, innovatively refines the expectation-maximization algorithm.
Our method showcases consistent state-of-the-art performance across diverse benchmarks and data distribution scenarios.
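A minimal EM sketch of the prior-estimation idea (illustrative; SimPro's actual E- and M-steps are more involved): re-estimate the unlabeled class prior from the model's posteriors, then reuse it to calibrate pseudo-labels.
```python
import numpy as np

def em_prior(probs, iters=50):
    """probs: (n, C) per-sample class likelihoods from the current model.
    Alternate posterior computation (E) and prior re-estimation (M)."""
    pi = np.full(probs.shape[1], 1.0 / probs.shape[1])
    for _ in range(iters):
        post = probs * pi                        # E-step (unnormalized)
        post /= post.sum(axis=1, keepdims=True)
        pi = post.mean(axis=0)                   # M-step: new class prior
    return pi

rng = np.random.default_rng(4)
pi_hat = em_prior(rng.dirichlet(np.ones(5), size=200))   # toy predictions
```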
arXiv Detail & Related papers (2024-02-21T03:39:04Z) - Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk
Minimization Framework [12.734559823650887]
In the presence of distribution shifts, fair machine learning models may behave unfairly on test data.
Existing algorithms require full access to the data and cannot be used when only small batches are available.
This paper proposes the first distributionally robust fairness framework with convergence guarantees that do not require knowledge of the causal graph.
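A hedged sketch of the stochastic setting the paper targets (not Dr. FERMI's estimator): every update, including the fairness term, is computed from a small mini-batch, so no full pass over the data is required.
```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 6))
a = (rng.random(2000) < 0.5).astype(float)
y = (X[:, 1] + 0.3 * a > 0).astype(float)
theta = np.zeros(6)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(300):
    idx = rng.choice(len(y), size=32, replace=False)  # a small batch is enough
    Xb, yb, ab = X[idx], y[idx], a[idx]
    p = sigmoid(Xb @ theta)
    grad = Xb.T @ (p - yb) / len(idx)
    if (ab == 1).any() and (ab == 0).any():
        gap = p[ab == 1].mean() - p[ab == 0].mean()   # batch fairness proxy
        dgap = (Xb[ab == 1].T @ (p * (1 - p))[ab == 1] / (ab == 1).sum()
                - Xb[ab == 0].T @ (p * (1 - p))[ab == 0] / (ab == 0).sum())
        grad = grad + gap * dgap                      # gradient of 0.5 * gap^2
    theta -= 0.1 * grad
```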
arXiv Detail & Related papers (2023-09-20T23:25:28Z) - Chasing Fairness Under Distribution Shift: A Model Weight Perturbation
Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
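An illustrative reading of the weight-perturbation idea (assumed details, not the paper's RFR formula): evaluate the fairness gap at perturbed copies of the weights and penalize the worst one, so fairness survives small weight shifts.
```python
import numpy as np

def fairness_gap(theta, X, a):
    p = 1 / (1 + np.exp(-(X @ theta)))
    return p[a == 1].mean() - p[a == 0].mean()

def rfr_penalty(theta, X, a, rho=0.05, trials=8, rng=None):
    """Worst fairness gap over random L2 perturbations of the weights
    (a cheap stand-in for an exact inner maximization)."""
    rng = rng or np.random.default_rng()
    worst = 0.0
    for _ in range(trials):
        delta = rng.normal(size=theta.shape)
        delta *= rho / np.linalg.norm(delta)
        worst = max(worst, abs(fairness_gap(theta + delta, X, a)))
    return worst

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 4))
a = (rng.random(300) < 0.5).astype(float)
penalty = rfr_penalty(np.ones(4), X, a, rng=rng)   # add to the training loss
```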
arXiv Detail & Related papers (2023-03-06T17:19:23Z) - Improving Robust Fairness via Balance Adversarial Training [51.67643171193376]
Adversarial training (AT) methods are effective against adversarial attacks, yet they introduce severe disparity of accuracy and robustness between different classes.
We propose Balance Adversarial Training (BAT) to address the robust fairness problem.
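A toy sketch of one way to balance adversarial training across classes (assumed mechanics, not the paper's BAT procedure): shrink the attack budget for classes that are already less robust, so training pressure is spread evenly.
```python
import numpy as np

def fgsm(x, grad_x, eps):
    """One-step L-infinity attack at budget eps."""
    return x + eps * np.sign(grad_x)

def balanced_eps(class_robust_acc, base_eps=0.03):
    """Hypothetical schedule: classes with low robust accuracy get a
    smaller attack budget during training."""
    acc = np.asarray(class_robust_acc, dtype=float)
    return base_eps * acc / acc.max()

eps_per_class = balanced_eps([0.9, 0.4, 0.7])      # toy per-class accuracies
x_adv = fgsm(np.zeros(8), np.ones(8), eps_per_class[1])
```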
arXiv Detail & Related papers (2022-09-15T14:44:48Z) - How Robust is Your Fairness? Evaluating and Sustaining Fairness under
Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z) - Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
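Since the summary names BALD without defining it, here is the standard BALD acquisition score computed from Monte Carlo dropout samples (the mutual information between predictions and model parameters); only the toy data is invented.
```python
import numpy as np

def bald(probs):
    """probs: (T, n, C) softmax outputs from T stochastic forward passes.
    Returns per-sample mutual information between prediction and parameters."""
    mean = probs.mean(axis=0)                                    # (n, C)
    h_mean = -(mean * np.log(mean + 1e-12)).sum(axis=-1)         # predictive entropy
    mean_h = -(probs * np.log(probs + 1e-12)).sum(axis=-1).mean(axis=0)
    return h_mean - mean_h        # high score = most informative to label

rng = np.random.default_rng(7)
scores = bald(rng.dirichlet(np.ones(3), size=(10, 5)))   # toy MC samples
query = scores.argmax()                                  # next point to label
```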
arXiv Detail & Related papers (2021-04-14T14:20:22Z) - Robust Fairness under Covariate Shift [11.151913007808927]
Making predictions that are fair with regard to protected group membership has become an important requirement for classification algorithms.
We propose an approach that obtains the predictor that is robust to the worst case in terms of target performance.
arXiv Detail & Related papers (2020-10-11T04:42:01Z) - A Distributionally Robust Approach to Fair Classification [17.759493152879013]
We propose a robust logistic regression model with an unfairness penalty that prevents discrimination with respect to sensitive attributes such as gender or ethnicity.
This model is equivalent to a tractable convex optimization problem if a Wasserstein ball centered at the empirical distribution on the training data is used to model distributional uncertainty.
We demonstrate that the resulting classifier improves fairness at a marginal loss of predictive accuracy on both synthetic and real datasets.
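A hedged CVXPY sketch of a Wasserstein-DRO-flavored fair logistic regression (an illustrative convex surrogate, not the paper's exact program): a norm penalty on the parameters stands in for the Wasserstein robustness term, and the unfairness penalty bounds the gap in mean linear scores between groups.
```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 5))
a = rng.random(200) < 0.5                      # sensitive group mask
y = np.where(X[:, 0] > 0, 1.0, -1.0)           # labels in {-1, +1}

theta = cp.Variable(5)
loss = cp.sum(cp.logistic(-cp.multiply(y, X @ theta))) / len(y)
robust = 0.1 * cp.norm(theta, 2)               # surrogate robustness term
# Unfairness penalty: gap in mean linear scores between the two groups
gap = cp.sum(X[a] @ theta) / a.sum() - cp.sum(X[~a] @ theta) / (~a).sum()
prob = cp.Problem(cp.Minimize(loss + robust + cp.abs(gap)))
prob.solve()                                   # a tractable convex program
```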
arXiv Detail & Related papers (2020-07-18T22:34:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.