Fairness Transferability Subject to Bounded Distribution Shift
- URL: http://arxiv.org/abs/2206.00129v3
- Date: Thu, 15 Dec 2022 22:32:26 GMT
- Title: Fairness Transferability Subject to Bounded Distribution Shift
- Authors: Yatong Chen, Reilly Raab, Jialu Wang, Yang Liu
- Abstract summary: Given an algorithmic predictor that is "fair" on some source distribution, will it still be fair on an unknown target distribution that differs from the source within some bound?
We study the transferability of statistical group fairness for machine learning predictors subject to bounded distribution shifts.
- Score: 5.62716254065607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given an algorithmic predictor that is "fair" on some source distribution,
will it still be fair on an unknown target distribution that differs from the
source within some bound? In this paper, we study the transferability of
statistical group fairness for machine learning predictors (i.e., classifiers
or regressors) subject to bounded distribution shifts. Such shifts may be
introduced by initial training data uncertainties, user adaptation to a
deployed predictor, dynamic environments, or the use of pre-trained models in
new settings. Herein, we develop a bound that characterizes such
transferability, flagging potentially inappropriate deployments of machine
learning for socially consequential tasks. We first develop a framework for
bounding violations of statistical fairness subject to distribution shift,
formulating a generic upper bound for transferred fairness violations as our
primary result. We then develop bounds for specific worked examples, focusing
on two commonly used fairness definitions (i.e., demographic parity and
equalized odds) and two classes of distribution shift (i.e., covariate shift
and label shift). Finally, we compare our theoretical bounds to deterministic
models of distribution shift and against real-world data, finding that we are
able to estimate fairness violation bounds in practice, even when simplifying
assumptions are only approximately satisfied.
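For intuition about the shape of such bounds, here is an illustrative sketch (not the paper's exact result): a binary predictor's per-group acceptance rate can change by at most the total-variation distance between that group's source and target distributions, so a source demographic-parity gap of ε transfers to a gap of at most ε + δ_0 + δ_1 when the two groups' distributions shift by at most δ_0 and δ_1 in total variation. The function names and δ values below are illustrative.

```python
import numpy as np

def dp_gap(y_pred, group):
    """Demographic-parity violation: |P(h=1 | A=0) - P(h=1 | A=1)|."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def transferred_dp_bound(y_pred, group, delta_0, delta_1):
    """Worst-case DP violation on any target whose group-conditional
    distributions lie within total-variation distance delta_0 / delta_1
    of the source (illustrative, not the paper's exact bound)."""
    return dp_gap(y_pred, group) + delta_0 + delta_1

# Toy usage with a predictor that is nearly fair on the source.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=10_000)
y_pred = (rng.random(10_000) < np.where(group == 0, 0.50, 0.52)).astype(int)
print(f"source DP gap: {dp_gap(y_pred, group):.3f}")
print(f"bound on target DP gap (delta = 0.05 per group): "
      f"{transferred_dp_bound(y_pred, group, 0.05, 0.05):.3f}")
```

The paper's contribution is to make the shift-dependent slack precise for demographic parity and equalized odds under covariate and label shift; the sketch only conveys the additive source-violation-plus-shift structure.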
Related papers
- Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework [12.734559823650887]
In the presence of distribution shifts, fair machine learning models may behave unfairly on test data.
Existing algorithms require full access to the data and cannot be used with small batches.
This paper proposes the first distributionally robust fairness framework with convergence guarantees that do not require knowledge of the causal graph.
arXiv Detail & Related papers (2023-09-20T23:25:28Z)
- Distribution Shift Inversion for Out-of-Distribution Prediction [57.22301285120695]
We propose a portable Distribution Shift Inversion algorithm for Out-of-Distribution (OoD) prediction.
We show that our method provides a general performance gain when plugged into a wide range of commonly used OoD algorithms.
arXiv Detail & Related papers (2023-06-14T08:00:49Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
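The summary leaves the regularizer unspecified; as one plausible reading of "model weight perturbation" (a hypothetical sketch, not necessarily the paper's stated objective), a sharpness-aware-style step that penalizes the fairness violation at adversarially perturbed weights might look like:

```python
import torch

def rfr_step(model, task_loss, fairness_loss, x, y, group, rho=0.05, lam=1.0):
    """One hypothetical training step: penalize the fairness violation at
    the worst-case weight perturbation within an L2 ball of radius rho.
    `fairness_loss` must be a differentiable surrogate (e.g., a soft
    demographic-parity gap)."""
    params = [p for p in model.parameters() if p.requires_grad]
    # Gradient of the fairness term with respect to the weights.
    fair = fairness_loss(model(x), y, group)
    grads = torch.autograd.grad(fair, params)
    scale = rho / (torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12)
    with torch.no_grad():  # ascend to the approximate worst-case weights
        for p, g in zip(params, grads):
            p.add_(g * scale)
    out = model(x)  # evaluate at the perturbed weights
    loss = task_loss(out, y) + lam * fairness_loss(out, y, group)
    loss.backward()  # gradients consumed by the subsequent optimizer step
    with torch.no_grad():  # restore the original weights
        for p, g in zip(params, grads):
            p.sub_(g * scale)
    return loss.detach()
```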
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Transferring Fairness under Distribution Shifts via Fair Consistency Regularization [15.40257564187799]
We study how to transfer model fairness under distribution shifts, a widespread issue in practice.
Inspired by the success of self-training in transferring accuracy under domain shifts, we derive a sufficient condition for transferring group fairness.
arXiv Detail & Related papers (2022-06-26T06:19:56Z)
- Domain Adaptation meets Individual Fairness. And they get along [48.95808607591299]
We show that algorithmic fairness interventions can help machine learning models overcome distribution shifts.
In particular, we show that enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models.
arXiv Detail & Related papers (2022-05-01T16:19:55Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that despite its simplicity DoC consistently outperforms other quantifications of distributional difference.
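In a minimal sketch of the idea (assuming max-softmax confidences; names are illustrative), DoC is the drop in average confidence from the source to the shifted distribution, subtracted from source accuracy to estimate target accuracy:

```python
import numpy as np

def avg_confidence(logits):
    """Average max-softmax confidence over a batch of logits."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1).mean()

def doc_accuracy_estimate(source_logits, source_accuracy, target_logits):
    """Estimate target accuracy by subtracting the difference of
    confidences (DoC) between source and target from source accuracy."""
    doc = avg_confidence(source_logits) - avg_confidence(target_logits)
    return source_accuracy - doc
```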
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- Robust Fairness under Covariate Shift [11.151913007808927]
Making predictions that are fair with regard to protected group membership has become an important requirement for classification algorithms.
We propose an approach that obtains a predictor robust to the worst-case target distribution in terms of performance.
arXiv Detail & Related papers (2020-10-11T04:42:01Z)
- Estimating Generalization under Distribution Shifts via Domain-Invariant Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
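The idea reduces to scoring the model against proxy labels on unlabeled target data; a minimal sketch with hypothetical names (the proxy stands in for the paper's domain-invariant predictor):

```python
import numpy as np

def proxy_target_risk(model_preds, proxy_preds):
    """Estimate the model's 0-1 risk on unlabeled target data by treating
    a domain-invariant proxy model's predictions as stand-in labels.
    By the triangle inequality, the estimate is off by at most the
    proxy's own (unknown) 0-1 risk on the target."""
    return float(np.mean(model_preds != proxy_preds))
```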
arXiv Detail & Related papers (2020-07-06T17:21:24Z)
- Calibrated Prediction with Covariate Shift via Unsupervised Domain Adaptation [25.97333838935589]
Uncertainty estimates are an important tool for helping autonomous agents or human decision makers understand and leverage predictive models.
Existing algorithms can overestimate certainty, possibly yielding a false sense of confidence in the predictive model.
arXiv Detail & Related papers (2020-02-29T20:31:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.