Transferring Fairness under Distribution Shifts via Fair Consistency Regularization
- URL: http://arxiv.org/abs/2206.12796v3
- Date: Sat, 14 Jan 2023 21:15:42 GMT
- Title: Transferring Fairness under Distribution Shifts via Fair Consistency Regularization
- Authors: Bang An, Zora Che, Mucong Ding, Furong Huang
- Abstract summary: We study how to transfer model fairness under distribution shifts, a widespread issue in practice.
Inspired by the success of self-training in transferring accuracy under domain shifts, we derive a sufficient condition for transferring group fairness.
- Score: 15.40257564187799
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing reliance on ML models in high-stakes tasks has raised major concerns about fairness violations. Although there has been a surge of work on improving algorithmic fairness, most of it assumes identical training and test distributions. In many real-world applications, however, this assumption is violated: previously trained fair models are deployed in a different environment, and their fairness has been observed to collapse. In this paper, we study how to transfer model fairness under distribution shifts, a widespread issue in practice. We conduct a fine-grained analysis of how a fair model is affected under different types of distribution shifts and find that domain shifts are more challenging than subpopulation shifts. Inspired by the success of self-training in transferring accuracy under domain shifts, we derive a sufficient condition for transferring group fairness. Guided by it, we propose a practical algorithm with fair consistency regularization as the key component. A synthetic dataset benchmark covering all types of distribution shifts is used to verify the theoretical findings experimentally. Experiments on synthetic and real datasets, including image and tabular data, demonstrate that our approach effectively transfers fairness and accuracy under various distribution shifts.
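To make the key component concrete, below is a minimal sketch of one self-training step with a group-balanced consistency regularizer, in the spirit of the fair consistency regularization described above. The FixMatch-style confidence thresholding, the per-group averaging scheme, and all names are illustrative assumptions, not the authors' reference implementation.

```python
# A minimal, illustrative sketch (NOT the paper's reference implementation):
# self-training consistency loss averaged within each demographic group and
# then across groups, so no single group dominates the self-training signal.
import torch
import torch.nn.functional as F

def fair_consistency_loss(logits_weak: torch.Tensor,
                          logits_strong: torch.Tensor,
                          groups: torch.Tensor,
                          threshold: float = 0.95) -> torch.Tensor:
    """Group-balanced consistency loss between two augmented views."""
    probs = F.softmax(logits_weak.detach(), dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = (conf >= threshold).float()           # keep confident pseudo-labels only
    per_sample = F.cross_entropy(logits_strong, pseudo, reduction="none") * mask
    group_losses = []
    for g in groups.unique():
        sel = groups == g
        if mask[sel].sum() > 0:                  # group has confident samples
            group_losses.append(per_sample[sel].sum() / mask[sel].sum())
    if not group_losses:
        return logits_strong.new_zeros(())
    return torch.stack(group_losses).mean()      # equal weight per group

# Toy usage with random tensors standing in for two augmented views.
torch.manual_seed(0)
n, c = 32, 2
logits_w = 4 * torch.randn(n, c)                 # weakly augmented view
logits_s = logits_w + 0.5 * torch.randn(n, c)    # strongly augmented view
groups = torch.randint(0, 2, (n,))               # binary sensitive attribute
print(fair_consistency_loss(logits_w, logits_s, groups))
```

The per-group averaging is the fairness-motivated design choice here: a plain mean over all samples would let the majority group's pseudo-labels dominate, which is one plausible way fairness can degrade during self-training.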
Related papers
- Generalizing to any diverse distribution: uniformity, gentle finetuning and rebalancing [55.791818510796645]
We aim to develop models that generalize well to any diverse test distribution, even if the latter deviates significantly from the training data.
Various approaches like domain adaptation, domain generalization, and robust optimization attempt to address the out-of-distribution challenge.
We adopt a more conservative perspective by accounting for the worst-case error across all sufficiently diverse test distributions within a known domain.
arXiv Detail & Related papers (2024-10-08T12:26:48Z)
- Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework [12.734559823650887]
In the presence of distribution shifts, fair machine learning models may behave unfairly on test data.
Existing algorithms require full access to the data and cannot be used with small batches.
This paper proposes the first distributionally robust fairness framework with convergence guarantees that do not require knowledge of the causal graph.
arXiv Detail & Related papers (2023-09-20T23:25:28Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Fairness Transferability Subject to Bounded Distribution Shift [5.62716254065607]
Given an algorithmic predictor that is "fair" on some source distribution, will it still be fair on an unknown target distribution that differs from the source within some bound?
We study the transferability of statistical group fairness for machine learning predictors subject to bounded distribution shifts.
arXiv Detail & Related papers (2022-05-31T22:16:44Z)
- Certifying Some Distributional Fairness with Subpopulation Decomposition [20.009388617013986]
We first formulate the certified fairness of an ML model trained on a given data distribution as an optimization problem.
We then propose a general fairness certification framework and instantiate it for both sensitive shifting and general shifting scenarios.
Our framework can flexibly integrate additional non-skewness constraints, and we show that it provides even tighter certification under different real-world scenarios.
arXiv Detail & Related papers (2022-05-31T01:17:50Z)
- Domain Adaptation meets Individual Fairness. And they get along [48.95808607591299]
We show that algorithmic fairness interventions can help machine learning models overcome distribution shifts.
In particular, we show that enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models.
arXiv Detail & Related papers (2022-05-01T16:19:55Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that, despite its simplicity, DoC consistently outperforms other quantifications of distributional difference (see the sketch after this list).
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
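As referenced in the DoC entry above, here is a minimal sketch of the difference-of-confidences idea: the drop in a classifier's average max-softmax confidence from a source set to a shifted set serves as an estimate of its performance change. The variable names and the direct use of DoC as the drop estimate are illustrative assumptions, not the paper's exact protocol.

```python
# A minimal sketch of difference of confidences (DoC), assuming max-softmax
# confidence and random logits standing in for real model outputs.
import torch
import torch.nn.functional as F

def avg_confidence(logits: torch.Tensor) -> torch.Tensor:
    """Average max-softmax confidence over a batch of predictions."""
    return F.softmax(logits, dim=1).max(dim=1).values.mean()

torch.manual_seed(0)
logits_source = 3.0 * torch.randn(1000, 10)   # confident in-distribution outputs
logits_target = 1.0 * torch.randn(1000, 10)   # less confident shifted outputs
doc = avg_confidence(logits_source) - avg_confidence(logits_target)
print(f"DoC = {doc.item():.3f} (estimate of the accuracy drop under the shift)")
```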
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.