How Robust is Your Fairness? Evaluating and Sustaining Fairness under
Unseen Distribution Shifts
- URL: http://arxiv.org/abs/2207.01168v1
- Date: Mon, 4 Jul 2022 02:37:50 GMT
- Title: How Robust is Your Fairness? Evaluating and Sustaining Fairness under
Unseen Distribution Shifts
- Authors: Haotao Wang, Junyuan Hong, Jiayu Zhou, Zhangyang Wang
- Abstract summary: We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
- Score: 107.72786199113183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Increasing concerns have been raised on deep learning fairness in recent
years. Existing fairness-aware machine learning methods mainly focus on the
fairness of in-distribution data. However, in real-world applications, it is
common to encounter a distribution shift between the training and test data. In this
paper, we first show that the fairness achieved by existing methods can be
easily broken by slight distribution shifts. To solve this problem, we propose
a novel fairness learning method termed CUrvature MAtching (CUMA), which can
achieve robust fairness generalizable to unseen domains with unknown
distributional shifts. Specifically, CUMA enforces similar generalization
ability on the majority and minority groups by matching the
loss curvature distributions of the two groups. We evaluate our method on three
popular fairness datasets. Compared with existing methods, CUMA achieves
superior fairness under unseen distribution shifts, without sacrificing either
the overall accuracy or the in-distribution fairness.
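The curvature-matching idea can be made concrete with a short sketch. Below is a minimal PyTorch-style illustration, not the authors' implementation: it probes each group's loss curvature with a Hutchinson-style Hessian-vector product along a random direction and penalizes the gap between the two estimates, whereas the paper matches the full curvature distributions. The function names, the binary `group` encoding, and the weight `lam` are all illustrative assumptions.

```python
import torch

def directional_curvature(loss, params):
    # Hutchinson-style probe: estimate v^T H v for a random Rademacher
    # direction v, where H is the Hessian of `loss` w.r.t. `params`.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randint_like(g, 2) * 2 - 1 for g in grads]  # entries in {-1, +1}
    hv = torch.autograd.grad(grads, params, grad_outputs=v, create_graph=True)
    return sum((h * vi).sum() for h, vi in zip(hv, v))

def curvature_matched_loss(model, x, y, group, criterion, lam=0.1):
    # Assumes a binary sensitive attribute with both groups present in
    # the batch: group == 0 (majority), group == 1 (minority).
    params = [p for p in model.parameters() if p.requires_grad]
    loss_maj = criterion(model(x[group == 0]), y[group == 0])
    loss_min = criterion(model(x[group == 1]), y[group == 1])
    # Penalize the mismatch in estimated loss curvature between groups,
    # pushing the model toward similar local loss geometry for both.
    curv_gap = (directional_curvature(loss_maj, params)
                - directional_curvature(loss_min, params)) ** 2
    return loss_maj + loss_min + lam * curv_gap
```

Each curvature probe costs roughly one extra backward pass, and because the penalty is itself differentiated during training, PyTorch builds higher-order graphs; averaging several random directions would reduce the variance of the estimate.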
Related papers
- Towards Harmless Rawlsian Fairness Regardless of Demographic Prior [57.30787578956235]
We explore whether fairness can be achieved without compromising utility when no demographic information is provided with the training set.
We propose a simple but effective method named VFair to minimize the variance of training losses inside the optimal set of empirical losses.
arXiv Detail & Related papers (2024-11-04T12:40:34Z)
- Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework [12.734559823650887]
In the presence of distribution shifts, fair machine learning models may behave unfairly on test data.
Existing algorithms require full access to the data and cannot be used when only small batches are available.
This paper proposes the first distributionally robust fairness framework with convergence guarantees that do not require knowledge of the causal graph.
arXiv Detail & Related papers (2023-09-20T23:25:28Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Transferring Fairness under Distribution Shifts via Fair Consistency Regularization [15.40257564187799]
We study how to transfer model fairness under distribution shifts, a widespread issue in practice.
Inspired by the success of self-training in transferring accuracy under domain shifts, we derive a sufficient condition for transferring group fairness.
arXiv Detail & Related papers (2022-06-26T06:19:56Z)
- Fairness Transferability Subject to Bounded Distribution Shift [5.62716254065607]
Given an algorithmic predictor that is "fair" on some source distribution, will it still be fair on an unknown target distribution that differs from the source within some bound?
We study the transferability of statistical group fairness for machine learning predictors subject to bounded distribution shifts.
arXiv Detail & Related papers (2022-05-31T22:16:44Z)
- Domain Adaptation meets Individual Fairness. And they get along [48.95808607591299]
We show that algorithmic fairness interventions can help machine learning models overcome distribution shifts.
In particular, we show that enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models.
arXiv Detail & Related papers (2022-05-01T16:19:55Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Fair Densities via Boosting the Sufficient Statistics of Exponential Families [72.34223801798422]
We introduce a boosting algorithm to pre-process data for fairness.
Our approach shifts towards better data fitting while still ensuring a minimal fairness guarantee.
Empirical results are presented to demonstrate the quality of the results on real-world data.
arXiv Detail & Related papers (2020-12-01T00:49:17Z)
- Ensuring Fairness Beyond the Training Data [22.284777913437182]
We develop classifiers that are fair with respect to the training distribution and for a class of perturbations.
Building on an online learning algorithm, we develop an iterative method that converges to a fair and robust solution; see the sketch after this list.
Our experiments show that there is an inherent trade-off between fairness and accuracy of such classifiers.
arXiv Detail & Related papers (2020-07-12T16:20:28Z)
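As a concrete illustration of the iterative pattern in the last entry above, the sketch below alternates between an exponentiated-gradient adversary that upweights the group with the larger error and a learner refit on the reweighted data, then aggregates the iterates by majority vote. This is a generic minimax reweighting sketch on synthetic data, not the algorithm of any paper listed here; `eta`, `T`, and the data-generating process are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data with a binary sensitive attribute and a group-dependent label.
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5)) + group[:, None] * 0.5
y = (X[:, 0] + 0.3 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

weights = np.ones(n) / n
models, eta, T = [], 1.0, 20
for _ in range(T):
    clf = LogisticRegression().fit(X, y, sample_weight=weights * n)
    models.append(clf)
    err = (clf.predict(X) != y).astype(float)
    # Adversary: exponentiated-gradient step that shifts weight toward the
    # group with the larger error, so the next learner must do well on it.
    gap = err[group == 1].mean() - err[group == 0].mean()
    weights *= np.exp(eta * gap * np.where(group == 1, 1.0, -1.0))
    weights /= weights.sum()

# Final predictor: majority vote over the iterates.
votes = np.mean([m.predict(X) for m in models], axis=0)
y_hat = (votes >= 0.5).astype(int)
for g in (0, 1):
    print(f"group {g} error: {(y_hat[group == g] != y[group == g]).mean():.3f}")
```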
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.