Fair Regression under Sample Selection Bias
- URL: http://arxiv.org/abs/2110.04372v1
- Date: Fri, 8 Oct 2021 20:35:12 GMT
- Title: Fair Regression under Sample Selection Bias
- Authors: Wei Du, Xintao Wu, Hanghang Tong
- Abstract summary: We develop a framework for fair regression under sample selection bias.
Our framework adopts the classic Heckman model for bias correction and Lagrange duality.
We conduct experiments on three real-world datasets and the experimental results demonstrate the approach's effectiveness.
- Score: 45.41696144835412
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent research on fair regression has focused on developing new fairness notions and approximation methods, since the target variable and even the sensitive attribute are continuous in the regression setting. However, all previous fair regression research assumed that the training and testing data are drawn from the same distribution. This assumption is often violated in the real world due to sample selection bias between the training and testing data. In this paper, we develop a framework for fair regression under sample selection bias, where the dependent variable values of a subset of the training samples are missing as a result of another hidden process. Our framework adopts the classic Heckman model for bias correction and Lagrange duality to achieve fairness in regression under a variety of fairness notions. The Heckman model describes the sample selection process and uses a derived variable called the Inverse Mills Ratio (IMR) to correct sample selection bias. We use fairness inequality and equality constraints to describe a variety of fairness notions and apply Lagrange duality theory to transform the primal problem into a dual convex optimization problem. For two popular fairness notions, mean difference and mean squared error difference, we derive explicit formulas that require no iterative optimization, and for Pearson correlation, we derive the conditions under which strong duality holds. We conduct experiments on three real-world datasets, and the results demonstrate the approach's effectiveness in terms of both utility and fairness metrics.
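As a rough illustration of the bias-correction component, the sketch below walks through the classic Heckman two-step procedure (a probit selection model, then the Inverse Mills Ratio added as an extra regressor) and reports a mean-difference fairness gap on the corrected predictions. The synthetic data, variable names, and the unconstrained OLS outcome step are illustrative assumptions; the paper instead couples the correction with Lagrangian fairness constraints.

```python
# Minimal sketch: Heckman two-step correction with the Inverse Mills Ratio
# (IMR), followed by a mean-difference fairness check. Data and names are
# synthetic/hypothetical, not from the paper.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000

# X = outcome covariates, W = selection covariates, s = binary sensitive
# attribute, y = continuous target.
X = rng.normal(size=(n, 3))
W = np.column_stack([X[:, :2], rng.normal(size=n)])   # selection uses an extra instrument
s = rng.integers(0, 2, size=n)
y = X @ np.array([1.0, -0.5, 0.3]) + 0.4 * s + rng.normal(size=n)

# Hidden selection process: y is only observed when selected == 1.
selected = (W @ np.array([0.8, 0.2, 1.0]) + rng.normal(size=n)) > 0

# Step 1: probit model of the selection indicator, fit on all samples.
probit = sm.Probit(selected.astype(int), sm.add_constant(W)).fit(disp=0)
z = sm.add_constant(W) @ probit.params
imr = norm.pdf(z) / norm.cdf(z)                        # Inverse Mills Ratio

# Step 2: outcome regression on the selected subsample, with the IMR as an
# extra regressor that absorbs the selection bias.
X_sel = sm.add_constant(np.column_stack([X[selected], imr[selected]]))
ols = sm.OLS(y[selected], X_sel).fit()
y_hat = ols.predict(X_sel)

# Mean difference (one of the fairness notions the paper constrains):
# gap between the average predictions of the two sensitive groups.
s_sel = s[selected]
mean_diff = y_hat[s_sel == 1].mean() - y_hat[s_sel == 0].mean()
print(f"selection-corrected coefficients: {ols.params.round(3)}")
print(f"mean difference across groups:    {mean_diff:.3f}")
```

In the paper's formulation this outcome step would additionally carry fairness inequality or equality constraints, handled through the Lagrangian dual rather than by unconstrained least squares.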
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependencies for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z) - Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z) - fAux: Testing Individual Fairness via Gradient Alignment [2.5329739965085785]
We describe a new approach for testing individual fairness that does not impose the requirements of prior approaches.
We show that the proposed method effectively identifies discrimination on both synthetic and real-world datasets.
arXiv Detail & Related papers (2022-10-10T21:27:20Z) - Predicting Out-of-Domain Generalization with Neighborhood Invariance [59.05399533508682]
We propose a measure of a classifier's output invariance in a local transformation neighborhood.
Our measure is simple to calculate, does not depend on the test point's true label, and can be applied even in out-of-domain (OOD) settings.
In experiments on benchmarks in image classification, sentiment analysis, and natural language inference, we demonstrate a strong and robust correlation between our measure and actual OOD generalization.
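A label-free invariance score of this kind can be sketched as follows; the `predict` and `perturb` functions and the agreement statistic are hypothetical stand-ins, not the paper's exact definition.

```python
# Hedged sketch of a "neighborhood invariance" score: how often a
# classifier's prediction survives small transformations of a test point.
import numpy as np

def neighborhood_invariance(predict, perturb, X_test, k=20, seed=0):
    """Average agreement between each point's prediction and the
    predictions on k randomly transformed copies of that point."""
    rng = np.random.default_rng(seed)
    base = predict(X_test)                   # (n,) predicted classes
    agree = np.zeros(len(X_test))
    for _ in range(k):
        neighbors = perturb(X_test, rng)     # one transformed copy per point
        agree += (predict(neighbors) == base)
    return (agree / k).mean()                # no true labels needed

# Toy usage with a threshold classifier and additive-noise "transformations".
if __name__ == "__main__":
    predict = lambda X: (X.sum(axis=1) > 0).astype(int)
    perturb = lambda X, rng: X + 0.1 * rng.normal(size=X.shape)
    X_test = np.random.default_rng(1).normal(size=(500, 5))
    print(f"invariance score: {neighborhood_invariance(predict, perturb, X_test):.3f}")
```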
arXiv Detail & Related papers (2022-07-05T14:55:16Z) - Fairness Transferability Subject to Bounded Distribution Shift [5.62716254065607]
Given an algorithmic predictor that is "fair" on some source distribution, will it still be fair on an unknown target distribution that differs from the source within some bound?
We study the transferability of statistical group fairness for machine learning predictors subject to bounded distribution shifts.
arXiv Detail & Related papers (2022-05-31T22:16:44Z) - Normalise for Fairness: A Simple Normalisation Technique for Fairness in Regression Machine Learning Problems [46.93320580613236]
We present a simple, yet effective method based on normalisation (FaiReg) for regression problems.
We compare it with two standard methods for fairness, namely data balancing and adversarial training.
The results show that FaiReg diminishes the effects of unfairness better than data balancing.
arXiv Detail & Related papers (2022-02-02T12:26:25Z) - FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed using influence functions on a validation set with sensitive attributes.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z) - Robust Fairness-aware Learning Under Sample Selection Bias [17.09665420515772]
We propose a framework for robust and fair learning under sample selection bias.
We develop two algorithms to handle sample selection bias when test data is both available and unavailable.
arXiv Detail & Related papers (2021-05-24T23:23:36Z) - Achieving Equalized Odds by Resampling Sensitive Attributes [13.114114427206678]
We present a flexible framework for learning predictive models that approximately satisfy the equalized odds notion of fairness.
A differentiable functional quantifying violations of this criterion is used as a penalty driving the model parameters towards equalized odds.
We develop a formal hypothesis test to detect whether a prediction rule violates this property, the first such test in the literature.
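A minimal sketch of such a differentiable penalty is given below; the gap-based functional and the PyTorch training snippet are illustrative assumptions, not the specific discrepancy functional or resampling scheme the paper proposes.

```python
# Hedged sketch: a differentiable penalty that, within each true-label
# class, discourages the mean soft prediction from depending on the
# sensitive attribute (a rough proxy for equalized odds).
import torch

def equalized_odds_penalty(scores, y, s):
    """Sum over label classes of the squared gap between group-wise mean scores."""
    penalty = scores.new_zeros(())
    for label in torch.unique(y):
        in_class = y == label
        g0 = scores[in_class & (s == 0)]
        g1 = scores[in_class & (s == 1)]
        if len(g0) and len(g1):
            penalty = penalty + (g0.mean() - g1.mean()) ** 2
    return penalty

# Usage inside a training step: total loss = task loss + lambda * penalty.
model = torch.nn.Linear(5, 1)
x = torch.randn(256, 5)
y = (x[:, 0] > 0).long()
s = torch.randint(0, 2, (256,))
scores = torch.sigmoid(model(x)).squeeze(-1)
loss = torch.nn.functional.binary_cross_entropy(scores, y.float()) \
       + 2.0 * equalized_odds_penalty(scores, y, s)
loss.backward()
```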
arXiv Detail & Related papers (2020-06-08T00:18:34Z) - On conditional versus marginal bias in multi-armed bandits [105.07190334523304]
The bias of the sample means of the arms in multi-armed bandits is an important issue in adaptive data analysis.
We characterize the sign of the conditional bias of monotone functions of the rewards, including the sample mean.
Our results hold for arbitrary conditioning events and leverage natural monotonicity properties of the data collection policy.
arXiv Detail & Related papers (2020-02-19T20:16:10Z)