Does enforcing fairness mitigate biases caused by subpopulation shift?
- URL: http://arxiv.org/abs/2011.03173v2
- Date: Tue, 26 Oct 2021 20:20:35 GMT
- Title: Does enforcing fairness mitigate biases caused by subpopulation shift?
- Authors: Subha Maity, Debarghya Mukherjee, Mikhail Yurochkin and Yuekai Sun
- Abstract summary: We study whether enforcing algorithmic fairness during training improves the performance of the trained model in the target domain.
We derive necessary and sufficient conditions under which enforcing algorithmic fairness leads to the Bayes model in the target domain.
- Score: 45.51706479763718
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many instances of algorithmic bias are caused by subpopulation shifts. For
example, ML models often perform worse on demographic groups that are
underrepresented in the training data. In this paper, we study whether
enforcing algorithmic fairness during training improves the performance of the
trained model in the target domain. On one hand, we conceive scenarios
in which enforcing fairness does not improve performance in the target domain.
In fact, it may even harm performance. On the other hand, we derive necessary
and sufficient conditions under which enforcing algorithmic fairness leads to
the Bayes model in the target domain. We also illustrate the practical
implications of our theoretical results in simulations and on real data.
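As a rough, self-contained illustration of the subpopulation-shift setting (an assumed toy example, not the paper's construction or its theoretical conditions), the sketch below trains a classifier on data in which one group is heavily underrepresented and evaluates it on a target distribution where the groups are balanced; a group-balanced reweighting serves as a crude stand-in for a fairness intervention. All function names and numbers are illustrative.

```python
# Toy subpopulation shift: the minority group is rare in the source (training)
# data but common in the target data. Illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, p_minority):
    """Draw features, labels, and group membership; the two groups follow
    different feature/label relationships."""
    g = rng.random(n) < p_minority
    x = rng.normal(loc=np.where(g, -1.0, 1.0)[:, None], scale=1.0, size=(n, 2))
    y = (x[:, 0] + np.where(g, -0.5, 0.5) * x[:, 1] > 0).astype(int)
    return x, y, g

# Source domain: minority underrepresented; target domain: groups balanced.
Xs, ys, gs = sample(5000, p_minority=0.05)
Xt, yt, gt = sample(5000, p_minority=0.50)

plain = LogisticRegression().fit(Xs, ys)

# Crude stand-in for a fairness intervention: reweight training examples so
# both groups contribute equally to the loss.
w = np.where(gs, 0.5 / gs.mean(), 0.5 / (1.0 - gs.mean()))
balanced = LogisticRegression().fit(Xs, ys, sample_weight=w)

for name, model in [("unconstrained", plain), ("group-balanced", balanced)]:
    print(f"{name}: target accuracy = {model.score(Xt, yt):.3f}")
```

Whether such an intervention helps, has no effect, or even hurts in the target domain is exactly what the paper characterizes; the toy above only makes the setting concrete.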
Related papers
- Learning Counterfactually Fair Models via Improved Generation with Neural Causal Models [0.0]
One of the main concerns while deploying machine learning models in real-world applications is fairness.
Existing methodologies for enforcing counterfactual fairness seem to have two limitations.
We propose employing Neural Causal Models for generating the counterfactual samples.
We also propose a new MMD-based regularizer term that explicitly enforces the counterfactual fairness conditions into the base model while training.
arXiv Detail & Related papers (2025-02-18T11:59:03Z)
- Towards Harmless Rawlsian Fairness Regardless of Demographic Prior [57.30787578956235]
We explore the potential for achieving fairness without compromising model utility when no prior demographic information is provided for the training set.
We propose a simple but effective method named VFair to minimize the variance of training losses inside the optimal set of empirical losses.
arXiv Detail & Related papers (2024-11-04T12:40:34Z)
- Towards Fair Graph Neural Networks via Graph Counterfactual [38.721295940809135]
Graph neural networks (GNNs) have shown great ability in representation learning on graphs, facilitating various tasks.
Recent works show that GNNs tend to inherit and amplify bias from the training data, raising concerns about the adoption of GNNs in high-stakes scenarios.
We propose a novel framework, CAF, which can select counterfactuals from the training data to avoid unrealistic counterfactuals.
arXiv Detail & Related papers (2023-07-10T23:28:03Z)
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
arXiv Detail & Related papers (2023-03-30T17:30:42Z)
- Consistent Diffusion Models: Mitigating Sampling Drift by Learning to be Consistent [97.64313409741614]
We propose to enforce a consistency property, which states that predictions of the model on its own generated data are consistent across time.
We show that our novel training objective yields state-of-the-art results for conditional and unconditional generation on CIFAR-10 and baseline improvements on AFHQ and FFHQ.
arXiv Detail & Related papers (2023-02-17T18:45:04Z)
- Fairness and Accuracy under Domain Generalization [10.661409428935494]
Concerns have arisen that machine learning algorithms may be biased against certain social groups.
Many approaches have been proposed to make ML models fair, but they typically rely on the assumption that data distributions in training and deployment are identical.
We study the transfer of both fairness and accuracy under domain generalization where the data at test time may be sampled from never-before-seen domains.
arXiv Detail & Related papers (2023-01-30T23:10:17Z)
- FETA: Fairness Enforced Verifying, Training, and Predicting Algorithms for Neural Networks [9.967054059014691]
We study the problem of verifying, training, and guaranteeing individual fairness of neural network models.
A popular approach for enforcing fairness is to translate a fairness notion into constraints over the parameters of the model.
We develop a counterexample-guided post-processing technique to provably enforce fairness constraints at prediction time.
arXiv Detail & Related papers (2022-06-01T15:06:11Z)
- Domain Adaptation meets Individual Fairness. And they get along [48.95808607591299]
We show that algorithmic fairness interventions can help machine learning models overcome distribution shifts.
In particular, we show that enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models.
arXiv Detail & Related papers (2022-05-01T16:19:55Z)
- KL Guided Domain Adaptation [88.19298405363452]
Domain adaptation is an important problem that is often needed for real-world applications.
A common approach in the domain adaptation literature is to learn a representation of the input that has the same distribution over the source and the target domains.
We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples (see the minimal sketch after this list).
arXiv Detail & Related papers (2021-06-14T22:24:23Z)
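To make the minibatch KL estimate mentioned in the last entry concrete, here is a minimal sketch under assumed conventions (not the paper's exact estimator): each domain's representation marginal is approximated by the minibatch mixture of a probabilistic encoder's Gaussians, and the KL term is estimated by Monte Carlo. The function and variable names are hypothetical.

```python
import torch
from torch.distributions import Normal

def minibatch_kl_estimate(mu_s, std_s, mu_t, std_t):
    """Hypothetical sketch: estimate KL(p_S(z) || p_T(z)) when each marginal
    is approximated by the minibatch mixture (1/B) * sum_i q(z | x_i)."""

    def log_mixture(z, mu, std):
        # Log-density of the minibatch mixture evaluated at each sample z_j.
        comp = Normal(mu.unsqueeze(0), std.unsqueeze(0))    # (1, B, d) components
        log_probs = comp.log_prob(z.unsqueeze(1)).sum(-1)   # (B, B): z_j under component i
        return torch.logsumexp(log_probs, dim=1) - torch.log(
            torch.tensor(float(mu.shape[0])))

    z = Normal(mu_s, std_s).rsample()                       # one sample per source example
    return (log_mixture(z, mu_s, std_s) - log_mixture(z, mu_t, std_t)).mean()

# Usage with (mean, std) batches from a hypothetical probabilistic encoder:
B, d = 64, 16
kl_term = minibatch_kl_estimate(torch.randn(B, d), torch.ones(B, d),
                                torch.randn(B, d) + 0.5, torch.ones(B, d))
```

The estimate is differentiable, so it can be added to a task loss and minimized jointly with the encoder; how closely this generic minibatch approximation matches the paper's estimator is an assumption here, not a claim from the paper.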
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.