Algorithmic Fairness Generalization under Covariate and Dependence Shifts Simultaneously
- URL: http://arxiv.org/abs/2311.13816v2
- Date: Tue, 21 May 2024 13:51:59 GMT
- Title: Algorithmic Fairness Generalization under Covariate and Dependence Shifts Simultaneously
- Authors: Chen Zhao, Kai Jiang, Xintao Wu, Haoliang Wang, Latifur Khan, Christan Grant, Feng Chen
- Abstract summary: We introduce a simple but effective approach that aims to learn a fair and invariant classifier.
By augmenting various synthetic data domains through the model, we learn a fair and invariant classifier in source domains.
This classifier can then be generalized to unknown target domains, maintaining both predictive performance and fairness.
- Score: 28.24666589680547
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The endeavor to preserve the generalization of a fair and invariant classifier across domains, especially in the presence of distribution shifts, becomes a significant and intricate challenge in machine learning. In response to this challenge, numerous effective algorithms have been developed with a focus on addressing the problem of fairness-aware domain generalization. These algorithms are designed to navigate various types of distribution shifts, with a particular emphasis on covariate and dependence shifts. In this context, covariate shift pertains to changes in the marginal distribution of input features, while dependence shift involves alterations in the joint distribution of the label variable and sensitive attributes. In this paper, we introduce a simple but effective approach that aims to learn a fair and invariant classifier by simultaneously addressing both covariate and dependence shifts across domains. We assert the existence of an underlying transformation model that can transform data from one domain to another, while preserving the semantics related to non-sensitive attributes and classes. By augmenting various synthetic data domains through the model, we learn a fair and invariant classifier in source domains. This classifier can then be generalized to unknown target domains, maintaining both predictive performance and fairness. Extensive empirical studies on four benchmark datasets demonstrate that our approach surpasses state-of-the-art methods.
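The two shift types can be made concrete with a toy simulation (a minimal sketch under assumed toy distributions, not the paper's transformation model; `sample_domain`, `dp_gap`, and all parameter values are hypothetical): covariate shift moves the marginal distribution of the features x, while dependence shift changes the correlation between the label y and the sensitive attribute a, which in turn changes how unfair a naive classifier trained without fairness constraints becomes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_domain(n, x_mean, rho):
    """Toy domain: `x_mean` induces a covariate shift (marginal of x moves),
    `rho` induces a dependence shift (strength of the y-a correlation)."""
    a = rng.integers(0, 2, size=n)                 # sensitive attribute
    y = np.where(rng.random(n) < rho, a, 1 - a)    # rho = P(y == a)
    x = x_mean + y[:, None] + 0.5 * rng.standard_normal((n, 2))
    return x, y, a

def dp_gap(pred, a):
    """Demographic-parity gap: |P(pred=1 | a=1) - P(pred=1 | a=0)|."""
    return abs(pred[a == 1].mean() - pred[a == 0].mean())

# Source domain with strong y-a dependence vs. a target domain where both
# the feature marginal and the y-a dependence have shifted.
xs, ys, as_ = sample_domain(5000, x_mean=0.0, rho=0.9)
xt, yt, at_ = sample_domain(5000, x_mean=2.0, rho=0.5)

# A naive threshold classifier tracks y, and y tracks a in the source domain,
# so its fairness violation is large there and nearly vanishes in the target.
pred_s = (xs.sum(axis=1) > xs.sum(axis=1).mean()).astype(int)
pred_t = (xt.sum(axis=1) > xt.sum(axis=1).mean()).astype(int)
print(dp_gap(pred_s, as_), dp_gap(pred_t, at_))  # source gap large, target gap near zero
```

This is why an invariant classifier must handle both shifts at once: a model that is fair under one y–a dependence structure can be unfair under another, even with the features held fixed.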
Related papers
- FEED: Fairness-Enhanced Meta-Learning for Domain Generalization [13.757379847454372]
Generalizing to out-of-distribution data while remaining aware of model fairness is a significant and challenging problem in meta-learning.
This paper introduces an approach to fairness-aware meta-learning that significantly enhances domain generalization capabilities.
arXiv Detail & Related papers (2024-11-02T17:34:33Z)
- Learning Fair Invariant Representations under Covariate and Correlation Shifts Simultaneously [10.450977234741524]
We introduce a novel approach that focuses on learning a fairness-aware domain-invariant predictor.
Our approach surpasses state-of-the-art methods with respect to model accuracy as well as both group and individual fairness.
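The group and individual fairness notions mentioned above can be sketched with standard metrics (illustrative definitions only, not the paper's exact criteria; `group_fairness_gap` and `individual_consistency` are hypothetical helper names): group fairness compares positive-prediction rates across sensitive groups, while individual fairness asks that similar individuals receive similar predictions.

```python
import numpy as np

def group_fairness_gap(pred, a):
    """Group fairness (demographic parity): difference in positive-prediction
    rates between the two sensitive groups."""
    return abs(pred[a == 1].mean() - pred[a == 0].mean())

def individual_consistency(pred, x, k=1):
    """Individual-fairness proxy: how often each point's prediction agrees
    with its k nearest neighbors' predictions."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]
    return (pred[nn] == pred[:, None]).mean()

# Tiny example: predictions track the feature, not the sensitive attribute.
x = np.array([[0.0], [0.1], [5.0], [5.1]])
pred = np.array([1, 1, 0, 0])
a = np.array([0, 1, 0, 1])
print(group_fairness_gap(pred, a))      # 0.0 -> no group disparity
print(individual_consistency(pred, x))  # 1.0 -> neighbors treated alike
```

A predictor can score well on one notion and poorly on the other, which is why the entry reports both.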
arXiv Detail & Related papers (2024-08-18T00:01:04Z)
- Learning Divergence Fields for Shift-Robust Graph Representations [73.11818515795761]
In this work, we propose a geometric diffusion model with learnable divergence fields for the challenging problem with interdependent data.
We derive a new learning objective through causal inference, which can guide the model to learn generalizable patterns of interdependence that are insensitive across domains.
arXiv Detail & Related papers (2024-06-07T14:29:21Z)
- Graphs Generalization under Distribution Shifts [11.963958151023732]
We introduce a novel framework, namely Graph Learning Invariant Domain genERation (GLIDER).
Our model outperforms baseline methods on node-level OOD generalization across domains in distribution shift on node features and topological structures simultaneously.
arXiv Detail & Related papers (2024-03-25T00:15:34Z)
- Identifiable Latent Causal Content for Domain Adaptation under Latent Covariate Shift [82.14087963690561]
Multi-source domain adaptation (MSDA) addresses the challenge of learning a label prediction function for an unlabeled target domain.
We present an intricate causal generative model by introducing latent noises across domains, along with a latent content variable and a latent style variable.
The proposed approach showcases exceptional performance and efficacy on both simulated and real-world datasets.
arXiv Detail & Related papers (2022-08-30T11:25:15Z)
- Improving Diversity with Adversarially Learned Transformations for Domain Generalization [81.26960899663601]
We present a novel framework that uses adversarially learned transformations (ALT) using a neural network to model plausible, yet hard image transformations.
We show that ALT can naturally work with existing diversity modules to produce highly distinct, and large transformations of the source domain leading to state-of-the-art performance.
arXiv Detail & Related papers (2022-06-15T18:05:24Z)
- Instrumental Variable-Driven Domain Generalization with Unobserved Confounders [53.735614014067394]
Domain generalization (DG) aims to learn from multiple source domains a model that can generalize well on unseen target domains.
We propose an instrumental variable-driven DG method (IV-DG) by removing the bias of the unobserved confounders with two-stage learning.
In the first stage, it learns the conditional distribution of the input features of one domain given input features of another domain.
In the second stage, it estimates the relationship by predicting labels with the learned conditional distribution.
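The two-stage idea resembles classical two-stage least squares, which can be sketched on a toy linear problem (a minimal numpy sketch under assumed linear-Gaussian data, not the IV-DG method itself; all variable names and coefficients are hypothetical): stage one models one set of features given another, stage two fits the label on the stage-one predictions, which screens out the unobserved confounder.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Toy setup: an unobserved confounder u biases the naive regression of y on
# x2; features x1 act like an instrument (drive x2, independent of u).
u = rng.standard_normal(n)                       # unobserved confounder
x1 = rng.standard_normal(n)                      # instrument-like features
x2 = 0.8 * x1 + u + 0.3 * rng.standard_normal(n)
y = 1.5 * x2 + 2.0 * u + 0.3 * rng.standard_normal(n)

# Stage 1: learn the conditional of x2 given x1 (here, a least-squares slope).
b1 = np.dot(x1, x2) / np.dot(x1, x1)
x2_hat = b1 * x1                                 # confounder-free part of x2

# Stage 2: regress y on the stage-1 predictions.
b_iv = np.dot(x2_hat, y) / np.dot(x2_hat, x2_hat)
b_naive = np.dot(x2, y) / np.dot(x2, x2)
print(b_iv, b_naive)  # b_iv recovers the true effect 1.5; b_naive is biased upward
```

Because u never enters x2_hat, the stage-two estimate approximates the true effect of x2 on y, while the naive regression absorbs the confounder's contribution.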
arXiv Detail & Related papers (2021-10-04T13:32:57Z)
- Contrastive ACE: Domain Generalization Through Alignment of Causal Mechanisms [34.99779761100095]
Domain generalization aims to learn knowledge invariant across different distributions.
We consider the causal invariance of the average causal effect of the features to the labels.
arXiv Detail & Related papers (2021-06-02T04:01:22Z)
- A Bit More Bayesian: Domain-Invariant Learning with Uncertainty [111.22588110362705]
Domain generalization is challenging due to the domain shift and the uncertainty caused by the inaccessibility of target domain data.
In this paper, we address both challenges with a probabilistic framework based on variational Bayesian inference.
We derive domain-invariant representations and classifiers, which are jointly established in a two-layer Bayesian neural network.
arXiv Detail & Related papers (2021-05-09T21:33:27Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes that of the target domain.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.