Disentangled Representation with Causal Constraints for Counterfactual
Fairness
- URL: http://arxiv.org/abs/2208.09147v2
- Date: Sat, 16 Dec 2023 01:33:12 GMT
- Title: Disentangled Representation with Causal Constraints for Counterfactual
Fairness
- Authors: Ziqi Xu and Jixue Liu and Debo Cheng and Jiuyong Li and Lin Liu and Ke
Wang
- Abstract summary: This work theoretically demonstrates that using structured representations enables downstream predictive models to achieve counterfactual fairness.
We propose the Counterfactual Fairness Variational AutoEncoder (CF-VAE) to obtain structured representations with respect to domain knowledge.
The experimental results show that the proposed method achieves better fairness and accuracy performance than the benchmark fairness methods.
- Score: 25.114619307838602
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Much research has been devoted to the problem of learning fair
representations; however, existing methods do not explicitly model the
relationships between latent representations. In many real-world
applications, there may be causal relationships between latent
representations. Furthermore, most fair
representation learning methods focus on group-level fairness and are based on
correlations, ignoring the causal relationships underlying the data. In this
work, we theoretically demonstrate that using structured representations
enables downstream predictive models to achieve counterfactual fairness, and
then we propose the Counterfactual Fairness Variational AutoEncoder (CF-VAE) to
obtain structured representations with respect to domain knowledge. The
experimental results show that the proposed method achieves better fairness and
accuracy performance than the benchmark fairness methods.
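To make the mechanism concrete, below is a minimal, hypothetical PyTorch sketch of a VAE whose latent code is split into a sensitive-related block and a remainder, with a flip-based counterfactual penalty on a downstream predictor. The class names, architecture, and the exact form of the penalty are illustrative assumptions, not the paper's CF-VAE objective (which imposes causal constraints derived from domain knowledge).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuredVAE(nn.Module):
    """Toy VAE whose latent code is split into a sensitive-related block
    z_a and a remaining block z_r; the classifier sees only z_r."""
    def __init__(self, x_dim, z_a_dim=4, z_r_dim=8):
        super().__init__()
        self.z_a_dim = z_a_dim
        z_dim = z_a_dim + z_r_dim
        self.enc = nn.Sequential(nn.Linear(x_dim + 1, 64), nn.ReLU())
        self.mu = nn.Linear(64, z_dim)
        self.logvar = nn.Linear(64, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, x_dim))
        self.clf = nn.Linear(z_r_dim, 1)

    def forward(self, x, a):
        h = self.enc(torch.cat([x, a], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        y_logit = self.clf(z[:, self.z_a_dim:])  # prediction ignores z_a
        return self.dec(z), y_logit, mu, logvar

def cf_vae_loss(model, x, a, y, lam=1.0):
    x_hat, y_logit, mu, logvar = model(x, a)
    rec = F.mse_loss(x_hat, x)                                      # reconstruction
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL term
    pred = F.binary_cross_entropy_with_logits(y_logit, y)          # task loss
    # counterfactual proxy: re-encode with the flipped sensitive attribute
    # and require the prediction to stay the same
    _, y_cf_logit, _, _ = model(x, 1.0 - a)
    cf = F.mse_loss(torch.sigmoid(y_logit), torch.sigmoid(y_cf_logit))
    return rec + kld + pred + lam * cf

# Toy usage on random data with a binary sensitive attribute.
model = StructuredVAE(x_dim=10)
x = torch.randn(16, 10)
a = torch.randint(0, 2, (16, 1)).float()
y = torch.randint(0, 2, (16, 1)).float()
cf_vae_loss(model, x, a, y).backward()
```

Because the predictor reads only the non-sensitive block, the counterfactual penalty mainly regularizes how the encoder routes sensitive information between the two blocks.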
Related papers
- Rethinking Fair Representation Learning for Performance-Sensitive Tasks [19.40265690963578]
We use causal reasoning to define and formalise different sources of dataset bias.
We run experiments across a range of medical modalities to examine the performance of fair representation learning under distribution shifts.
arXiv Detail & Related papers (2024-10-05T11:01:16Z) - Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results on multiple datasets offer compelling support for our theoretical assertions.
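The workhorse of quantile regression is the pinball loss; a generic sketch follows (the toy linear model and tensor shapes are assumptions, only the loss itself is standard):

```python
import torch

def pinball_loss(pred, target, tau):
    """Asymmetric loss whose minimizer is the tau-th conditional quantile."""
    err = target - pred
    return torch.mean(torch.maximum(tau * err, (tau - 1.0) * err))

# Toy usage: fit the 0.9 conditional quantile with a linear model.
net = torch.nn.Linear(3, 1)
x, y = torch.randn(32, 3), torch.randn(32, 1)
pinball_loss(net(x), y, tau=0.9).backward()
```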
arXiv Detail & Related papers (2023-06-09T08:30:51Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
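As a rough illustration of optimizing both criteria at once, the sketch below combines a group-level gap penalty with a flip-based counterfactual consistency penalty in a single objective. The weights and the flip-based counterfactual proxy are assumptions for illustration; this is not DualFair's contrastive self-supervised objective.

```python
import torch
import torch.nn.functional as F

def dual_fairness_loss(model, x, a, y, w_grp=1.0, w_cf=1.0):
    """Task loss plus a group-fairness gap and a counterfactual gap.
    Assumes a binary sensitive attribute a with both groups in the batch."""
    p = torch.sigmoid(model(torch.cat([x, a], dim=1)))
    task = F.binary_cross_entropy(p, y)
    # group fairness: match mean predictions across the two groups
    g = a.squeeze(1)
    grp_gap = (p[g == 0].mean() - p[g == 1].mean()).abs()
    # counterfactual fairness proxy: flip a and compare predictions
    p_cf = torch.sigmoid(model(torch.cat([x, 1.0 - a], dim=1)))
    cf_gap = (p - p_cf).abs().mean()
    return task + w_grp * grp_gap + w_cf * cf_gap

# Toy usage with a linear scorer on [x, a].
net = torch.nn.Linear(11, 1)
x = torch.randn(16, 10)
a = torch.randint(0, 2, (16, 1)).float()
y = torch.randint(0, 2, (16, 1)).float()
dual_fairness_loss(net, x, a, y).backward()
```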
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Learning Informative Representation for Fairness-aware Multivariate
Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables is widespread in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z) - Fairness Increases Adversarial Vulnerability [50.90773979394264]
This paper shows the existence of a dichotomy between fairness and robustness, and analyzes when achieving fairness decreases the model robustness to adversarial samples.
Experiments on non-linear models and different architectures validate the theoretical findings in multiple vision domains.
The paper proposes a simple, yet effective, solution to construct models achieving good tradeoffs between fairness and robustness.
arXiv Detail & Related papers (2022-11-21T19:55:35Z) - Conditional Supervised Contrastive Learning for Fair Text Classification [59.813422435604025]
We study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning.
Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives.
arXiv Detail & Related papers (2022-05-23T17:38:30Z) - Fair Contrastive Learning for Facial Attribute Classification [25.436462696033846]
We propose a new Fair Supervised Contrastive Loss (FSCL) for fair visual representation learning.
In this paper, we analyze, for the first time, the unfairness caused by supervised contrastive learning.
Our method is robust to the intensity of data bias and works effectively under incomplete supervision.
arXiv Detail & Related papers (2022-03-30T11:16:18Z) - Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
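A minimal sketch of that mechanism, a supervised contrastive loss treating same-label instances as positives (a generic illustration under assumed tensor shapes, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def sup_con_loss(emb, labels, temp=0.1):
    """Pulls together embeddings that share a class label."""
    emb = F.normalize(emb, dim=1)
    sim = emb @ emb.t() / temp                       # pairwise similarities
    self_mask = torch.eye(len(emb), dtype=torch.bool, device=emb.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    logits = sim.masked_fill(self_mask, float('-inf'))  # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # mean log-probability of the positives for each anchor
    pos_log_prob = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob))
    n_pos = pos_mask.sum(dim=1).clamp(min=1)
    return -(pos_log_prob.sum(dim=1) / n_pos).mean()

# Toy usage: 8 embeddings, 2 classes.
emb = torch.randn(8, 16, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])
sup_con_loss(emb, labels).backward()
```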
arXiv Detail & Related papers (2021-09-22T10:47:51Z) - Fair Representation Learning using Interpolation Enabled Disentanglement [9.043741281011304]
We propose a novel method to address two key questions: (a) can we simultaneously learn fair disentangled representations while ensuring the utility of the learned representations for downstream tasks, and (b) can we provide theoretical insights into when the proposed approach will be both fair and accurate?
To address the former, we propose the method FRIED, Fair Representation learning using Interpolation Enabled Disentanglement.
arXiv Detail & Related papers (2021-07-31T17:32:12Z) - Learning Smooth and Fair Representations [24.305894478899948]
This paper explores the ability to preemptively remove the correlations between features and sensitive attributes by mapping features to a fair representation space.
Empirically, we find that smoothing the representation distribution provides generalization guarantees for fairness certificates.
We do not observe that smoothing the representation distribution degrades downstream-task accuracy compared to state-of-the-art fair representation learning methods.
arXiv Detail & Related papers (2020-06-15T21:51:50Z)