Learning Fair Representation via Distributional Contrastive
Disentanglement
- URL: http://arxiv.org/abs/2206.08743v1
- Date: Fri, 17 Jun 2022 12:58:58 GMT
- Title: Learning Fair Representation via Distributional Contrastive
Disentanglement
- Authors: Changdae Oh, Heeji Won, Junhyuk So, Taero Kim, Yewon Kim, Hosik Choi,
Kyungwoo Song
- Abstract summary: Learning fair representation is crucial for achieving fairness or debiasing sensitive information.
We propose a new approach, learning FAir Representation via distributional CONtrastive Variational AutoEncoder (FarconVAE).
We show superior performance on fairness, pretrained model debiasing, and domain generalization tasks from various modalities.
- Score: 9.577369164287813
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning fair representation is crucial for achieving fairness or debiasing
sensitive information. Most existing works rely on adversarial representation
learning to inject some invariance into representation. However, adversarial
learning methods are known to suffer from relatively unstable training, and
this might harm the balance between fairness and predictiveness of
representation. We propose a new approach, learning FAir Representation via
distributional CONtrastive Variational AutoEncoder (FarconVAE), which induces
the latent space to be disentangled into sensitive and nonsensitive parts. We
first construct the pair of observations with different sensitive attributes
but with the same labels. Then, FarconVAE enforces the non-sensitive latents of
a pair to be close, while pushing the sensitive latents far from each other and
from the non-sensitive latents, by contrasting their distributions. We provide a new
type of contrastive loss motivated by Gaussian and Student-t kernels for
distributional contrastive learning with theoretical analysis. Besides, we
adopt a new swap-reconstruction loss to boost the disentanglement further.
FarconVAE shows superior performance on fairness, pretrained model debiasing,
and domain generalization tasks from various modalities, including tabular,
image, and text.
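The distributional contrast described above can be sketched in code. This is a minimal numpy illustration, not the paper's exact formulation: the names `kl_gauss`, `gaussian_kernel`, `student_t_kernel`, and `contrastive_loss` are hypothetical, and FarconVAE's actual kernel and loss forms differ in detail. Each latent is a diagonal Gaussian `(mu, var)`; the loss pulls the pair's non-sensitive latents together and pushes sensitive latents apart and away from the non-sensitive one.

```python
import numpy as np

def kl_gauss(mu1, var1, mu2, var2):
    # KL divergence between two diagonal Gaussians, summed over dimensions.
    return 0.5 * np.sum(var1 / var2 + (mu2 - mu1) ** 2 / var2
                        - 1.0 + np.log(var2 / var1))

def gaussian_kernel(d):
    # Gaussian-style kernel on a divergence: identical distributions -> 1.
    return np.exp(-d)

def student_t_kernel(d, nu=1.0):
    # Student-t kernel: heavier tail, gentler repulsion at large divergence.
    return (1.0 + d / nu) ** (-(nu + 1.0) / 2.0)

def contrastive_loss(z_n, z_n_pair, z_s, z_s_pair, kernel=gaussian_kernel):
    """Distributional contrastive loss sketch.

    Each z_* is a (mu, var) tuple of a diagonal Gaussian latent.
    Non-sensitive latents of the pair are pulled together; sensitive
    latents are pushed apart and away from the non-sensitive latent.
    """
    attract = kernel(kl_gauss(*z_n, *z_n_pair))   # want close to 1
    repel_ss = kernel(kl_gauss(*z_s, *z_s_pair))  # want close to 0
    repel_sn = kernel(kl_gauss(*z_s, *z_n))       # want close to 0
    return (-np.log(attract + 1e-8)
            - np.log(1.0 - repel_ss + 1e-8)
            - np.log(1.0 - repel_sn + 1e-8))
```

When the non-sensitive latents of a pair match and the sensitive latents are well separated, all three terms are near zero; misaligned pairs incur a large penalty.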
Related papers
- Synergistic Anchored Contrastive Pre-training for Few-Shot Relation Extraction [4.7220779071424985]
Few-shot Relation Extraction (FSRE) aims to extract facts from a sparse set of labeled corpora.
Recent studies have shown promising results in FSRE by employing Pre-trained Language Models.
We introduce a novel synergistic anchored contrastive pre-training framework.
arXiv Detail & Related papers (2023-12-19T10:16:24Z)
- Fairness Explainability using Optimal Transport with Applications in Image Classification [0.46040036610482665]
We propose a comprehensive approach to uncover the causes of discrimination in Machine Learning applications.
We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions.
This allows us to derive a cohesive system which uses the enforced fairness to measure each feature's influence on the bias.
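The Wasserstein-barycenter repair mentioned above can be sketched for one-dimensional scores, where the barycenter's quantile function is simply the weighted average of the groups' quantile functions. This is a hypothetical illustration, not the paper's implementation; `barycenter_repair` and the toy data are assumptions.

```python
import numpy as np

def barycenter_repair(scores, group):
    """Map each group's score distribution onto their 1-D Wasserstein
    barycenter, so both groups end up with the same score distribution."""
    qs = np.linspace(0.0, 1.0, 101)
    groups = np.unique(group)
    quantiles = {g: np.quantile(scores[group == g], qs) for g in groups}
    weights = {g: np.mean(group == g) for g in groups}
    # Barycenter quantile function = weighted average of group quantiles.
    bary = sum(weights[g] * quantiles[g] for g in groups)
    repaired = scores.astype(float).copy()
    for g in groups:
        s = scores[group == g]
        # Mid-rank of each score within its own group, in (0, 1).
        ranks = (np.argsort(np.argsort(s)) + 0.5) / len(s)
        repaired[group == g] = np.interp(ranks, qs, bary)
    return repaired

# Hypothetical example: two groups with shifted score distributions.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(2.0, 1.0, 300)])
group = np.array([0] * 300 + [1] * 300)
repaired = barycenter_repair(scores, group)
gap_before = abs(scores[group == 0].mean() - scores[group == 1].mean())
gap_after = abs(repaired[group == 0].mean() - repaired[group == 1].mean())
```

After repair both groups share the barycenter distribution, so group-wise score gaps vanish while within-group ordering is preserved.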
arXiv Detail & Related papers (2023-08-22T00:10:23Z)
- Fair Contrastive Learning for Facial Attribute Classification [25.436462696033846]
We propose a new Fair Supervised Contrastive Loss (FSCL) for fair visual representation learning.
In this paper, we for the first time analyze unfairness caused by supervised contrastive learning.
Our method is robust to the intensity of data bias and effectively works in incomplete supervised settings.
arXiv Detail & Related papers (2022-03-30T11:16:18Z)
- Understanding Contrastive Learning Requires Incorporating Inductive Biases [64.56006519908213]
Recent attempts to theoretically explain the success of contrastive learning on downstream tasks prove guarantees that depend on properties of the augmentations and on the value of the contrastive loss of representations.
We demonstrate that such analyses ignore the inductive biases of the function class and training algorithm, provably leading to vacuous guarantees in some settings.
arXiv Detail & Related papers (2022-02-28T18:59:20Z)
- Fair Interpretable Learning via Correction Vectors [68.29997072804537]
We propose a new framework for fair representation learning centered around the learning of "correction vectors"
The corrections are then simply added to the original features, and can therefore be analyzed as an explicit penalty or bonus applied to each feature.
We show experimentally that a fair representation learning problem constrained in such a way does not impact performance.
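The correction-vector idea above lends itself to a short sketch: a per-sample correction is added to the raw features, so it can be inspected feature by feature. This is a deliberately simplified, hypothetical correction (group-mean matching), not the learned correction vectors of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 6 samples, 3 features, with a binary sensitive group.
X = rng.normal(size=(6, 3))
group = np.array([0, 1, 0, 1, 1, 0])

# A simple correction: shift each group's feature means to the overall mean,
# so the corrected features carry less group information.
overall_mean = X.mean(axis=0)
corrections = np.zeros_like(X)
for g in (0, 1):
    corrections[group == g] = overall_mean - X[group == g].mean(axis=0)

# Corrections are simply added to the original features; because each
# correction is an explicit vector, it reads as a per-feature penalty/bonus.
X_fair = X + corrections
```

The interpretability claim is that the correction vector itself, not an opaque latent code, explains how each feature was adjusted.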
arXiv Detail & Related papers (2022-01-17T10:59:33Z)
- Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks [79.13089902898848]
Self-supervised learning is a powerful paradigm for representation learning on unlabelled images.
We show that different tasks in computer vision require features to encode different (in)variances.
arXiv Detail & Related papers (2021-11-22T18:16:35Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
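The mechanism described here, pulling together instances that share a class label, is the supervised contrastive pattern. Below is a minimal numpy sketch of such a loss under stated assumptions; `sup_contrastive_loss` and the toy embeddings are hypothetical, not the paper's exact objective.

```python
import numpy as np

def sup_contrastive_loss(z, labels, tau=0.5):
    """Supervised contrastive loss sketch: instances sharing a class label
    are encouraged to have similar (high cosine similarity) representations."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine-similarity space
    sim = (z @ z.T) / tau
    n = len(labels)
    total, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # no same-label partner for this anchor
        denom = sum(np.exp(sim[i, j]) for j in range(n) if j != i)
        total += -np.mean([np.log(np.exp(sim[i, j]) / denom) for j in positives])
        count += 1
    return total / max(count, 1)

# Toy embeddings that cluster in two directions: the loss is lower when the
# labels agree with the clusters than when they cut across them.
z = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
aligned = sup_contrastive_loss(z, np.array([0, 0, 1, 1]))
misaligned = sup_contrastive_loss(z, np.array([0, 1, 0, 1]))
```

Unlike adversarial debiasing, this objective is a plain log-softmax over similarities, which is part of why such methods are cheaper and more stable to optimise.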
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Adversarial Stacked Auto-Encoders for Fair Representation Learning [1.061960673667643]
We propose a new fair representation learning approach that leverages different levels of representation of data to tighten the fairness bounds of the learned representation.
Our results show that stacking different auto-encoders and enforcing fairness at different latent spaces result in an improvement of fairness compared to other existing approaches.
arXiv Detail & Related papers (2021-07-27T13:49:18Z)
- FairMixRep: Self-supervised Robust Representation Learning for Heterogeneous Data with Fairness constraints [1.1661238776379117]
We address the problem of Mixed Space Fair Representation learning from an unsupervised perspective.
We learn a universal representation for mixed-space data, which the authors present as a timely and novel research contribution.
arXiv Detail & Related papers (2020-10-07T07:23:02Z)
- Fairness by Learning Orthogonal Disentangled Representations [50.82638766862974]
We propose a novel disentanglement approach to the invariant representation problem.
We enforce the meaningful representation to be agnostic to sensitive information via an entropy-based objective.
The proposed approach is evaluated on five publicly available datasets.
arXiv Detail & Related papers (2020-03-12T11:09:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.