README: REpresentation learning by fairness-Aware Disentangling MEthod
- URL: http://arxiv.org/abs/2007.03775v1
- Date: Tue, 7 Jul 2020 20:16:49 GMT
- Title: README: REpresentation learning by fairness-Aware Disentangling MEthod
- Authors: Sungho Park, Dohyung Kim, Sunhee Hwang, Hyeran Byun
- Abstract summary: We design the Fairness-aware Disentangling Variational AutoEncoder (FD-VAE) for fair representation learning.
This network disentangles the latent space into three subspaces with a decorrelation loss that encourages each subspace to contain independent information.
After representation learning, the disentangled representation is leveraged for fairer downstream classification by excluding the subspace containing the protected attribute information.
- Score: 23.93330434580667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fair representation learning aims to encode representations that are
invariant with respect to a protected attribute, such as gender or age. In this
paper, we design the Fairness-aware Disentangling Variational AutoEncoder
(FD-VAE) for fair representation learning. This network disentangles the latent
space into three subspaces with a decorrelation loss that encourages each
subspace to contain independent information: 1) target attribute information,
2) protected attribute information, 3) mutual attribute information. After
representation learning, the disentangled representation is leveraged for
fairer downstream classification by excluding the subspace containing the
protected attribute information. We demonstrate the effectiveness of our model
through extensive experiments on the CelebA and UTK Face datasets. Our method
outperforms the previous state-of-the-art method by large margins in terms of
equal opportunity and equalized odds.
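For intuition only, here is a minimal PyTorch sketch of the mechanism the abstract describes: a VAE latent vector split into three subspaces, with a cross-covariance penalty standing in for the decorrelation loss. The architecture, subspace sizes, and the cross-covariance form are all assumptions, not the authors' implementation.

```python
# Minimal FD-VAE-style sketch (assumptions: layer sizes, subspace sizes, and a
# cross-covariance decorrelation penalty; not the authors' implementation).
import torch
import torch.nn as nn

class FDVAESketch(nn.Module):
    def __init__(self, in_dim=128, dims=(16, 16, 16)):
        super().__init__()
        self.dims = list(dims)  # target / protected / mutual subspace sizes
        z_dim = sum(dims)
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, z_dim)
        self.logvar = nn.Linear(64, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        z_t, z_p, z_m = torch.split(z, self.dims, dim=1)      # three subspaces
        return self.dec(z), mu, logvar, (z_t, z_p, z_m)

def decorrelation_loss(z_a, z_b):
    """Squared cross-covariance between two subspaces, one plausible form of
    the decorrelation loss (the paper's exact formulation may differ)."""
    a = z_a - z_a.mean(dim=0)
    b = z_b - z_b.mean(dim=0)
    cov = a.t() @ b / (a.size(0) - 1)
    return (cov ** 2).mean()
```

For the downstream classifier, only z_t and z_m would be used, dropping z_p as described above.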
Related papers
- Enforcing Conditional Independence for Fair Representation Learning and Causal Image Generation [13.841888171417017]
Conditional independence (CI) constraints are critical for defining and evaluating fairness in machine learning.
We introduce a new training paradigm that can be applied to any encoder architecture.
arXiv Detail & Related papers (2024-04-21T23:34:45Z)
- Distributionally Generative Augmentation for Fair Facial Attribute Classification [69.97710556164698]
Facial Attribute Classification (FAC) holds substantial promise in widespread applications.
FAC models trained by traditional methodologies can be unfair, exhibiting accuracy inconsistencies across varied data subpopulations.
This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.
arXiv Detail & Related papers (2024-03-11T10:50:53Z)
- Improving Fairness using Vision-Language Driven Image Augmentation [60.428157003498995]
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with unrelated downstream-task attributes.
This paper proposes a method to mitigate these correlations to improve fairness.
arXiv Detail & Related papers (2023-11-02T19:51:10Z)
- SoFaiR: Single Shot Fair Representation Learning [24.305894478899948]
SoFaiR is a single-shot fair representation learning method that, with one trained model, generates many points on the fairness-information plane.
We find on three datasets that SoFaiR achieves similar fairness-information trade-offs as its multi-shot counterparts.
arXiv Detail & Related papers (2022-04-26T19:31:30Z)
- Semi-FairVAE: Semi-supervised Fair Representation Learning with Adversarial Variational Autoencoder [92.67156911466397]
We propose a semi-supervised fair representation learning approach based on an adversarial variational autoencoder.
We use a bias-aware model to capture inherent bias information on the sensitive attribute.
We also use a bias-free model to learn debiased fair representations, applying adversarial learning to remove bias information from them.
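The "adversarial learning to remove bias information" step is often implemented with a gradient reversal layer feeding a sensitive-attribute discriminator. The sketch below shows that generic pattern as an assumed stand-in; it is not this paper's specific architecture.

```python
# Generic gradient-reversal sketch for adversarial bias removal (an assumed
# stand-in for the paper's adversarial component).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None  # flip gradients into the encoder

encoder = nn.Linear(64, 32)
adv_head = nn.Linear(32, 2)  # tries to predict the sensitive attribute

x = torch.randn(8, 64)
s = torch.randint(0, 2, (8,))            # sensitive attribute labels
z = encoder(x)
adv_loss = F.cross_entropy(adv_head(GradReverse.apply(z, 1.0)), s)
adv_loss.backward()  # the encoder is pushed to hide s from the head
```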
arXiv Detail & Related papers (2022-04-01T15:57:47Z)
- Learning Fair Representations via Rate-Distortion Maximization [16.985698188471016]
We present Fairness-aware Rate Maximization (FaRM), which removes demographic information by making representations of instances belonging to the same protected attribute class uncorrelated, using the rate-distortion function.
FaRM achieves state-of-the-art performance on several datasets, and the learned representations leak significantly less protected attribute information under attack by a non-linear probing network.
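FaRM's rate-distortion objective is in the spirit of the log-determinant coding rate used in MCR²-style methods; the sketch below maximizes the per-group rate under that assumption (the paper's exact objective may differ).

```python
# Coding-rate sketch (assumed log-det rate function; FaRM's exact objective
# may differ in details such as normalization and constants).
import torch

def coding_rate(z, eps=0.5):
    """R(Z) = 1/2 logdet(I + d/(n*eps^2) * Z^T Z) for an (n, d) batch Z."""
    n, d = z.shape
    gram = z.t() @ z * (d / (n * eps ** 2))
    return 0.5 * torch.logdet(torch.eye(d, device=z.device) + gram)

def per_group_rate_loss(z, groups):
    """Maximize the rate within each protected group so that same-group
    representations become uncorrelated (hence the negative sign)."""
    loss = torch.zeros((), device=z.device)
    for g in groups.unique():
        loss = loss - coding_rate(z[groups == g])
    return loss
```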
arXiv Detail & Related papers (2022-01-31T19:00:52Z)
- Adversarial Stacked Auto-Encoders for Fair Representation Learning [1.061960673667643]
We propose a new fair representation learning approach that leverages multiple levels of data representation to tighten the fairness bounds of the learned representation.
Our results show that stacking different auto-encoders and enforcing fairness at different latent spaces improves fairness compared to other existing approaches.
arXiv Detail & Related papers (2021-07-27T13:49:18Z)
- Conditional Contrastive Learning: Removing Undesirable Information in Self-Supervised Representations [108.29288034509305]
We develop conditional contrastive learning to remove undesirable information in self-supervised representations.
We demonstrate empirically that our methods can successfully learn self-supervised representations for downstream tasks.
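One common reading of "conditional" contrastive learning is to restrict InfoNCE negatives to samples that share the anchor's value of the variable being removed; here is a minimal sketch under that assumption (function name and temperature are illustrative).

```python
# Attribute-conditional InfoNCE sketch (assumed simplification: negatives are
# masked to samples sharing the anchor's value of the unwanted variable).
import torch
import torch.nn.functional as F

def conditional_info_nce(z1, z2, attr, tau=0.1):
    """z1, z2: two views, shape (n, d); attr: (n,) unwanted-variable values."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                       # (n, n) similarities
    same = attr.unsqueeze(0) == attr.unsqueeze(1)
    sim = sim.masked_fill(~same, float('-inf'))   # keep same-attribute pairs only
    labels = torch.arange(z1.size(0), device=z1.device)  # positives: diagonal
    return F.cross_entropy(sim, labels)
```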
arXiv Detail & Related papers (2021-06-05T10:51:26Z)
- Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection [51.041763676948705]
Iterative Null-space Projection (INLP) is a novel method for removing information from neural representations.
We show that our method is able to mitigate bias in word embeddings, as well as to increase fairness in a setting of multi-class classification.
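INLP itself is straightforward to sketch: repeatedly fit a linear probe for the protected attribute and project the representations onto the probe's nullspace. A minimal version follows (the probe choice and iteration count are assumptions).

```python
# Minimal INLP sketch: iterate probe fitting and nullspace projection
# (probe model and iteration count are assumptions).
import numpy as np
from sklearn.linear_model import LogisticRegression

def inlp(X, s, n_iters=10):
    """X: (n, d) representations; s: (n,) protected attribute labels."""
    P = np.eye(X.shape[1])
    for _ in range(n_iters):
        probe = LogisticRegression(max_iter=1000).fit(X @ P, s)
        W = probe.coef_                                    # (k, d) directions
        basis = np.linalg.svd(W, full_matrices=False)[2]   # row space of W
        P = (np.eye(X.shape[1]) - basis.T @ basis) @ P     # project it out
    return X @ P, P
```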
arXiv Detail & Related papers (2020-04-16T14:02:50Z)
- Fairness by Learning Orthogonal Disentangled Representations [50.82638766862974]
We propose a novel disentanglement approach to the invariant representation problem.
We enforce the meaningful representation to be agnostic to sensitive information by entropy maximization.
The proposed approach is evaluated on five publicly available datasets.
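The "agnostic to sensitive information by entropy maximization" idea is commonly implemented by pushing a sensitive-attribute head toward uniform predictions on the target representation; a minimal sketch of that assumed reading:

```python
# Entropy-maximization sketch: penalize confident sensitive-attribute
# predictions so the representation carries little sensitive information.
import torch.nn.functional as F

def entropy_penalty(sensitive_logits):
    """Returns the negative entropy of p(s|z); minimizing it maximizes
    entropy, i.e. drives the head toward uniform predictions."""
    log_p = F.log_softmax(sensitive_logits, dim=1)
    return (log_p.exp() * log_p).sum(dim=1).mean()
```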
arXiv Detail & Related papers (2020-03-12T11:09:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.