SoFaiR: Single Shot Fair Representation Learning
- URL: http://arxiv.org/abs/2204.12556v1
- Date: Tue, 26 Apr 2022 19:31:30 GMT
- Title: SoFaiR: Single Shot Fair Representation Learning
- Authors: Xavier Gitiaux and Huzefa Rangwala
- Abstract summary: SoFaiR is a single-shot fair representation learning method that generates many points on the fairness-information plane with one trained model.
We find on three datasets that SoFaiR achieves fairness-information trade-offs similar to those of its multi-shot counterparts.
- Score: 24.305894478899948
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To avoid discriminatory uses of their data, organizations can learn to map
them into a representation that filters out information related to sensitive
attributes. However, all existing methods in fair representation learning
generate a fairness-information trade-off. To achieve different points on the
fairness-information plane, one must train different models. In this paper, we
first demonstrate that fairness-information trade-offs are fully characterized
by rate-distortion trade-offs. Then, we use this key result to propose SoFaiR,
a single-shot fair representation learning method that generates, with one
trained model, many points on the fairness-information plane. Besides its
computational savings, our single-shot approach is, to the best of our
knowledge, the first fair representation learning method that explains what
information is affected by changes in the fairness/distortion properties of
the representation. Empirically, we find on three datasets that SoFaiR achieves
fairness-information trade-offs similar to those of its multi-shot counterparts.
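The single-shot mechanism can be pictured with a short sketch. The snippet below is a hypothetical illustration rather than the authors' implementation: one trained encoder exposes a rate knob that masks part of the latent code at inference time, so sweeping the knob traces several points on the fairness-information plane without retraining.

```python
import torch
import torch.nn as nn

class SingleShotEncoder(nn.Module):
    """Toy encoder with a test-time rate knob (illustrative only)."""

    def __init__(self, in_dim: int, code_dim: int = 16):
        super().__init__()
        self.code_dim = code_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, code_dim)
        )

    def forward(self, x: torch.Tensor, rate: int) -> torch.Tensor:
        z = self.net(x)
        # Zero out all but the first `rate` latent dimensions: a lower
        # rate retains less information about x, which by the paper's
        # rate-distortion argument moves the code toward fairness.
        mask = torch.zeros(self.code_dim, device=z.device)
        mask[:rate] = 1.0
        return z * mask

enc = SingleShotEncoder(in_dim=10)
x = torch.randn(4, 10)
# One trained model, many operating points on the plane.
codes = {rate: enc(x, rate) for rate in (2, 4, 8, 16)}
```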
Related papers
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Revealing Unfair Models by Mining Interpretable Evidence [50.48264727620845]
The popularity of machine learning has increased the risk of unfair models getting deployed in high-stakes applications.
In this paper, we tackle the novel task of revealing unfair models by mining interpretable evidence.
Our method finds highly interpretable and solid evidence to effectively reveal the unfairness of trained models.
arXiv Detail & Related papers (2022-07-12T20:03:08Z)
- FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning [102.92349569788028]
We propose a fair vertical federated learning framework (FairVFL) to improve the fairness of VFL models.
The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way.
To protect user privacy, we propose a contrastive adversarial learning method that removes private information from the unified representation on the server.
arXiv Detail & Related papers (2022-06-07T11:43:32Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed via influence functions.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
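One way to picture the reweighting stage is a first-order influence sketch. Everything below (the validation fairness gap, the clamped weighting rule, all shapes) is a hypothetical simplification, not the FAIRIF algorithm: training samples whose gradients would widen a validation fairness gap get downweighted, and a second stage would then minimize the weighted loss.

```python
import torch
import torch.nn as nn

model = nn.Linear(5, 1)                       # stand-in classifier
bce = nn.BCEWithLogitsLoss()

x_tr = torch.randn(16, 5); y_tr = torch.randint(0, 2, (16, 1)).float()
x_va = torch.randn(8, 5);  y_va = torch.randint(0, 2, (8, 1)).float()
s_va = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1]).bool()  # sensitive attr

# Gradient of a simple validation fairness gap (loss difference
# between the two sensitive groups).
gap = (bce(model(x_va[s_va]), y_va[s_va])
       - bce(model(x_va[~s_va]), y_va[~s_va]))
g_val = torch.cat([g.flatten()
                   for g in torch.autograd.grad(gap, model.parameters())])

weights = []
for i in range(len(x_tr)):
    loss_i = bce(model(x_tr[i:i + 1]), y_tr[i:i + 1])
    g_i = torch.cat([g.flatten()
                     for g in torch.autograd.grad(loss_i,
                                                  model.parameters())])
    # Downweight samples whose gradient would widen the fairness gap.
    weights.append(torch.clamp(1.0 - g_i @ g_val, min=0.0))
weights = torch.stack(weights)                # used in stage two
```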
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
- Latent Space Smoothing for Individually Fair Representations [12.739528232133495]
We introduce LASSI, the first representation learning method for certifying individual fairness of high-dimensional data.
Our key insight is to leverage recent advances in generative modeling to capture the set of similar individuals in the generative latent space.
We employ randomized smoothing to provably map similar individuals close together, in turn ensuring that local robustness verification of the downstream application results in end-to-end fairness certification.
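That smoothing step can be sketched generically. The snippet below is a plain randomized-smoothing illustration under assumed shapes, not the LASSI pipeline or its certificate computation: classify many noised copies of a latent point and take a majority vote, so nearby latent points agree with high probability.

```python
import torch

def smoothed_predict(classifier, z: torch.Tensor, sigma: float = 0.5,
                     n_samples: int = 1000) -> int:
    # Majority vote over Gaussian perturbations of the latent point z.
    noise = sigma * torch.randn(n_samples, z.shape[-1])
    votes = classifier(z + noise).argmax(dim=-1)
    return int(votes.mode().values)

clf = torch.nn.Linear(8, 2)   # stand-in downstream head
z = torch.randn(8)            # a latent representation
print(smoothed_predict(clf, z))
```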
arXiv Detail & Related papers (2021-11-26T18:22:42Z)
- Adversarial Stacked Auto-Encoders for Fair Representation Learning [1.061960673667643]
We propose a new fair representation learning approach that leverages different levels of representation of data to tighten the fairness bounds of the learned representation.
Our results show that stacking different auto-encoders and enforcing fairness at different latent spaces results in improved fairness compared to other existing approaches.
arXiv Detail & Related papers (2021-07-27T13:49:18Z)
- FAIR: Fair Adversarial Instance Re-weighting [0.7829352305480285]
We propose a Fair Adversarial Instance Re-weighting (FAIR) method, which uses adversarial training to learn an instance weighting function that ensures fair predictions.
To the best of our knowledge, this is the first model that merges reweighting and adversarial approaches by means of a weighting function that can provide interpretable information about fairness of individual instances.
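The interplay of the weighting function and the adversary can be sketched as follows. All module shapes and the loss combination below are hypothetical, meant only to illustrate adversarial instance re-weighting, not the FAIR reference implementation.

```python
import torch
import torch.nn as nn

clf = nn.Linear(10, 1)                                   # task classifier
weigher = nn.Sequential(nn.Linear(10, 1), nn.Sigmoid())  # instance weights
adv = nn.Linear(1, 1)               # adversary: sensitive attr from score

x = torch.randn(32, 10)
y = torch.randint(0, 2, (32, 1)).float()   # task labels
s = torch.randint(0, 2, (32, 1)).float()   # sensitive attribute
bce = nn.BCEWithLogitsLoss(reduction="none")

w = weigher(x)                              # per-instance weights
logits = clf(x)
task_loss = (w * bce(logits, y)).mean()     # weighted task loss
adv_loss = bce(adv(torch.sigmoid(logits)), s).mean()
# The classifier and weigher would minimize task_loss - adv_loss while
# the adversary minimizes adv_loss; the alternating updates are omitted.
print(task_loss.item(), adv_loss.item())
```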
arXiv Detail & Related papers (2020-11-15T10:48:56Z)
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
- README: REpresentation learning by fairness-Aware Disentangling MEthod [23.93330434580667]
We design Fairness-aware Disentangling Variational AutoEncoder (FD-VAE) for fair representation learning.
This network disentangles latent space into three subspaces with a decorrelation loss that encourages each subspace to contain independent information.
After the representation learning, this disentangled representation is leveraged for fairer downstream classification by excluding the subspace with the protected attribute information.
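A decorrelation penalty of this flavor can be written compactly. The function below is a generic cross-covariance penalty under assumed subspace sizes, not the FD-VAE loss itself: it splits the code into three subspaces and penalizes squared cross-covariance between every pair.

```python
import torch

def decorrelation_loss(z: torch.Tensor, splits=(4, 4, 4)) -> torch.Tensor:
    # Split the code into subspaces and penalize squared
    # cross-covariance between every pair of subspaces.
    parts = torch.split(z - z.mean(dim=0), list(splits), dim=1)
    loss = z.new_zeros(())
    for i in range(len(parts)):
        for j in range(i + 1, len(parts)):
            cov = parts[i].T @ parts[j] / (z.shape[0] - 1)
            loss = loss + cov.pow(2).sum()
    return loss

z = torch.randn(128, 12)   # batch of latent codes
print(decorrelation_loss(z))
```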
arXiv Detail & Related papers (2020-07-07T20:16:49Z) - Learning Smooth and Fair Representations [24.305894478899948]
This paper explores the ability to preemptively remove the correlations between features and sensitive attributes by mapping features to a fair representation space.
Empirically, we find that smoothing the representation distribution provides generalization guarantees for fairness certificates.
We do not observe that smoothing degrades the accuracy of downstream tasks compared to state-of-the-art methods in fair representation learning.
arXiv Detail & Related papers (2020-06-15T21:51:50Z) - Fairness by Learning Orthogonal Disentangled Representations [50.82638766862974]
We propose a novel disentanglement approach to the invariant representation problem.
We enforce the meaningful representation to be agnostic to sensitive information by entropy maximization.
The proposed approach is evaluated on five publicly available datasets.
arXiv Detail & Related papers (2020-03-12T11:09:15Z)