Adversarial Stacked Auto-Encoders for Fair Representation Learning
- URL: http://arxiv.org/abs/2107.12826v1
- Date: Tue, 27 Jul 2021 13:49:18 GMT
- Title: Adversarial Stacked Auto-Encoders for Fair Representation Learning
- Authors: Patrik Joslin Kenfack, Adil Mehmood Khan, Rasheed Hussain, S.M. Ahsan Kazmi
- Abstract summary: We propose a new fair representation learning approach that leverages different levels of representation of data to tighten the fairness bounds of the learned representation.
Our results show that stacking auto-encoders and enforcing fairness at different latent spaces improves fairness compared to existing approaches.
- Score: 1.061960673667643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training machine learning models with accuracy as the only goal may
reinforce prejudices and discriminatory behaviors embedded in the data. One
solution is to learn latent representations that fulfill specific fairness
metrics. Different types of learning methods can be used to map data into a
fair representational space. The main purpose is to learn a latent
representation of the data that scores well on a fairness metric while
maintaining its usability for the downstream task. In this paper, we propose a
new fair representation learning approach that leverages different levels of
representation of the data to tighten the fairness bounds of the learned
representation. Our results show that stacking different auto-encoders and
enforcing fairness at different latent spaces improves fairness compared to
existing approaches.
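The abstract does not spell out the architecture, so the following is only a minimal sketch of the general idea, assuming two stacked auto-encoder levels and gradient-reversal adversaries that try to predict a binary sensitive attribute from each latent space; all module names, dimensions, and hyperparameters are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class AE(nn.Module):
    """One auto-encoder level of the stack."""
    def __init__(self, d_in, d_z):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_z), nn.ReLU())
        self.dec = nn.Linear(d_z, d_in)
    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

d_x, d_z1, d_z2 = 32, 16, 8
ae1, ae2 = AE(d_x, d_z1), AE(d_z1, d_z2)
adv1 = nn.Linear(d_z1, 1)  # adversary on the first latent space
adv2 = nn.Linear(d_z2, 1)  # adversary on the second latent space
opt = torch.optim.Adam([*ae1.parameters(), *ae2.parameters(),
                        *adv1.parameters(), *adv2.parameters()], lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(64, d_x)                  # toy batch
s = torch.randint(0, 2, (64, 1)).float()  # binary sensitive attribute

for _ in range(100):
    z1, x_hat = ae1(x)
    z2, z1_hat = ae2(z1)
    recon = ((x_hat - x) ** 2).mean() + ((z1_hat - z1) ** 2).mean()
    # each adversary minimizes its BCE; gradient reversal makes the
    # encoders maximize it, enforcing fairness at *both* latent spaces
    adv = bce(adv1(grad_reverse(z1)), s) + bce(adv2(grad_reverse(z2)), s)
    opt.zero_grad()
    (recon + adv).backward()
    opt.step()
```

Alternating min-max updates would serve in place of gradient reversal; the point the abstract makes is that the sensitive attribute is adversarially scrubbed at every level of the stack rather than only at the final one.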
Related papers
- Debiasing Graph Representation Learning based on Information Bottleneck [18.35405511009332]
We present the design and implementation of GRAFair, a new framework based on a variational graph auto-encoder.
The crux of GRAFair is the Conditional Fairness Bottleneck, where the objective is to capture the trade-off between the utility of representations and sensitive information of interest.
Experiments on various real-world datasets demonstrate the effectiveness of our proposed method in terms of fairness, utility, robustness, and stability.
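The exact GRAFair objective is not reproduced here, but bottleneck-style fairness losses are commonly written in the following generic form, which keeps the representation $Z$ informative about the task label $Y$ while penalizing sensitive information $S$ beyond what $Y$ already explains; the specific conditional term is an assumption about the general shape, not the paper's formula:

$$\min_{\theta}\; -I(Z; Y) \;+\; \beta \, I(Z; S \mid Y)$$

where $\beta$ controls the utility-fairness trade-off.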
arXiv Detail & Related papers (2024-09-02T16:45:23Z) - A Benchmark for Fairness-Aware Graph Learning [58.515305543487386]
We present an extensive benchmark on ten representative fairness-aware graph learning methods.
Our in-depth analysis reveals key insights into the strengths and limitations of existing methods.
arXiv Detail & Related papers (2024-07-16T18:43:43Z) - Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z) - Improving Deep Representation Learning via Auxiliary Learnable Target Coding [69.79343510578877]
This paper introduces a novel learnable target coding as an auxiliary regularization of deep representation learning.
Specifically, a margin-based triplet loss and a correlation consistency loss on the proposed target codes are designed to encourage more discriminative representations.
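As a rough illustration (the construction of the learnable target codes and the exact correlation consistency term are not described above, so both are simplified assumptions here):

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    # hinge on the distance gap: d(a, p) should beat d(a, n) by `margin`
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

def correlation_consistency(features, codes):
    # illustrative: align the feature similarity matrix with the
    # similarity matrix of the assigned target codes
    f = F.normalize(features, dim=1)
    c = F.normalize(codes, dim=1)
    return ((f @ f.T - c @ c.T) ** 2).mean()

feats = torch.randn(15, 64)   # batch of learned representations
codes = torch.randn(15, 64)   # assigned target codes (fixed here)
loss = triplet_loss(feats[:5], feats[5:10], feats[10:15]) \
       + correlation_consistency(feats, codes)
```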
arXiv Detail & Related papers (2023-05-30T01:38:54Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
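In their standard formalizations, the two criteria read as follows: group fairness (demographic parity) asks

$$P(\hat{Y} = 1 \mid A = 0) = P(\hat{Y} = 1 \mid A = 1),$$

while counterfactual fairness asks that the prediction be unchanged under a counterfactual intervention on the sensitive attribute $A$ in the assumed causal model with background variables $U$:

$$P(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a) = P(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a).$$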
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - SoFaiR: Single Shot Fair Representation Learning [24.305894478899948]
SoFaiR is a single-shot fair representation learning method that generates, with one trained model, many points on the fairness-information plane.
We find on three datasets that SoFaiR achieves similar fairness-information trade-offs as its multi-shot counterparts.
arXiv Detail & Related papers (2022-04-26T19:31:30Z) - Latent Space Smoothing for Individually Fair Representations [12.739528232133495]
We introduce LASSI, the first representation learning method for certifying individual fairness of high-dimensional data.
Our key insight is to leverage recent advances in generative modeling to capture the set of similar individuals in the generative latent space.
We employ randomized smoothing to provably map similar individuals close together, in turn ensuring that local robustness verification of the downstream application results in end-to-end fairness certification.
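The smoothing step itself can be sketched generically as follows (an illustration of the standard randomized-smoothing technique that LASSI builds on; the classifier, noise scale, and sample count are placeholders rather than the paper's settings):

```python
import torch

def smoothed_predict(classifier, z, sigma=0.25, n_samples=1000):
    # classify many Gaussian perturbations of the latent point and take a
    # majority vote: latent-space neighbors (i.e. similar individuals)
    # then provably receive the same prediction with high probability
    noise = sigma * torch.randn(n_samples, z.shape[-1])
    logits = classifier(z.unsqueeze(0) + noise)
    votes = torch.bincount(logits.argmax(dim=1))
    return votes.argmax().item()

clf = torch.nn.Linear(8, 2)  # placeholder downstream classifier
z = torch.randn(8)           # latent code of one individual
print(smoothed_predict(clf, z))
```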
arXiv Detail & Related papers (2021-11-26T18:22:42Z) - Impossibility results for fair representations [12.483260526189447]
We argue that no representation can guarantee the fairness of classifiers for different tasks trained using it.
More refined notions of fairness, like Odds Equality, cannot be guaranteed by a representation that does not take into account the task-specific labeling rule.
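For reference, Odds Equality (equalized odds) conditions on the true label, which is exactly the task-specific information a task-agnostic representation cannot encode:

$$P(\hat{Y} = 1 \mid Y = y, A = 0) = P(\hat{Y} = 1 \mid Y = y, A = 1), \qquad y \in \{0, 1\}.$$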
arXiv Detail & Related papers (2021-07-07T21:12:55Z) - Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce
Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
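The classic squared-error form of such a decomposition is shown below; the paper decomposes discrimination rather than squared error, so this is an analogy for the structure, not its exact statement:

$$\mathbb{E}\big[(h_D(x) - y)^2\big] = \underbrace{\big(\bar{h}(x) - \bar{y}(x)\big)^2}_{\text{bias}^2} + \underbrace{\mathbb{E}_D\big[(h_D(x) - \bar{h}(x))^2\big]}_{\text{variance}} + \underbrace{\mathbb{E}\big[(y - \bar{y}(x))^2\big]}_{\text{noise}}$$

where $\bar{h}(x) = \mathbb{E}_D[h_D(x)]$ averages over training sets $D$ and $\bar{y}(x) = \mathbb{E}[y \mid x]$.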
arXiv Detail & Related papers (2020-09-25T05:48:56Z) - Fairness by Learning Orthogonal Disentangled Representations [50.82638766862974]
We propose a novel disentanglement approach to the invariant representation problem.
We enforce the meaningful representation to be agnostic to sensitive information via entropy maximization.
The proposed approach is evaluated on five publicly available datasets.
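A minimal sketch of the entropy term, assuming a sensitive-attribute head on top of the representation (the orthogonality constraints from the title are not shown, and all names and sizes are illustrative):

```python
import torch
import torch.nn.functional as F

def negative_entropy(logits):
    # minimizing this maximizes the entropy of the sensitive-attribute
    # prediction, pushing it toward uniform (i.e. uninformative)
    p = F.softmax(logits, dim=1)
    return (p * torch.log(p + 1e-8)).sum(dim=1).mean()

sens_head = torch.nn.Linear(16, 2)  # placeholder sensitive-attribute head
z = torch.randn(32, 16)             # batch of learned representations
loss = negative_entropy(sens_head(z))  # added to the representation loss
```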
arXiv Detail & Related papers (2020-03-12T11:09:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.