Semi-FairVAE: Semi-supervised Fair Representation Learning with
Adversarial Variational Autoencoder
- URL: http://arxiv.org/abs/2204.00536v1
- Date: Fri, 1 Apr 2022 15:57:47 GMT
- Title: Semi-FairVAE: Semi-supervised Fair Representation Learning with
Adversarial Variational Autoencoder
- Authors: Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
- Abstract summary: We propose a semi-supervised fair representation learning approach based on an adversarial variational autoencoder.
We use a bias-aware model to capture inherent bias information on sensitive attributes.
We also use a bias-free model to learn debiased fair representations, using adversarial learning to remove bias information from the learned representations.
- Score: 92.67156911466397
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial learning is a widely used technique in fair representation
learning to remove the biases on sensitive attributes from data
representations. It usually requires incorporating the sensitive attribute
labels as prediction targets. However, in many scenarios the sensitive
attribute labels of many samples can be unknown, and it is difficult to train a
strong discriminator based on the scarce data with observed attribute labels,
which may result in unfair representations. In this paper, we propose a
semi-supervised fair representation learning approach based on an adversarial
variational autoencoder, which can reduce the dependency of adversarial fair
models on data with labeled sensitive attributes. More specifically, we use a
bias-aware model to capture inherent bias information on sensitive attributes by
accurately predicting sensitive attributes from input data, and we use a
bias-free model to learn debiased fair representations by using adversarial
learning to remove bias information from those representations. The hidden representations
learned by the two models are regularized to be orthogonal. In addition, the
soft labels predicted by the two models are further integrated into a
semi-supervised variational autoencoder to reconstruct the input data, and we
apply an additional entropy regularization to encourage the attribute label
distribution inferred from the bias-free model to have high entropy. In this way, the
bias-aware model can better capture attribute information while the bias-free
model is less discriminative on sensitive attributes if the input data is well
reconstructed. Extensive experiments on two datasets for different tasks
validate that our approach can achieve good representation learning fairness
under limited data with sensitive attribute labels.
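To make the components described above concrete, the following is a minimal PyTorch sketch of the ingredients the abstract names: a bias-aware encoder that predicts the sensitive attribute, a bias-free encoder whose representation is adversarially stripped of attribute information, an orthogonality regularizer between the two hidden representations, an entropy regularizer on the bias-free attribute predictions, and a decoder that reconstructs the input from both codes and both soft labels. This is an illustration under stated assumptions, not the authors' implementation: a gradient reversal layer stands in for their adversarial training scheme, a plain autoencoder reconstruction stands in for the full semi-supervised variational autoencoder (the KL term is omitted), and all module names, dimensions, and loss weights are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity forward, negated gradient backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class SemiFairSketch(nn.Module):
    """Hypothetical two-branch model following the abstract's description."""

    def __init__(self, input_dim, hidden_dim, num_attr_classes):
        super().__init__()
        # Bias-aware branch: keeps attribute information and predicts it.
        self.enc_aware = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.head_aware = nn.Linear(hidden_dim, num_attr_classes)
        # Bias-free branch: gradient reversal pushes attribute information out.
        self.enc_free = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.head_free = nn.Linear(hidden_dim, num_attr_classes)
        # Decoder reconstructs the input from both codes plus both soft labels.
        self.decoder = nn.Linear(2 * hidden_dim + 2 * num_attr_classes, input_dim)

    def forward(self, x, attr_labels=None):
        h_aware = self.enc_aware(x)
        h_free = self.enc_free(x)
        logits_aware = self.head_aware(h_aware)
        logits_free = self.head_free(grad_reverse(h_free))
        p_aware = F.softmax(logits_aware, dim=-1)
        p_free = F.softmax(logits_free, dim=-1)

        # Reconstruction from both representations and both soft labels.
        x_hat = self.decoder(torch.cat([h_aware, h_free, p_aware, p_free], dim=-1))
        losses = {"recon": F.mse_loss(x_hat, x)}

        # Supervised attribute losses only on samples with observed labels
        # (convention here: label -1 marks an unlabeled sample).
        if attr_labels is not None:
            mask = attr_labels >= 0
            if mask.any():
                losses["attr_aware"] = F.cross_entropy(logits_aware[mask], attr_labels[mask])
                losses["attr_adv"] = F.cross_entropy(logits_free[mask], attr_labels[mask])

        # Orthogonality: penalize overlap between the two hidden codes.
        losses["ortho"] = (h_aware * h_free).sum(dim=-1).pow(2).mean()

        # Entropy regularization: reward high-entropy (uninformative)
        # attribute predictions from the bias-free branch.
        entropy = -(p_free * torch.log(p_free + 1e-8)).sum(dim=-1).mean()
        losses["neg_entropy"] = -entropy
        return losses
```

In training, the individual terms would be combined into a weighted sum, e.g. loss = losses["recon"] + a * losses["attr_aware"] + b * losses["attr_adv"] + c * losses["ortho"] + d * losses["neg_entropy"], with unlabeled samples (marked -1 above) contributing only to the unsupervised terms; this is the mechanism by which the approach reduces its dependence on labeled sensitive attributes.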
Related papers
- Leveraging vision-language models for fair facial attribute classification [19.93324644519412]
A general-purpose vision-language model (VLM) is a rich knowledge source for common sensitive attributes.
We analyze the correspondence between VLM-predicted and human-defined sensitive attribute distributions.
Experiments on multiple benchmark facial attribute classification datasets show fairness gains of the model over existing unsupervised baselines.
arXiv Detail & Related papers (2024-03-15T18:37:15Z)
- Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation [0.0]
We propose a two-stage approach of unsupervised embedding generation followed by clustering to obtain proxy-sensitive labels.
The efficacy of our work relies on the assumption that bias propagates through non-sensitive attributes that are correlated with the sensitive attributes.
Experimental results demonstrate that bias mitigation using existing algorithms such as Fair Mixup and Adversarial Debiasing yields comparable results on derived proxy labels.
arXiv Detail & Related papers (2023-12-26T10:54:15Z)
- Vision-language Assisted Attribute Learning [53.60196963381315]
Attribute labeling at large scale is typically incomplete and partial.
Existing attribute learning methods often treat the missing labels as negative or simply ignore them all during training.
We leverage the available vision-language knowledge to explicitly disclose the missing labels for enhancing model learning.
arXiv Detail & Related papers (2023-12-12T06:45:19Z)
- Towards Assumption-free Bias Mitigation [47.5131072745805]
We propose an assumption-free framework to detect the related attributes automatically by modeling feature interaction for bias mitigation.
Experimental results on four real-world datasets demonstrate that our proposed framework can significantly alleviate unfair prediction behaviors.
arXiv Detail & Related papers (2023-07-09T05:55:25Z)
- Hyper-parameter Tuning for Fair Classification without Sensitive Attribute Access [12.447577504758485]
We propose a framework to train fair classifiers without access to sensitive attributes on either training or validation data.
We show theoretically and empirically that these proxy labels can be used to maximize fairness under average accuracy constraints.
arXiv Detail & Related papers (2023-02-02T19:45:50Z)
- Self-supervised debiasing using low rank regularization [59.84695042540525]
Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability.
We propose a self-supervised debiasing framework potentially compatible with unlabeled samples.
Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines.
arXiv Detail & Related papers (2022-10-11T08:26:19Z)
- Fairness via Representation Neutralization [60.90373932844308]
We propose a new mitigation technique, namely, Representation Neutralization for Fairness (RNF).
RNF achieves fairness by debiasing only the task-specific classification head of DNN models.
Experimental results over several benchmark datasets demonstrate that our RNF framework effectively reduces discrimination in DNN models.
arXiv Detail & Related papers (2021-06-23T22:26:29Z)
- You Can Still Achieve Fairness Without Sensitive Attributes: Exploring Biases in Non-Sensitive Features [29.94644351343916]
We propose a novel framework that simultaneously uses related non-sensitive features for accurate prediction and regularizes the model to be fair.
Experimental results on real-world datasets demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2021-04-29T17:52:11Z)
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
- Fairness-Aware Learning with Prejudice Free Representations [2.398608007786179]
We propose a novel algorithm that can effectively identify and treat latent discriminating features.
The approach helps to collect discrimination-free features that improve model performance.
arXiv Detail & Related papers (2020-02-26T10:06:31Z)