Fairness without the sensitive attribute via Causal Variational
Autoencoder
- URL: http://arxiv.org/abs/2109.04999v1
- Date: Fri, 10 Sep 2021 17:12:52 GMT
- Title: Fairness without the sensitive attribute via Causal Variational
Autoencoder
- Authors: Vincent Grari, Sylvain Lamprier, Marcin Detyniecki
- Abstract summary: For privacy reasons and due to various regulations such as the RGPD (GDPR) in the EU, many personal sensitive attributes are frequently not collected.
By leveraging recent developments for approximate inference, we propose an approach to fill this gap.
Based on a causal graph, we rely on a new variational auto-encoding based framework named SRCVAE to infer a sensitive information proxy.
- Score: 17.675997789073907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, most fairness strategies in machine learning models focus on
mitigating unwanted biases by assuming that the sensitive information is
observed. However, this is not always possible in practice. For privacy reasons
and due to various regulations such as the RGPD (GDPR) in the EU, many personal sensitive
attributes are frequently not collected. We notice a lack of approaches for
mitigating bias in such difficult settings, in particular for achieving
classical fairness objectives such as Demographic Parity and Equalized Odds. By
leveraging recent developments for approximate inference, we propose an
approach to fill this gap. Based on a causal graph, we rely on a new
variational auto-encoding based framework named SRCVAE to infer a sensitive
information proxy, which serves for bias mitigation in an adversarial fairness
approach. We empirically demonstrate significant improvements over existing
works in the field. We observe that the generated proxy's latent space recovers
sensitive information and that our approach achieves a higher accuracy while
obtaining the same level of fairness on two real datasets, as measured using
common fairness definitions.
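For intuition, the sketch below illustrates the general recipe the abstract describes: a proxy for the unobserved sensitive attribute is fed to an adversary that tries to recover it from the classifier's predictions, and the classifier is penalized whenever the adversary succeeds, which pushes predictions toward Demographic Parity. This is a minimal, hypothetical PyTorch sketch, not the authors' SRCVAE implementation: the causal proxy-inference step is omitted and `s_proxy` simply stands in for the latent proxy SRCVAE would infer, while the module and parameter names (`Classifier`, `Adversary`, `lam`) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Classifier(nn.Module):
    """Predicts the task label; its logit is what the adversary inspects."""
    def __init__(self, d_in):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)  # logit of P(y = 1 | x)

class Adversary(nn.Module):
    """Tries to recover the (inferred) sensitive proxy from the prediction logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, logit):
        return self.net(logit.unsqueeze(-1)).squeeze(-1)

def demographic_parity_gap(logits, s):
    """|P(y_hat = 1 | s = 1) - P(y_hat = 1 | s = 0)| on hard predictions."""
    y_hat = (logits > 0).float()
    return (y_hat[s == 1].mean() - y_hat[s == 0].mean()).abs()

def training_step(x, y, s_proxy, clf, adv, opt_clf, opt_adv, lam=1.0):
    """One round of the two-player game; s_proxy is the inferred sensitive proxy (0/1 floats)."""
    bce = nn.BCEWithLogitsLoss()
    # 1) Adversary step: learn to predict the proxy from the (detached) logit.
    logit = clf(x).detach()
    opt_adv.zero_grad()
    adv_loss = bce(adv(logit), s_proxy)
    adv_loss.backward()
    opt_adv.step()
    # 2) Classifier step: fit the label while making the adversary fail.
    opt_clf.zero_grad()
    logit = clf(x)
    loss = bce(logit, y) - lam * bce(adv(logit), s_proxy)
    loss.backward()
    opt_clf.step()
    return loss.item()
```

In this classic two-player setup, `lam` trades prediction accuracy against fairness; an Equalized Odds variant would additionally feed the true label `y` to the adversary so that only dependence beyond the label is penalized.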
Related papers
- Thinking Racial Bias in Fair Forgery Detection: Models, Datasets and Evaluations [63.52709761339949]
We first contribute a dedicated dataset called the Fair Forgery Detection (FairFD) dataset, on which we demonstrate the racial bias of public state-of-the-art (SOTA) methods.
We design novel metrics, including the Approach Averaged Metric and the Utility Regularized Metric, which avoid deceptive results.
We also present an effective and robust post-processing technique, Bias Pruning with Fair Activations (BPFA), which improves fairness without requiring retraining or weight updates.
arXiv Detail & Related papers (2024-07-19T14:53:18Z) - Enhancing Fairness in Unsupervised Graph Anomaly Detection through Disentanglement [33.565252991113766]
Graph anomaly detection (GAD) is increasingly crucial in various applications, ranging from financial fraud detection to fake news detection.
Current GAD methods largely overlook the fairness problem, which might result in discriminatory decisions skewed toward certain demographic groups.
We devise a novel DisEntangle-based FairnEss-aware aNomaly Detection framework on attributed graphs, named DEFEND.
Our empirical evaluations on real-world datasets reveal that DEFEND performs effectively in GAD and significantly enhances fairness compared to state-of-the-art baselines.
arXiv Detail & Related papers (2024-06-03T04:48:45Z) - Fairness Without Harm: An Influence-Guided Active Sampling Approach [32.173195437797766]
We aim to train models that mitigate group fairness disparity without causing harm to model accuracy.
The current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
We propose a tractable active data sampling algorithm that does not rely on training group annotations.
arXiv Detail & Related papers (2024-02-20T07:57:38Z) - Geospatial Disparities: A Case Study on Real Estate Prices in Paris [0.3495246564946556]
We propose a toolkit for identifying and mitigating biases arising from geospatial data.
We incorporate an ordinal regression case with spatial attributes, deviating from the binary classification focus.
Illustrating our methodology, we showcase practical applications and scrutinize the implications of choosing geographical aggregation levels for fairness and calibration measures.
arXiv Detail & Related papers (2024-01-29T14:53:14Z) - Fairness Under Demographic Scarce Regime [7.523105080786704]
We propose a framework to build attribute classifiers that achieve better fairness-accuracy tradeoffs.
We show that enforcing fairness constraints on samples with uncertain sensitive attributes can negatively impact the fairness-accuracy tradeoff.
Our framework can outperform models trained with fairness constraints on the true sensitive attributes in most benchmarks.
arXiv Detail & Related papers (2023-07-24T19:07:34Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Model Debiasing via Gradient-based Explanation on Representation [14.673988027271388]
We propose a novel fairness framework that performs debiasing with regard to sensitive attributes and proxy attributes.
Our framework achieves better fairness-accuracy trade-off on unstructured and structured datasets than previous state-of-the-art approaches.
arXiv Detail & Related papers (2023-05-20T11:57:57Z) - Practical Approaches for Fair Learning with Multitype and Multivariate
Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z) - Semi-FairVAE: Semi-supervised Fair Representation Learning with
Adversarial Variational Autoencoder [92.67156911466397]
We propose a semi-supervised fair representation learning approach based on adversarial variational autoencoder.
We use a bias-aware model to capture inherent bias information on the sensitive attribute.
We also use a bias-free model to learn debiased fair representations by using adversarial learning to remove bias information from them.
arXiv Detail & Related papers (2022-04-01T15:57:47Z) - Supercharging Imbalanced Data Learning With Energy-based Contrastive
Representation Transfer [72.5190560787569]
In computer vision, learning from long-tailed datasets is a recurring theme, especially for natural image datasets.
Our proposal posits a meta-distributional scenario, where the data generating mechanism is invariant across the label-conditional feature distributions.
This allows us to leverage a causal data inflation procedure to enlarge the representation of minority classes.
arXiv Detail & Related papers (2020-11-25T00:13:11Z) - Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation [61.317911756566126]
We propose the Towards Fair Knowledge Transfer framework to handle the fairness challenge in imbalanced cross-domain learning.
Specifically, a novel cross-domain mixup generation is exploited to augment the minority source set with target information to enhance fairness (a generic version of this mixup idea is sketched after this list).
Our model improves overall accuracy by more than 20% on two benchmarks.
arXiv Detail & Related papers (2020-10-23T06:29:09Z)
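The cross-domain mixup generation mentioned in the last entry can be pictured with a short sketch. This is a generic illustration of the mixup idea under common assumptions, not the paper's actual generation procedure; the function name `cross_domain_mixup` and the Beta mixing prior are assumptions.

```python
import torch

def cross_domain_mixup(x_src_minority, x_tgt, alpha=0.4):
    """Interpolate each minority-class source sample with a random target sample."""
    n = x_src_minority.size(0)
    idx = torch.randint(0, x_tgt.size(0), (n,))                  # random target partners
    lam = torch.distributions.Beta(alpha, alpha).sample((n, 1))  # per-sample mixing weights
    # Synthetic minority examples that carry target-domain information.
    return lam * x_src_minority + (1 - lam) * x_tgt[idx]
```

Since target labels are typically unavailable in this setting, one common choice is to keep the source label for each synthesized sample.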
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.