Fairness-Aware Learning with Prejudice Free Representations
- URL: http://arxiv.org/abs/2002.12143v1
- Date: Wed, 26 Feb 2020 10:06:31 GMT
- Title: Fairness-Aware Learning with Prejudice Free Representations
- Authors: Ramanujam Madhavan, Mohit Wadhwa
- Abstract summary: We propose a novel algorithm that can effectively identify and treat latent discriminating features.
The approach helps to collect discrimination-free features that would improve model performance.
- Score: 2.398608007786179
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning models are being used extensively to make decisions that
have a significant impact on human life. These models are trained over
historical data that may contain information about sensitive attributes such as
race, sex, and religion. The presence of such sensitive attributes can impact
certain population subgroups unfairly. It is straightforward to remove
sensitive features from the data; however, a model could still pick up prejudice from
latent sensitive attributes that exist in the training data. This has led
to growing apprehension about the fairness of the employed models. In this
paper, we propose a novel algorithm that can effectively identify and treat
latent discriminating features. The approach is agnostic to the learning
algorithm and generalizes well to classification as well as regression tasks.
It can also serve as a key aid in demonstrating, for regulatory compliance,
that a model is free of discrimination, should the need arise. The approach
helps derive discrimination-free features that improve model
performance while ensuring the fairness of the model. Experimental results
from our evaluations on publicly available real-world datasets show
near-ideal fairness measurements in comparison to other methods.
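To make the notion of a "latent discriminating feature" concrete, the sketch below flags any feature that, on its own, predicts a binary sensitive attribute well above chance. This is a minimal illustration of one simple leakage test, not the paper's algorithm; the `flag_leaky_features` helper and the 0.6 AUC cutoff are hypothetical choices.

```python
# Hypothetical sketch: flag features that leak a binary sensitive attribute.
# A feature counts as "latent discriminating" here if it alone predicts the
# sensitive attribute with AUC above a chosen threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def flag_leaky_features(X, sensitive, auc_threshold=0.6):
    """Return indices of columns whose values predict `sensitive`."""
    leaky = []
    for j in range(X.shape[1]):
        clf = LogisticRegression(max_iter=1000)
        # cross-validated AUC of predicting the sensitive attribute
        # from feature j alone
        auc = cross_val_score(clf, X[:, [j]], sensitive,
                              cv=5, scoring="roc_auc").mean()
        if auc > auc_threshold:
            leaky.append(j)
    return leaky

# Usage: X_fair = np.delete(X, flag_leaky_features(X, s), axis=1)
```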
Related papers
- Achieve Fairness without Demographics for Dermatological Disease Diagnosis [17.792332189055223]
We propose a method enabling fair predictions for sensitive attributes during the testing phase without using such information during training.
Inspired by prior work highlighting the impact of feature entanglement on fairness, we enhance the model by disentangling the features related to the sensitive attribute from those related to the target attribute.
This ensures that the model can only classify based on the features related to the target attribute without relying on features associated with sensitive attributes.
arXiv Detail & Related papers (2024-01-16T02:49:52Z)
- Fairness Under Demographic Scarce Regime [7.523105080786704]
We propose a framework to build attribute classifiers that achieve better fairness-accuracy tradeoffs.
We show that enforcing fairness constraints on samples with uncertain sensitive attributes can negatively impact the fairness-accuracy tradeoff.
Our framework can outperform models trained with fairness constraints on the true sensitive attributes in most benchmarks.
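To make the "uncertain sensitive attributes" point concrete, the sketch below trains an attribute classifier on the few samples whose sensitive attribute is known and keeps only confident proxy labels for fairness evaluation. This is a sketch under assumed names; the classifier choice and the 0.9 confidence cutoff are illustrative, not from the paper.

```python
# Hypothetical sketch: infer proxy sensitive attributes, then keep only
# confident predictions when measuring or enforcing a fairness constraint.
from sklearn.ensemble import RandomForestClassifier

def confident_proxy_attributes(X_unlabeled, X_labeled, s_labeled,
                               confidence=0.9):
    attr_clf = RandomForestClassifier(n_estimators=200)
    attr_clf.fit(X_labeled, s_labeled)
    proba = attr_clf.predict_proba(X_unlabeled)
    keep = proba.max(axis=1) >= confidence   # drop uncertain predictions
    return proba.argmax(axis=1), keep

# Downstream: compute a fairness metric only over rows where keep == True.
```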
arXiv Detail & Related papers (2023-07-24T19:07:34Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Hyper-parameter Tuning for Fair Classification without Sensitive Attribute Access [12.447577504758485]
We propose a framework to train fair classifiers without access to sensitive attributes on either training or validation data. Instead, the framework generates proxy labels for the sensitive attribute.
We show theoretically and empirically that these proxy labels can be used to maximize fairness under average accuracy constraints.
arXiv Detail & Related papers (2023-02-02T19:45:50Z)
- Revealing Unfair Models by Mining Interpretable Evidence [50.48264727620845]
The popularity of machine learning has increased the risk of unfair models getting deployed in high-stakes applications.
In this paper, we tackle the novel task of revealing unfair models by mining interpretable evidence.
Our method finds highly interpretable and solid evidence to effectively reveal the unfairness of trained models.
arXiv Detail & Related papers (2022-07-12T20:03:08Z)
- SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
A key characteristic of the proposed model is to enable the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z)
- Semi-FairVAE: Semi-supervised Fair Representation Learning with Adversarial Variational Autoencoder [92.67156911466397]
We propose a semi-supervised fair representation learning approach based on adversarial variational autoencoder.
We use a bias-aware model to capture inherent bias information on the sensitive attribute.
We also use a bias-free model to learn debiased fair representations, applying adversarial learning to remove bias information from them.
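The adversarial-removal step can be sketched with a gradient-reversal layer: an adversary learns to recover the sensitive attribute from the representation, while the reversed gradients push the encoder to erase that signal. The PyTorch sketch below illustrates adversarial debiasing in general, not Semi-FairVAE's exact architecture; the layer sizes are placeholders.

```python
# Hypothetical sketch of adversarial debiasing via gradient reversal.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()
    @staticmethod
    def backward(ctx, grad):
        return -grad          # flip gradients flowing into the encoder

encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
task_head = nn.Linear(16, 2)   # predicts the target label
adversary = nn.Linear(16, 2)   # tries to recover the sensitive attribute

def losses(x, y, s):
    z = encoder(x)
    task_loss = nn.functional.cross_entropy(task_head(z), y)
    # the adversary is trained to predict s, but reversed gradients
    # train the encoder to make s unpredictable from z
    adv_loss = nn.functional.cross_entropy(adversary(GradReverse.apply(z)), s)
    return task_loss + adv_loss   # one optimizer over all parameters
```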
arXiv Detail & Related papers (2022-04-01T15:57:47Z)
- Learning Fair Models without Sensitive Attributes: A Generative Approach [33.196044483534784]
We study a novel problem of learning fair models without sensitive attributes by exploring relevant features.
We propose a probabilistic generative framework to effectively estimate the sensitive attribute from the training data.
Experimental results on real-world datasets show the effectiveness of our framework.
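One very rough stand-in for the generative estimation idea: fit a mixture model over the relevant features and treat its components as inferred groups. This illustrates the concept only and is not the paper's probabilistic framework; the two-component choice is an assumption.

```python
# Rough illustrative stand-in (not the paper's framework): infer group
# labels with a mixture model and use them in place of the missing
# sensitive attribute when computing a fairness penalty.
from sklearn.mixture import GaussianMixture

def infer_groups(X_relevant, n_groups=2):
    gm = GaussianMixture(n_components=n_groups, random_state=0)
    return gm.fit(X_relevant).predict(X_relevant)  # proxy group labels
```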
arXiv Detail & Related papers (2022-03-30T15:54:30Z)
- You Can Still Achieve Fairness Without Sensitive Attributes: Exploring Biases in Non-Sensitive Features [29.94644351343916]
We propose a novel framework that simultaneously uses related non-sensitive features for accurate prediction and for regularizing the model to be fair.
Experimental results on real-world datasets demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2021-04-29T17:52:11Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
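The Lagrangian-duality pattern can be sketched as a primal-dual loop: penalize a fairness gap with a multiplier and raise the multiplier by dual ascent whenever the constraint is violated. The sketch below omits the differential-privacy machinery entirely; the demographic-parity gap and step sizes are illustrative assumptions.

```python
# Hypothetical primal-dual sketch (no differential privacy): minimize
# task loss + lam * fairness_gap, raising lam when the gap persists.
import torch

def train(model, loader, epochs=10, eta=0.01):
    opt = torch.optim.Adam(model.parameters())
    lam = 0.0                                   # Lagrange multiplier
    for _ in range(epochs):
        for x, y, s in loader:                  # s: binary sensitive attribute
            p = torch.sigmoid(model(x)).squeeze(-1)
            loss = torch.nn.functional.binary_cross_entropy(p, y.float())
            # demographic-parity gap; assumes each batch has both groups
            gap = (p[s == 1].mean() - p[s == 0].mean()).abs()
            opt.zero_grad()
            (loss + lam * gap).backward()       # primal step
            opt.step()
            lam = max(0.0, lam + eta * float(gap))  # dual ascent on lam
    return model
```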
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
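As a sketch of the pre-processing step, the helper below pseudo-labels the unlabeled pool with a base classifier and keeps only confident predictions for the augmented training set. The base learner and the 0.8 threshold are illustrative assumptions; the fairness-aware training itself would happen downstream on the augmented data.

```python
# Hypothetical sketch of pseudo labeling for semi-supervised training.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label(X_lab, y_lab, X_unlab, threshold=0.8):
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unlab)
    keep = proba.max(axis=1) >= threshold      # confident predictions only
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    return X_aug, y_aug                        # feed to a fair learner
```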
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.