Towards Assumption-free Bias Mitigation
- URL: http://arxiv.org/abs/2307.04105v1
- Date: Sun, 9 Jul 2023 05:55:25 GMT
- Title: Towards Assumption-free Bias Mitigation
- Authors: Chia-Yuan Chang, Yu-Neng Chuang, Kwei-Herng Lai, Xiaotian Han, Xia Hu,
Na Zou
- Abstract summary: We propose an assumption-free framework that automatically detects bias-related attributes by modeling feature interactions for bias mitigation.
Experimental results on four real-world datasets demonstrate that our proposed framework can significantly alleviate unfair prediction behaviors.
- Score: 47.5131072745805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite their impressive predictive ability, machine learning models
can discriminate against certain demographic groups and exhibit unfair prediction
behavior. To alleviate this discrimination, extensive studies have focused on
eliminating the unequal distribution of sensitive attributes through a variety of
approaches. However, due to privacy concerns, sensitive attributes are often
unavailable or missing in real-world scenarios, so several existing works attempt
to mitigate bias without access to them. These studies face challenges: they
either rely on inaccurate predictions of the sensitive attributes or must mitigate
the unequal distribution of manually defined, bias-related non-sensitive
attributes. The latter requires strong assumptions about the correlation between
sensitive and non-sensitive attributes; because data distributions and task goals
vary, such assumptions may not hold and demand domain expertise. In this work, we
propose an assumption-free framework that automatically detects bias-related
attributes by modeling feature interactions, and then mitigates the unfair impact
of the identified biased feature interactions. Experimental results on four
real-world datasets demonstrate that the proposed framework significantly
alleviates unfair prediction behaviors by accounting for biased feature
interactions.
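The abstract describes the framework only at a high level. As a hedged illustration of the general recipe (score pairwise feature interactions, flag the ones that most widen prediction disparity, then suppress them), a minimal sketch under stated assumptions is given below. The toy data, the parity-gap measure against an observable proxy group, and all names such as `fit_with_interactions` are illustrative assumptions, not the paper's actual method, which operates without any observed group information.

```python
# Illustrative sketch only: flag the pairwise interaction whose inclusion most
# widens a prediction disparity, then refit without it.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 4000, 5
X = rng.normal(size=(n, d))
group = (X[:, 0] > 0).astype(int)  # proxy group, used here only to measure disparity
y = ((X[:, 0] * X[:, 1]) + 0.3 * X[:, 2] + 0.2 * rng.normal(size=n) > 0).astype(int)

def parity_gap(model, feats):
    """Difference in positive-prediction rates between the two proxy groups."""
    pred = model.predict(feats)
    return abs(pred[group == 1].mean() - pred[group == 0].mean())

def fit_with_interactions(pairs):
    """Fit a linear classifier on the raw features plus the given product terms."""
    feats = X
    for i, j in pairs:
        feats = np.column_stack([feats, X[:, i] * X[:, j]])
    return LogisticRegression(max_iter=1000).fit(feats, y), feats

# Detection: score each pairwise interaction by how much adding it widens the gap.
base_model, base_feats = fit_with_interactions([])
base_gap = parity_gap(base_model, base_feats)
extra_gap = {}
for pair in combinations(range(d), 2):
    model, feats = fit_with_interactions([pair])
    extra_gap[pair] = parity_gap(model, feats) - base_gap

biased_pair = max(extra_gap, key=extra_gap.get)
print("flagged interaction:", biased_pair, "added gap:", round(extra_gap[biased_pair], 3))

# Mitigation (crude stand-in): keep the other interactions but drop the flagged one;
# the paper instead learns to suppress its unfair impact while preserving accuracy.
fair_model, fair_feats = fit_with_interactions([p for p in extra_gap if p != biased_pair])
print("gap without it:", round(parity_gap(fair_model, fair_feats), 3))
```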
Related papers
- Causality and Independence Enhancement for Biased Node Classification [56.38828085943763]
We propose a novel Causality and Independence Enhancement (CIE) framework, applicable to various graph neural networks (GNNs).
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
Our CIE approach not only significantly enhances the performance of GNNs but also outperforms state-of-the-art debiased node classification methods.
arXiv Detail & Related papers (2023-10-14T13:56:24Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- A statistical approach to detect sensitive features in a group fairness setting [10.087372021356751]
We propose a preprocessing step to address the task of automatically recognizing sensitive features that does not require a trained model to verify unfair results.
Our empirical results support this hypothesis and show that several features considered sensitive in the literature do not necessarily entail disparate (unfair) results.
arXiv Detail & Related papers (2023-05-11T17:30:12Z)
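The exact statistic used in that paper is not spelled out in the summary above. As a hedged illustration of a model-free check in this spirit, one could test each candidate feature for a statistically significant association with the labels, e.g. with a chi-square test of independence; the choice of test, the threshold, and the name `flag_sensitive` are assumptions of this sketch, not the paper's proposal.

```python
# Illustrative sketch: flag features whose value groups show a statistically
# significant difference in outcome rates, without training any model.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n = 5000
data = {
    "age_group": rng.integers(0, 3, n),   # induces a label gap below
    "zip_region": rng.integers(0, 4, n),  # unrelated to the label
}
label = (rng.random(n) < 0.3 + 0.15 * (data["age_group"] == 2)).astype(int)

def flag_sensitive(features, y, alpha=0.01):
    """Return features whose groups have significantly different outcome rates."""
    flagged = []
    for name, values in features.items():
        table = np.zeros((len(np.unique(values)), 2))
        for k, v in enumerate(np.unique(values)):
            table[k, 0] = np.sum((values == v) & (y == 1))
            table[k, 1] = np.sum((values == v) & (y == 0))
        _, p_value, _, _ = chi2_contingency(table)
        if p_value < alpha:
            flagged.append(name)
    return flagged

print(flag_sensitive(data, label))  # expected to flag only 'age_group'
```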
- Counterfactual Reasoning for Bias Evaluation and Detection in a Fairness under Unawareness setting [6.004889078682389]
Current AI regulations require discarding sensitive features in the algorithm's decision-making process to prevent unfair outcomes.
We propose a way to reveal the potential hidden bias of a machine learning model that can persist even when sensitive features are discarded.
arXiv Detail & Related papers (2023-02-16T10:36:18Z)
- Hyper-parameter Tuning for Fair Classification without Sensitive Attribute Access [12.447577504758485]
We propose a framework to train fair classifiers without access to sensitive attributes on either training or validation data.
We show theoretically and empirically that proxy labels for the sensitive attributes can be used to maximize fairness under average accuracy constraints.
arXiv Detail & Related papers (2023-02-02T19:45:50Z)
- Simultaneous Improvement of ML Model Fairness and Performance by Identifying Bias in Data [1.76179873429447]
We propose a data preprocessing technique that can detect instances carrying a specific kind of bias, which should be removed from the dataset before training.
In particular, we claim that when instances with similar features but different labels exist due to variation in protected attributes, an inherent bias is induced in the dataset.
arXiv Detail & Related papers (2022-10-24T13:04:07Z)
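A minimal sketch of that idea follows, assuming exact matching of the non-protected feature vectors as the notion of "similar features" (the paper's actual similarity criterion is not given in the summary above, and the helper name `biased_instance_mask` is hypothetical).

```python
# Illustrative sketch: flag instances whose non-protected features also appear
# elsewhere with the opposite label, and drop them before training.
import numpy as np

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(1000, 4))     # non-protected features (binary for easy matching)
protected = rng.integers(0, 2, size=1000)  # protected attribute
# Label depends partly on the protected attribute -> conflicting duplicates appear.
y = ((X[:, 0] + X[:, 1] + protected) >= 2).astype(int)

def biased_instance_mask(features, labels):
    """True for rows whose feature vector also appears with the opposite label."""
    seen = {}
    for row, lab in zip(map(tuple, features), labels):
        seen.setdefault(row, set()).add(int(lab))
    return np.array([len(seen[tuple(row)]) > 1 for row in features])

mask = biased_instance_mask(X, y)
print(f"flagged {mask.sum()} of {len(y)} instances as label-conflicting")
X_clean, y_clean = X[~mask], y[~mask]
```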
- Semi-FairVAE: Semi-supervised Fair Representation Learning with Adversarial Variational Autoencoder [92.67156911466397]
We propose a semi-supervised fair representation learning approach based on adversarial variational autoencoder.
We use a bias-aware model to capture inherent bias information on the sensitive attributes.
We also use a bias-free model that learns debiased fair representations, applying adversarial learning to remove bias information from them.
arXiv Detail & Related papers (2022-04-01T15:57:47Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- You Can Still Achieve Fairness Without Sensitive Attributes: Exploring Biases in Non-Sensitive Features [29.94644351343916]
We propose a novel framework which simultaneously uses these related non-sensitive features for accurate prediction and regularizes the model to be fair.
Experimental results on real-world datasets demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2021-04-29T17:52:11Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while still learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
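The summary above only names the ingredients. As a hedged sketch of the Lagrangian-dual part alone (the differential-privacy mechanism is omitted, and the toy network, demographic-parity constraint, and hyper-parameters are illustrative assumptions rather than the paper's implementation), a fairness constraint can be folded into training by raising a dual multiplier whenever the constraint is violated:

```python
# Illustrative sketch: dual ascent on a fairness constraint during training.
import torch

torch.manual_seed(0)
n, d = 2000, 8
X = torch.randn(n, d)
group = (X[:, 0] > 0).float()
y = ((X @ torch.randn(d) + 0.8 * group) > 0).float()

model = torch.nn.Sequential(torch.nn.Linear(d, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = torch.nn.BCEWithLogitsLoss()

lam, eps, dual_lr = 0.0, 0.02, 0.5  # dual multiplier, allowed parity gap, dual step size

for step in range(300):
    logits = model(X).squeeze(1)
    probs = torch.sigmoid(logits)
    # Constraint: gap in mean predicted positive rate between groups <= eps.
    gap = (probs[group == 1].mean() - probs[group == 0].mean()).abs()
    loss = bce(logits, y) + lam * (gap - eps)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Dual ascent: raise lam when the constraint is violated, never below zero.
    lam = max(0.0, lam + dual_lr * (gap.item() - eps))

print(f"final parity gap: {gap.item():.3f}, lambda: {lam:.2f}")
```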