Developing a novel fair-loan-predictor through a multi-sensitive
debiasing pipeline: DualFair
- URL: http://arxiv.org/abs/2110.08944v1
- Date: Sun, 17 Oct 2021 23:13:43 GMT
- Title: Developing a novel fair-loan-predictor through a multi-sensitive
debiasing pipeline: DualFair
- Authors: Arashdeep Singh, Jashandeep Singh, Ariba Khan, and Amar Gupta
- Abstract summary: We create a novel bias mitigation technique called DualFair and develop a new fairness metric (i.e., AWI) that can handle MSPSO.
We test our novel mitigation method using a comprehensive U.S. mortgage lending dataset and show that our classifier, or fair loan predictor, obtains better fairness and accuracy metrics than current state-of-the-art models.
- Score: 2.149265948858581
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) models are increasingly used for high-stakes
applications that can greatly impact people's lives. Despite their use, these
models have the potential to be biased towards certain social groups on the
basis of race, gender, or ethnicity. Many prior works have attempted to
mitigate this "model discrimination" by updating the training data
(pre-processing), altering the model learning process (in-processing), or
manipulating model output (post-processing). However, these works have not yet
been extended to the realm of multi-sensitive parameters and sensitive options
(MSPSO), where sensitive parameters are attributes that can be discriminated
against (e.g., race) and sensitive options are the values a sensitive
parameter can take (e.g., Black or White), a gap that limits the real-world
usability of these works.
Prior work in fairness has also suffered from an accuracy-fairness tradeoff
that prevents accuracy and fairness from both being high at once. Moreover,
previous literature has failed to provide holistic fairness metrics that work
with MSPSO. In this paper, we solve all three of these problems by (a) creating
a novel bias mitigation technique called DualFair and (b) developing a new
fairness metric (i.e., AWI) that can handle MSPSO. Lastly, we test our novel
mitigation method on a comprehensive U.S. mortgage lending dataset and show
that our classifier, or fair loan predictor, obtains better fairness and
accuracy metrics than current state-of-the-art models.
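The abstract defines MSPSO but not the construction of AWI. Purely as an illustration of what auditing fairness across multiple sensitive parameters and their options can look like (not the paper's AWI metric), here is a minimal Python sketch; all names and data are hypothetical.

```python
# Illustrative MSPSO-style audit: per-option approval-rate disparities
# across several sensitive parameters at once. NOT the paper's AWI metric.
import numpy as np

def option_disparities(y_pred, sensitive):
    """Compare each sensitive option's approval rate to the overall rate.

    y_pred    : 1-D array of binary loan decisions (1 = approved)
    sensitive : dict mapping a sensitive parameter (e.g. "race") to a
                1-D array giving each applicant's option (e.g. "black")
    """
    overall = y_pred.mean()
    return {
        (param, opt): y_pred[options == opt].mean() - overall
        for param, options in sensitive.items()
        for opt in np.unique(options)
    }

# Hypothetical toy data with two sensitive parameters.
rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=1000)
sensitive = {
    "race":   rng.choice(["black", "white", "asian"], size=1000),
    "gender": rng.choice(["male", "female"], size=1000),
}
for (param, opt), gap in option_disparities(y_pred, sensitive).items():
    print(f"{param}={opt}: approval-rate gap {gap:+.3f}")
```

A debiasing pipeline that handles MSPSO would need every (parameter, option) gap to stay small at once, rather than balancing a single binary attribute.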
Related papers
- BMFT: Achieving Fairness via Bias-based Weight Masking Fine-tuning [17.857930204697983]
Bias-based Weight Masking Fine-Tuning (BMFT) is a novel post-processing method that enhances the fairness of a trained model in significantly fewer epochs.
BMFT produces a mask over model parameters, which efficiently identifies the weights contributing the most towards biased predictions.
Experiments across four dermatological datasets and two sensitive attributes demonstrate that BMFT outperforms existing state-of-the-art (SOTA) techniques in both diagnostic accuracy and fairness metrics.
arXiv Detail & Related papers (2024-08-13T13:36:48Z)
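As a loose sketch of the masking idea in the BMFT summary above (the abstract does not give the exact procedure, so the scoring rule, model, and threshold below are assumptions): score each weight by the gradient magnitude of a group-fairness gap, keep the top fraction as the mask, and fine-tune only those weights.

```python
# Hedged sketch of bias-based weight masking; inspired by, not identical
# to, BMFT. Requires a differentiable binary classifier `model`.
import torch

def fairness_gap(model, x, group):
    """Demographic-parity-style gap between two groups' mean scores."""
    scores = torch.sigmoid(model(x)).squeeze(-1)
    return (scores[group == 0].mean() - scores[group == 1].mean()).abs()

def build_bias_mask(model, x, group, top_frac=0.05):
    """Mask the top_frac of weights whose gradients most move the gap."""
    model.zero_grad()
    fairness_gap(model, x, group).backward()
    masks = {}
    for name, p in model.named_parameters():
        g = p.grad.abs()
        k = max(1, int(top_frac * g.numel()))
        thresh = g.flatten().topk(k).values.min()
        masks[name] = (g >= thresh).float()
    return masks

def masked_finetune_step(model, masks, loss, lr=1e-3):
    """One fine-tuning step that updates only the masked weights."""
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for name, p in model.named_parameters():
            p -= lr * p.grad * masks[name]   # mask==0 freezes a weight
```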
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Marginal Debiased Network for Fair Visual Recognition [59.05212866862219]
We propose a novel marginal debiased network (MDN) to learn debiased representations.
Our MDN can achieve a remarkable performance on under-represented samples.
arXiv Detail & Related papers (2024-01-04T08:57:09Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
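The FMD summary names three steps (identify, evaluate, remove) without detail. The snippet below sketches only a generic evaluation step, not FMD itself: flip the sensitive feature to its counterfactual value and measure how often predictions change. The column index and data are hypothetical.

```python
# Generic bias check (NOT FMD's method): counterfactually flip the
# sensitive feature and count changed predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
sens_col = 0                                   # hypothetical sensitive column
X[:, sens_col] = rng.integers(0, 2, size=1000)
y = (X[:, 1] + 0.8 * X[:, sens_col] > 0).astype(int)  # deliberately biased labels

clf = LogisticRegression().fit(X, y)

X_flip = X.copy()
X_flip[:, sens_col] = 1 - X_flip[:, sens_col]  # the "counterfactual world"
flip_rate = (clf.predict(X) != clf.predict(X_flip)).mean()
print(f"predictions changed by flipping the attribute: {flip_rate:.1%}")
```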
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
arXiv Detail & Related papers (2023-03-30T17:30:42Z)
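Of the two strategies mentioned in the summary above, reweighing is the simpler to sketch. The example below uses the classic Kamiran-Calders weighting (expected over observed cell frequency), a standard realization of reweighing rather than necessarily the paper's exact variant; the data and model are hypothetical.

```python
# Standard reweighing sketch (Kamiran-Calders style): weight each
# (group, label) cell so group and outcome look independent to the learner.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweighing_weights(group, y):
    """weight(g, c) = P(group=g) * P(y=c) / P(group=g, y=c)."""
    w = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for c in np.unique(y):
            cell = (group == g) & (y == c)
            w[cell] = (group == g).mean() * (y == c).mean() / cell.mean()
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 4))
group = rng.integers(0, 2, size=2000)
y = (X[:, 0] + 0.7 * group > 0.5).astype(int)  # outcome correlated with group

clf = LogisticRegression().fit(X, y, sample_weight=reweighing_weights(group, y))
```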
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
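The one-line summary leaves the module design open; one natural reading of "batch normalization adaptive to the sensitive attribute" is a separate BN branch per group, routed at forward time. The sketch below follows that assumption and is not the paper's exact layer.

```python
# Assumed design: one BatchNorm branch per sensitive group, selected per
# sample at forward time. A reading of the summary, not FairAdaBN itself.
import torch
import torch.nn as nn

class GroupAdaptiveBN2d(nn.Module):
    def __init__(self, num_features, num_groups):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_groups)
        )

    def forward(self, x, group):
        # Normalize each sample with its own group's statistics/affine.
        out = torch.empty_like(x)
        for g, bn in enumerate(self.bns):
            idx = group == g
            if idx.any():
                out[idx] = bn(x[idx])
        return out

# Hypothetical batch of feature maps from a dermatology model:
layer = GroupAdaptiveBN2d(num_features=16, num_groups=2)
x = torch.randn(8, 16, 32, 32)
group = torch.randint(0, 2, (8,))
out = layer(x, group)
```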
- A Differentiable Distance Approximation for Fairer Image Classification [31.471917430653626]
We propose a differentiable approximation of the variance of demographics, a metric that can be used to measure the bias, or unfairness, in an AI model.
Our approximation can be optimised alongside the regular training objective which eliminates the need for any extra models during training.
We demonstrate that our approach improves the fairness of AI models in varied task and dataset scenarios.
arXiv Detail & Related papers (2022-10-09T23:02:18Z)
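The idea of optimizing a differentiable bias measure alongside the task objective can be sketched directly; the penalty below (variance of per-group mean predictions) is one plausible form, not necessarily the paper's exact approximation.

```python
# One plausible differentiable group-variance penalty; the paper's exact
# approximation may differ. Assumes every group appears in the batch.
import torch
import torch.nn.functional as F

def demographic_variance(probs, group, num_groups):
    """Variance of per-group mean predicted probability (differentiable)."""
    means = torch.stack([probs[group == g].mean() for g in range(num_groups)])
    return means.var(unbiased=False)

def fair_loss(logits, targets, group, num_groups, lam=1.0):
    logits = logits.squeeze(-1)
    task = F.binary_cross_entropy_with_logits(logits, targets.float())
    penalty = demographic_variance(torch.sigmoid(logits), group, num_groups)
    return task + lam * penalty   # optimized jointly, no extra model needed
```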
- Fairness Reprogramming [42.65700878967251]
We propose a new generic fairness learning paradigm, called FairReprogram, which incorporates the model reprogramming technique.
Specifically, FairReprogram considers the case where models cannot be changed and appends to the input a set of perturbations, called the fairness trigger.
We show both theoretically and empirically that the fairness trigger can effectively obscure demographic biases in the output prediction of fixed ML models.
arXiv Detail & Related papers (2022-09-21T09:37:00Z)
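A minimal sketch of the trigger idea, under assumptions the summary does not pin down (additive input perturbation, demographic-parity penalty): the model is frozen and only the perturbation is trained.

```python
# Sketch in the FairReprogram spirit: frozen model, learnable additive
# input perturbation ("fairness trigger"). Losses and trigger placement
# are assumptions, not the paper's exact setup.
import torch
import torch.nn.functional as F

def train_trigger(model, loader, input_shape, epochs=5, lam=5.0, lr=1e-2):
    for p in model.parameters():
        p.requires_grad_(False)                    # the model stays fixed
    delta = torch.zeros(input_shape, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x, y, group in loader:                 # loader yields groups too
            logits = model(x + delta).squeeze(-1)  # trigger added to input
            probs = torch.sigmoid(logits)
            gap = (probs[group == 0].mean() - probs[group == 1].mean()).abs()
            loss = F.binary_cross_entropy_with_logits(logits, y.float()) + lam * gap
            opt.zero_grad()
            loss.backward()
            opt.step()
    return delta.detach()
```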
- Promoting Fairness through Hyperparameter Optimization [4.479834103607383]
This work explores, in the context of a real-world fraud detection application, the unfairness that emerges from traditional ML model development.
We propose and evaluate fairness-aware variants of three popular HO algorithms: Fair Random Search, Fair TPE, and Fairband.
We validate our approach on a real-world bank account opening fraud use case, as well as on three datasets from the fairness literature.
arXiv Detail & Related papers (2021-03-23T17:36:22Z)
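Of the three algorithms named, Fair Random Search is the simplest to sketch; the selection rule below (best validation accuracy subject to a fairness-gap cap) is an assumption, as are the model and search space.

```python
# Hedged sketch of fairness-aware random search: sample hyperparameters,
# keep the most accurate configuration whose fairness gap is acceptable.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dp_gap(y_pred, group):
    """Demographic-parity gap between two groups' positive rates."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def fair_random_search(X_tr, y_tr, X_va, y_va, g_va, trials=20, max_gap=0.05):
    rng, best = np.random.default_rng(0), None
    for _ in range(trials):
        params = {"n_estimators": int(rng.integers(50, 300)),
                  "max_depth": int(rng.integers(2, 12))}
        clf = RandomForestClassifier(**params, random_state=0).fit(X_tr, y_tr)
        pred = clf.predict(X_va)
        acc, gap = (pred == y_va).mean(), dp_gap(pred, g_va)
        if gap <= max_gap and (best is None or acc > best[0]):
            best = (acc, gap, params)
    return best   # None if no sampled config met the fairness cap
```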
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
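The pre-processing step described above can be sketched as plain confidence-thresholded pseudo-labeling; fairness reweighing (as in the earlier reweighing sketch) could then be applied to the combined set. The threshold and models are hypothetical.

```python
# Minimal pseudo-labeling sketch for the pre-processing phase; fairness
# reweighing of the combined set would follow in a full pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_and_train(X_lab, y_lab, X_unlab, confidence=0.9):
    base = LogisticRegression().fit(X_lab, y_lab)
    conf = base.predict_proba(X_unlab).max(axis=1)
    keep = conf >= confidence                  # keep only confident labels
    X_all = np.vstack([X_lab, X_unlab[keep]])
    y_all = np.concatenate([y_lab, base.predict(X_unlab)[keep]])
    return LogisticRegression().fit(X_all, y_all)
```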