Uncovering the Source of Machine Bias
- URL: http://arxiv.org/abs/2201.03092v1
- Date: Sun, 9 Jan 2022 21:05:24 GMT
- Title: Uncovering the Source of Machine Bias
- Authors: Xiyang Hu, Yan Huang, Beibei Li, Tian Lu
- Abstract summary: We find that two types of gender bias, preference-based bias and belief-based bias, are present in human evaluators' decisions.
We quantify the effect of gender bias on loan granting outcomes and the welfare of the company and the borrowers.
We find that machine learning algorithms can mitigate both the preference-based bias and the belief-based bias.
- Score: 9.75150920742607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a structural econometric model to capture the decision dynamics of
human evaluators on an online micro-lending platform, and estimate the model
parameters using a real-world dataset. We find that two types of gender bias,
preference-based bias and belief-based bias, are present in human evaluators'
decisions. Both types of bias favor female applicants. Through
counterfactual simulations, we quantify the effect of gender bias on loan
granting outcomes and the welfare of the company and the borrowers. Our results
imply that the existence of either bias reduces the company's profits: removing
the preference-based bias increases profits, and so does removing the
belief-based bias. Both increases result from
raising the approval probability for borrowers, especially male borrowers, who
eventually pay back loans. For borrowers, eliminating either bias narrows the
gender gap in true positive rates in the credit risk evaluation. We also train
machine learning algorithms on both the real-world
data and the data from the counterfactual simulations. We compare the decisions
made by those algorithms to see how evaluators' biases are inherited by the
algorithms and reflected in machine-based decisions. We find that machine
learning algorithms can mitigate both the preference-based bias and the
belief-based bias.
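
A minimal sketch of the final comparison step, under invented assumptions: train the same classifier once on observed (biased) approval decisions and once on counterfactually debiased decisions, then compare the gender gap each learned policy reproduces. All feature names, coefficients, and the synthetic data generator below are illustrative placeholders, not the paper's model or data.

```python
# Minimal sketch (not the paper's code): compare how much of the evaluators'
# gender bias a classifier inherits when trained on observed vs. counterfactual
# (debiased) approval decisions. The synthetic generator below is an invented
# stand-in for the paper's dataset and structural-model simulations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical applicant features (gender is included as a feature so that any
# inherited gender effect is directly visible in the trained model).
female = rng.integers(0, 2, n)
income = rng.normal(50.0, 15.0, n)
credit_history = rng.normal(0.0, 1.0, n)
merit = 0.03 * income + 0.8 * credit_history
noise = rng.normal(0.0, 1.0, n)

# Stand-in for observed human decisions (with a preference toward female
# applicants) and for counterfactual decisions with the gender term removed.
observed = (merit + 0.7 * female + noise > 2.3).astype(int)
counterfactual = (merit + noise > 2.3).astype(int)

X = np.column_stack([female, income, credit_history])
for name, y in [("observed (biased) labels", observed),
                ("counterfactual (debiased) labels", counterfactual)]:
    model = LogisticRegression(max_iter=1000).fit(X, y)
    pred = model.predict(X)
    gap = pred[female == 1].mean() - pred[female == 0].mean()
    print(f"trained on {name}: female-minus-male approval-rate gap = {gap:+.3f}")
```

In this toy setup the model trained on the observed labels reproduces most of the evaluators' gender gap while the counterfactually trained model does not; the direction and size of the gaps reported in the paper are determined by its estimated structural model and simulations, not by a generator like this one.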
Related papers
- Less can be more: representational vs. stereotypical gender bias in facial expression recognition [3.9698529891342207]
Machine learning models can inherit biases from their training data, leading to discriminatory or inaccurate predictions.
This paper investigates the propagation of demographic biases from datasets into machine learning models.
We focus on the gender demographic component, analyzing two types of bias: representational and stereotypical.
arXiv Detail & Related papers (2024-06-25T09:26:49Z)
- De-Biasing Models of Biased Decisions: A Comparison of Methods Using Mortgage Application Data [0.0]
This paper adds counterfactual (simulated) ethnic bias to real data on mortgage application decisions.
It shows that this bias is replicated by a machine learning model (XGBoost) even when ethnicity is not used as a predictive variable.
arXiv Detail & Related papers (2024-05-01T23:46:44Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Whole Page Unbiased Learning to Rank [59.52040055543542]
Unbiased Learning to Rank (ULTR) algorithms are proposed to learn an unbiased ranking model from biased click data.
We propose a Bias Agnostic whole-page unbiased Learning to rank algorithm, named BAL, to automatically find the user behavior model.
Experimental results on a real-world dataset verify the effectiveness of BAL.
arXiv Detail & Related papers (2022-10-19T16:53:08Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Understanding Unfairness in Fraud Detection through Model and Data Bias Interactions [4.159343412286401]
We argue that algorithmic unfairness stems from interactions between models and biases in the data.
We study a set of hypotheses regarding the fairness-accuracy trade-offs that fairness-blind ML algorithms exhibit under different data bias settings.
arXiv Detail & Related papers (2022-07-13T15:18:30Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- On the Basis of Sex: A Review of Gender Bias in Machine Learning Applications [0.0]
We first introduce several examples of machine learning gender bias in practice.
We then detail the most widely used formalizations of fairness in order to address how to make machine learning models fairer.
arXiv Detail & Related papers (2021-04-06T14:11:16Z)
- Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning [61.93730166203915]
We argue that traditional notions of fairness are not sufficient when the model is vulnerable to adversarial attacks.
We show that measuring robustness bias is a challenging task for DNNs and propose two methods to measure this form of bias.
arXiv Detail & Related papers (2020-06-17T22:22:24Z)