Societal biases reinforcement through machine learning: A credit scoring
perspective
- URL: http://arxiv.org/abs/2006.08350v2
- Date: Sat, 31 Oct 2020 12:48:35 GMT
- Title: Societal biases reinforcement through machine learning: A credit scoring
perspective
- Authors: Bertrand K. Hassani
- Abstract summary: This paper analyses whether machine learning and AI allow social biases to thrive. We analyse how social biases are transmitted from the data into banks' loan approvals by predicting either the gender or the ethnicity of the customers.
- Score: 38.437384481171804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Do machine learning and AI ensure that social biases thrive? This
paper aims to analyse this issue. Because algorithms are informed by data, if
these data are corrupted from a social-bias perspective, a good machine
learning algorithm will learn the patterns in the data provided and reproduce
them in its predictions, whether for classification or for regression. In
other words, the way society behaves, whether positively or negatively, is
necessarily reflected by the models. In this paper, we analyse how social
biases are transmitted from the data into banks' loan approvals by predicting
either the gender or the ethnicity of the customers using the exact same
information customers provide in their applications.
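The core experiment lends itself to a short illustration: if a classifier can recover a protected attribute from the supposedly neutral application features well above chance, those features act as proxies through which bias can pass into approval models. The sketch below is a minimal version of that test, assuming a hypothetical CSV and column names (gender, ethnicity, approved); it is not the paper's actual data or pipeline.

```python
# Proxy-leakage sketch: try to predict a protected attribute from the
# remaining loan-application features. File and column names are
# hypothetical, not taken from the paper.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("loan_applications.csv")  # assumed dataset

target = df["gender"]  # protected attribute to recover
features = pd.get_dummies(df.drop(columns=["gender", "ethnicity", "approved"]))

X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.3, random_state=0, stratify=target
)

clf = GradientBoostingClassifier().fit(X_train, y_train)
score = balanced_accuracy_score(y_test, clf.predict(X_test))

# Balanced accuracy well above 0.5 means the "neutral" features encode
# the protected attribute, so any model trained on them can carry the
# associated societal bias into its decisions.
print(f"balanced accuracy recovering gender: {score:.3f}")
```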
Related papers
- Fair Generalized Linear Mixed Models [0.0]
Fairness in machine learning aims to ensure that biases in the data and model inaccuracies do not lead to discriminatory decisions.
We present an algorithm that can handle both problems simultaneously.
arXiv Detail & Related papers (2024-05-15T11:42:41Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data, without a given causal model, by proposing a novel framework, CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find that deployed machine learning is prone to systemic failure, meaning some users are exclusively misclassified by all models available (a minimal computation of this failure rate is sketched after this list).
These examples demonstrate that ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
arXiv Detail & Related papers (2023-07-12T01:11:52Z)
- Investigating Bias with a Synthetic Data Generator: Empirical Evidence and Philosophical Interpretation [66.64736150040093]
Machine learning applications are becoming increasingly pervasive in our society.
The risk is that they will systematically spread the biases embedded in the data.
We propose to analyse biases by introducing a framework for generating synthetic data with specific types of bias and their combinations (one simple bias-injection pattern is sketched after this list).
arXiv Detail & Related papers (2022-09-13T11:18:50Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach to auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Understanding Unfairness in Fraud Detection through Model and Data Bias Interactions [4.159343412286401]
We argue that algorithmic unfairness stems from interactions between models and biases in the data.
We study a set of hypotheses regarding the fairness-accuracy trade-offs that fairness-blind ML algorithms exhibit under different data bias settings.
arXiv Detail & Related papers (2022-07-13T15:18:30Z)
- Representative & Fair Synthetic Data [68.8204255655161]
We present a framework to incorporate fairness constraints into the self-supervised learning process.
We generate a representative as well as fair version of the UCI Adult census data set.
We consider representative & fair synthetic data a promising future building block to teach algorithms not on historic worlds, but rather on the worlds that we strive to live in.
arXiv Detail & Related papers (2021-04-07T09:19:46Z)
- On the Basis of Sex: A Review of Gender Bias in Machine Learning Applications [0.0]
We first introduce several examples of machine learning gender bias in practice.
We then detail the most widely used formalizations of fairness in order to address how to make machine learning models fairer (two of these formalizations are sketched in code after this list).
arXiv Detail & Related papers (2021-04-06T14:11:16Z)
- Mitigating Gender Bias in Machine Learning Data Sets [5.075506385456811]
Gender bias has been identified in the context of employment advertising and recruitment tools.
This paper proposes a framework for the identification of gender bias in training data for machine learning.
arXiv Detail & Related papers (2020-05-14T12:06:02Z)
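As referenced from the ecosystem-level analysis entry above, the notion of systemic failure reduces to a simple computation: given the predictions of several independently deployed models, count the users that every model gets wrong. The sketch below uses made-up toy arrays, not the paper's evaluation code.

```python
# Systemic failure rate: fraction of users misclassified by ALL
# deployed models simultaneously. Toy data for illustration.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1])  # ground-truth labels
preds = np.array([                      # one row of predictions per model
    [1, 0, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 0, 0, 0, 0],
])

wrong = preds != y_true       # per-model, per-user errors
systemic = wrong.all(axis=0)  # users wrong under every model

# These users have no correct option anywhere in the ecosystem, which
# is the homogeneous-outcome effect the paper describes.
print(f"systemic failure rate: {systemic.mean():.2f}")
```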
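The synthetic-data entry above rests on injecting a chosen bias into generated data. The simplest instance is a label bias in which the outcome depends on a protected attribute on top of legitimate features; the sketch below shows that pattern with made-up coefficients, as an illustration of the general idea rather than the paper's generator.

```python
# Generate synthetic data with an injected label bias: the outcome
# depends on the protected attribute, not only on the merit feature.
# All coefficients are assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

income = rng.normal(50, 15, n)  # legitimate feature
group = rng.integers(0, 2, n)   # protected attribute (0 or 1)
bias_strength = 1.0             # knob: set to 0.0 to remove the bias

# Log-odds of a positive outcome: a merit term plus a biased penalty.
logit = 0.05 * (income - 50) - bias_strength * group
label = rng.random(n) < 1 / (1 + np.exp(-logit))

# The approval-rate gap between groups measures the injected bias.
gap = label[group == 0].mean() - label[group == 1].mean()
print(f"approval-rate gap injected into the data: {gap:.2f}")
```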
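Finally, the gender-bias review above points to the most widely used formalizations of fairness. Two of them, demographic parity and equalized odds, reduce to group-wise rate comparisons, as in the following sketch; the toy arrays and the protected-attribute encoding are assumptions.

```python
# Two common fairness formalizations expressed as group-wise rate gaps.
# Toy arrays; in practice these come from a model's test predictions.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute

# Demographic parity: P(pred = 1) should match across groups.
dp_gap = abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Equalized odds: true- and false-positive rates should match.
def tpr(g):
    return y_pred[(group == g) & (y_true == 1)].mean()

def fpr(g):
    return y_pred[(group == g) & (y_true == 0)].mean()

eo_gap = max(abs(tpr(0) - tpr(1)), abs(fpr(0) - fpr(1)))

print(f"demographic parity gap: {dp_gap:.2f}")
print(f"equalized odds gap:     {eo_gap:.2f}")
```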