Fair Models in Credit: Intersectional Discrimination and the
Amplification of Inequity
- URL: http://arxiv.org/abs/2308.02680v1
- Date: Tue, 1 Aug 2023 10:34:26 GMT
- Title: Fair Models in Credit: Intersectional Discrimination and the
Amplification of Inequity
- Authors: Savina Kim and Stefan Lessmann and Galina Andreeva and Michael
Rovatsos
- Abstract summary: The authors demonstrate the impact of algorithmic bias in the microfinance context.
We find that in addition to legally protected characteristics, sensitive attributes such as single parent status and number of children can result in imbalanced harm.
- Score: 5.333582981327497
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing use of new data sources and machine learning (ML)
technology in credit modeling raises concerns about potentially unfair
decision-making that relies on protected characteristics (e.g., race, sex,
age) or other socio-economic and demographic data. We demonstrate the impact
of such algorithmic bias in the microfinance context. Difficulties in
accessing credit are disproportionately experienced by vulnerable groups, yet
very little is known about inequities in credit allocation between groups
defined not only by single, but by multiple and intersecting social
categories. Drawing from the intersectionality paradigm, the study examines
intersectional horizontal inequities in credit access by gender, age, marital
status, single parent status, and number of children. This paper uses data
from the Spanish microfinance market to demonstrate how pluralistic realities
and intersectional identities can shape patterns of credit allocation when
automated decision-making systems are used. Because ML technology is
oblivious to societal good or bad, we find that a more thorough examination
of intersectionality can enhance the algorithmic fairness lens, more
authentically empower action for equitable outcomes, and present a fairer
path forward. We demonstrate that while fairness may appear to hold
superficially at a high level, unfairness can be exacerbated at lower levels
by combinatorial effects; in other words, the core fairness problem may be
more complicated than the current literature demonstrates. We find that, in
addition to legally protected characteristics, sensitive attributes such as
single parent status and number of children can result in imbalanced harm.
We discuss the implications of these findings for the financial services
industry.
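To make the intersectional claim concrete, here is a minimal, hypothetical
audit sketch (not the authors' code, data, or methodology): given a table of
automated credit decisions with socio-demographic columns, it compares the
approval-rate gap measured per single attribute with the gap measured over
intersectional subgroups, illustrating how marginal "fairness" can coexist
with a large subgroup-level disparity. Column names such as gender,
single_parent, and approved are assumptions made for illustration.

```python
import pandas as pd

def rate_gap(df, group_cols, outcome="approved", min_size=30):
    """Largest difference in approval rates across groups defined by
    group_cols. Pass one column for a marginal view or a list of columns
    for an intersectional view. Groups smaller than min_size are dropped
    so tiny subgroups do not produce noisy rates."""
    rates = df.groupby(group_cols)[outcome].agg(rate="mean", n="size")
    rates = rates[rates["n"] >= min_size]
    return rates["rate"].max() - rates["rate"].min(), rates

def make_block(gender, single_parent, n, approval_rate):
    """Helper: n applicants from one subgroup with a given approval rate."""
    approved = round(n * approval_rate)
    return pd.DataFrame({"gender": gender, "single_parent": single_parent,
                         "approved": [1] * approved + [0] * (n - approved)})

# Hypothetical decisions: the marginal gender gap is zero, but the
# (female, single parent) intersection is treated much worse.
df = pd.concat([
    make_block("F", 0, 400, 0.80),
    make_block("F", 1, 100, 0.40),
    make_block("M", 0, 400, 0.70),
    make_block("M", 1, 100, 0.80),
], ignore_index=True)

gender_gap, _ = rate_gap(df, "gender")                           # marginal view
inter_gap, by_group = rate_gap(df, ["gender", "single_parent"])  # intersections
print(f"gender gap: {gender_gap:.2f}")         # 0.00
print(f"intersectional gap: {inter_gap:.2f}")  # 0.40
print(by_group)
```

The same check extends to the other attributes the paper studies (age,
marital status, number of children); the point is only that auditing each
protected attribute in isolation can certify a fairness that disappears once
intersections of attributes are examined.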
Related papers
- GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models [73.23743278545321]
Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but have also been observed to magnify societal biases.
GenderCARE is a comprehensive framework that encompasses innovative Criteria, bias Assessment, Reduction techniques, and Evaluation metrics.
arXiv Detail & Related papers (2024-08-22T15:35:46Z)
- Group fairness without demographics using social networks [29.073125057536014]
Group fairness is a popular approach to prevent unfavorable treatment of individuals based on sensitive attributes such as race, gender, and disability.
We propose a "group-free" measure of fairness that does not rely on sensitive attributes and, instead, is based on homophily in social networks.
arXiv Detail & Related papers (2023-05-19T00:45:55Z)
- Equalizing Credit Opportunity in Algorithms: Aligning Algorithmic Fairness Research with U.S. Fair Lending Regulation [27.517669481719388]
Credit is an essential component of financial wellbeing in America.
Machine learning algorithms are increasingly being used to determine access to credit.
Research has shown that machine learning can encode many different versions of "unfairness".
arXiv Detail & Related papers (2022-10-05T19:23:29Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Towards A Holistic View of Bias in Machine Learning: Bridging Algorithmic Fairness and Imbalanced Learning [8.602734307457387]
A key element in achieving algorithmic fairness with respect to protected groups is the simultaneous reduction of class and protected group imbalance in the underlying training data.
We propose a novel oversampling algorithm, Fair Oversampling, that addresses both skewed class distributions and protected features; a generic resampling sketch in this spirit appears after this list.
arXiv Detail & Related papers (2022-07-13T09:48:52Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Characterizing Intersectional Group Fairness with Worst-Case Comparisons [0.0]
We discuss why fairness metrics need to be examined through the lens of intersectionality.
We suggest a simple worst case comparison method to expand the definitions of existing group fairness metrics.
We conclude with the social, legal and political framework to handle intersectional fairness in the modern context.
arXiv Detail & Related papers (2021-01-05T17:44:33Z)
- Causal Multi-Level Fairness [4.937180141196767]
We formalize the problem of multi-level fairness using tools from causal inference.
We show the importance of the problem by illustrating residual unfairness if macro-level sensitive attributes are not accounted for.
arXiv Detail & Related papers (2020-10-14T18:26:17Z)
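As a companion to the Fair Oversampling entry above, the sketch below shows
the general idea of jointly rebalancing class labels and protected groups by
upsampling every (group, label) cell of the training data to the size of the
largest cell. This is a naive, duplication-based illustration under assumed
column names (group, label), not the Fair Oversampling algorithm itself.

```python
import pandas as pd

def balanced_oversample(df, group_col, label_col, random_state=0):
    """Upsample every (protected group, class label) cell to the size of the
    largest cell, so that both the class distribution and the group
    distribution are balanced in the resampled training data."""
    target = df.groupby([group_col, label_col]).size().max()
    parts = [cell.sample(target, replace=True, random_state=random_state)
             for _, cell in df.groupby([group_col, label_col])]
    return pd.concat(parts, ignore_index=True)

# Hypothetical skewed training set: positives are rare and group B is small.
train = pd.DataFrame({
    "group": ["A"] * 90 + ["B"] * 10,
    "label": [1] * 10 + [0] * 80 + [1] * 2 + [0] * 8,
})
balanced = balanced_oversample(train, "group", "label")
print(balanced.groupby(["group", "label"]).size())  # every cell now has 80 rows
```

Duplication-based upsampling is only the simplest instance of the idea; the
paper's method is a more refined oversampling scheme, and interpolation-based
variants (e.g., SMOTE-style synthesis) are common in practice.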
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.