Fairness in Credit Scoring: Assessment, Implementation and Profit Implications
- URL: http://arxiv.org/abs/2103.01907v1
- Date: Tue, 2 Mar 2021 18:06:44 GMT
- Title: Fairness in Credit Scoring: Assessment, Implementation and Profit Implications
- Authors: Nikita Kozodoi, Johannes Jacob, Stefan Lessmann
- Abstract summary: We show that algorithmic discrimination can be reduced to a reasonable level at a relatively low cost.
We find that multiple fairness criteria can be approximately satisfied at once and identify separation as a proper criterion for measuring the fairness of a scorecard.
- Score: 4.19608893667939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rise of algorithmic decision-making has spawned much research on fair
machine learning (ML). Financial institutions use ML for building risk
scorecards that support a range of credit-related decisions. Yet, the
literature on fair ML in credit scoring is scarce. The paper makes two
contributions. First, we provide a systematic overview of algorithmic options
for incorporating fairness goals in the ML model development pipeline. In this
scope, we also consolidate the space of statistical fairness criteria and
examine their adequacy for credit scoring. Second, we perform an empirical
study of different fairness processors in a profit-oriented credit scoring
setup using seven real-world data sets. The empirical results substantiate the
evaluation of fairness measures, identify more and less suitable options to
implement fair credit scoring, and clarify the profit-fairness trade-off in
lending decisions. Specifically, we find that multiple fairness criteria can be
approximately satisfied at once and identify separation as a proper criterion
for measuring the fairness of a scorecard. We also find fair in-processors to
deliver a good balance between profit and fairness. More generally, we show
that algorithmic discrimination can be reduced to a reasonable level at a
relatively low cost.
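The abstract's key quantities are easy to make concrete. Below is a minimal Python sketch (not the paper's code) that measures the between-group gaps for the three statistical fairness criteria the paper consolidates: independence (equal acceptance rates), separation (equal true- and false-positive rates, i.e. equalized odds, the criterion the paper identifies as proper for scorecards), and sufficiency (equal precision among accepted applicants). The synthetic data, threshold, and group encoding are illustrative assumptions.

```python
import numpy as np

def fairness_gaps(y_true, y_score, group, threshold=0.5):
    """Between-group gaps for three statistical fairness criteria.

    independence: gap in acceptance rates (demographic parity)
    separation:   worst gap in TPR/FPR (equalized odds)
    sufficiency:  gap in precision among accepted applicants
    """
    y_hat = (y_score >= threshold).astype(int)  # 1 = accept the applicant
    rates = {"accept": [], "tpr": [], "fpr": [], "prec": []}
    for g in np.unique(group):
        m = group == g
        rates["accept"].append(y_hat[m].mean())
        rates["tpr"].append(y_hat[m & (y_true == 1)].mean())
        rates["fpr"].append(y_hat[m & (y_true == 0)].mean())
        rates["prec"].append(y_true[m & (y_hat == 1)].mean())
    return {
        "independence": np.ptp(rates["accept"]),
        "separation": max(np.ptp(rates["tpr"]), np.ptp(rates["fpr"])),
        "sufficiency": np.ptp(rates["prec"]),
    }

# Illustrative synthetic data: y_true = 1 means the borrower repays
rng = np.random.default_rng(0)
n = 1000
y_true = rng.integers(0, 2, n)
group = rng.integers(0, 2, n)                   # protected attribute
y_score = np.clip(0.25 + 0.5 * y_true + rng.normal(0, 0.2, n), 0, 1)
print(fairness_gaps(y_true, y_score, group))
```

A scorecard approximately satisfies separation when both the TPR and FPR gaps are small; the paper's finding is that driving these gaps down costs relatively little profit.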
Related papers
- Intrinsic Fairness-Accuracy Tradeoffs under Equalized Odds [8.471466670802817]
We study the tradeoff between fairness and accuracy under the statistical notion of equalized odds.
We present a new upper bound on the accuracy as a function of the fairness budget.
Our results show that achieving high accuracy under a low-bias constraint can be fundamentally limited by the statistical disparity across groups.
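For reference, equalized odds requires the decision to be conditionally independent of the group given the true label; a relaxed version with a fairness budget epsilon (this epsilon-notation is an assumption for illustration, not necessarily the paper's) can be written as:

```latex
% Exact equalized odds
P(\hat{Y}=1 \mid A=a,\, Y=y) \;=\; P(\hat{Y}=1 \mid A=a',\, Y=y)
\qquad \forall\, a, a',\; y \in \{0,1\}

% Relaxed form with fairness budget \epsilon (notation assumed)
\max_{y \in \{0,1\}} \bigl| P(\hat{Y}=1 \mid A=a,\, Y=y) - P(\hat{Y}=1 \mid A=a',\, Y=y) \bigr| \;\le\; \epsilon
```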
arXiv Detail & Related papers (2024-05-12T23:15:21Z)
- A Distributionally Robust Optimisation Approach to Fair Credit Scoring [2.8851756275902467]
Credit scoring has been catalogued by the European Commission and the Executive Office of the US President as a high-risk classification task.
To address this concern, recent credit scoring research has considered a range of fairness-enhancing techniques.
arXiv Detail & Related papers (2024-02-02T11:43:59Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pre-training (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Empowering Many, Biasing a Few: Generalist Credit Scoring through Large Language Models [53.620827459684094]
Large Language Models (LLMs) have great potential for credit scoring, with strong generalization ability across multiple tasks.
We propose the first open-source comprehensive framework for exploring LLMs for credit scoring.
We then propose the first Credit and Risk Assessment Large Language Model (CALM) by instruction tuning, tailored to the nuanced demands of various financial risk assessment tasks.
arXiv Detail & Related papers (2023-10-01T03:50:34Z)
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods to ensure different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- FaiREE: Fair Classification with Finite-Sample and Distribution-Free Guarantee [40.10641140860374]
FaiREE is a fair classification algorithm that can satisfy group fairness constraints with finite-sample and distribution-free theoretical guarantees.
FaiREE is shown to have favorable performance over state-of-the-art algorithms.
arXiv Detail & Related papers (2022-11-28T05:16:20Z)
- Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models [73.24381010980606]
This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the IRS.
We show how the use of more flexible machine learning methods for selecting audits may affect vertical equity.
Our results have implications for the design of algorithmic tools across the public sector.
arXiv Detail & Related papers (2022-06-20T16:27:06Z)
- The Fairness of Credit Scoring Models [0.0]
In credit markets, screening algorithms aim to discriminate between good-type and bad-type borrowers, yet in doing so they may also discriminate between individuals sharing a protected attribute and the rest of the population. This can be unintentional and originate from the training dataset or from the model itself.
We show how to formally test the algorithmic fairness of scoring models and how to identify the variables responsible for any lack of fairness.
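As a generic illustration of such a formal test, the sketch below runs a two-proportion z-test for equal true-positive rates across two groups; this is a standard construction assumed for illustration, not necessarily the test statistic proposed in the paper.

```python
import numpy as np
from scipy.stats import norm

def tpr_equality_test(y_true, y_hat, group):
    """Two-proportion z-test of H0: TPR is equal across the two groups.

    Generic illustration of formally testing a fairness metric; not
    necessarily the test statistic used in the paper.
    """
    tprs, ns = [], []
    for g in (0, 1):
        pos = (group == g) & (y_true == 1)   # positives in group g
        tprs.append(y_hat[pos].mean())
        ns.append(pos.sum())
    pooled = (tprs[0] * ns[0] + tprs[1] * ns[1]) / (ns[0] + ns[1])
    se = np.sqrt(pooled * (1 - pooled) * (1 / ns[0] + 1 / ns[1]))
    z = (tprs[0] - tprs[1]) / se
    return z, 2 * norm.sf(abs(z))            # z statistic, two-sided p
```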
arXiv Detail & Related papers (2022-05-20T14:20:40Z)
- Making ML models fairer through explanations: the case of LimeOut [7.952582509792971]
Algorithmic decisions are now being made on a daily basis, based on Machine Learning (ML) processes that may be complex and biased.
This raises several concerns given the critical impact that biased decisions may have on individuals or on society as a whole.
We show how the simple idea of "feature dropout" followed by an "ensemble approach" can improve model fairness.
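A minimal sketch of this "feature dropout" plus "ensemble" idea, assuming scikit-learn and a precomputed list suspect_cols of feature indices flagged as problematic (LimeOut obtains these from LIME explanations; that step is omitted here):

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def dropout_ensemble(X, y, suspect_cols, base=None):
    """Train one model per dropped suspect feature, then average predictions.

    Sketch of the "feature dropout" + "ensemble" idea; identifying
    suspect_cols (done with LIME explanations in LimeOut) is not shown.
    """
    base = base or LogisticRegression(max_iter=1000)
    models = []
    for col in suspect_cols:
        keep = [j for j in range(X.shape[1]) if j != col]  # drop one feature
        models.append((keep, clone(base).fit(X[:, keep], y)))

    def predict_proba(X_new):
        # Average the class-1 probabilities over the ensemble members
        return np.mean([m.predict_proba(X_new[:, keep])[:, 1]
                        for keep, m in models], axis=0)

    return predict_proba
```

The intent is that averaging models, each blind to one suspect feature, dilutes the influence of any single sensitive feature on the final prediction.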
arXiv Detail & Related papers (2020-11-01T19:07:11Z)
- Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
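One simple way, sketched below under the assumption that a sensitive direction can be estimated with a logistic regression on the protected attribute, is to measure distances only in the subspace orthogonal to that direction; the paper's exact estimation procedures may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def learn_fair_metric(X, sensitive):
    """Fair metric that ignores the direction predictive of the
    sensitive attribute (sensitive-subspace sketch; the paper's
    exact estimation procedures may differ)."""
    w = LogisticRegression(max_iter=1000).fit(X, sensitive).coef_[0]
    w = w / np.linalg.norm(w)                # unit "sensitive" direction
    P = np.eye(X.shape[1]) - np.outer(w, w)  # project onto its complement

    def distance(x1, x2):
        d = P @ (x1 - x2)                    # drop the sensitive component
        return float(np.sqrt(d @ d))

    return distance
```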
arXiv Detail & Related papers (2020-06-19T23:47:15Z)