A Distributionally Robust Optimisation Approach to Fair Credit Scoring
- URL: http://arxiv.org/abs/2402.01811v1
- Date: Fri, 2 Feb 2024 11:43:59 GMT
- Title: A Distributionally Robust Optimisation Approach to Fair Credit Scoring
- Authors: Pablo Casas, Christophe Mues, Huan Yu
- Abstract summary: Credit scoring has been catalogued by the European Commission and the Executive Office of the US President as a high-risk classification task.
To address this concern, recent credit scoring research has considered a range of fairness-enhancing techniques.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Credit scoring has been catalogued by the European Commission and the
Executive Office of the US President as a high-risk classification task, a key
concern being the potential harms of making loan approval decisions based on
models that would be biased against certain groups. To address this concern,
recent credit scoring research has considered a range of fairness-enhancing
techniques put forward by the machine learning community to reduce bias and
unfair treatment in classification systems. While the definition of fairness or
the approach used to impose it may vary, most of these techniques disregard the
robustness of the results. This can create situations
where unfair treatment is effectively corrected in the training set, but when
producing out-of-sample classifications, unfair treatment is incurred again.
Instead, in this paper, we will investigate how to apply Distributionally
Robust Optimisation (DRO) methods to credit scoring, thereby empirically
evaluating how they perform in terms of fairness, ability to classify
correctly, and the robustness of the solution against changes in the marginal
proportions. In so doing, we find DRO methods to provide a substantial
improvement in terms of fairness, with almost no loss in performance. These
results thus indicate that DRO can improve fairness in credit scoring, provided
that further advances are made in efficiently implementing these systems. In
addition, our analysis suggests that many of the commonly used fairness metrics
are unsuitable for a credit scoring setting, as they depend on the choice of
classification threshold.
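The paper's final observation, that threshold-dependent fairness metrics are ill-suited to credit scoring, can be illustrated with a minimal sketch. The group score vectors below are hypothetical (not from the paper); the point is that the same scorecard can satisfy statistical parity at one approval cut-off and violate it at another.

```python
import numpy as np

# Hypothetical credit scores for applicants from two groups (illustrative only).
scores_a = np.array([0.2, 0.4, 0.6, 0.8])
scores_b = np.array([0.3, 0.5, 0.7, 0.9])

def statistical_parity_diff(scores_a, scores_b, threshold):
    """Absolute difference in approval rates between two groups when
    applicants with score >= threshold are approved."""
    rate_a = np.mean(scores_a >= threshold)
    rate_b = np.mean(scores_b >= threshold)
    return abs(rate_a - rate_b)

# The same scores yield different fairness readings at different cut-offs:
print(statistical_parity_diff(scores_a, scores_b, 0.35))  # 0.0  (parity holds)
print(statistical_parity_diff(scores_a, scores_b, 0.50))  # 0.25 (parity violated)
```

Because lenders routinely move the cut-off to manage portfolio risk, a metric whose value changes with the threshold gives an unstable picture of whether the underlying scorecard is fair.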
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
- Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC) that aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level.
arXiv Detail & Related papers (2023-11-03T21:19:59Z)
- Fairness in Ranking under Disparate Uncertainty [24.401219403555814]
We argue that ranking can introduce unfairness if the uncertainty of the underlying relevance model differs between groups of options.
We propose Equal-Opportunity Ranking (EOR) as a new fairness criterion for ranking.
We show that EOR corresponds to a group-wise fair lottery among the relevant options even in the presence of disparate uncertainty.
arXiv Detail & Related papers (2023-09-04T13:49:48Z)
- Inference-time Stochastic Ranking with Risk Control [19.20938164194589]
Learning to Rank methods are vital in online economies, affecting users and item providers.
We propose a novel method that performs ranking at inference time with guaranteed utility or fairness given pretrained scoring functions.
arXiv Detail & Related papers (2023-06-12T15:44:58Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- RAGUEL: Recourse-Aware Group Unfairness Elimination [2.720659230102122]
'Algorithmic recourse' offers feasible recovery actions to change unwanted outcomes.
We introduce the notion of ranked group-level recourse fairness.
We develop a 'recourse-aware ranking' solution that satisfies ranked recourse fairness constraints.
arXiv Detail & Related papers (2022-08-30T11:53:38Z)
- The Fairness of Credit Scoring Models [0.0]
In credit markets, screening algorithms aim to discriminate between good-type and bad-type borrowers, yet they may also discriminate between protected groups. This can be unintentional and originate from the training dataset or from the model itself.
We show how to formally test the algorithmic fairness of scoring models and how to identify the variables responsible for any lack of fairness.
arXiv Detail & Related papers (2022-05-20T14:20:40Z)
- Fairness in Credit Scoring: Assessment, Implementation and Profit Implications [4.19608893667939]
We show that algorithmic discrimination can be reduced to a reasonable level at a relatively low cost.
We find that multiple fairness criteria can be approximately satisfied at once and identify separation as a proper criterion for measuring the fairness of a scorecard.
arXiv Detail & Related papers (2021-03-02T18:06:44Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- On Positive-Unlabeled Classification in GAN [130.43248168149432]
This paper defines a positive and unlabeled classification problem for standard GANs.
It then leads to a novel technique to stabilize the training of the discriminator in GANs.
arXiv Detail & Related papers (2020-02-04T05:59:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.