Identifying, measuring, and mitigating individual unfairness for
supervised learning models and application to credit risk models
- URL: http://arxiv.org/abs/2211.06106v1
- Date: Fri, 11 Nov 2022 10:20:46 GMT
- Title: Identifying, measuring, and mitigating individual unfairness for
supervised learning models and application to credit risk models
- Authors: Rasoul Shahsavarifar, Jithu Chandran, Mario Inchiosa, Amit Deshpande,
Mario Schlener, Vishal Gossain, Yara Elias, Vinaya Murali
- Abstract summary: We focus on identifying and mitigating individual unfairness in AI solutions.
We also investigate the extent to which techniques for achieving individual fairness are effective at achieving group fairness.
Some experimental results corresponding to the individual unfairness mitigation techniques are presented.
- Score: 3.818578543491318
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In the past few years, Artificial Intelligence (AI) has garnered attention
from various industries including financial services (FS). AI has made a
positive impact in financial services by enhancing productivity and improving
risk management. While AI can offer efficient solutions, it has the potential
to bring unintended consequences. One such consequence is the pronounced effect
of AI-related unfairness and attendant fairness-related harms. These
fairness-related harms could involve differential treatment of individuals; for
example, unfairly denying a loan to certain individuals or groups of
individuals. In this paper, we focus on identifying and mitigating individual
unfairness and leveraging some of the recently published techniques in this
domain, especially as applicable to the credit adjudication use case. We also
investigate the extent to which techniques for achieving individual fairness
are effective at achieving group fairness. Our main contribution in this work
is functionalizing a two-step training process which involves learning a fair
similarity metric in a group sense using a small portion of the raw data and
training an individually "fair" classifier using the rest of the data where the
sensitive features are excluded. The key characteristic of this two-step
technique is related to its flexibility, i.e., the fair metric obtained in the
first step can be used with any other individual fairness algorithms in the
second step. Furthermore, we developed a second metric (distinct from the fair
similarity metric) to determine how fairly a model is treating similar
individuals. We use this metric to compare a "fair" model against its baseline
model in terms of their individual fairness value. Finally, some experimental
results corresponding to the individual unfairness mitigation techniques are
presented.
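A minimal sketch of the two-step recipe described above, under stated assumptions: the fair metric is obtained here by projecting out the direction most predictive of the sensitive attribute (one common way to build a group-informed Mahalanobis metric; the paper's exact construction may differ), and the downstream model is an ordinary logistic regression trained with the sensitive column dropped. Data, column indices, and model choices are illustrative, not the authors'.

```python
# Hedged sketch of the two-step process summarized in the abstract.
# Step 1: a small split (which still sees the sensitive attribute) is used to
#         learn a Mahalanobis-style fair similarity metric.
# Step 2: a classifier is trained on the remaining data with the sensitive
#         feature excluded. All data below are toy assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def learn_fair_metric(X_ns: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Return a PSD matrix M so that d(x, x')^2 = (x - x')^T M (x - x')
    ignores movement along the direction most predictive of the
    sensitive attribute s (a simple 'sensitive subspace' construction)."""
    probe = LogisticRegression(max_iter=1000).fit(X_ns, s)
    v = probe.coef_.ravel()
    v = v / np.linalg.norm(v)
    return np.eye(X_ns.shape[1]) - np.outer(v, v)   # project the direction out


def fair_distance(M: np.ndarray, x: np.ndarray, x_prime: np.ndarray) -> float:
    diff = x - x_prime
    return float(np.sqrt(diff @ M @ diff))


# Toy credit-style data: column 0 plays the role of the sensitive attribute.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
X[:, 0] = (X[:, 0] > 0).astype(float)                      # sensitive feature
y = (X[:, 1] + X[:, 2] + 0.5 * X[:, 0] > 0).astype(int)    # loan outcome

X_small, X_rest, y_small, y_rest = train_test_split(
    X, y, train_size=0.1, random_state=0)

# Step 1: learn the fair metric on the small split (sensitive attribute used only here).
M = learn_fair_metric(np.delete(X_small, 0, axis=1), X_small[:, 0])

# Step 2: train the classifier on the rest with the sensitive column removed.
X_rest_ns = np.delete(X_rest, 0, axis=1)
model = LogisticRegression(max_iter=1000).fit(X_rest_ns, y_rest)
```

Because the metric M lives on the non-sensitive feature space, it can in principle be handed to any individual-fairness training algorithm in the second step, which is the flexibility the abstract highlights; `fair_distance` is what such an algorithm, or an audit metric, would consume.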
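The abstract also mentions a second metric for judging how consistently a model treats individuals who are similar under the learned fair metric, without giving its definition. The snippet below is one plausible consistency-style stand-in (agreement of each individual's prediction with its nearest neighbours under M), intended only to show how a "fair" model and its baseline could be compared; it is not the authors' formula.

```python
# Hypothetical consistency-style individual fairness score (not the paper's
# exact metric): fraction of k-nearest-neighbour pairs, measured with the
# fair metric M, that receive the same prediction.
import numpy as np


def consistency_score(predict, X, M, k=10):
    """predict: callable mapping an (n, d) array to n hard predictions;
    X: individuals in the non-sensitive feature space; M: fair metric."""
    diffs = X[:, None, :] - X[None, :, :]              # O(n^2) pairwise diffs, fine for a sketch
    d2 = np.einsum("ijk,kl,ijl->ij", diffs, M, diffs)  # squared fair distances
    np.fill_diagonal(d2, np.inf)                       # ignore self-pairs
    nbrs = np.argsort(d2, axis=1)[:, :k]               # k nearest under M
    preds = predict(X)
    return float(np.mean(preds[nbrs] == preds[:, None]))


# e.g., reusing M, model, X_rest_ns from the previous sketch:
#   consistency_score(model.predict, X_rest_ns, M)
# and comparing against the same score computed for a baseline model.
```

A score of 1.0 means similar applicants always receive the same decision; the abstract's comparison of the "fair" model against its baseline would then be a comparison of two such values.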
Related papers
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pre-training (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop (a rough sketch of such a ratio appears after this list).
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Understanding Unfairness in Fraud Detection through Model and Data Bias Interactions [4.159343412286401]
We argue that algorithmic unfairness stems from interactions between models and biases in the data.
We study a set of hypotheses regarding the fairness-accuracy trade-offs that fairness-blind ML algorithms exhibit under different data bias settings.
arXiv Detail & Related papers (2022-07-13T15:18:30Z)
- Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features (see the sketch after this list).
arXiv Detail & Related papers (2022-03-12T02:22:58Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- FAIR: Fair Adversarial Instance Re-weighting [0.7829352305480285]
We propose a Fair Adversarial Instance Re-weighting (FAIR) method, which uses adversarial training to learn an instance weighting function that ensures fair predictions.
To the best of our knowledge, this is the first model that merges reweighting and adversarial approaches by means of a weighting function that can provide interpretable information about fairness of individual instances.
arXiv Detail & Related papers (2020-11-15T10:48:56Z)
- Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
- Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning [8.436127109155008]
Individuals, as well as organisations, notice, test, and criticize unfair results to hold model designers and deployers accountable.
We offer a framework that assists these groups in mitigating unfair representations stemming from the training datasets.
Our framework relies on two inter-operating adversaries to improve fairness.
arXiv Detail & Related papers (2020-05-14T10:10:19Z)
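Regarding the FATE metric mentioned in the FairAdaBN entry above: its exact formula is defined in that paper, so the snippet below only illustrates one plausible reading of "normalized fairness improvement over accuracy drop" as a ratio; the function name and epsilon guard are assumptions.

```python
# Illustrative fairness/accuracy trade-off ratio in the spirit of the FATE
# description above (not the FairAdaBN paper's exact formula).
def tradeoff_efficiency(unfair_base, unfair_model, acc_base, acc_model, eps=1e-8):
    """unfair_*: an unfairness criterion (lower is better);
    acc_*: accuracy of the baseline and of the mitigated model."""
    fairness_gain = (unfair_base - unfair_model) / (unfair_base + eps)  # normalized improvement
    accuracy_drop = (acc_base - acc_model) / (acc_base + eps)           # normalized cost
    return fairness_gain / (accuracy_drop + eps)


# Example: unfairness 0.20 -> 0.08 while accuracy falls 0.90 -> 0.88 gives
# roughly 0.60 / 0.022, i.e. a large fairness gain per unit of accuracy lost.
```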
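And for the augmented discriminator referenced in the equal-opportunity entry above, here is a minimal PyTorch sketch of the idea of letting the adversary see the target class alongside the representation; layer sizes and names are hypothetical, not the cited paper's architecture.

```python
# Hypothetical sketch of an adversary that is conditioned on the target class,
# so it can model class-conditional bias (equal-opportunity style).
import torch
import torch.nn as nn


class AugmentedDiscriminator(nn.Module):
    def __init__(self, repr_dim: int, num_classes: int, num_protected: int, hidden: int = 64):
        super().__init__()
        self.num_classes = num_classes
        self.net = nn.Sequential(
            nn.Linear(repr_dim + num_classes, hidden),  # representation + one-hot target class
            nn.ReLU(),
            nn.Linear(hidden, num_protected),           # logits over the protected attribute
        )

    def forward(self, h: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        y_onehot = nn.functional.one_hot(y, self.num_classes).float()
        return self.net(torch.cat([h, y_onehot], dim=-1))
```

In a typical adversarial setup, the main model is trained to keep this adversary at chance while the adversary minimizes its own classification loss.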