Unbiased Decisions Reduce Regret: Adversarial Domain Adaptation for the
Bank Loan Problem
- URL: http://arxiv.org/abs/2308.08051v1
- Date: Tue, 15 Aug 2023 21:35:44 GMT
- Title: Unbiased Decisions Reduce Regret: Adversarial Domain Adaptation for the
Bank Loan Problem
- Authors: Elena Gal, Shaun Singh, Aldo Pacchiano, Ben Walker, Terry Lyons, Jakob
Foerster
- Abstract summary: We focus on a class of problems that share a common feature: the true label is only observed when a data point is assigned a positive label by the principal.
We introduce adversarial optimism (AdOpt) to directly address bias in the training set using adversarial domain adaptation.
- Score: 21.43618923706602
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many real-world settings, binary classification decisions are made based on
limited data in near real-time, e.g. when assessing a loan application. We
focus on a class of these problems that share a common feature: the true label
is only observed when a data point is assigned a positive label by the
principal, e.g. we only find out whether an applicant defaults if we accepted
their loan application. As a consequence, false rejections become
self-reinforcing and cause the labelled training set, which is
continuously updated by the model's decisions, to accumulate bias. Prior work
mitigates this effect by injecting optimism into the model, but this comes
at the cost of an increased false acceptance rate. We introduce adversarial
optimism (AdOpt) to directly address bias in the training set using adversarial
domain adaptation. The goal of AdOpt is to learn an unbiased but informative
representation of past data, by reducing the distributional shift between the
set of accepted data points and all data points seen thus far. AdOpt
significantly exceeds state-of-the-art performance on a set of challenging
benchmark problems. Our experiments also provide initial evidence that the
introduction of adversarial domain adaptation improves fairness in this
setting.
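The core mechanism described above, training a domain discriminator to tell the accepted set apart from all points seen so far, while the encoder is updated with the reversed gradient so the two sets become indistinguishable, can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the toy data, linear encoder, hyperparameters, and alternating update scheme are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: "accepted" applicants are a biased subset of all applicants
# seen so far, mimicking the selection bias created by past loan decisions.
accepted = rng.normal(0.8, 1.0, size=(300, 2))  # shifted: past decisions favoured this region
all_seen = rng.normal(0.0, 1.0, size=(300, 2))

X = np.vstack([accepted, all_seen])
d = np.concatenate([np.ones(300), np.zeros(300)])  # domain label: accepted vs. all seen

W = rng.normal(0.0, 0.1, size=(2, 2))  # linear encoder (the learned representation)
w = rng.normal(0.0, 0.1, size=2)       # logistic domain discriminator
b = 0.0
lr, lam = 0.5, 1.0                     # learning rate, adversarial strength

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

for _ in range(500):
    Z = X @ W                          # encode all points
    p = sigmoid(Z @ w + b)             # discriminator: P(domain = accepted | z)
    g = (p - d) / len(d)               # gradient of binary cross-entropy wrt logits
    # 1) discriminator step: learn to tell the accepted set from all seen points
    w -= lr * (Z.T @ g)
    b -= lr * g.sum()
    # 2) encoder step with the gradient REVERSED: push the representation
    #    toward making the two sets indistinguishable
    W += lam * lr * (X.T @ np.outer(g, w))

# If adaptation worked, the discriminator is close to chance on the encoded data,
# i.e. the representation no longer exposes the acceptance-induced shift.
domain_acc = ((sigmoid((X @ W) @ w + b) > 0.5) == d).mean()
print(f"domain discriminator accuracy: {domain_acc:.2f}")
```

A downstream default/no-default classifier trained on this debiased representation would then be the "informative" part of the objective; here only the adversarial component is sketched.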
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Fighting Sampling Bias: A Framework for Training and Evaluating Credit Scoring Models [2.918530881730374]
This paper addresses the adverse effect of sampling bias on model training and evaluation.
We propose bias-aware self-learning and a reject inference framework for scorecard evaluation.
Our results suggest a profit improvement of about eight percent when using Bayesian evaluation to decide on acceptance rates.
arXiv Detail & Related papers (2024-07-17T20:59:54Z)
- Inclusive FinTech Lending via Contrastive Learning and Domain Adaptation [9.75150920742607]
FinTech lending has played a significant role in facilitating financial inclusion.
There are concerns about the potentially biased algorithmic decision-making during loan screening.
We propose a new Transformer-based sequential loan screening model with self-supervised contrastive learning and domain adaptation.
arXiv Detail & Related papers (2023-05-10T01:11:35Z)
- Data-Driven Offline Decision-Making via Invariant Representation Learning [97.49309949598505]
Offline data-driven decision-making involves synthesizing optimized decisions with no active interaction.
A key challenge is distributional shift: when we optimize with respect to the input into a model trained from offline data, it is easy to produce an out-of-distribution (OOD) input that appears erroneously good.
In this paper, we formulate offline data-driven decision-making as domain adaptation, where the goal is to make accurate predictions for the value of optimized decisions.
arXiv Detail & Related papers (2022-11-21T11:01:37Z)
- RAGUEL: Recourse-Aware Group Unfairness Elimination [2.720659230102122]
'Algorithmic recourse' offers feasible recovery actions to change unwanted outcomes.
We introduce the notion of ranked group-level recourse fairness.
We develop a 'recourse-aware ranking' solution that satisfies ranked recourse fairness constraints.
arXiv Detail & Related papers (2022-08-30T11:53:38Z)
- Mitigating Algorithmic Bias with Limited Annotations [65.060639928772]
When sensitive attributes are not disclosed or available, a small part of the training data must be manually annotated to mitigate bias.
We propose Active Penalization Of Discrimination (APOD), an interactive framework to guide the limited annotations towards maximally eliminating the effect of algorithmic bias.
APOD shows comparable performance to fully annotated bias mitigation, which demonstrates that APOD could benefit real-world applications when sensitive information is limited.
arXiv Detail & Related papers (2022-07-20T16:31:19Z)
- Don't Throw it Away! The Utility of Unlabeled Data in Fair Decision Making [14.905698014932488]
We propose a novel method based on a variational autoencoder for practical fair decision-making.
Our method learns an unbiased data representation leveraging both labeled and unlabeled data.
Our method converges to the optimal (fair) policy according to the ground-truth with low variance.
arXiv Detail & Related papers (2022-05-10T10:33:11Z)
- Generalizable Person Re-Identification via Self-Supervised Batch Norm Test-Time Adaption [63.7424680360004]
Batch Norm Test-time Adaption (BNTA) is a novel re-id framework that applies the self-supervised strategy to update BN parameters adaptively.
BNTA explores the domain-aware information within unlabeled target data before inference, and accordingly modulates the feature distribution normalized by BN to adapt to the target domain.
arXiv Detail & Related papers (2022-03-01T18:46:32Z)
- Bias-Tolerant Fair Classification [20.973916494320246]
Label bias and selection bias are two sources of bias in data that hinder the fairness of machine-learning outcomes.
We propose a Bias-Tolerant Fair Regularized Loss (B-FARL), which tries to regain the benefits of fair data using data affected by label bias and selection bias.
B-FARL takes the biased data as input and approximates a model trained on fair but latent data, thus preventing discrimination without requiring explicit fairness constraints.
arXiv Detail & Related papers (2021-07-07T13:31:38Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
- Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency under transformation of target data.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes that coincide with either source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.