Signal Is Harder To Learn Than Bias: Debiasing with Focal Loss
- URL: http://arxiv.org/abs/2305.19671v1
- Date: Wed, 31 May 2023 09:09:59 GMT
- Title: Signal Is Harder To Learn Than Bias: Debiasing with Focal Loss
- Authors: Moritz Vandenhirtz, Laura Manduchi, Ričards Marcinkevičs and Julia E. Vogt
- Abstract summary: Neural networks are notorious for learning unwanted associations, also known as biases, instead of the underlying decision rule.
We propose Signal is Harder, a variational-autoencoder-based method that simultaneously trains a biased and unbiased classifier.
We propose a perturbation scheme in the latent space for visualizing the bias that helps practitioners become aware of the sources of spurious correlations.
- Score: 10.031357641396616
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spurious correlations are everywhere. While humans often do not perceive
them, neural networks are notorious for learning unwanted associations, also
known as biases, instead of the underlying decision rule. As a result,
practitioners are often unaware of the biased decision-making of their
classifiers. Such a biased model based on spurious correlations might not
generalize to unobserved data, leading to unintended, adverse consequences. We
propose Signal is Harder (SiH), a variational-autoencoder-based method that
simultaneously trains a biased and unbiased classifier using a novel,
disentangling reweighting scheme inspired by the focal loss. Using the unbiased
classifier, SiH matches or improves upon the performance of state-of-the-art
debiasing methods. To improve the interpretability of our technique, we propose
a perturbation scheme in the latent space for visualizing the bias that helps
practitioners become aware of the sources of spurious correlations.
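The focal-loss-inspired reweighting can be sketched in code. The following is a minimal, hypothetical sketch, not the authors' implementation: a focal-style weight (1 - p)^γ, computed from the biased classifier's predicted probability of the true class, downweights samples the biased classifier already handles confidently (likely bias-aligned) and upweights hard, bias-conflicting samples when training the unbiased classifier. The function names, the γ value, and the exact way the two losses are combined are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def focal_weight(logits_biased, targets, gamma=2.0):
    """Focal-style per-sample weight (1 - p)^gamma, where p is the biased
    classifier's probability of the true class. Confident (bias-aligned)
    samples get weights near 0; hard (bias-conflicting) samples near 1.
    `gamma` is the usual focal-loss focusing parameter (assumed value)."""
    p_true = F.softmax(logits_biased, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    return (1.0 - p_true) ** gamma

def debias_step(logits_biased, logits_unbiased, targets, gamma=2.0):
    """One hypothetical training step: the biased classifier is trained with
    plain cross-entropy (so it latches onto easy, spurious cues), while the
    unbiased classifier's per-sample loss is reweighted toward the examples
    the biased classifier finds hard."""
    w = focal_weight(logits_biased.detach(), targets, gamma)  # no grad through weights
    loss_biased = F.cross_entropy(logits_biased, targets)
    ce_unbiased = F.cross_entropy(logits_unbiased, targets, reduction="none")
    loss_unbiased = (w * ce_unbiased).mean()
    return loss_biased + loss_unbiased
```

In the abstract's setting these logits would come from classifiers operating on disentangled signal and bias latents of a VAE; that machinery is omitted here.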
Related papers
- Self-supervised debiasing using low rank regularization [59.84695042540525]
Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability.
We propose a self-supervised debiasing framework potentially compatible with unlabeled samples.
Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines.
arXiv Detail & Related papers (2022-10-11T08:26:19Z)
- Learning Debiased Classifier with Biased Committee [30.417623580157834]
Neural networks are prone to being biased toward spurious correlations between classes and latent attributes exhibited in a major portion of the training data.
We propose a new method for training debiased classifiers with no spurious attribute label.
On five real-world datasets, our method outperforms prior methods that, like ours, use no spurious-attribute labels, and occasionally even surpasses methods that rely on bias labels.
arXiv Detail & Related papers (2022-06-22T04:50:28Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR)
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- Pseudo Bias-Balanced Learning for Debiased Chest X-ray Classification [57.53567756716656]
We study the problem of developing debiased chest X-ray diagnosis models without knowing exactly the bias labels.
We propose a novel algorithm, pseudo bias-balanced learning, which first captures and predicts per-sample bias labels.
Our proposed method achieved consistent improvements over other state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-18T11:02:18Z)
- Learning Debiased Models with Dynamic Gradient Alignment and Bias-conflicting Sample Mining [39.00256193731365]
Deep neural networks notoriously suffer from dataset biases which are detrimental to model robustness, generalization and fairness.
We propose a two-stage debiasing scheme to combat intractable, unknown biases.
arXiv Detail & Related papers (2021-11-25T14:50:10Z)
- Fairness-aware Class Imbalanced Learning [57.45784950421179]
We evaluate long-tail learning methods for tweet sentiment and occupation classification.
We extend a margin-loss based approach with methods to enforce fairness.
arXiv Detail & Related papers (2021-09-21T22:16:30Z)
- Towards Measuring Bias in Image Classification [61.802949761385]
Convolutional Neural Networks (CNNs) have become state-of-the-art for the main computer vision tasks.
However, due to their complex structure, their decisions are hard to understand, which limits their use in some industrial contexts.
We present a systematic approach to uncover data bias by means of attribution maps.
arXiv Detail & Related papers (2021-07-01T10:50:39Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
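The failure-based scheme described above is often summarized by a relative-difficulty weight. The sketch below is a hedged paraphrase of that idea, not the paper's exact implementation (which also trains the biased network with a generalized cross-entropy loss to amplify its reliance on easy cues): samples on which the biased network fails are treated as bias-conflicting and upweighted for the debiased network.

```python
import torch
import torch.nn.functional as F

def relative_difficulty(logits_biased, logits_debiased, targets, eps=1e-8):
    """Per-sample weight w(x) = L_b(x) / (L_b(x) + L_d(x)), where L_b is the
    biased network's cross-entropy loss and L_d the debiased network's.
    High L_b (the biased network fails) pushes w toward 1, upweighting the
    sample for the debiased network. `eps` guards against division by zero."""
    ce_b = F.cross_entropy(logits_biased, targets, reduction="none")
    ce_d = F.cross_entropy(logits_debiased, targets, reduction="none")
    return ce_b / (ce_b + ce_d + eps)
```

When both networks are equally (un)certain, the weight is 0.5; it only deviates when the two networks disagree in difficulty, which is what makes the scheme label-free with respect to bias attributes.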
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.