Learning from Failure: Training Debiased Classifier from Biased
Classifier
- URL: http://arxiv.org/abs/2007.02561v2
- Date: Mon, 23 Nov 2020 07:41:57 GMT
- Title: Learning from Failure: Training Debiased Classifier from Biased
Classifier
- Authors: Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, Jinwoo Shin
- Abstract summary: We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
- Score: 76.52804102765931
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks often learn to make predictions that overly rely on spurious
correlation existing in the dataset, which causes the model to be biased. While
previous work tackles this issue by using explicit labeling on the spuriously
correlated attributes or presuming a particular bias type, we instead utilize a
cheaper, yet generic form of human knowledge, which can be widely applicable to
various types of bias. We first observe that neural networks learn to rely on
the spurious correlation only when it is "easier" to learn than the desired
knowledge, and such reliance is most prominent during the early phase of
training. Based on the observations, we propose a failure-based debiasing
scheme by training a pair of neural networks simultaneously. Our main idea is
twofold: (a) we intentionally train the first network to be biased by
repeatedly amplifying its "prejudice", and (b) we debias the training of the
second network by focusing on samples that go against the prejudice of the
biased network in (a). Extensive experiments demonstrate that our method
significantly improves the training of the network against various types of
biases in both synthetic and real-world datasets. Surprisingly, our framework
even occasionally outperforms the debiasing methods requiring explicit
supervision of the spuriously correlated attributes.
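To make the two-network scheme concrete, here is a minimal PyTorch-style sketch, assuming the ingredients the paper reports: a generalized cross-entropy (GCE) loss that amplifies the first network's "prejudice", and a relative-difficulty weight ce_b / (ce_b + ce_d) that makes the second network focus on samples the biased network fails on. All names (gce_loss, lff_step, biased_net, debiased_net, q) are illustrative, not from the source text.

```python
import torch
import torch.nn.functional as F

def gce_loss(logits, targets, q=0.7):
    """Generalized cross-entropy (Zhang & Sabuncu, 2018): compared to plain
    CE, it emphasizes samples the model already classifies confidently,
    which amplifies reliance on easy-to-learn (spurious) cues."""
    probs = F.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_y.clamp_min(1e-8) ** q) / q).mean()

def lff_step(biased_net, debiased_net, opt_b, opt_d, x, y):
    """One joint training step on a batch (x, y)."""
    # (a) amplify the first network's prejudice with GCE
    opt_b.zero_grad()
    gce_loss(biased_net(x), y).backward()
    opt_b.step()

    # (b) weight each sample by its relative difficulty: close to 1 when
    # the biased net fails on it, close to 0 when the biased net succeeds
    with torch.no_grad():
        ce_b = F.cross_entropy(biased_net(x), y, reduction="none")
    ce_d = F.cross_entropy(debiased_net(x), y, reduction="none")
    w = ce_b / (ce_b + ce_d.detach() + 1e-8)

    opt_d.zero_grad()
    (w * ce_d).mean().backward()
    opt_d.step()
```

The paper's full method additionally stabilizes these weights (e.g., with running averages of the per-sample losses); the sketch keeps only the core amplify-then-reweight loop.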
Related papers
- Model Debiasing by Learnable Data Augmentation [19.625915578646758]
This paper proposes a novel two-stage learning pipeline featuring a data augmentation strategy that regularizes training.
Experiments on synthetic and realistic biased datasets show state-of-the-art classification accuracy, outperforming competing methods.
arXiv Detail & Related papers (2024-08-09T09:19:59Z)
- Signal Is Harder To Learn Than Bias: Debiasing with Focal Loss [10.031357641396616]
Neural networks are notorious for learning unwanted associations, also known as biases, instead of the underlying decision rule.
We propose Signal is Harder, a variational-autoencoder-based method that simultaneously trains a biased and an unbiased classifier.
We also propose a perturbation scheme in the latent space for visualizing the bias, which helps practitioners become aware of the sources of spurious correlations. A sketch of the focal loss the method builds on appears after this list.
arXiv Detail & Related papers (2023-05-31T09:09:59Z)
- Neural networks trained with SGD learn distributions of increasing complexity [78.30235086565388]
We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics.
Only later in training do they exploit higher-order statistics.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
arXiv Detail & Related papers (2022-11-21T15:27:22Z)
- Self-supervised debiasing using low rank regularization [59.84695042540525]
Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability.
We propose a self-supervised debiasing framework potentially compatible with unlabeled samples.
Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines.
arXiv Detail & Related papers (2022-10-11T08:26:19Z)
- Unsupervised Learning of Unbiased Visual Representations [10.871587311621974]
Deep neural networks are known for their inability to learn robust representations when biases exist in the dataset.
We propose a fully unsupervised debiasing framework consisting of three steps.
In the final step, we employ state-of-the-art supervised debiasing techniques to obtain an unbiased model.
arXiv Detail & Related papers (2022-04-26T10:51:50Z)
- Pseudo Bias-Balanced Learning for Debiased Chest X-ray Classification [57.53567756716656]
We study the problem of developing debiased chest X-ray diagnosis models without knowing exactly the bias labels.
We propose a novel algorithm, pseudo bias-balanced learning, which first captures and predicts per-sample bias labels.
Our proposed method achieves consistent improvements over other state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-18T11:02:18Z)
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model, analogous to gradient descent in functional space.
GGD can learn a more robust base model both with task-specific biased models using prior knowledge and with a self-ensemble biased model requiring no prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
- Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why the deep neural networks have poor performance under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z)
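As noted in the "Signal Is Harder" entry above, several of these related methods reweight training by per-sample difficulty; the focal loss in that paper's title is the standard building block for this. Below is a minimal sketch of that loss (Lin et al., 2017) only; how the paper combines it with its variational autoencoder is not recoverable from the one-line summary above. The name focal_loss and the default gamma are illustrative.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Focal loss: scales per-sample cross-entropy by (1 - p_t)^gamma,
    down-weighting easy examples so hard (bias-conflicting) samples
    dominate the gradient."""
    ce = F.cross_entropy(logits, targets, reduction="none")  # per-sample CE
    p_t = torch.exp(-ce)  # model probability of the true class
    return ((1.0 - p_t) ** gamma * ce).mean()
```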
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.