SelecMix: Debiased Learning by Contradicting-pair Sampling
- URL: http://arxiv.org/abs/2211.02291v1
- Date: Fri, 4 Nov 2022 07:15:36 GMT
- Title: SelecMix: Debiased Learning by Contradicting-pair Sampling
- Authors: Inwoo Hwang, Sangjun Lee, Yunhyeok Kwak, Seong Joon Oh, Damien Teney,
Jin-Hwa Kim, Byoung-Tak Zhang
- Abstract summary: Neural networks trained with ERM learn unintended decision rules when their training data is biased.
We propose an alternative based on mixup, a popular augmentation that creates convex combinations of training examples.
Our method, coined SelecMix, applies mixup to contradicting pairs of examples, defined as showing either (i) the same label but dissimilar biased features, or (ii) different labels but similar biased features.
- Score: 39.613595678105845
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks trained with ERM (empirical risk minimization) sometimes
learn unintended decision rules, in particular when their training data is
biased, i.e., when training labels are strongly correlated with undesirable
features. To prevent a network from learning such features, recent methods
augment training data such that examples displaying spurious correlations
(i.e., bias-aligned examples) become a minority, whereas the other,
bias-conflicting examples become prevalent. However, these approaches are
sometimes difficult to train and scale to real-world data because they rely on
generative models or disentangled representations. We propose an alternative
based on mixup, a popular augmentation that creates convex combinations of
training examples. Our method, coined SelecMix, applies mixup to contradicting
pairs of examples, defined as showing either (i) the same label but dissimilar
biased features, or (ii) different labels but similar biased features.
Identifying such pairs requires comparing examples with respect to unknown
biased features. For this, we utilize an auxiliary contrastive model with the
popular heuristic that biased features are learned preferentially during
training. Experiments on standard benchmarks demonstrate the effectiveness of
the method, in particular when label noise complicates the identification of
bias-conflicting examples.
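The pair-selection rule in the abstract can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the bias-feature embeddings would in practice come from the auxiliary contrastive model trained with the easy-features-first heuristic, and the function names here are illustrative.

```python
import numpy as np

def mixup(x1, x2, alpha=1.0, rng=None):
    """Standard mixup: a convex combination of two examples,
    with the mixing weight drawn from a Beta(alpha, alpha) distribution."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam

def contradicting_pairs(labels, bias_feats, mode="same_label"):
    """For each example, pick a 'contradicting' partner:
    - mode 'same_label': same label, most dissimilar bias features;
    - mode 'diff_label': different label, most similar bias features.
    `bias_feats` are assumed to be unit-normalized rows, so the dot
    product below is cosine similarity. Assumes every label occurs
    at least twice when mode is 'same_label'."""
    sim = bias_feats @ bias_feats.T
    n = len(labels)
    partners = np.empty(n, dtype=int)
    for i in range(n):
        if mode == "same_label":
            cand = np.where(labels == labels[i])[0]
            cand = cand[cand != i]
            partners[i] = cand[np.argmin(sim[i, cand])]
        else:
            cand = np.where(labels != labels[i])[0]
            partners[i] = cand[np.argmax(sim[i, cand])]
    return partners
```

Mixing each example with its selected partner yields augmented examples in which the label and the biased feature disagree, which is what makes the bias-conflicting signal prevalent during training.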
Related papers
- Debiased Sample Selection for Combating Noisy Labels [24.296451733127956]
We propose a noIse-Tolerant Expert Model (ITEM) for debiased learning in sample selection.
Specifically, to mitigate the training bias, we design a robust network architecture that integrates multiple experts.
By training on a mixture of two class-discriminative mini-batches, the model mitigates the effect of an imbalanced training set.
arXiv Detail & Related papers (2024-01-24T10:37:28Z) - Leveraging Ensemble Diversity for Robust Self-Training in the Presence of Sample Selection Bias [5.698050337128548]
Self-training is a well-known approach for semi-supervised learning. It consists of iteratively assigning pseudo-labels to unlabeled data for which the model is confident and treating them as labeled examples.
For neural networks, softmax prediction probabilities are often used as a confidence measure, although they are known to be overconfident, even for wrong predictions.
We propose a novel confidence measure, called $\mathcal{T}$-similarity, built upon the prediction diversity of an ensemble of linear classifiers.
arXiv Detail & Related papers (2023-10-23T11:30:06Z) - Stubborn Lexical Bias in Data and Models [50.79738900885665]
We use a new statistical method to examine whether spurious patterns in data appear in models trained on the data.
We apply an optimization approach to *reweight* the training data, reducing thousands of spurious correlations.
Surprisingly, though this method can successfully reduce lexical biases in the training data, we still find strong evidence of corresponding bias in the trained models.
arXiv Detail & Related papers (2023-06-03T20:12:27Z) - Echoes: Unsupervised Debiasing via Pseudo-bias Labeling in an Echo Chamber [17.034228910493056]
This paper presents experimental analyses revealing that existing biased models overfit to bias-conflicting samples in the training data.
We propose a straightforward and effective method called Echoes, which trains a biased model and a target model with a different strategy.
Our approach achieves superior debiasing results compared to the existing baselines on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-06T13:13:18Z) - An Exploration of How Training Set Composition Bias in Machine Learning Affects Identifying Rare Objects [0.0]
It is common to up-weight the examples of the rare class to ensure it isn't ignored.
It is also a frequent practice to train on restricted data where the balance of source types is closer to equal.
Here we show that these practices can bias the model toward over-assigning sources to the rare class.
arXiv Detail & Related papers (2022-07-07T10:26:55Z) - Relieving Long-tailed Instance Segmentation via Pairwise Class Balance [85.53585498649252]
Long-tailed instance segmentation is a challenging task due to the extreme imbalance of training samples among classes.
This imbalance causes severe bias of the head classes (those with the majority of samples) against the tail classes.
We propose a novel Pairwise Class Balance (PCB) method, built upon a confusion matrix which is updated during training to accumulate the ongoing prediction preferences.
arXiv Detail & Related papers (2022-01-08T07:48:36Z) - Dash: Semi-Supervised Learning with Dynamic Thresholding [72.74339790209531]
We propose Dash, a semi-supervised learning (SSL) approach that exploits unlabeled examples during training.
Dash selects unlabeled data adaptively, using a confidence threshold that is adjusted dynamically over the course of training.
arXiv Detail & Related papers (2021-09-01T23:52:29Z) - Learning Debiased Representation via Disentangled Feature Augmentation [19.348340314001756]
This paper presents an empirical analysis revealing that training with "diverse" bias-conflicting samples is crucial for debiasing.
We propose a novel feature-level data augmentation technique in order to synthesize diverse bias-conflicting samples.
arXiv Detail & Related papers (2021-07-03T08:03:25Z) - Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z) - Robust and On-the-fly Dataset Denoising for Image Classification [72.10311040730815]
On-the-fly Data Denoising (ODD) is robust to mislabeled examples, while introducing almost zero computational overhead compared to standard training.
ODD is able to achieve state-of-the-art results on a wide range of datasets including real-world ones such as WebVision and Clothing1M.
arXiv Detail & Related papers (2020-03-24T03:59:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.