Multi-Objective Interpolation Training for Robustness to Label Noise
- URL: http://arxiv.org/abs/2012.04462v2
- Date: Thu, 18 Mar 2021 07:44:28 GMT
- Title: Multi-Objective Interpolation Training for Robustness to Label Noise
- Authors: Diego Ortego, Eric Arazo, Paul Albert, Noel E. O'Connor and Kevin
McGuinness
- Abstract summary: We show that standard supervised contrastive learning degrades in the presence of label noise.
We propose a novel label noise detection method that exploits the robust feature representations learned via contrastive learning.
Experiments on synthetic and real-world noise benchmarks demonstrate that MOIT/MOIT+ achieves state-of-the-art results.
- Score: 17.264550056296915
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks trained with standard cross-entropy loss memorize noisy
labels, which degrades their performance. Most research to mitigate this
memorization proposes new robust classification loss functions. Conversely, we
propose a Multi-Objective Interpolation Training (MOIT) approach that jointly
exploits contrastive learning and classification to mutually help each other
and boost performance against label noise. We show that standard supervised
contrastive learning degrades in the presence of label noise and propose an
interpolation training strategy to mitigate this behavior. We further propose a
novel label noise detection method that exploits the robust feature
representations learned via contrastive learning to estimate per-sample
soft-labels whose disagreements with the original labels accurately identify
noisy samples. This detection allows treating noisy samples as unlabeled and
training a classifier in a semi-supervised manner to prevent noise memorization
and improve representation learning. We further propose MOIT+, a refinement of
MOIT by fine-tuning on detected clean samples. Hyperparameter and ablation
studies verify the key components of our method. Experiments on synthetic and
real-world noise benchmarks demonstrate that MOIT/MOIT+ achieves
state-of-the-art results. Code is available at https://git.io/JI40X.
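To make the noise-detection idea above concrete, here is a minimal NumPy sketch (not the released code linked above): per-sample soft labels are estimated from the annotated labels of each sample's nearest neighbours in the contrastive feature space, and disagreement with the original label flags a sample as likely noisy. The value of k and the agreement rule are illustrative assumptions.
```python
import numpy as np

def knn_soft_labels(features, labels, num_classes, k=10):
    """Estimate per-sample soft labels from k nearest neighbours in feature space.

    features: (N, D) embeddings from the contrastive encoder.
    labels:   (N,) annotated (possibly noisy) integer labels.
    """
    # Cosine similarity between L2-normalised features.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = feats @ feats.T
    np.fill_diagonal(sims, -np.inf)               # exclude the sample itself
    neighbours = np.argsort(-sims, axis=1)[:, :k]

    # Soft label = distribution of annotated labels over the neighbourhood.
    soft = np.zeros((len(labels), num_classes))
    for i, nbrs in enumerate(neighbours):
        for j in nbrs:
            soft[i, labels[j]] += 1.0 / k

    # Disagreement between the soft label's argmax and the annotated label
    # marks the sample as likely noisy.
    is_clean = soft.argmax(axis=1) == labels
    return soft, is_clean
```
Samples flagged as noisy would then be treated as unlabeled and the classifier trained in a semi-supervised manner, as described in the abstract.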
Related papers
- Combating Label Noise With A General Surrogate Model For Sample
Selection [84.61367781175984]
We propose to leverage the vision-language surrogate model CLIP to filter noisy samples automatically.
We validate the effectiveness of our proposed method on both real-world and synthetic noisy datasets.
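A rough sketch of how such CLIP-based filtering could look, assuming pre-extracted, L2-normalised CLIP image embeddings and one prompt embedding per class; the simple agreement rule is illustrative, not the paper's exact selection criterion.
```python
import torch

def clip_zero_shot_filter(image_feats, text_feats, noisy_labels):
    """Keep samples whose noisy label matches CLIP's zero-shot prediction.

    image_feats: (N, D) L2-normalised CLIP image embeddings.
    text_feats:  (C, D) L2-normalised CLIP embeddings of one prompt per class.
    noisy_labels: (N,) annotated (possibly noisy) integer labels.
    """
    sims = image_feats @ text_feats.t()        # (N, C) cosine similarities
    zero_shot_pred = sims.argmax(dim=1)
    return zero_shot_pred == noisy_labels      # mask of likely-clean samples
```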
arXiv Detail & Related papers (2023-10-16T14:43:27Z) - Label Noise-Robust Learning using a Confidence-Based Sieving Strategy [15.997774467236352]
In learning tasks with label noise, improving model robustness against overfitting is a pivotal challenge.
Identifying the samples with noisy labels and preventing the model from learning them is a promising approach to address this challenge.
We propose a novel discriminator metric called confidence error and a sieving strategy called CONFES to effectively differentiate between clean and noisy samples.
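A hedged reading of the confidence-error idea, sketched in PyTorch: the gap between the model's top softmax probability and the probability it assigns to the annotated label, with small gaps suggesting clean labels. The exact definition and sieving schedule in the paper may differ.
```python
import torch
import torch.nn.functional as F

def confidence_error(logits, labels):
    """Per-sample gap between the top softmax probability and the probability
    assigned to the annotated label (a sketch of the 'confidence error' metric;
    thresholding/sieving logic is not reproduced here).
    """
    probs = F.softmax(logits, dim=1)
    top_prob = probs.max(dim=1).values
    label_prob = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    return top_prob - label_prob   # in [0, 1); smaller suggests a cleaner label
```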
arXiv Detail & Related papers (2022-10-11T10:47:28Z) - Neighborhood Collective Estimation for Noisy Label Identification and
Correction [92.20697827784426]
Learning with noisy labels (LNL) aims at designing strategies to improve model performance and generalization by mitigating the effects of model overfitting to noisy labels.
Recent advances employ the predicted label distributions of individual samples to perform noise verification and noisy label correction, which easily gives rise to confirmation bias.
We propose Neighborhood Collective Estimation, in which the predictive reliability of a candidate sample is re-estimated by contrasting it against its feature-space nearest neighbors.
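A loose sketch of the neighbourhood-based re-estimation, assuming precomputed features and per-sample predicted class distributions; k and the scoring rule are assumptions, not the paper's exact formulation.
```python
import torch
import torch.nn.functional as F

def neighborhood_reliability(features, probs, labels, k=20):
    """Re-estimate how trustworthy each annotated label is by averaging the
    predicted class distributions of the sample's k feature-space neighbours
    and reading off the mass they place on that label.
    """
    feats = F.normalize(features, dim=1)
    sims = feats @ feats.t()
    sims.fill_diagonal_(float("-inf"))          # exclude the sample itself
    nbrs = sims.topk(k, dim=1).indices          # (N, k) nearest neighbours
    nbr_probs = probs[nbrs].mean(dim=1)         # collective distribution per sample
    # Reliability: probability mass the neighbourhood assigns to the annotated label.
    return nbr_probs.gather(1, labels.unsqueeze(1)).squeeze(1)
```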
arXiv Detail & Related papers (2022-08-05T14:47:22Z) - Context-based Virtual Adversarial Training for Text Classification with
Noisy Labels [1.9508698179748525]
We propose context-based virtual adversarial training (ConVAT) to prevent a text classifier from overfitting to noisy labels.
Unlike previous works, the proposed method performs adversarial training at the context level rather than at the input level.
We conduct extensive experiments on four text classification datasets with two types of label noises.
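For reference, a generic virtual-adversarial-training loss computed on embeddings rather than raw inputs (a sketch in the spirit of the abstract; it does not reproduce ConVAT's context-level construction, and `model`, xi, eps, and the iteration count are assumptions).
```python
import torch
import torch.nn.functional as F

def vat_loss(model, embeddings, xi=1e-6, eps=1.0, n_iter=1):
    """Penalise the model's sensitivity to a worst-case small perturbation of
    the embeddings. `model` is assumed to map embeddings to class logits.
    """
    with torch.no_grad():
        ref = F.softmax(model(embeddings), dim=1)   # reference distribution

    # Random unit perturbation, refined by power iteration.
    d = torch.randn_like(embeddings)
    d = F.normalize(d.flatten(1), dim=1).view_as(embeddings)
    for _ in range(n_iter):
        d.requires_grad_(True)
        pred = F.log_softmax(model(embeddings + xi * d), dim=1)
        adv_dist = F.kl_div(pred, ref, reduction="batchmean")
        grad = torch.autograd.grad(adv_dist, d)[0]
        d = F.normalize(grad.flatten(1), dim=1).view_as(embeddings).detach()

    # Consistency loss under the adversarial perturbation.
    pred = F.log_softmax(model(embeddings + eps * d), dim=1)
    return F.kl_div(pred, ref, reduction="batchmean")
```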
arXiv Detail & Related papers (2022-05-29T14:19:49Z) - UNICON: Combating Label Noise Through Uniform Selection and Contrastive
Learning [89.56465237941013]
We propose UNICON, a simple yet effective sample selection method which is robust to high label noise.
We obtain an 11.4% improvement over the current state-of-the-art on the CIFAR100 dataset with a 90% noise rate.
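A minimal sketch of class-uniform selection, the core idea named in the title: keep the same number of lowest-loss samples from every class so the selected "clean" set is not dominated by easy classes. The per-sample loss and keep ratio are assumptions; UNICON's actual criterion and semi-supervised stage are not reproduced here.
```python
import torch

def uniform_clean_selection(losses, labels, num_classes, keep_ratio=0.5):
    """Keep an equal number of lowest-loss samples per class.

    losses: (N,) per-sample losses; labels: (N,) annotated integer labels.
    Returns a boolean mask over the dataset marking the selected samples.
    """
    keep = torch.zeros_like(labels, dtype=torch.bool)
    per_class = int(keep_ratio * len(labels) / num_classes)
    for c in range(num_classes):
        idx = (labels == c).nonzero(as_tuple=True)[0]
        if len(idx) == 0:
            continue
        order = losses[idx].argsort()            # lowest-loss first
        keep[idx[order[:per_class]]] = True
    return keep
```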
arXiv Detail & Related papers (2022-03-28T07:36:36Z) - S3: Supervised Self-supervised Learning under Label Noise [53.02249460567745]
In this paper we address the problem of classification in the presence of label noise.
At the heart of our method is a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of the labels in its neighborhood in the feature space.
Our method significantly surpasses previous methods on both CIFAR10/CIFAR100 with artificial noise and real-world noisy datasets such as WebVision and ANIMAL-10N.
arXiv Detail & Related papers (2021-11-22T15:49:20Z) - Contrastive Learning Improves Model Robustness Under Label Noise [3.756550107432323]
We show that initializing supervised robust methods with representations learned through contrastive learning leads to significantly improved performance under label noise.
Even the simplest method can outperform the state-of-the-art SSL method by more than 50% under high label noise when initialized with contrastive learning.
arXiv Detail & Related papers (2021-04-19T00:27:58Z) - Noise-resistant Deep Metric Learning with Ranking-based Instance
Selection [59.286567680389766]
We propose a noise-resistant training technique for DML, which we name Probabilistic Ranking-based Instance Selection with Memory (PRISM).
PRISM identifies noisy data in a minibatch using average similarity against image features extracted from several previous versions of the neural network.
To alleviate the high computational cost brought by the memory bank, we introduce an acceleration method that replaces individual data points with the class centers.
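A simplified stand-in for the class-center variant described above: score each sample by cosine similarity to the centre of its annotated class, with low similarity suggesting a noisy label. The real PRISM uses features extracted from several previous network snapshots and a probabilistic ranking rule, which this sketch omits.
```python
import torch
import torch.nn.functional as F

def class_center_clean_scores(embeddings, labels, num_classes):
    """Cosine similarity of each sample to its annotated class centre.

    embeddings: (N, D) image features; labels: (N,) annotated integer labels.
    Higher scores suggest cleaner labels.
    """
    emb = F.normalize(embeddings, dim=1)
    centers = torch.zeros(num_classes, emb.size(1),
                          device=emb.device, dtype=emb.dtype)
    for c in range(num_classes):
        members = emb[labels == c]
        if len(members) > 0:
            centers[c] = F.normalize(members.mean(dim=0), dim=0)
    # Similarity to the own-class centre; low values flag likely noisy labels.
    return (emb * centers[labels]).sum(dim=1)
```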
arXiv Detail & Related papers (2021-03-30T03:22:17Z) - Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) trained on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)