Robust Temporal Ensembling for Learning with Noisy Labels
- URL: http://arxiv.org/abs/2109.14563v1
- Date: Wed, 29 Sep 2021 16:59:36 GMT
- Title: Robust Temporal Ensembling for Learning with Noisy Labels
- Authors: Abel Brown, Benedikt Schifferer, Robert DiPietro
- Abstract summary: We present robust temporal ensembling (RTE), which combines robust loss with semi-supervised regularization methods to achieve noise-robust learning.
RTE achieves state-of-the-art performance across the CIFAR-10, CIFAR-100, ImageNet, WebVision, and Food-101N datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Successful training of deep neural networks with noisy labels is an essential
capability as most real-world datasets contain some amount of mislabeled data.
Left unmitigated, label noise can sharply degrade typical supervised learning
approaches. In this paper, we present robust temporal ensembling (RTE), which
combines robust loss with semi-supervised regularization methods to achieve
noise-robust learning. We demonstrate that RTE achieves state-of-the-art
performance across the CIFAR-10, CIFAR-100, ImageNet, WebVision, and Food-101N
datasets, while forgoing the recent trend of label filtering and/or fixing.
Finally, we show that RTE also retains competitive corruption robustness to
unforeseen input noise using CIFAR-10-C, obtaining a mean corruption error
(mCE) of 13.50% even in the presence of an 80% noise ratio, versus 26.9% mCE
with standard methods on clean data.
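The abstract describes RTE as a combination of a robust classification loss with temporal-ensembling-style consistency regularization. Below is a minimal PyTorch-style sketch of that general recipe; the generalized cross-entropy loss, the EMA schedule, and the loss weighting are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F


def generalized_cross_entropy(logits, targets, q=0.7):
    """Example robust loss: (1 - p_y^q) / q, which approaches standard CE as q -> 0."""
    probs = F.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1).clamp_min(1e-7)
    return ((1.0 - p_y.pow(q)) / q).mean()


class TemporalEnsemble:
    """Exponential moving average of per-sample class predictions (Laine & Aila style)."""

    def __init__(self, num_samples, num_classes, alpha=0.6):
        self.alpha = alpha
        self.ensemble = torch.zeros(num_samples, num_classes)

    def update(self, indices, probs, epoch):
        # indices: CPU LongTensor of dataset indices for this batch.
        self.ensemble[indices] = (self.alpha * self.ensemble[indices]
                                  + (1.0 - self.alpha) * probs.detach().cpu())
        # Bias-corrected ensemble target, as in temporal ensembling.
        return self.ensemble[indices] / (1.0 - self.alpha ** (epoch + 1))


def training_step(model, images, noisy_labels, indices, ensemble, epoch,
                  consistency_weight=1.0):
    logits = model(images)
    probs = F.softmax(logits, dim=1)
    targets = ensemble.update(indices, probs, epoch).to(logits.device)
    loss_robust = generalized_cross_entropy(logits, noisy_labels)
    loss_consistency = F.mse_loss(probs, targets)
    return loss_robust + consistency_weight * loss_consistency
```

In this sketch the consistency term pulls each prediction toward its slowly moving ensemble target, so a single mislabeled example has limited influence on any one update, while the robust loss bounds the gradient contribution of confidently wrong labels.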
Related papers
- A Study on the Impact of Data Augmentation for Training Convolutional Neural Networks in the Presence of Noisy Labels [14.998309259808236]
Label noise is common in large real-world datasets, and its presence harms the training process of deep neural networks.
We evaluate the impact of data augmentation as a design choice for training deep neural networks.
We show that the appropriate selection of data augmentation can drastically improve the model robustness to label noise.
arXiv Detail & Related papers (2022-08-23T20:04:17Z)
- Boosting Facial Expression Recognition by A Semi-Supervised Progressive Teacher [54.50747989860957]
We propose a semi-supervised learning algorithm named Progressive Teacher (PT) to utilize reliable FER datasets as well as large-scale unlabeled expression images for effective training.
Experiments on the widely used databases RAF-DB and FERPlus validate the effectiveness of our method, which achieves state-of-the-art performance with an accuracy of 89.57% on RAF-DB.
arXiv Detail & Related papers (2022-05-28T07:47:53Z)
- Reliable Label Correction is a Good Booster When Learning with Extremely Noisy Labels [65.79898033530408]
We introduce a novel framework, termed as LC-Booster, to explicitly tackle learning under extreme noise.
LC-Booster incorporates label correction into sample selection, so that more purified samples, obtained through reliable label correction, can be utilized for training.
Experiments show that LC-Booster advances state-of-the-art results on several noisy-label benchmarks.
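As a rough illustration of the pattern this summary describes (sample selection combined with reliable label correction), the sketch below keeps small-loss samples and relabels high-confidence predictions; the selection rule and thresholds are assumptions, not LC-Booster's actual procedure.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def select_and_correct(model, images, noisy_labels,
                       keep_ratio=0.5, correct_threshold=0.9):
    """Keep likely-clean (small-loss) samples and relabel high-confidence ones."""
    logits = model(images)
    probs = F.softmax(logits, dim=1)
    losses = F.cross_entropy(logits, noisy_labels, reduction="none")

    # Treat the fraction of samples with the smallest loss as "likely clean".
    num_keep = max(1, int(keep_ratio * len(losses)))
    clean_idx = losses.topk(num_keep, largest=False).indices

    # Relabel samples whose prediction the model is highly confident about.
    conf, pred = probs.max(dim=1)
    relabel_mask = conf > correct_threshold
    corrected = noisy_labels.clone()
    corrected[relabel_mask] = pred[relabel_mask]

    usable = torch.zeros_like(noisy_labels, dtype=torch.bool)
    usable[clean_idx] = True
    usable |= relabel_mask
    return corrected, usable  # train on `usable` samples with `corrected` labels
```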
arXiv Detail & Related papers (2022-04-30T07:19:03Z)
- CNLL: A Semi-supervised Approach For Continual Noisy Label Learning [12.341250124228859]
We propose a simple purification technique that effectively cleanses the online data stream and is both cost-effective and more accurate.
After purification, we perform fine-tuning in a semi-supervised fashion that ensures the participation of all available samples.
We achieve a 24.8% performance gain on CIFAR-10 with 20% noise over previous SOTA methods.
arXiv Detail & Related papers (2022-04-21T05:01:10Z)
- Class-Aware Contrastive Semi-Supervised Learning [51.205844705156046]
We propose a general method named Class-aware Contrastive Semi-Supervised Learning (CCSSL) to improve pseudo-label quality and enhance the model's robustness in the real-world setting.
Our proposed CCSSL has significant performance improvements over the state-of-the-art SSL methods on the standard datasets CIFAR100 and STL10.
arXiv Detail & Related papers (2022-03-04T12:18:23Z)
- Consistency Regularization Can Improve Robustness to Label Noise [4.340338299803562]
This paper empirically studies the relevance of consistency regularization for training-time robustness to noisy labels.
We show that a simple loss function that encourages consistency improves the robustness of the models to label noise.
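A minimal sketch of a consistency-encouraging loss of the kind this summary refers to: the supervised term uses the noisy labels while a second term penalizes disagreement between two augmented views of the same input. The KL form and fixed weighting are assumptions, not the cited paper's exact formulation.

```python
import torch.nn.functional as F


def noisy_label_consistency_loss(model, images_weak, images_strong,
                                 noisy_labels, weight=1.0):
    """Supervised CE on one view plus a consistency penalty between two views."""
    logits_weak = model(images_weak)
    logits_strong = model(images_strong)

    supervised = F.cross_entropy(logits_weak, noisy_labels)
    # Penalize disagreement between predictions on the two augmented views.
    consistency = F.kl_div(F.log_softmax(logits_strong, dim=1),
                           F.softmax(logits_weak, dim=1).detach(),
                           reduction="batchmean")
    return supervised + weight * consistency
```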
arXiv Detail & Related papers (2021-10-04T08:15:08Z)
- Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are the most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z)
- Boosting Semi-Supervised Face Recognition with Noise Robustness [54.342992887966616]
This paper presents an effective solution to semi-supervised face recognition that is robust to the label noise introduced by auto-labelling.
We develop a semi-supervised face recognition solution, named Noise Robust Learning-Labelling (NRoLL), which is based on the robust training ability empowered by GN.
arXiv Detail & Related papers (2021-05-10T14:43:11Z)
- Contrastive Learning Improves Model Robustness Under Label Noise [3.756550107432323]
We show that initializing supervised robust methods with representations learned through contrastive learning leads to significantly improved performance under label noise.
Even the simplest method can outperform the state-of-the-art SSL method by more than 50% under high label noise when combined with contrastive learning.
arXiv Detail & Related papers (2021-04-19T00:27:58Z)
- Augmentation Strategies for Learning with Noisy Labels [3.698228929379249]
We evaluate different augmentation strategies for algorithms tackling the "learning with noisy labels" problem.
We find that using one set of augmentations for loss modeling tasks and another set for learning is the most effective.
We incorporate this augmentation strategy into the state-of-the-art technique and demonstrate that it improves performance across all evaluated noise levels.
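A hedged sketch of the two-augmentation-set idea described above: weak transforms feed a loss-modeling pass that decides which samples look clean, and strong transforms feed the actual parameter update. The specific transforms and the loss threshold are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

# Weak transforms for loss modeling, stronger transforms for the actual update.
weak_aug = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
strong_aug = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.RandAugment(),
    transforms.ToTensor(),
])


def training_step(model, pil_images, noisy_labels, loss_threshold=1.0):
    weak = torch.stack([weak_aug(img) for img in pil_images])
    strong = torch.stack([strong_aug(img) for img in pil_images])

    # Loss modeling on the weak views: flag samples that look clean.
    with torch.no_grad():
        per_sample = F.cross_entropy(model(weak), noisy_labels, reduction="none")
        likely_clean = per_sample < loss_threshold
        if not likely_clean.any():
            likely_clean[per_sample.argmin()] = True  # always keep at least one sample

    # Learn only from the selected samples, using the strong views.
    logits = model(strong[likely_clean])
    return F.cross_entropy(logits, noisy_labels[likely_clean])
```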
arXiv Detail & Related papers (2021-03-03T02:19:35Z)
- Coresets for Robust Training of Neural Networks against Noisy Labels [78.03027938765746]
We propose a novel approach with strong theoretical guarantees for robust training of deep networks trained with noisy labels.
We select weighted subsets (coresets) of clean data points that provide an approximately low-rank Jacobian matrix.
Our experiments corroborate our theory and demonstrate that deep networks trained on our subsets achieve significantly superior performance compared to the state of the art.
arXiv Detail & Related papers (2020-11-15T04:58:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.