CNLL: A Semi-supervised Approach For Continual Noisy Label Learning
- URL: http://arxiv.org/abs/2204.09881v1
- Date: Thu, 21 Apr 2022 05:01:10 GMT
- Title: CNLL: A Semi-supervised Approach For Continual Noisy Label Learning
- Authors: Nazmul Karim, Umar Khalid, Ashkan Esmaeili and Nazanin Rahnavard
- Abstract summary: We propose a simple purification technique that effectively cleanses the online data stream and is both cost-effective and more accurate than prior approaches.
After purification, we perform fine-tuning in a semi-supervised fashion that ensures the participation of all available samples.
We achieve a 24.8% performance gain for CIFAR10 with 20% noise over previous SOTA methods.
- Score: 12.341250124228859
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The task of continual learning requires careful algorithm design to tackle catastrophic forgetting. Noisy labels, which are inevitable in real-world scenarios, make the problem even harder. The few studies that address continual learning under noisy labels rely on long training times and complicated training schemes, which limits their applicability in most cases. In contrast, we propose a simple purification technique that effectively cleanses the online data stream and is both cost-effective and more accurate than prior approaches. After purification, we perform fine-tuning in a semi-supervised fashion that ensures the participation of all available samples. Training in this fashion helps us learn a better representation, which results in state-of-the-art (SOTA) performance. Through extensive experiments on three benchmark datasets, MNIST, CIFAR10, and CIFAR100, we show the effectiveness of the proposed approach. We achieve a 24.8% performance gain on CIFAR10 with 20% noise over previous SOTA methods. Our code is publicly available.
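The abstract does not spell out how the stream is purified. A common realization in the noisy-label literature is a small-loss split (for example, fitting a two-component Gaussian mixture to per-sample losses and treating the low-loss mode as clean), followed by semi-supervised fine-tuning that keeps the noisy split in play as unlabeled data. The sketch below illustrates that assumed pipeline; it is not the authors' exact procedure.

```python
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture


def purify(model, loader, device="cpu", clean_threshold=0.5):
    """Split a possibly-noisy buffer into clean/noisy subsets with a small-loss
    criterion: fit a two-component GMM to per-sample losses and treat the
    low-loss component as clean.  (Assumed realization of the purification
    step, not necessarily the paper's exact rule.)"""
    model.eval()
    losses, xs, ys = [], [], []
    with torch.no_grad():
        for x, y in loader:
            loss = F.cross_entropy(model(x.to(device)), y.to(device), reduction="none")
            losses.append(loss.cpu())
            xs.append(x)
            ys.append(y)
    losses = torch.cat(losses).unsqueeze(1).numpy()
    gmm = GaussianMixture(n_components=2).fit(losses)
    clean_comp = int(gmm.means_.argmin())               # low-loss component index
    p_clean = gmm.predict_proba(losses)[:, clean_comp]
    mask = torch.from_numpy(p_clean > clean_threshold)
    x_all, y_all = torch.cat(xs), torch.cat(ys)
    return (x_all[mask], y_all[mask]), x_all[~mask]      # labeled split, unlabeled split


def semi_supervised_step(model, opt, labeled, x_unlabeled, tau=0.95, lam=1.0):
    """One fine-tuning step: supervised loss on the clean split plus a
    pseudo-label loss on confident predictions for the noisy split, so that
    all available samples participate."""
    x_l, y_l = labeled
    model.train()
    loss = F.cross_entropy(model(x_l), y_l)
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=1)
        conf, pseudo = probs.max(dim=1)
    keep = conf > tau
    if keep.any():
        loss = loss + lam * F.cross_entropy(model(x_unlabeled[keep]), pseudo[keep])
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In a continual setting, this split-then-fine-tune routine would be applied to each incoming task buffer of the online stream before moving to the next task.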
Related papers
- Adaptive Retention & Correction: Test-Time Training for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves average performance increases of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- A soft nearest-neighbor framework for continual semi-supervised learning [35.957577587090604]
We propose an approach for continual semi-supervised learning where not all the data samples are labeled.
We leverage the power of nearest-neighbors to nonlinearly partition the feature space and flexibly model the underlying data distribution.
Our method works well on both low and high resolution images and scales seamlessly to more complex datasets.
arXiv Detail & Related papers (2022-12-09T20:03:59Z)
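The soft nearest-neighbor idea above can be illustrated with a generic prediction rule over a feature memory: every stored feature votes for its class with a softmax weight over temperature-scaled similarities. A minimal sketch; the memory layout, the cosine similarity, and the temperature `tau` are assumptions rather than details from the paper.

```python
import torch
import torch.nn.functional as F


def soft_nn_predict(query_feats, memory_feats, memory_labels, num_classes, tau=0.1):
    """Soft nearest-neighbor classification: each stored feature votes for its
    class, weighted by a softmax over similarities to the query."""
    q = F.normalize(query_feats, dim=1)                       # (B, D)
    m = F.normalize(memory_feats, dim=1)                      # (N, D)
    weights = torch.softmax(q @ m.t() / tau, dim=1)           # (B, N) neighbor weights
    votes = F.one_hot(memory_labels, num_classes).float()     # (N, C)
    return weights @ votes                                    # (B, C) class scores
```

The temperature controls how "soft" the partition is: a small `tau` approaches hard nearest-neighbor lookup, a large `tau` averages over many neighbors.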
- Boosting Facial Expression Recognition by A Semi-Supervised Progressive Teacher [54.50747989860957]
We propose a semi-supervised learning algorithm named Progressive Teacher (PT) to utilize reliable FER datasets as well as large-scale unlabeled expression images for effective training.
Experiments on widely-used databases RAF-DB and FERPlus validate the effectiveness of our method, which achieves state-of-the-art performance with accuracy of 89.57% on RAF-DB.
arXiv Detail & Related papers (2022-05-28T07:47:53Z)
- UNICON: Combating Label Noise Through Uniform Selection and Contrastive Learning [89.56465237941013]
We propose UNICON, a simple yet effective sample selection method which is robust to high label noise.
We obtain an 11.4% improvement over the current state-of-the-art on the CIFAR100 dataset with a 90% noise rate.
arXiv Detail & Related papers (2022-03-28T07:36:36Z)
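The "uniform selection" idea summarized above can be sketched as class-balanced small-loss selection: keep the same fraction of lowest-loss samples from every class, so the retained "clean" set is not dominated by easy classes. The selection score used here (plain cross-entropy loss) is an assumption for illustration; UNICON's actual criterion may differ.

```python
import torch


def uniform_small_loss_selection(losses, labels, num_classes, keep_frac=0.5):
    """Class-balanced ('uniform') selection: keep the lowest-loss `keep_frac`
    of samples from every class.  Sketch only, not the paper's exact rule."""
    selected = []
    for c in range(num_classes):
        idx = (labels == c).nonzero(as_tuple=True)[0]
        if idx.numel() == 0:
            continue
        k = max(1, int(keep_frac * idx.numel()))
        order = torch.argsort(losses[idx])       # ascending per-sample loss
        selected.append(idx[order[:k]])
    return torch.cat(selected)                   # indices of the selected 'clean' subset
```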
- Class-Aware Contrastive Semi-Supervised Learning [51.205844705156046]
We propose a general method named Class-aware Contrastive Semi-Supervised Learning (CCSSL) to improve pseudo-label quality and enhance the model's robustness in the real-world setting.
Our proposed CCSSL achieves significant performance improvements over state-of-the-art SSL methods on the standard datasets CIFAR100 and STL10.
arXiv Detail & Related papers (2022-03-04T12:18:23Z)
- Learning with Neighbor Consistency for Noisy Labels [69.83857578836769]
We present a method for learning from noisy labels that leverages similarities between training examples in feature space.
We evaluate our method on benchmarks with both synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, Clothing1M, mini-ImageNet-Red) noise.
arXiv Detail & Related papers (2022-02-04T15:46:27Z)
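A generic form of the neighbor-consistency idea above penalizes the divergence between a sample's predicted distribution and the average prediction of its nearest neighbors in feature space. The sketch below captures that general idea, not the paper's exact loss; the choice of k and of the KL divergence are assumptions.

```python
import torch
import torch.nn.functional as F


def neighbor_consistency_loss(logits, feats, k=5):
    """KL divergence between each sample's predicted distribution and the mean
    prediction of its k nearest neighbors in feature space (generic sketch)."""
    f = F.normalize(feats, dim=1)
    sim = f @ f.t()
    sim.fill_diagonal_(float("-inf"))                   # exclude self-matches
    nn_idx = sim.topk(k, dim=1).indices                 # (B, k) neighbor indices
    probs = F.softmax(logits, dim=1)
    neighbor_probs = probs[nn_idx].mean(dim=1)          # (B, C) neighbor average
    log_p = probs.clamp_min(1e-8).log()
    return F.kl_div(log_p, neighbor_probs.detach(), reduction="batchmean")
```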
- Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations [54.400167806154535]
Existing research on learning with noisy labels mainly focuses on synthetic label noise.
This work presents two new benchmark datasets, CIFAR-10N and CIFAR-100N, with real-world human annotation noise.
We show that real-world noisy labels follow an instance-dependent pattern rather than the classically adopted class-dependent ones.
arXiv Detail & Related papers (2021-10-22T22:42:11Z)
- Robust Temporal Ensembling for Learning with Noisy Labels [0.0]
We present robust temporal ensembling (RTE), which combines robust loss with semi-supervised regularization methods to achieve noise-robust learning.
RTE achieves state-of-the-art performance across the CIFAR-10, CIFAR-100, ImageNet, WebVision, and Food-101N datasets.
arXiv Detail & Related papers (2021-09-29T16:59:36Z)
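Temporal ensembling, which RTE is described as building on, keeps an exponential moving average of each sample's predictions across epochs and uses it as a consistency target. A bare-bones sketch of that regularizer follows; the robust loss RTE combines it with is omitted, and the hyper-parameter `alpha` is an assumed value.

```python
import torch
import torch.nn.functional as F


class TemporalEnsemble:
    """Exponential moving average of per-sample predictions across epochs,
    used as a consistency target (the core of temporal ensembling)."""

    def __init__(self, num_samples, num_classes, alpha=0.6):
        self.alpha = alpha
        self.epoch = 0
        self.ema = torch.zeros(num_samples, num_classes)

    def update_epoch(self, all_probs):
        """Call once per epoch with softmax outputs for the whole training set."""
        self.epoch += 1
        self.ema = self.alpha * self.ema + (1 - self.alpha) * all_probs.detach()

    def targets(self):
        return self.ema / (1 - self.alpha ** self.epoch)   # startup bias correction


def consistency_loss(logits, ema_targets):
    """Penalize deviation of current predictions from the ensembled targets."""
    return F.mse_loss(F.softmax(logits, dim=1), ema_targets)
```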
- Learning From Long-Tailed Data With Noisy Labels [0.0]
Class imbalance and noisy labels are the norm in many large-scale classification datasets.
We present a simple two-stage approach based on recent advances in self-supervised learning.
We find that self-supervised learning approaches can effectively cope with severe class imbalance.
arXiv Detail & Related papers (2021-08-25T07:45:40Z)
- Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels [12.181548895121685]
"Contrast to Divide" (C2D) is a framework that pre-trains the feature extractor in a self-supervised fashion.
Using self-supervised pre-training boosts the performance of existing LNL approaches by drastically reducing the warm-up stage's susceptibility to the noise level.
In real-life noise settings, C2D trained on mini-WebVision outperforms previous works both in WebVision and ImageNet validation sets by 3% top-1 accuracy.
arXiv Detail & Related papers (2021-03-25T07:40:51Z)
- Augmentation Strategies for Learning with Noisy Labels [3.698228929379249]
We evaluate different augmentation strategies for algorithms tackling the "learning with noisy labels" problem.
We find that using one set of augmentations for loss modeling tasks and another set for learning is the most effective.
We apply this augmentation strategy to a state-of-the-art technique and demonstrate improved performance across all evaluated noise levels.
arXiv Detail & Related papers (2021-03-03T02:19:35Z)
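The finding above, that one augmentation set should feed loss modeling and sample selection while another feeds the actual update, can be sketched as follows. The weak/strong split and the small-loss selection rule used here are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F


def train_step(model, opt, x_weak, x_strong, y, select_frac=0.7):
    """Two augmentation sets: weakly augmented views drive loss modeling and
    sample selection, strongly augmented views of the selected samples drive
    the parameter update (generic sketch of the strategy)."""
    model.eval()
    with torch.no_grad():
        per_sample = F.cross_entropy(model(x_weak), y, reduction="none")
    k = max(1, int(select_frac * y.numel()))
    keep = torch.argsort(per_sample)[:k]        # lowest-loss (likely clean) samples
    model.train()
    loss = F.cross_entropy(model(x_strong[keep]), y[keep])
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```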