Pre-train to Gain: Robust Learning Without Clean Labels
- URL: http://arxiv.org/abs/2511.20844v1
- Date: Tue, 25 Nov 2025 20:48:07 GMT
- Title: Pre-train to Gain: Robust Learning Without Clean Labels
- Authors: David Szczecina, Nicholas Pellegrino, Paul Fieguth
- Abstract summary: Training deep networks with noisy labels leads to poor generalization and degraded accuracy. By pre-training a feature extractor backbone without labels, we can train a more noise-robust model without requiring a subset with clean labels. Our approach achieves comparable results to ImageNet pre-trained models at low noise levels, while substantially outperforming them under high noise conditions.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Training deep networks with noisy labels leads to poor generalization and degraded accuracy due to overfitting to label noise. Existing approaches for learning with noisy labels often rely on the availability of a clean subset of data. By pre-training a feature extractor backbone without labels using self-supervised learning (SSL), followed by standard supervised training on the noisy dataset, we can train a more noise-robust model without requiring a subset with clean labels. We evaluate the use of SimCLR and Barlow Twins as SSL methods on CIFAR-10 and CIFAR-100 under synthetic and real-world noise. Across all noise rates, self-supervised pre-training consistently improves classification accuracy and enhances downstream label-error detection (F1 and Balanced Accuracy). The performance gap widens as the noise rate increases, demonstrating improved robustness. Notably, our approach achieves comparable results to ImageNet pre-trained models at low noise levels, while substantially outperforming them under high noise conditions.
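The first stage of this recipe optimizes a label-free contrastive objective. As a minimal sketch of what that objective looks like, the NT-Xent loss used by SimCLR can be written in NumPy; the inputs `z1` and `z2` are hypothetical embeddings of two augmented views of the same batch, produced by the backbone being pre-trained (the backbone and augmentations themselves are not shown):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (SimCLR-style) loss over a batch of paired embeddings.

    z1, z2: (N, d) arrays holding embeddings of two views of the same N images.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit vectors -> cosine sim.
    sim = z @ z.T / temperature                        # (2N, 2N) similarity logits
    n = z1.shape[0]
    # The positive partner of row i is row i+N (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    logits = sim - sim.max(axis=1, keepdims=True)      # stabilized log-softmax
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
# Nearly identical views should score a lower loss than unrelated ones.
loss_aligned = nt_xent_loss(z1, z1 + 0.01 * rng.normal(size=(8, 16)))
loss_random = nt_xent_loss(z1, rng.normal(size=(8, 16)))
```

After pre-training with such a loss, the backbone is fine-tuned with a standard supervised cross-entropy on the noisy labels; no clean subset enters either stage.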
Related papers
- How Does Label Noise Gradient Descent Improve Generalization in the Low SNR Regime? [78.0226274470175]
We investigate whether introducing label noise into the gradient updates can enhance the test performance of neural networks (NNs). We prove that adding label noise during training suppresses noise memorization, preventing it from dominating the learning process. In contrast, we show that NNs trained with standard GD tend to overfit to noise in the same low-SNR setting.
arXiv Detail & Related papers (2025-10-20T13:28:13Z) - Detect and Correct: A Selective Noise Correction Method for Learning with Noisy Labels [14.577138753507203]
Falsely annotated samples, also known as noisy labels, can significantly harm the performance of deep learning models. Two main approaches for learning with noisy labels are global noise estimation and data filtering. Our method identifies potentially noisy samples based on their loss distribution. We then apply a selection process to separate noisy and clean samples and learn a noise transition matrix to correct the loss for noisy samples while leaving the clean data unaffected.
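The detect-then-correct pattern described above is often built from two standard ingredients: the small-loss heuristic for flagging likely-noisy samples, and a loss correction through a noise transition matrix T. The sketch below shows one common formulation of each (forward correction in the style of Patrini et al.); the function names are illustrative, not the authors' code, and the paper's exact selection and correction rules may differ:

```python
import numpy as np

def split_by_loss(losses, clean_fraction=0.5):
    """Small-loss heuristic: samples with the lowest loss are flagged as
    likely clean, the rest as likely noisy."""
    order = np.argsort(losses)
    k = int(len(losses) * clean_fraction)
    return order[:k], order[k:]  # (likely-clean indices, likely-noisy indices)

def forward_corrected_loss(probs, noisy_labels, T):
    """Forward loss correction: map the model's clean-class probabilities
    through T, where T[i, j] = P(noisy label = j | clean label = i),
    then take cross-entropy against the observed noisy labels."""
    noisy_probs = probs @ T                        # (N, C) predicted noisy-label dist.
    picked = noisy_probs[np.arange(len(noisy_labels)), noisy_labels]
    return -np.log(picked + 1e-12)                 # per-sample corrected CE

# With an identity transition matrix the corrected loss reduces to plain CE.
probs = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
plain = -np.log(probs[np.arange(2), labels])
corrected = forward_corrected_loss(probs, labels, np.eye(3))
clean_idx, noisy_idx = split_by_loss(np.array([0.1, 2.0, 0.3, 1.5]))
```

Leaving clean samples unaffected then amounts to applying the corrected loss only to the indices returned as likely noisy.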
arXiv Detail & Related papers (2025-05-19T16:49:27Z) - Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, avoiding the arbitrary tuning from a mini-batch of samples used in previous methods.
arXiv Detail & Related papers (2023-02-19T15:24:37Z) - Class Prototype-based Cleaner for Label Noise Learning [73.007001454085]
Semi-supervised learning methods are current SOTA solutions to the noisy-label learning problem.
We propose a simple yet effective solution, named the Class Prototype-based label noise Cleaner.
arXiv Detail & Related papers (2022-12-21T04:56:41Z) - Identifying Hard Noise in Long-Tailed Sample Distribution [71.8462682319137]
We introduce Noisy Long-Tailed Classification (NLT). Most de-noising methods fail to identify hard noise. We design an iterative noisy learning framework called Hard-to-Easy (H2E).
arXiv Detail & Related papers (2022-07-27T09:03:03Z) - Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on the CIFAR-10LT, CIFAR-100LT, and WebVision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z) - Open-set Label Noise Can Improve Robustness Against Inherent Label Noise [27.885927200376386]
We show that open-set noisy labels can be non-toxic and even benefit the robustness against inherent noisy labels.
We propose a simple yet effective regularization by introducing Open-set samples with Dynamic Noisy Labels (ODNL) into training.
arXiv Detail & Related papers (2021-06-21T07:15:50Z) - Training Classifiers that are Universally Robust to All Label Noise Levels [91.13870793906968]
Deep neural networks are prone to overfitting in the presence of label noise.
We propose a distillation-based framework that incorporates a new subcategory of Positive-Unlabeled learning.
Our framework generally outperforms at medium to high noise levels.
arXiv Detail & Related papers (2021-05-27T13:49:31Z) - Contrastive Learning Improves Model Robustness Under Label Noise [3.756550107432323]
We show that initializing supervised robust methods with representations learned through contrastive learning leads to significantly improved performance under label noise. When combined with contrastive learning, even the simplest method can outperform the state-of-the-art SSL method by more than 50% under high label noise.
arXiv Detail & Related papers (2021-04-19T00:27:58Z) - LongReMix: Robust Learning with High Confidence Samples in a Noisy Label Environment [33.376639002442914]
We propose the new 2-stage noisy-label training algorithm LongReMix.
We test LongReMix on the noisy-label benchmarks CIFAR-10, CIFAR-100, WebVision, Clothing1M, and Food101-N.
Our approach achieves state-of-the-art performance on most of these datasets.
arXiv Detail & Related papers (2021-03-06T18:48:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.