PADDLES: Phase-Amplitude Spectrum Disentangled Early Stopping for
Learning with Noisy Labels
- URL: http://arxiv.org/abs/2212.03462v1
- Date: Wed, 7 Dec 2022 05:03:13 GMT
- Title: PADDLES: Phase-Amplitude Spectrum Disentangled Early Stopping for
Learning with Noisy Labels
- Authors: Huaxi Huang, Hui Kang, Sheng Liu, Olivier Salvado, Thierry
Rakotoarivelo, Dadong Wang, Tongliang Liu
- Abstract summary: Convolutional Neural Networks (CNNs) have demonstrated superiority in learning patterns, but are sensitive to label noise and may overfit noisy labels during training.
The early stopping strategy halts training before CNNs begin to memorize noisy labels and is widely employed in the presence of label noise.
Our proposed Phase-AmplituDe DisentangLed Early Stopping (PADDLES) method is shown to be effective on both synthetic and real-world label-noise datasets.
- Score: 49.43183579415899
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Convolutional Neural Networks (CNNs) have demonstrated superiority in
learning patterns, but are sensitive to label noise and may overfit noisy
labels during training. The early stopping strategy halts training before
CNNs begin to memorize noisy labels and is widely employed in their presence.
Motivated by biological findings that the amplitude spectrum (AS) and phase
spectrum (PS) in the frequency domain play different roles in the animal
visual system, we observe that the PS, which captures more semantic
information, increases the robustness of DNNs to label noise more than the
AS does. We thus propose stopping training at different times for the AS and
PS by disentangling the features of some layer(s) into AS and PS using the
Discrete Fourier Transform (DFT) during training. Our proposed Phase-AmplituDe
DisentangLed Early Stopping (PADDLES) method is shown to be effective on both
synthetic and real-world label-noise datasets. PADDLES outperforms other early
stopping methods and obtains state-of-the-art performance.
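To make the disentangling step concrete, the following is a minimal PyTorch sketch of the core idea: split a feature map into its amplitude and phase spectra with a 2D DFT, freeze each spectrum (via detach) once its own stopping point is reached, and recombine the two. The function names, the detach-based freezing, and the stopping epochs are illustrative assumptions, not the authors' implementation.

```python
import torch

def disentangle_phase_amplitude(features: torch.Tensor):
    """Split a (batch, channels, height, width) feature map into its
    amplitude spectrum (AS) and phase spectrum (PS) via a 2D DFT."""
    spectrum = torch.fft.fft2(features)       # complex spectrum per channel
    return torch.abs(spectrum), torch.angle(spectrum)

def recombine(amplitude: torch.Tensor, phase: torch.Tensor) -> torch.Tensor:
    """Rebuild real-valued features from the (possibly detached) AS and PS."""
    spectrum = torch.polar(amplitude, phase)  # amplitude * exp(i * phase)
    return torch.fft.ifft2(spectrum).real

def paddles_features(features, epoch, stop_as_epoch=20, stop_ps_epoch=60):
    """Apply separate (hypothetical) stopping points to the AS and PS by
    cutting gradients through a spectrum once its epoch has passed."""
    amplitude, phase = disentangle_phase_amplitude(features)
    if epoch >= stop_as_epoch:
        amplitude = amplitude.detach()        # stop the AS branch earlier
    if epoch >= stop_ps_epoch:
        phase = phase.detach()                # PS is more robust, stops later
    return recombine(amplitude, phase)
```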
Related papers
- Stochastic Restarting to Overcome Overfitting in Neural Networks with Noisy Labels [2.048226951354646]
We show that restarting from a checkpoint can significantly improve generalization performance when training deep neural networks (DNNs) with noisy labels.
We develop a method based on restarting, a strategy that has been actively explored in statistical physics for finding targets efficiently.
An important aspect of our method is its ease of implementation and compatibility with other methods, while still yielding notably improved performance.
arXiv Detail & Related papers (2024-06-01T10:45:41Z)
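As a rough illustration of the restarting idea, the sketch below periodically rolls a model back to its best checkpoint so far while training on noisy labels. The restart interval, the validation criterion, and the callables `train_one_epoch` and `evaluate` are assumptions for illustration; the paper's actual restart schedule is derived from its statistical-physics analysis and may differ.

```python
import copy

def train_with_restarts(model, optimizer, train_one_epoch, evaluate,
                        num_epochs=100, restart_every=10):
    """Noisy-label training with periodic restarts from the best checkpoint.
    `train_one_epoch(model, optimizer)` and `evaluate(model) -> float`
    are hypothetical callables supplied by the caller."""
    best_acc = evaluate(model)
    best_state = copy.deepcopy(model.state_dict())
    for epoch in range(num_epochs):
        train_one_epoch(model, optimizer)
        acc = evaluate(model)                 # e.g. clean-validation accuracy
        if acc > best_acc:                    # remember the best checkpoint
            best_acc, best_state = acc, copy.deepcopy(model.state_dict())
        if (epoch + 1) % restart_every == 0:  # periodic restart
            model.load_state_dict(best_state)
    model.load_state_dict(best_state)         # end on the best weights
    return model
```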
- Label-Noise Robust Diffusion Models [18.82847557713331]
Conditional diffusion models have shown remarkable performance in various generative tasks.
Training them requires large-scale datasets that often contain noise in conditional inputs, a.k.a. noisy labels.
This paper proposes Transition-aware weighted Denoising Score Matching for training conditional diffusion models with noisy labels.
arXiv Detail & Related papers (2024-02-27T14:00:34Z)
- Dynamics-Aware Loss for Learning with Label Noise [73.75129479936302]
Label noise poses a serious threat to deep neural networks (DNNs).
We propose a dynamics-aware loss (DAL) to solve this problem.
Both the detailed theoretical analyses and extensive experimental results demonstrate the superiority of our method.
arXiv Detail & Related papers (2023-03-21T03:05:21Z)
- Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, avoiding the arbitrary tuning from a mini-batch of samples that previous methods rely on.
arXiv Detail & Related papers (2023-02-19T15:24:37Z)
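To give a flavor of the Gibbs step involved, the sketch below samples latent true labels from the conditional posterior implied by a classifier's softmax outputs and a noise transition matrix. The function name and tensor layout are assumptions; LCCN's full sampler also updates the transition matrix and the classifier itself.

```python
import torch

def sample_true_labels(probs, noisy_labels, transition):
    """One Gibbs step: draw latent true labels y from
    p(y | x, y_noisy) ∝ p(y | x) * T[y, y_noisy].

    probs:        (N, C) classifier softmax outputs p(y | x)
    noisy_labels: (N,)   observed noisy labels (LongTensor)
    transition:   (C, C) noise transition matrix T[true, noisy]
    """
    likelihood = transition[:, noisy_labels].T            # (N, C)
    posterior = probs * likelihood                        # unnormalized
    posterior = posterior / posterior.sum(dim=1, keepdim=True)
    return torch.multinomial(posterior, num_samples=1).squeeze(1)
```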
- Towards Harnessing Feature Embedding for Robust Learning with Noisy Labels [44.133307197696446]
The memorization effect of deep neural networks (DNNs) plays a pivotal role in recent label noise learning methods.
We propose a novel feature embedding-based method for deep learning with label noise, termed LabEl NoiseDilution (LEND).
arXiv Detail & Related papers (2022-06-27T02:45:09Z)
- Understanding and Improving Early Stopping for Learning with Noisy Labels [63.0730063791198]
The memorization effect of deep neural networks (DNNs) plays a pivotal role in many state-of-the-art label-noise learning methods.
Current methods generally decide the early stopping point by considering a DNN as a whole.
To address this problem, we propose to separate a DNN into different parts and train them progressively.
arXiv Detail & Related papers (2021-06-30T07:18:00Z)
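A minimal sketch of the part-wise idea, assuming a torchvision-style network whose parameter names start with prefixes such as "layer1" or "fc": each part gets its own stopping epoch and is frozen once that epoch passes. The grouping by name prefix and the epoch values are illustrative assumptions; the paper's progressive training procedure is more involved.

```python
import torch

# Hypothetical per-part stopping epochs; later layers, which are more
# sensitive to label noise, stop earlier here.
STOP_EPOCHS = {"layer1": 40, "layer2": 30, "layer3": 20, "layer4": 15, "fc": 10}

def freeze_stopped_parts(model: torch.nn.Module, epoch: int) -> None:
    """Disable gradients for every part whose stopping epoch has passed."""
    for name, param in model.named_parameters():
        prefix = name.split(".")[0]   # "layer3.0.conv1.weight" -> "layer3"
        if epoch >= STOP_EPOCHS.get(prefix, float("inf")):
            param.requires_grad_(False)
```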
- Open-set Label Noise Can Improve Robustness Against Inherent Label Noise [27.885927200376386]
We show that open-set noisy labels can be non-toxic and even benefit robustness against inherent noisy labels.
We propose a simple yet effective regularization by introducing Open-set samples with Dynamic Noisy Labels (ODNL) into training.
arXiv Detail & Related papers (2021-06-21T07:15:50Z)
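The regularizer is straightforward to sketch: every time a batch of auxiliary open-set images is drawn, it is paired with freshly sampled random labels and added to the usual cross-entropy objective. The weight `lam` and the function name are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def odnl_loss(model, images, labels, open_set_images, num_classes, lam=1.0):
    """Cross-entropy on the training batch plus a dynamic-noisy-label term
    on open-set images, whose labels are re-drawn at every call."""
    loss = F.cross_entropy(model(images), labels)
    # Dynamic noisy labels: fresh uniform-random targets each iteration.
    rand_labels = torch.randint(0, num_classes,
                                (open_set_images.size(0),),
                                device=open_set_images.device)
    return loss + lam * F.cross_entropy(model(open_set_images), rand_labels)
```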
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
- Rectified Meta-Learning from Noisy Labels for Robust Image-based Plant Disease Diagnosis [64.82680813427054]
Plant diseases are among the main threats to food security and crop production.
One popular approach is to cast this problem as a leaf image classification task, which can be addressed by powerful convolutional neural networks (CNNs).
We propose a novel framework that incorporates a rectified meta-learning module into the common CNN paradigm to train a noise-robust deep network without extra supervision information.
arXiv Detail & Related papers (2020-03-17T09:51:30Z)