Dynamics-Aware Loss for Learning with Label Noise
- URL: http://arxiv.org/abs/2303.11562v3
- Date: Sat, 5 Aug 2023 07:39:03 GMT
- Title: Dynamics-Aware Loss for Learning with Label Noise
- Authors: Xiu-Chuan Li, Xiaobo Xia, Fei Zhu, Tongliang Liu, Xu-Yao Zhang,
Cheng-Lin Liu
- Abstract summary: Label noise poses a serious threat to deep neural networks (DNNs).
We propose a dynamics-aware loss (DAL) to solve this problem.
Both the detailed theoretical analyses and extensive experimental results demonstrate the superiority of our method.
- Score: 73.75129479936302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Label noise poses a serious threat to deep neural networks (DNNs).
Employing robust loss functions that reconcile fitting ability with robustness
is a simple but effective strategy for handling this problem. However, the
widely used static trade-off between these two factors contradicts the dynamics
of DNNs learning with label noise, leading to inferior performance. Therefore,
we propose a dynamics-aware loss (DAL) to solve this problem. Considering that
DNNs tend to first learn beneficial patterns and then gradually overfit harmful
label noise, DAL strengthens the fitting ability initially and then gradually
improves robustness. Moreover, at the later stage, to further reduce the
negative impact of label noise while simultaneously combating underfitting, we
let DNNs put more emphasis on easy examples than hard ones and introduce a
bootstrapping term. Both detailed theoretical analyses and extensive
experimental results demonstrate the superiority of our method. Our source code
can be found at https://github.com/XiuchuanLi/DAL.
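The abstract describes the mechanism only at a high level; the exact loss is defined in the linked repository. As a rough illustration of the idea, the sketch below interpolates between a CE-like loss (strong fitting) and an MAE-like robust loss via a generalized-cross-entropy exponent that grows during training, and adds a late-stage bootstrapping term. The schedule, the stage breakpoints, and beta are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn.functional as F

def dynamics_aware_loss(logits, targets, epoch, total_epochs, beta=0.3):
    """Illustrative sketch only; the authors' exact DAL is in their repo."""
    probs = F.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1).clamp_min(1e-7)

    # Trade-off exponent q grows from ~0 (CE-like, strong fitting) toward
    # 1 (MAE-like, robust); larger q also down-weights hard examples.
    q = max(0.01, min(1.0, epoch / (0.7 * total_epochs)))
    loss = ((1.0 - p_y.pow(q)) / q).mean()

    # Late-stage bootstrapping (assumed form): also fit the model's own
    # detached predictions to dilute noisy supervision without underfitting.
    if epoch > 0.5 * total_epochs:
        boot = -(probs.detach() * torch.log(probs.clamp_min(1e-7))).sum(1).mean()
        loss = loss + beta * boot
    return loss
```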
Related papers
- Stochastic Restarting to Overcome Overfitting in Neural Networks with Noisy Labels [2.048226951354646]
We show that restarting from a checkpoint can significantly improve generalization performance when training deep neural networks (DNNs) with noisy labels.
We develop a method based on restarting, a strategy that has been actively explored in statistical physics for finding targets efficiently.
An important aspect of our method is its ease of implementation and compatibility with other methods, while still yielding notably improved performance.
arXiv Detail & Related papers (2024-06-01T10:45:41Z) - Robust Training of Graph Neural Networks via Noise Governance [27.767913371777247]
- Robust Training of Graph Neural Networks via Noise Governance [27.767913371777247]
Graph Neural Networks (GNNs) have become widely used models for semi-supervised learning.
In this paper, we consider an important yet challenging scenario where labels on graph nodes are not only noisy but also scarce.
We propose a novel framework, RTGNN, that achieves better robustness by learning to explicitly govern label noise.
arXiv Detail & Related papers (2022-11-12T09:25:32Z) - Understanding and Improving Early Stopping for Learning with Noisy
Labels [63.0730063791198]
The memorization effect of deep neural networks (DNNs) plays a pivotal role in many state-of-the-art label-noise learning methods.
Current methods generally decide the early stopping point by considering a DNN as a whole.
We propose to separate a DNN into different parts and progressively train them to address this problem.
arXiv Detail & Related papers (2021-06-30T07:18:00Z) - Learning from Noisy Labels via Dynamic Loss Thresholding [69.61904305229446]
- Learning from Noisy Labels via Dynamic Loss Thresholding [69.61904305229446]
We propose a novel method named Dynamic Loss Thresholding (DLT).
During the training process, DLT records the loss value of each sample and calculates dynamic loss thresholds.
Experiments on CIFAR-10/100 and Clothing1M demonstrate substantial improvements over recent state-of-the-art methods.
arXiv Detail & Related papers (2021-04-01T07:59:03Z) - Tackling Instance-Dependent Label Noise via a Universal Probabilistic
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements in robustness.
arXiv Detail & Related papers (2021-01-14T05:43:51Z) - How benign is benign overfitting? [96.07549886487526]
- How benign is benign overfitting? [96.07549886487526]
We investigate two causes of adversarial vulnerability in deep neural networks: bad data and (poorly) trained models.
Deep neural networks essentially achieve zero training error, even in the presence of label noise.
We identify label noise as one of the causes for adversarial vulnerability.
arXiv Detail & Related papers (2020-07-08T11:07:10Z) - Temporal Calibrated Regularization for Robust Noisy Label Learning [60.90967240168525]
- Temporal Calibrated Regularization for Robust Noisy Label Learning [60.90967240168525]
Deep neural networks (DNNs) exhibit great success on many tasks with the help of large-scale, well-annotated datasets.
However, labeling large-scale data is costly and error-prone, making it difficult to guarantee annotation quality.
We propose Temporal Calibrated Regularization (TCR), which utilizes the original labels together with the predictions from the previous epoch.
arXiv Detail & Related papers (2020-07-01T04:48:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.