Temporal Calibrated Regularization for Robust Noisy Label Learning
- URL: http://arxiv.org/abs/2007.00240v1
- Date: Wed, 1 Jul 2020 04:48:49 GMT
- Title: Temporal Calibrated Regularization for Robust Noisy Label Learning
- Authors: Dongxian Wu, Yisen Wang, Zhuobin Zheng, Shu-tao Xia
- Abstract summary: Deep neural networks (DNNs) achieve great success on many tasks with the help of large-scale, well-annotated datasets.
However, labeling large-scale data is costly and error-prone, so it is difficult to guarantee annotation quality.
We propose Temporal Calibrated Regularization (TCR), in which we utilize the original labels together with the predictions from the previous epoch.
- Score: 60.90967240168525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) achieve great success on many tasks with the help of large-scale, well-annotated datasets. However, labeling large-scale data can be very costly and error-prone, so it is difficult to guarantee annotation quality (i.e., the labels may be noisy). Training on such noisily labeled datasets can degrade generalization performance. Existing methods either rely on complex division of the training stages or incur substantial extra computation for marginal performance gains. In this paper, we propose Temporal Calibrated Regularization (TCR), in which we utilize the original labels together with the predictions from the previous epoch, so that the DNN retains the simple patterns it has already learned with little overhead. We conduct extensive experiments on various neural network architectures and datasets, and find that TCR consistently enhances the robustness of DNNs to label noise.
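The abstract describes combining the original (possibly noisy) labels with the predictions recorded in the previous epoch as the training target. Below is a minimal PyTorch sketch of that idea; the mixing weight alpha, the per-sample probability buffer, and the exact loss form are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def tcr_style_loss(logits, noisy_labels, prev_epoch_probs, alpha=0.7):
    """Illustrative loss mixing the given (noisy) labels with the model's
    softmax outputs recorded in the previous epoch.

    logits:           (N, C) current model outputs
    noisy_labels:     (N,)   original label indices
    prev_epoch_probs: (N, C) softmax probabilities saved last epoch
    alpha:            weight on the original labels (assumed value)
    """
    one_hot = F.one_hot(noisy_labels, num_classes=logits.size(1)).float()
    # Soft target: convex combination of original labels and last-epoch predictions.
    target = alpha * one_hot + (1.0 - alpha) * prev_epoch_probs
    log_probs = F.log_softmax(logits, dim=1)
    return -(target * log_probs).sum(dim=1).mean()

# Usage sketch: keep a per-sample buffer of last-epoch probabilities.
num_samples, num_classes = 1000, 10
prev_probs = torch.full((num_samples, num_classes), 1.0 / num_classes)

def train_step(model, optimizer, x, y, idx):
    logits = model(x)
    loss = tcr_style_loss(logits, y, prev_probs[idx])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Record this epoch's predictions for use as soft targets next epoch.
    prev_probs[idx] = F.softmax(logits.detach(), dim=1)
    return loss.item()
```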
Related papers
- An Embedding is Worth a Thousand Noisy Labels [0.11999555634662634]
We propose WANN, a Weighted Adaptive Nearest Neighbor approach, to address label noise.
We show WANN outperforms reference methods on diverse datasets of varying size and under various noise types and severities.
Our approach, emphasizing efficiency and explainability, emerges as a simple, robust solution to overcome the inherent limitations of deep neural network training.
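The summary names the approach without detailing it; as a generic illustration of weighted nearest-neighbor label cleanup over fixed embeddings, one might write something like the sketch below (the value of k, the inverse-distance weighting, and the flagging rule are assumptions, not WANN's actual algorithm).

```python
import numpy as np

def weighted_knn_relabel(embeddings, noisy_labels, k=10, num_classes=10):
    """Toy weighted kNN over fixed embeddings: each sample receives class
    scores from its neighbors, weighted by inverse distance; samples whose
    kNN vote disagrees with the given label are flagged as likely noisy."""
    n = embeddings.shape[0]
    # Pairwise Euclidean distances (fine for small n; use an ANN index at scale).
    dists = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                    # exclude self-matches
    nn_idx = np.argsort(dists, axis=1)[:, :k]
    votes = np.zeros((n, num_classes))
    for i in range(n):
        weights = 1.0 / (dists[i, nn_idx[i]] + 1e-8)   # inverse-distance weights
        for w, neighbor in zip(weights, nn_idx[i]):
            votes[i, noisy_labels[neighbor]] += w
    knn_pred = votes.argmax(axis=1)
    flagged = knn_pred != noisy_labels                 # candidate noisy labels
    return knn_pred, flagged
```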
arXiv Detail & Related papers (2024-08-26T15:32:31Z)
- Stochastic Restarting to Overcome Overfitting in Neural Networks with Noisy Labels [2.048226951354646]
We show that restarting from a checkpoint can significantly improve generalization performance when training deep neural networks (DNNs) with noisy labels.
We develop a method based on restarting, which has been actively explored in the statistical physics field for finding targets efficiently.
An important aspect of our method is its ease of implementation and compatibility with other methods, while still yielding notably improved performance.
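A rough sketch of restart-from-checkpoint training is shown below; the restart probability, when checkpoints are refreshed, and the overall schedule are assumptions rather than the paper's procedure.

```python
import copy
import random
import torch

def train_with_stochastic_restarts(model, optimizer, loader, loss_fn,
                                   epochs=100, restart_prob=0.05):
    """Toy training loop that occasionally rolls the model back to a
    previously saved checkpoint instead of continuing from the current
    (possibly noise-memorizing) weights."""
    checkpoint = copy.deepcopy(model.state_dict())
    for epoch in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
        if random.random() < restart_prob:
            # Stochastic restart: reload the saved checkpoint.
            model.load_state_dict(copy.deepcopy(checkpoint))
        else:
            # Otherwise refresh the checkpoint with the current weights.
            checkpoint = copy.deepcopy(model.state_dict())
    return model
```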
arXiv Detail & Related papers (2024-06-01T10:45:41Z)
- ERASE: Error-Resilient Representation Learning on Graphs for Label Noise Tolerance [53.73316938815873]
We propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE) to learn representations with error tolerance.
ERASE combines prototype pseudo-labels with propagated denoised labels and updates representations with error resilience.
Our method can outperform multiple baselines with clear margins in broad noise levels and enjoy great scalability.
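The two ingredients mentioned, prototype pseudo-labels and propagated denoised labels, can be illustrated roughly as follows; the combination rule, propagation depth, and normalization are assumptions and not ERASE's actual update.

```python
import numpy as np

def prototype_pseudo_labels(features, noisy_labels, num_classes):
    """Assign each node to its nearest class prototype (class-mean feature).
    Assumes every class appears at least once in the noisy labels."""
    protos = np.stack([features[noisy_labels == c].mean(axis=0)
                       for c in range(num_classes)])
    dists = np.linalg.norm(features[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

def propagate_labels(adj, noisy_labels, num_classes, steps=3):
    """Simple label propagation over a row-normalized adjacency matrix."""
    y = np.eye(num_classes)[noisy_labels]
    p = adj / adj.sum(axis=1, keepdims=True).clip(min=1e-8)
    for _ in range(steps):
        y = p @ y
    return y.argmax(axis=1)

def denoised_labels(features, adj, noisy_labels, num_classes):
    """Keep the given label where the prototype estimate agrees with it;
    otherwise fall back to the propagated label (an assumed combination rule)."""
    proto = prototype_pseudo_labels(features, noisy_labels, num_classes)
    prop = propagate_labels(adj, noisy_labels, num_classes)
    return np.where(proto == noisy_labels, noisy_labels, prop)
```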
arXiv Detail & Related papers (2023-12-13T17:59:07Z)
- Towards Harnessing Feature Embedding for Robust Learning with Noisy Labels [44.133307197696446]
The memorization effect of deep neural networks (DNNs) plays a pivotal role in recent label noise learning methods.
We propose a novel feature embedding-based method for deep learning with label noise, termed LabEl NoiseDilution (LEND).
arXiv Detail & Related papers (2022-06-27T02:45:09Z)
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements in robustness.
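One generic way to "explicitly relate noisy labels to their instances" is to let a small head predict an instance-dependent transition matrix T(x) and fit the noisy observations through it; the sketch below shows only that general idea, with an assumed toy architecture and input shape, and is not the paper's specific model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceDependentNoiseModel(nn.Module):
    """Classifier plus a head predicting, per instance, a transition matrix
    T(x)[i, j] = P(noisy label = j | clean label = i, x)."""
    def __init__(self, in_dim=784, feat_dim=64, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.clean_head = nn.Linear(feat_dim, num_classes)
        self.transition_head = nn.Linear(feat_dim, num_classes * num_classes)
        self.num_classes = num_classes

    def forward(self, x):
        h = self.backbone(x)
        clean_probs = F.softmax(self.clean_head(h), dim=1)                # (N, C)
        t = self.transition_head(h).view(-1, self.num_classes, self.num_classes)
        t = F.softmax(t, dim=2)                                           # rows sum to 1
        # P(noisy = j | x) = sum_i P(clean = i | x) * T(x)[i, j]
        noisy_probs = torch.bmm(clean_probs.unsqueeze(1), t).squeeze(1)   # (N, C)
        return clean_probs, noisy_probs

def noisy_label_nll(noisy_probs, noisy_labels):
    # Fit the observed *noisy* labels through the transition model.
    return F.nll_loss(torch.log(noisy_probs + 1e-8), noisy_labels)
```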
arXiv Detail & Related papers (2021-01-14T05:43:51Z)
- KNN-enhanced Deep Learning Against Noisy Labels [4.765948508271371]
Supervised learning on Deep Neural Networks (DNNs) is data-hungry.
In this work, we propose to apply deep KNN for label cleanup.
We iteratively train the neural network and update labels to simultaneously proceed towards higher label recovery rate and better classification performance.
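A schematic version of the train-then-relabel loop described above might look like the following; the feature extractor, kNN parameters, and the confidence-based relabeling rule are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def iterative_knn_cleanup(train_fn, embed_fn, x, noisy_labels, rounds=3, k=20):
    """Alternate between (1) training the network on the current labels and
    (2) replacing labels by a confident kNN vote in the network's embedding space.

    train_fn(x, labels) -> trained model          (assumed helper)
    embed_fn(model, x)  -> (N, D) feature array   (assumed helper)
    """
    labels = noisy_labels.copy()
    model = None
    for _ in range(rounds):
        model = train_fn(x, labels)
        feats = embed_fn(model, x)
        knn = KNeighborsClassifier(n_neighbors=k).fit(feats, labels)
        proba = knn.predict_proba(feats)
        pred = knn.classes_[proba.argmax(axis=1)]
        conf = proba.max(axis=1)
        # Relabel only where the kNN vote is confident and disagrees (assumed rule).
        relabel = (conf > 0.8) & (pred != labels)
        labels[relabel] = pred[relabel]
    return model, labels
```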
arXiv Detail & Related papers (2020-12-08T05:21:29Z)
- Deep Time Delay Neural Network for Speech Enhancement with Full Data Learning [60.20150317299749]
This paper proposes a deep time delay neural network (TDNN) for speech enhancement with full data learning.
To make full use of the training data, we propose a full data learning method for speech enhancement.
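A time delay neural network layer is, in essence, a dilated 1-D convolution over frames; the toy enhancement stack below illustrates that building block with assumed channel sizes and context widths, and does not reproduce the paper's full data learning scheme.

```python
import torch
import torch.nn as nn

class TDNNLayer(nn.Module):
    """One TDNN layer: a dilated 1-D convolution over frames, so each output
    frame sees inputs at t - d*c, ..., t, ..., t + d*c."""
    def __init__(self, in_dim, out_dim, context=2, dilation=1):
        super().__init__()
        self.conv = nn.Conv1d(in_dim, out_dim,
                              kernel_size=2 * context + 1,
                              dilation=dilation,
                              padding=context * dilation)
        self.act = nn.ReLU()

    def forward(self, x):                    # x: (batch, feat_dim, frames)
        return self.act(self.conv(x))

# Assumed toy enhancement network: stacked TDNN layers predicting a mask.
net = nn.Sequential(
    TDNNLayer(257, 256, context=2, dilation=1),
    TDNNLayer(256, 256, context=2, dilation=2),
    nn.Conv1d(256, 257, kernel_size=1),
    nn.Sigmoid(),                            # mask applied to the noisy spectrogram
)
noisy_spec = torch.randn(4, 257, 100)        # (batch, frequency bins, frames)
enhanced = net(noisy_spec) * noisy_spec
```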
arXiv Detail & Related papers (2020-11-11T06:32:37Z)
- Combining Label Propagation and Simple Models Out-performs Graph Neural Networks [52.121819834353865]
We show that for many standard transductive node classification benchmarks, combining simple models with label propagation can exceed or match the performance of state-of-the-art GNNs.
We call this overall procedure Correct and Smooth (C&S).
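C&S-style post-processing can be sketched as: train a simple base predictor, propagate its residual errors on the training nodes, then smooth the corrected predictions over the graph. The propagation weights and iteration counts below are assumptions; see the paper for the exact procedure.

```python
import numpy as np

def propagate(adj, signal, alpha=0.8, steps=20):
    """Iterative propagation s <- alpha * P @ s + (1 - alpha) * signal,
    with P the row-normalized adjacency matrix."""
    p = adj / adj.sum(axis=1, keepdims=True).clip(min=1e-8)
    s = signal.copy()
    for _ in range(steps):
        s = alpha * (p @ s) + (1 - alpha) * signal
    return s

def correct_and_smooth(adj, base_probs, y_onehot, train_mask):
    """Sketch of the two post-processing steps applied to a simple model's
    soft predictions (base_probs)."""
    # Correct: propagate the training residuals and add them back.
    residual = np.zeros_like(base_probs)
    residual[train_mask] = y_onehot[train_mask] - base_probs[train_mask]
    corrected = base_probs + propagate(adj, residual)
    # Smooth: clamp training nodes to their true labels, then propagate.
    corrected[train_mask] = y_onehot[train_mask]
    return propagate(adj, corrected)
```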
arXiv Detail & Related papers (2020-10-27T02:10:52Z)
- Improving Semantic Segmentation via Self-Training [75.07114899941095]
We show that we can obtain state-of-the-art results using a semi-supervised approach, specifically a self-training paradigm.
We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data.
Our robust training framework can digest human-annotated and pseudo labels jointly and achieve top performances on Cityscapes, CamVid and KITTI datasets.
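The teacher-student pseudo-labeling loop described above, in schematic form. It is written at the image level for brevity; per-pixel segmentation applies the same idea pixel-wise, and the confidence threshold and loss weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def generate_pseudo_labels(teacher, unlabeled_loader, threshold=0.9):
    """Run the trained teacher on unlabeled data and keep only its
    confident predictions as pseudo-labels."""
    teacher.eval()
    pseudo = []
    with torch.no_grad():
        for x in unlabeled_loader:
            probs = F.softmax(teacher(x), dim=1)
            conf, labels = probs.max(dim=1)
            keep = conf > threshold
            if keep.any():
                pseudo.append((x[keep], labels[keep]))
    return pseudo

def student_step(student, optimizer, labeled_batch, pseudo_batch, w=0.5):
    """One joint training step on human-annotated and pseudo-labeled data."""
    (x_l, y_l), (x_p, y_p) = labeled_batch, pseudo_batch
    loss = F.cross_entropy(student(x_l), y_l) \
         + w * F.cross_entropy(student(x_p), y_p)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```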
arXiv Detail & Related papers (2020-04-30T17:09:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.