Noise-Robust Bidirectional Learning with Dynamic Sample Reweighting
- URL: http://arxiv.org/abs/2209.01334v1
- Date: Sat, 3 Sep 2022 06:00:31 GMT
- Title: Noise-Robust Bidirectional Learning with Dynamic Sample Reweighting
- Authors: Chen-Chen Zong, Zheng-Tao Cao, Hong-Tao Guo, Yun Du, Ming-Kun Xie,
Shao-Yuan Li, and Sheng-Jun Huang
- Abstract summary: Deep neural networks trained with standard cross-entropy
loss are prone to memorizing noisy labels.
Negative learning using complementary labels is more robust when noisy labels
intervene, but converges extremely slowly.
In this paper, we first introduce a bidirectional learning scheme, where positive learning ensures convergence speed while negative learning robustly copes with label noise.
- Score: 28.493837430606117
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks trained with standard cross-entropy loss are
prone to memorizing noisy labels, which degrades their performance. Negative
learning using complementary labels is more robust when noisy labels
intervene, but converges extremely slowly. In this paper, we first introduce a
bidirectional learning scheme, where positive learning ensures convergence
speed while negative learning robustly copes with label noise. Further, a
dynamic sample reweighting strategy is proposed to globally weaken the effect
of noise-labeled samples by exploiting the strong ability of negative learning
to discriminate noisy samples through the sample probability distribution. In
addition, we combine self-distillation to further improve model performance.
The code is available at https://github.com/chenchenzong/BLDR.
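As a rough illustration of the scheme in the abstract, the sketch below
combines positive learning (cross-entropy) with negative learning on a
randomly drawn complementary label, and accepts optional per-sample weights
standing in for the dynamic reweighting strategy. The function name, the
weighting hook, and the unweighted sum of the two terms are illustrative
assumptions, not the released BLDR implementation.

```python
import torch
import torch.nn.functional as F

def bidirectional_loss(logits, labels, num_classes, sample_weights=None):
    # Positive learning: cross-entropy on the given (possibly noisy) labels,
    # which keeps convergence fast.
    pl = F.cross_entropy(logits, labels, reduction="none")

    # Negative learning: draw a complementary label y_bar != y uniformly at
    # random and push its predicted probability toward zero.
    offsets = torch.randint(1, num_classes, labels.shape, device=labels.device)
    comp = (labels + offsets) % num_classes  # guaranteed to differ from labels
    p_comp = F.softmax(logits, dim=1).gather(1, comp.unsqueeze(1)).squeeze(1)
    nl = -torch.log(1.0 - p_comp + 1e-7)

    loss = pl + nl
    if sample_weights is not None:
        # Dynamic sample reweighting (assumed interface): globally down-weight
        # samples judged likely to be noise-labeled, e.g. from statistics of
        # the negative-learning probability distribution.
        loss = loss * sample_weights
    return loss.mean()
```

In use, the per-sample weights would be recomputed periodically from the
model's probability distribution over the training set, so likely-noisy
samples contribute less as training proceeds.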
Related papers
- Robust Partial-Label Learning by Leveraging Class Activation Values [0.0]
Real-world training data is often noisy; for example, human annotators assign conflicting class labels to the same instances.
We propose a novel method based on subjective logic, which explicitly represents uncertainty by leveraging the magnitudes of the underlying neural network's class activation values.
We empirically show that our method yields more robust predictive performance
under high noise levels.
arXiv Detail & Related papers (2025-02-17T12:30:05Z)
- Mitigating Instance-Dependent Label Noise: Integrating Self-Supervised Pretraining with Pseudo-Label Refinement
Real-world datasets often contain noisy labels due to human error, ambiguity, or resource constraints during the annotation process.
We propose a novel framework that combines self-supervised learning using SimCLR with iterative pseudo-label refinement.
Our approach significantly outperforms several state-of-the-art methods,
particularly under high noise conditions.
arXiv Detail & Related papers (2024-12-06T09:56:49Z)
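A minimal sketch of one refinement round in the spirit of the entry above,
assuming a model already pretrained with SimCLR-style contrastive learning:
a given label is replaced only when the classifier is confident about a
different class. The threshold and replacement rule are illustrative
assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def refine_pseudo_labels(model, loader, labels, threshold=0.9):
    # `loader` yields (inputs, dataset indices); `labels` holds the current
    # (possibly noisy) label of every training sample.
    refined = labels.clone()
    for x, idx in loader:
        probs = F.softmax(model(x), dim=1)
        conf, pred = probs.max(dim=1)
        # Overwrite a label only on confident disagreement with the model.
        replace = (conf >= threshold) & (pred != labels[idx])
        refined[idx[replace]] = pred[replace]
    return refined
```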
- Extracting Clean and Balanced Subset for Noisy Long-tailed Classification [66.47809135771698]
We develop a novel pseudo labeling method using class prototypes from the perspective of distribution matching.
By setting a manually specified probability measure, we can reduce the side
effects of noisy and long-tailed data simultaneously.
Our method can extract this class-balanced subset with clean labels, which
brings effective performance gains for long-tailed classification with label
noise.
arXiv Detail & Related papers (2024-04-10T07:34:37Z)
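As a rough sketch of the prototype idea above: build one prototype per class
as the mean feature of its samples and relabel each sample by its nearest
prototype. The distribution-matching step with a manually specified
probability measure is omitted; this is an illustrative sketch only.

```python
import torch
import torch.nn.functional as F

def prototype_pseudo_labels(features, labels, num_classes):
    # Normalize so similarity to a prototype is cosine similarity.
    feats = F.normalize(features, dim=1)
    # One prototype per class: the mean of its (possibly noisy) members.
    protos = torch.stack(
        [feats[labels == c].mean(dim=0) for c in range(num_classes)]
    )
    protos = F.normalize(protos, dim=1)
    # Pseudo-label each sample with its most similar class prototype.
    return (feats @ protos.t()).argmax(dim=1)
```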
- Learning with Imbalanced Noisy Data by Preventing Bias in Sample Selection [82.43311784594384]
Real-world datasets contain not only noisy labels but also class imbalance.
We propose a simple yet effective method to address noisy labels in imbalanced datasets.
arXiv Detail & Related papers (2024-02-17T10:34:53Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are
laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely
Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Combating Label Noise With A General Surrogate Model For Sample Selection [77.45468386115306]
We propose to leverage the vision-language surrogate model CLIP to filter noisy samples automatically.
We validate the effectiveness of our proposed method on both real-world and
synthetic noisy datasets.
arXiv Detail & Related papers (2023-10-16T14:43:27Z)
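A minimal sketch of CLIP-based filtering in the spirit of the entry above:
keep a sample when CLIP's zero-shot prediction assigns enough probability to
its given label. The prompt template and the `min_prob` rule are illustrative
assumptions; images are assumed already preprocessed with CLIP's transform.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

@torch.no_grad()
def clip_keep_mask(images, labels, class_names, device="cpu", min_prob=0.3):
    model, _ = clip.load("ViT-B/32", device=device)
    text = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    img_f = model.encode_image(images.to(device)).float()
    txt_f = model.encode_text(text).float()
    img_f = img_f / img_f.norm(dim=1, keepdim=True)
    txt_f = txt_f / txt_f.norm(dim=1, keepdim=True)
    probs = (100.0 * img_f @ txt_f.t()).softmax(dim=1)
    # True where the given label receives enough zero-shot probability.
    idx = torch.arange(len(labels), device=device)
    return probs[idx, labels.to(device)] >= min_prob
```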
- Towards Harnessing Feature Embedding for Robust Learning with Noisy Labels [44.133307197696446]
The memorization effect of deep neural networks (DNNs) plays a pivotal role in recent label noise learning methods.
We propose a novel feature embedding-based method for deep learning with
label noise, termed LabEl NoiseDilution (LEND).
arXiv Detail & Related papers (2022-06-27T02:45:09Z)
- Open-set Label Noise Can Improve Robustness Against Inherent Label Noise [27.885927200376386]
We show that open-set noisy labels can be non-toxic and can even benefit
robustness against inherent label noise.
We propose a simple yet effective regularization by introducing Open-set
samples with Dynamic Noisy Labels (ODNL) into training.
arXiv Detail & Related papers (2021-06-21T07:15:50Z)
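A minimal sketch of the ODNL idea above: auxiliary open-set images are mixed
into training with labels re-drawn uniformly at random on every step, so the
network cannot memorize them. The loss weighting `alpha` is an illustrative
assumption.

```python
import torch
import torch.nn.functional as F

def odnl_loss(model, x_in, y_in, x_open, num_classes, alpha=1.0):
    # Standard loss on the in-distribution batch (labels possibly noisy).
    loss_in = F.cross_entropy(model(x_in), y_in)
    # Dynamic noisy labels: freshly re-sampled on every call, never fixed.
    y_rand = torch.randint(
        0, num_classes, (x_open.size(0),), device=x_open.device
    )
    loss_open = F.cross_entropy(model(x_open), y_rand)
    return loss_in + alpha * loss_open
```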
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements on robustness.
arXiv Detail & Related papers (2021-01-14T05:43:51Z)
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
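A minimal sketch of the instance-level attack described above: perturb an
image so its embedding moves away from the embedding of its own augmented
view, confusing instance identity without any labels. The single-step
FGSM-style update, step size, and cosine loss are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def instance_adversarial_view(encoder, x, x_aug, eps=8 / 255):
    # Perturbation starts at zero; its gradient gives the attack direction.
    delta = torch.zeros_like(x, requires_grad=True)
    z = F.normalize(encoder(x + delta), dim=1)
    z_aug = F.normalize(encoder(x_aug), dim=1).detach()
    # Negative cosine similarity between two views of the same instance;
    # increasing this loss pushes the views apart.
    loss = -(z * z_aug).sum(dim=1).mean()
    loss.backward()
    # One FGSM step in the direction that increases the contrastive loss.
    return (x + eps * delta.grad.sign()).clamp(0, 1).detach()
```

The adversarial view can then be paired with clean augmented views in a
standard contrastive objective to train the encoder without any labels.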