Influential Rank: A New Perspective of Post-training for Robust Model
against Noisy Labels
- URL: http://arxiv.org/abs/2106.07217v4
- Date: Wed, 19 Apr 2023 05:33:58 GMT
- Title: Influential Rank: A New Perspective of Post-training for Robust Model
against Noisy Labels
- Authors: Seulki Park, Hwanjun Song, Daeho Um, Dae Ung Jo, Sangdoo Yun, and Jin
Young Choi
- Abstract summary: We propose a new approach for learning from noisy labels (LNL) via post-training.
We exploit the overfitting property of a trained model to identify mislabeled samples.
Our post-training approach combines synergistically with existing LNL methods.
- Score: 23.80449026013167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks can easily overfit even noisy labels due to
their high capacity, which degrades a model's generalization performance. To
overcome this issue, we propose a new approach for learning from noisy labels
(LNL) via post-training, which can significantly improve the generalization
performance of any pre-trained model on noisy-label data. To this end, we
instead exploit the overfitting property of a trained model to identify
mislabeled samples.
Specifically, our post-training approach gradually removes samples with high
influence on the decision boundary and refines the decision boundary to improve
generalization performance. Our post-training approach combines
synergistically with existing LNL methods. Experimental results on various
real-world and synthetic benchmark datasets demonstrate the validity of our
approach in diverse realistic scenarios.
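To make the procedure concrete, below is a minimal sketch of influence-ranked
post-training as we read it from the abstract; it is not the authors' released
implementation. The inverse prediction margin stands in for a sample's
influence on the decision boundary, and `finetune_fn`, `rounds`, and
`drop_frac` are hypothetical knobs.

```python
# Illustrative sketch (our reading of the abstract, not the official code).
# Proxy assumption: in an overfitted model, mislabeled samples press on the
# decision boundary, so a small prediction margin marks high influence.
import torch
import torch.nn.functional as F

def influence_scores(model, loader, device="cpu"):
    """Score each sample by inverse margin: 1 - (top-1 prob - top-2 prob)."""
    model.eval()
    scores = []
    with torch.no_grad():
        for x, _ in loader:  # assumes the dataset yields (input, label) pairs
            probs = F.softmax(model(x.to(device)), dim=1)
            top2 = probs.topk(2, dim=1).values
            scores.append(1.0 - (top2[:, 0] - top2[:, 1]))
    return torch.cat(scores)

def post_train(model, dataset, rounds=5, drop_frac=0.02, finetune_fn=None):
    """Gradually drop the highest-influence samples, then refine the model."""
    keep = torch.arange(len(dataset))
    for _ in range(rounds):
        subset = torch.utils.data.Subset(dataset, keep.tolist())
        loader = torch.utils.data.DataLoader(subset, batch_size=256)
        s = influence_scores(model, loader)
        n_drop = max(1, int(drop_frac * len(keep)))
        keep = keep[s.argsort()[:-n_drop]]  # remove the top-influence tail
        if finetune_fn is not None:         # hypothetical fine-tuning callback
            finetune_fn(model, torch.utils.data.Subset(dataset, keep.tolist()))
    return model, keep
```

The key design point is that scoring runs on the already-overfitted model,
whose decision boundary has been pulled toward the mislabeled points.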
Related papers
- Foster Adaptivity and Balance in Learning with Noisy Labels [26.309508654960354]
We propose a novel approach named SED to deal with label noise in a Self-adaptivE and class-balanceD manner.
A mean-teacher model is then employed to correct labels of noisy samples.
We additionally propose a self-adaptive and class-balanced sample re-weighting mechanism to assign different weights to detected noisy samples.
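A compact sketch of those two ingredients as this summary describes them; the
EMA momentum, confidence threshold, and inverse-frequency weighting rule are
our own illustrative choices, not necessarily SED's exact formulation.

```python
# Minimal sketch: mean-teacher label correction plus class-balanced
# re-weighting. The teacher starts as copy.deepcopy(student).
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Mean-teacher: teacher weights are an exponential moving average."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1 - momentum)

@torch.no_grad()
def correct_and_weight(teacher, x, noisy_y, num_classes, conf_thresh=0.9):
    """Relabel samples the teacher is confident about; weight every sample
    inversely to its (corrected) class frequency."""
    probs = F.softmax(teacher(x), dim=1)
    conf, pred = probs.max(dim=1)
    y = torch.where(conf > conf_thresh, pred, noisy_y)    # label correction
    counts = torch.bincount(y, minlength=num_classes).clamp(min=1).float()
    weights = (counts.sum() / (num_classes * counts))[y]  # class balancing
    return y, weights
```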
arXiv Detail & Related papers (2024-07-03T03:10:24Z)
- Learning with Imbalanced Noisy Data by Preventing Bias in Sample Selection [82.43311784594384]
Real-world datasets contain not only noisy labels but also class imbalance.
We propose a simple yet effective method to address noisy labels in imbalanced datasets.
arXiv Detail & Related papers (2024-02-17T10:34:53Z)
- Analyze the Robustness of Classifiers under Label Noise [5.708964539699851]
Label noise in supervised learning, characterized by erroneous or imprecise labels, significantly impairs model performance.
This research focuses on the increasingly pertinent issue of label noise's impact on practical applications.
arXiv Detail & Related papers (2023-12-12T13:51:25Z)
- Robust Feature Learning Against Noisy Labels [0.2082426271304908]
Mislabeled samples can significantly degrade the generalization of models.
Progressive self-bootstrapping is introduced to minimize the negative impact of supervision from noisy labels.
Experimental results show that our proposed method can efficiently and effectively enhance model robustness under severely noisy labels.
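The summary does not spell out the loss, so the sketch below shows the
conventional soft-bootstrapping form of the idea, with an assumed schedule for
beta; it illustrates self-bootstrapping generically rather than this paper's
exact objective.

```python
# Generic soft-bootstrapped cross-entropy: the target mixes the model's own
# (detached) prediction into the possibly noisy label.
import torch
import torch.nn.functional as F

def bootstrap_ce(logits, noisy_y, beta):
    """beta=1 recovers plain cross-entropy on the given labels; lower beta
    shifts trust toward the model's own predictions."""
    num_classes = logits.size(1)
    one_hot = F.one_hot(noisy_y, num_classes).float()
    pred = F.softmax(logits, dim=1).detach()
    target = beta * one_hot + (1.0 - beta) * pred
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# Assumed "progressive" schedule: trust labels early, self-predictions later.
# beta = max(0.6, 1.0 - epoch / total_epochs)
```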
arXiv Detail & Related papers (2023-07-10T02:55:35Z)
- SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning [101.86916775218403]
This paper revisits the popular pseudo-labeling methods via a unified sample weighting formulation.
We propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training.
In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
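The core idea fits in a few lines: replace the usual hard confidence threshold
with a smooth weight, so lower-confidence pseudo-labels still contribute, just
less. This simplified sketch elides SoftMatch's running (EMA) estimation of the
confidence mean and variance, which are passed in here as plain numbers.

```python
import torch

def soft_pseudo_weights(probs, mean, var):
    """probs: (N, C) predictions on unlabeled data. Confidence above the
    running mean gets full weight; below it, weight decays as a Gaussian
    instead of dropping to zero."""
    conf = probs.max(dim=1).values
    w = torch.exp(-(conf - mean) ** 2 / (2 * var))
    return torch.where(conf >= mean, torch.ones_like(w), w)
```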
arXiv Detail & Related papers (2023-01-26T03:53:25Z)
- MaxMatch: Semi-Supervised Learning with Worst-Case Consistency [149.03760479533855]
We propose a worst-case consistency regularization technique for semi-supervised learning (SSL).
We present a generalization bound for SSL consisting of the empirical loss terms observed on labeled and unlabeled training data separately.
Motivated by this bound, we derive an SSL objective that minimizes the largest inconsistency between an original unlabeled sample and its multiple augmented variants.
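A minimal sketch of such an objective, under our assumptions: KL divergence as
the inconsistency measure and `augment` as a stochastic augmentation callable.

```python
import torch
import torch.nn.functional as F

def worst_case_consistency(model, x, augment, k=4):
    """Among k augmented views, penalize only the one whose prediction
    disagrees most with the prediction on the original sample."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)            # anchor prediction
    losses = []
    for _ in range(k):
        logq = F.log_softmax(model(augment(x)), dim=1)
        losses.append(F.kl_div(logq, p, reduction="none").sum(dim=1))
    return torch.stack(losses).max(dim=0).values.mean()  # largest inconsistency
```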
arXiv Detail & Related papers (2022-09-26T12:04:49Z)
- Feature Diversity Learning with Sample Dropout for Unsupervised Domain Adaptive Person Re-identification [0.0]
This paper proposes a new approach to learn the feature representation with better generalization ability through limiting noisy pseudo labels.
We put forward a new method, referred to as Feature Diversity Learning (FDL), under the classic mutual-teaching architecture.
Experimental results show that our proposed FDL-SD achieves the state-of-the-art performance on multiple benchmark datasets.
arXiv Detail & Related papers (2022-01-25T10:10:48Z)
- Jo-SRC: A Contrastive Approach for Combating Noisy Labels [58.867237220886885]
We propose a noise-robust approach named Jo-SRC (Joint Sample Selection and Model Regularization based on Consistency).
Specifically, we train the network in a contrastive learning manner. Predictions from two different views of each sample are used to estimate its "likelihood" of being clean or out-of-distribution.
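One way to realize that selection signal, sketched with Jensen-Shannon
divergence as in Jo-SRC but with the scoring simplified and any thresholds
left to the caller:

```python
import math
import torch
import torch.nn.functional as F

def js_div(p, q, eps=1e-8):
    """Jensen-Shannon divergence between rows of p and q, scaled to [0, 1]
    by normalizing with log 2."""
    m = 0.5 * (p + q)
    kl_pm = (p * ((p + eps) / (m + eps)).log()).sum(dim=1)
    kl_qm = (q * ((q + eps) / (m + eps)).log()).sum(dim=1)
    return 0.5 * (kl_pm + kl_qm) / math.log(2)

@torch.no_grad()
def selection_scores(model, view1, view2, noisy_y, num_classes):
    """Agreement with the given label suggests clean; disagreement between
    the two views themselves suggests out-of-distribution."""
    p1 = F.softmax(model(view1), dim=1)
    p2 = F.softmax(model(view2), dim=1)
    one_hot = F.one_hot(noisy_y, num_classes).float()
    clean_score = 1.0 - js_div(0.5 * (p1 + p2), one_hot)
    ood_score = js_div(p1, p2)
    return clean_score, ood_score
```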
arXiv Detail & Related papers (2021-03-24T07:26:07Z)
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements on robustness.
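"Explicitly relating noisy labels to their instances" is commonly formalized
with an instance-dependent noise-transition matrix; the sketch below shows
that generic formulation, which may differ from this paper's specific model.

```python
import torch
import torch.nn.functional as F

def noisy_posterior(clean_logits, transition_logits):
    """p(noisy y | x) = T(x)^T p(clean y | x), where T(x) is a row-stochastic
    transition matrix predicted per instance (e.g., by a second head)."""
    p_clean = F.softmax(clean_logits, dim=1)              # (N, C)
    T = F.softmax(transition_logits, dim=2)               # (N, C, C)
    return torch.bmm(p_clean.unsqueeze(1), T).squeeze(1)  # (N, C)
```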
arXiv Detail & Related papers (2021-01-14T05:43:51Z)
- Deep k-NN for Noisy Labels [55.97221021252733]
We show that a simple $k$-nearest neighbor-based filtering approach on the logit layer of a preliminary model can remove mislabeled data and produce more accurate models than many recently proposed methods.
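The rule is simple enough to state directly. A brute-force numpy sketch
(O(N^2) distance matrix; a KD-tree or approximate nearest-neighbor index
would replace it at scale):

```python
import numpy as np

def knn_filter(logits, labels, k=10):
    """logits: (N, C) from a preliminary model; labels: (N,) non-negative
    ints, possibly noisy. Keep a sample only if its label matches the
    majority label of its k nearest neighbors in logit space."""
    d = ((logits[:, None, :] - logits[None, :, :]) ** 2).sum(-1)  # (N, N)
    np.fill_diagonal(d, np.inf)                  # exclude self-matches
    nn_idx = np.argsort(d, axis=1)[:, :k]        # k nearest neighbors
    nn_labels = labels[nn_idx]                   # (N, k)
    votes = np.apply_along_axis(np.bincount, 1, nn_labels,
                                minlength=labels.max() + 1)
    return votes.argmax(axis=1) == labels        # boolean keep-mask
```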
arXiv Detail & Related papers (2020-04-26T05:15:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.