Dirichlet-based Per-Sample Weighting by Transition Matrix for Noisy
Label Learning
- URL: http://arxiv.org/abs/2403.02690v1
- Date: Tue, 5 Mar 2024 06:20:49 GMT
- Title: Dirichlet-based Per-Sample Weighting by Transition Matrix for Noisy
Label Learning
- Authors: HeeSun Bae, Seungjae Shin, Byeonghu Na, Il-Chul Moon
- Abstract summary: We propose a new utilization method based on resampling, coined RENT.
RENT consistently outperforms existing transition matrix utilization methods, including reweighting, on various benchmark datasets.
- Score: 18.688802981093644
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For learning with noisy labels, the transition matrix, which explicitly
models the relation between noisy label distribution and clean label
distribution, has been utilized to achieve the statistical consistency of
either the classifier or the risk. Previous research has focused more on how
to estimate this transition matrix well than on how to utilize it. We argue
that good utilization of the transition matrix is crucial and suggest a new
utilization method based on resampling, coined RENT. Specifically, we first
demonstrate that current utilization methods can have practical limitations in
implementation. As an extension of reweighting, we suggest the Dirichlet
distribution-based per-sample Weight Sampling (DWS) framework and compare
reweighting and resampling under the DWS framework. Building on the analyses
from DWS, we propose RENT, a REsampling method with Noise Transition matrix.
Empirically, RENT consistently outperforms existing transition matrix
utilization methods, including reweighting, on various benchmark datasets. Our code is
available at \url{https://github.com/BaeHeeSun/RENT}.
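The contrast between reweighting and resampling in the abstract can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: the transition matrix `T`, the toy posteriors, and the weight formula are assumptions chosen to show how per-sample importance weights derived from a transition matrix can drive resampling of a mini-batch instead of loss reweighting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 classes with a known noise transition matrix T,
# where T[i, j] = P(noisy label = j | clean label = i).
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

# Toy batch: estimated clean-class posteriors and observed noisy labels.
p_clean = rng.dirichlet(np.ones(3), size=8)   # P(y | x) estimates per sample
noisy_y = rng.integers(0, 3, size=8)          # observed noisy labels

# A common transition-matrix importance weight: ratio of the clean-label
# probability to the implied noisy-label probability for the observed label.
p_noisy = p_clean @ T                         # P(noisy y | x) = P(y | x) T
rows = np.arange(8)
w = p_clean[rows, noisy_y] / p_noisy[rows, noisy_y]

# Reweighting would scale each sample's loss by w.
# A resampling scheme instead redraws the batch with probability
# proportional to w, so high-weight samples appear more often.
probs = w / w.sum()
idx = rng.choice(8, size=8, replace=True, p=probs)  # resampled batch indices
```

A training loop would then compute the ordinary (unweighted) loss on the resampled indices `idx`, which is the structural difference from loss reweighting.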
Related papers
- Estimating Instance-dependent Label-noise Transition Matrix using DNNs [66.29979882301265]
In label-noise learning, estimating the transition matrix is a hot topic.
In this paper, we directly model the transition from the Bayes optimal distribution to the noisy distribution.
Exploiting this formulation, we estimate the Bayes label transition matrix by employing a deep neural network in a parameterized way.
arXiv Detail & Related papers (2021-05-27T08:36:54Z)
- Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization [88.91872713134342]
We propose a theoretically grounded method that can estimate the noise transition matrix and learn a classifier simultaneously.
We show the effectiveness of the proposed method through experiments on benchmark and real-world datasets.
arXiv Detail & Related papers (2021-02-04T05:09:18Z)
- Provably End-to-end Label-Noise Learning without Anchor Points [118.97592870124937]
We propose an end-to-end framework for solving label-noise learning without anchor points.
Our proposed framework can identify the transition matrix if the clean class-posterior probabilities are sufficiently scattered.
arXiv Detail & Related papers (2021-02-04T03:59:37Z)
- Extended T: Learning with Mixed Closed-set and Open-set Noisy Labels [86.5943044285146]
The label noise transition matrix $T$ reflects the probabilities that true labels flip into noisy ones.
In this paper, we focus on learning under the mixed closed-set and open-set label noise.
Our method can better model the mixed label noise, as shown by its more robust performance compared with prior state-of-the-art label-noise learning methods.
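The definition above, that the transition matrix $T$ reflects the probabilities of true labels flipping into noisy ones, can be checked empirically. This is a generic illustration, not the Extended T method: the 2-class matrix and sample counts are arbitrary assumptions, and each row of `T` acts as the flip distribution for one true class.

```python
import numpy as np

rng = np.random.default_rng(1)

# T[i, j] = P(noisy label = j | true label = i); each row sums to 1.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])

true_labels = rng.integers(0, 2, size=10000)

# Corrupt each label by sampling its noisy version from the row of T
# indexed by the true label.
noisy_labels = np.array([rng.choice(2, p=T[y]) for y in true_labels])

# The empirical flip rate of class 0 should approximate the
# off-diagonal entry T[0, 1] = 0.1.
flip_rate_0 = (noisy_labels[true_labels == 0] == 1).mean()
```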
arXiv Detail & Related papers (2020-12-02T02:42:45Z)
- Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning [157.2709657207203]
Existing methods for estimating the transition matrix rely heavily on estimating the noisy class posterior.
We introduce an intermediate class to avoid directly estimating the noisy class posterior.
By this intermediate class, the original transition matrix can then be factorized into the product of two easy-to-estimate transition matrices.
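The factorization idea described above can be sketched numerically. The two factor matrices below are illustrative assumptions (not the paper's estimates): one models the transition from clean labels to an intermediate class, the other from the intermediate class to noisy labels, and their product yields an overall transition matrix that remains row-stochastic.

```python
import numpy as np

# Hypothetical easy-to-estimate factors (rows sum to 1):
# T1: clean label -> intermediate class
# T2: intermediate class -> noisy label
T1 = np.array([[0.85, 0.10, 0.05],
               [0.05, 0.90, 0.05],
               [0.10, 0.10, 0.80]])
T2 = np.array([[0.95, 0.03, 0.02],
               [0.02, 0.95, 0.03],
               [0.05, 0.05, 0.90]])

# Composing the two transitions recovers an overall clean -> noisy
# transition matrix; a product of row-stochastic matrices is row-stochastic.
T = T1 @ T2
```

The practical point is that each factor involves distributions that are easier to estimate than the noisy class posterior needed for direct estimation of `T`.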
arXiv Detail & Related papers (2020-06-14T05:48:20Z)
- Meta Transition Adaptation for Robust Deep Learning with Noisy Labels [61.8970957519509]
This study proposes a new meta-transition-learning strategy for the task.
Specifically, through the sound guidance of a small set of meta data with clean labels, the noise transition matrix and the classifier parameters can be mutually ameliorated.
Our method can more accurately extract the transition matrix, as reflected in its more robust performance compared with prior art.
arXiv Detail & Related papers (2020-06-10T07:27:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.