Latent Class-Conditional Noise Model
- URL: http://arxiv.org/abs/2302.09595v1
- Date: Sun, 19 Feb 2023 15:24:37 GMT
- Title: Latent Class-Conditional Noise Model
- Authors: Jiangchao Yao, Bo Han, Zhihan Zhou, Ya Zhang, Ivor W. Tsang
- Abstract summary: We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, which avoids the arbitrary tuning from a mini-batch of samples seen in previous methods.
- Score: 54.56899309997246
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Learning with noisy labels has become imperative in the Big Data era, as it
saves the expensive human labor of accurate annotation. Previous
noise-transition-based methods have achieved theoretically-grounded performance
under the Class-Conditional Noise model (CCN). However, these approaches build
upon an ideal but impractical anchor set assumed available to pre-estimate the noise
transition. Even though subsequent works adapt the estimation as a neural
layer, the ill-posed stochastic learning of its parameters in back-propagation
easily falls into undesired local minima. We solve this problem by
introducing a Latent Class-Conditional Noise model (LCCN) to parameterize the
noise transition under a Bayesian framework. By projecting the noise transition
into the Dirichlet space, the learning is constrained on a simplex
characterized by the complete dataset, instead of some ad-hoc parametric space
wrapped by the neural layer. We then deduce a dynamic label regression method
for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true
labels to train the classifier and to model the noise. Our approach safeguards
the stable update of the noise transition, which avoids the arbitrary tuning
from a mini-batch of samples in previous methods. We further generalize LCCN to different
counterparts compatible with open-set noisy labels, semi-supervised learning as
well as cross-model training. A range of experiments demonstrate the advantages
of LCCN and its variants over the current state-of-the-art methods.
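The dynamic label regression described in the abstract can be illustrated with a toy sketch. Everything below (class count, data, prior, initialization) is hypothetical and not taken from the paper's implementation; it only shows the general shape of a collapsed Gibbs sweep that resamples each latent true label from the product of the classifier's prediction and a Dirichlet-smoothed transition estimate built from whole-dataset counts, so the transition stays on the simplex.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3                    # number of classes (toy setting)
N = 1000                 # number of samples
alpha = np.ones((K, K))  # Dirichlet prior over each row of the noise transition

# Hypothetical inputs: classifier softmax outputs and observed noisy labels.
probs = rng.dirichlet(np.ones(K), size=N)   # stand-in for f(x; theta)
noisy = rng.integers(0, K, size=N)          # observed (noisy) labels

# Sufficient statistics: counts[z, y] = #samples with latent label z, noisy label y.
latent = rng.integers(0, K, size=N)         # initial latent true labels
counts = np.zeros((K, K))
for i in range(N):
    counts[latent[i], noisy[i]] += 1

def gibbs_sweep(probs, noisy, latent, counts, alpha, rng):
    """Resample each latent label from p(z_i) ∝ f_z(x_i) * T_hat[z, y_i], where
    T_hat is the collapsed, Dirichlet-smoothed transition estimate built from
    the counts of the whole dataset (excluding sample i)."""
    K = counts.shape[0]
    for i in range(len(noisy)):
        y = noisy[i]
        counts[latent[i], y] -= 1           # remove sample i's current assignment
        trans = (counts[:, y] + alpha[:, y]) / (counts.sum(axis=1) + alpha.sum(axis=1))
        post = probs[i] * trans
        post /= post.sum()
        latent[i] = rng.choice(K, p=post)   # draw a new latent true label
        counts[latent[i], y] += 1           # restore the statistics
    return latent, counts

latent, counts = gibbs_sweep(probs, noisy, latent, counts, alpha, rng)
# Posterior-mean transition matrix, row-stochastic by construction:
T_hat = (counts + alpha) / (counts + alpha).sum(axis=1, keepdims=True)
```

In an actual training loop, the sampled latent labels would then supervise the classifier update, alternating with further Gibbs sweeps; the sketch above shows only the label-inference half.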
Related papers
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) to affine-transform the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z) - Instance-dependent Noisy-label Learning with Graphical Model Based Noise-rate Estimation [16.283722126438125]
Label Noise Learning (LNL) incorporates a sample selection stage to differentiate clean and noisy-label samples.
Such a curriculum is sub-optimal since it does not consider the actual label noise rate in the training set.
This paper addresses this issue with a new noise-rate estimation method that is easily integrated with most state-of-the-art (SOTA) LNL methods.
arXiv Detail & Related papers (2023-05-31T01:46:14Z) - Learning from Noisy Labels with Coarse-to-Fine Sample Credibility Modeling [22.62790706276081]
Training deep neural network (DNN) with noisy labels is practically challenging.
Previous efforts tend to handle either part of the data or the full dataset in a unified denoising flow.
We propose a coarse-to-fine robust learning method called CREMA to handle noisy data in a divide-and-conquer manner.
arXiv Detail & Related papers (2022-08-23T02:06:38Z) - Identifying Hard Noise in Long-Tailed Sample Distribution [76.16113794808001]
We introduce Noisy Long-Tailed Classification (NLT).
Most de-noising methods fail to identify the hard noises.
We design an iterative noisy learning framework called Hard-to-Easy (H2E).
arXiv Detail & Related papers (2022-07-27T09:03:03Z) - Hard Sample Aware Noise Robust Learning for Histopathology Image Classification [4.75542005200538]
We introduce a novel hard sample aware noise robust learning method for histopathology image classification.
To distinguish the informative hard samples from the harmful noisy ones, we build an easy/hard/noisy (EHN) detection model.
We propose a noise suppressing and hard enhancing (NSHE) scheme to train the noise robust model.
arXiv Detail & Related papers (2021-12-05T11:07:55Z) - Open-set Label Noise Can Improve Robustness Against Inherent Label Noise [27.885927200376386]
We show that open-set noisy labels can be non-toxic and even benefit the robustness against inherent noisy labels.
We propose a simple yet effective regularization by introducing Open-set samples with Dynamic Noisy Labels (ODNL) into training.
arXiv Detail & Related papers (2021-06-21T07:15:50Z) - Training Classifiers that are Universally Robust to All Label Noise Levels [91.13870793906968]
Deep neural networks are prone to overfitting in the presence of label noise.
We propose a distillation-based framework that incorporates a new subcategory of Positive-Unlabeled learning.
Our framework generally outperforms existing methods at medium to high noise levels.
arXiv Detail & Related papers (2021-05-27T13:49:31Z) - GANs for learning from very high class conditional noisy labels [1.6516902135723865]
We use Generative Adversarial Networks (GANs) to design a class conditional label noise (CCN) robust scheme for binary classification.
It first generates a set of correctly labelled data points from noisy labelled data and 0.1% or 1% clean labels.
arXiv Detail & Related papers (2020-10-19T15:01:11Z) - Shape Matters: Understanding the Implicit Bias of the Noise Covariance [76.54300276636982]
Noise in gradient descent provides a crucial implicit regularization effect for training over-parameterized models.
We show that parameter-dependent noise -- induced by mini-batches or label perturbation -- is far more effective than Gaussian noise.
Our analysis reveals that parameter-dependent noise introduces a bias towards local minima with smaller noise variance, whereas spherical Gaussian noise does not.
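As a rough illustration of why mini-batch noise is parameter-dependent, the toy least-squares sketch below (all names, sizes, and constants hypothetical, unrelated to the paper's setup) measures the spread of mini-batch gradients around the full gradient at two points: near the minimum, where residuals are small, and far from it, where they are large. Injected spherical Gaussian noise, by contrast, would have the same scale at both points.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy least-squares problem: loss_i(w) = 0.5 * (x_i @ w - y_i)**2
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def full_grad(w):
    return X.T @ (X @ w - y) / n

def grad_noise_scale(w, batch_size=10, trials=500):
    """Spread of mini-batch gradients around the full gradient at w."""
    devs = []
    for _ in range(trials):
        batch = rng.choice(n, size=batch_size, replace=False)
        g = X[batch].T @ (X[batch] @ w - y[batch]) / batch_size
        devs.append(g - full_grad(w))
    return np.linalg.norm(np.std(devs, axis=0))

w_near = np.linalg.lstsq(X, y, rcond=None)[0]  # near the minimum: small residuals
w_far = w_near + 5.0                           # far from the minimum: large residuals
s_near = grad_noise_scale(w_near)
s_far = grad_noise_scale(w_far)
# s_far is much larger than s_near: mini-batch noise shrinks near minima,
# which is the variance-dependent bias the paper analyzes.
```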
arXiv Detail & Related papers (2020-06-15T18:31:02Z) - Meta Transition Adaptation for Robust Deep Learning with Noisy Labels [61.8970957519509]
This study proposes a new meta-transition-learning strategy for the task.
Specifically, through the sound guidance of a small set of meta data with clean labels, the noise transition matrix and the classifier parameters can be mutually ameliorated.
Our method extracts the transition matrix more accurately, which naturally leads to more robust performance than prior art.
arXiv Detail & Related papers (2020-06-10T07:27:25Z)
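Several of the papers above revolve around a noise transition matrix estimated with the help of a small set of cleanly labelled data. As a minimal, hypothetical sketch (not the meta-learning procedure of any specific paper above), a row-stochastic transition matrix can be estimated by counting clean/noisy label pairs with Laplace smoothing:

```python
import numpy as np

def estimate_transition(clean, noisy, K, smoothing=1.0):
    """Row-stochastic T[i, j] ≈ p(noisy = j | true = i), Laplace-smoothed."""
    T = np.full((K, K), smoothing)
    for c, n in zip(clean, noisy):
        T[c, n] += 1.0
    return T / T.sum(axis=1, keepdims=True)

# Synthetic check: symmetric noise that re-draws 20% of the labels uniformly.
rng = np.random.default_rng(2)
K = 4
clean = rng.integers(0, K, size=500)
flips = rng.random(500) < 0.2
noisy = np.where(flips, rng.integers(0, K, size=500), clean)
T = estimate_transition(clean, noisy, K)  # diagonal entries dominate each row
```

The meta-learning methods above go further by refining such an estimate jointly with the classifier, but the counting step conveys what the transition matrix represents.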
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.