CNT (Conditioning on Noisy Targets): A new Algorithm for Leveraging Top-Down Feedback
- URL: http://arxiv.org/abs/2210.09505v1
- Date: Tue, 18 Oct 2022 00:54:40 GMT
- Title: CNT (Conditioning on Noisy Targets): A new Algorithm for Leveraging Top-Down Feedback
- Authors: Alexia Jolicoeur-Martineau, Alex Lamb, Vikas Verma, Aniket Didolkar
- Abstract summary: We propose a novel regularizer for supervised learning called Conditioning on Noisy Targets (CNT).
This approach conditions the model on a noisy version of the target(s) at a random noise level.
At inference time, since we do not know the target, we run the network with only noise in place of the noisy target.
- Score: 25.964963416932573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel regularizer for supervised learning called Conditioning on
Noisy Targets (CNT). This approach consists of conditioning the model on a
noisy version of the target(s) (e.g., actions in imitation learning or labels
in classification) at a random noise level (from small to large noise). At
inference time, since we do not know the target, we run the network with only
noise in place of the noisy target. CNT provides hints through the noisy label
(with less noise, we can more easily infer the true target). This gives two main
benefits: 1) the top-down feedback allows the model to focus on simpler and
more digestible sub-problems, and 2) rather than learning to solve the task from
scratch, the model first learns to master easy examples (with less noise),
while slowly progressing toward harder examples (with more noise).
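The conditioning mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's exact architecture: the concatenation-based conditioning, one-hot label encoding, Gaussian noise, and the function name `make_cnt_inputs` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_cnt_inputs(x, y_onehot, sigma_max=1.0, training=True, rng=rng):
    """Build the conditioning signal for CNT (illustrative sketch).

    During training, the model is conditioned on a noisy version of the
    one-hot target at a random noise level sigma in [0, sigma_max]; at
    inference time the true target is unknown, so pure noise is fed in
    its place.
    """
    noise = rng.standard_normal(y_onehot.shape)
    if training:
        sigma = rng.uniform(0.0, sigma_max)      # random noise level
        noisy_target = y_onehot + sigma * noise  # less noise = stronger hint
    else:
        noisy_target = noise                     # no target available
    # Condition by concatenating the (noisy) target to the input features;
    # the paper may use a different conditioning mechanism.
    return np.concatenate([x, noisy_target], axis=-1)
```

Because sigma is drawn uniformly at random, each mini-batch mixes easy examples (small sigma, target nearly visible) with hard ones (large sigma, target nearly hidden), which is the implicit curriculum the abstract refers to.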
Related papers
- One-step Noisy Label Mitigation [86.57572253460125]
Mitigating the detrimental effects of noisy labels on the training process has become increasingly critical.
We propose One-step Anti-Noise (OSA), a model-agnostic noisy label mitigation paradigm.
We empirically demonstrate the superiority of OSA, highlighting its enhanced training robustness, improved task transferability, ease of deployment, and reduced computational costs.
arXiv Detail & Related papers (2024-10-02T18:42:56Z)
- Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, which avoids previous arbitrarily tuning from a mini-batch of samples.
arXiv Detail & Related papers (2023-02-19T15:24:37Z)
- Positive-incentive Noise [91.3755431537592]
Noise is conventionally viewed as a severe problem in diverse fields, e.g., engineering and learning systems.
This paper aims to investigate whether the conventional proposition always holds.
$\pi$-noise offers new explanations for some models and provides a new principle for some fields, such as multi-task learning, adversarial training, etc.
arXiv Detail & Related papers (2022-12-19T15:33:34Z)
- Identifying Hard Noise in Long-Tailed Sample Distribution [76.16113794808001]
We introduce Noisy Long-Tailed Classification (NLT).
Most de-noising methods fail to identify the hard noises.
We design an iterative noisy learning framework called Hard-to-Easy (H2E).
arXiv Detail & Related papers (2022-07-27T09:03:03Z)
- Open-set Label Noise Can Improve Robustness Against Inherent Label Noise [27.885927200376386]
We show that open-set noisy labels can be non-toxic and even benefit the robustness against inherent noisy labels.
We propose a simple yet effective regularization by introducing Open-set samples with Dynamic Noisy Labels (ODNL) into training.
arXiv Detail & Related papers (2021-06-21T07:15:50Z)
- Training Classifiers that are Universally Robust to All Label Noise Levels [91.13870793906968]
Deep neural networks are prone to overfitting in the presence of label noise.
We propose a distillation-based framework that incorporates a new subcategory of Positive-Unlabeled learning.
Our framework generally outperforms existing approaches at medium to high noise levels.
arXiv Detail & Related papers (2021-05-27T13:49:31Z)
- LongReMix: Robust Learning with High Confidence Samples in a Noisy Label Environment [33.376639002442914]
We propose the new 2-stage noisy-label training algorithm LongReMix.
We test LongReMix on the noisy-label benchmarks CIFAR-10, CIFAR-100, WebVision, Clothing1M, and Food101-N.
Our approach achieves state-of-the-art performance in most datasets.
arXiv Detail & Related papers (2021-03-06T18:48:40Z)
- Towards Robustness to Label Noise in Text Classification via Noise Modeling [7.863638253070439]
Large datasets in NLP suffer from noisy labels, due to erroneous automatic and human annotation procedures.
We study the problem of text classification with label noise, and aim to capture this noise through an auxiliary noise model over the classifier.
arXiv Detail & Related papers (2021-01-27T05:41:57Z)
- Towards Noise-resistant Object Detection with Noisy Annotations [119.63458519946691]
Training deep object detectors requires significant amount of human-annotated images with accurate object labels and bounding box coordinates.
Noisy annotations are much more easily accessible, but they could be detrimental for learning.
We address the challenging problem of training object detectors with noisy annotations, where the noise contains a mixture of label noise and bounding box noise.
arXiv Detail & Related papers (2020-03-03T01:32:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences.