Learning from Noisy Labels with Noise Modeling Network
- URL: http://arxiv.org/abs/2005.00596v1
- Date: Fri, 1 May 2020 20:32:22 GMT
- Title: Learning from Noisy Labels with Noise Modeling Network
- Authors: Zhuolin Jiang, Jan Silovsky, Man-Hung Siu, William Hartmann, Herbert
Gish, Sancar Adali
- Abstract summary: The Noise Modeling Network (NMN) follows our convolutional neural network (CNN) and integrates with it.
The NMN learns the distribution of noise patterns directly from the noisy data.
We show that the integrated NMN/CNN learning system consistently improves the classification performance.
- Score: 7.523041606515877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-label image classification has generated significant interest in recent
years and the performance of such systems often suffers from the not so
infrequent occurrence of incorrect or missing labels in the training data. In
this paper, we extend the state-of-the-art of training classifiers to jointly
deal with both forms of errorful data. We accomplish this by modeling noisy and
missing labels in multi-label images with a new Noise Modeling Network (NMN)
that follows our convolutional neural network (CNN), integrates with it,
forming an end-to-end deep learning system, which can jointly learn the noise
distribution and CNN parameters. The NMN learns the distribution of noise
patterns directly from the noisy data without the need for any clean training
data. The NMN can model label noise that depends only on the true label or is
also dependent on the image features. We show that the integrated NMN/CNN
learning system consistently improves the classification performance, for
different levels of label noise, on the MS-COCO and MSR-VTT datasets. We also
show that the noise-robustness improvements are obtained when multiple
instance learning methods are used.
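The abstract describes the NMN as a noise-modeling head placed after the CNN and trained jointly with it on the noisy labels, without any clean data. The sketch below is a minimal PyTorch-style illustration of that idea, not the authors' implementation: the backbone, the layer sizes, and the per-class flip-probability parameterization of the noise are assumptions made for clarity.

```python
# Minimal sketch (all specifics below are assumptions), not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NoisyMultiLabelModel(nn.Module):
    """CNN classifier with a noise-modeling head, trained end-to-end on noisy labels."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                         # any CNN feature extractor (assumed)
        self.classifier = nn.Linear(feat_dim, num_classes)
        # Noise head: predicts per-class flip probabilities from image features,
        # so the modeled noise can depend on the image as well as on the class.
        self.noise_head = nn.Linear(feat_dim, 2 * num_classes)

    def forward(self, images: torch.Tensor):
        feats = self.backbone(images)                    # (B, feat_dim)
        p_clean = torch.sigmoid(self.classifier(feats))  # P(true label = 1 | image)
        flip_0to1, flip_1to0 = torch.sigmoid(self.noise_head(feats)).chunk(2, dim=-1)
        # Marginalize over the unobserved true label to get the probability
        # that the *observed* (noisy) label is positive.
        p_noisy = p_clean * (1.0 - flip_1to0) + (1.0 - p_clean) * flip_0to1
        return p_clean, p_noisy


def training_step(model, images, noisy_labels, optimizer):
    """One optimization step against the noisy annotations."""
    _, p_noisy = model(images)
    loss = F.binary_cross_entropy(p_noisy, noisy_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training fits p_noisy to the observed, possibly corrupted labels, while p_clean is used for prediction at test time; because the noise head is conditioned on image features, the modeled noise can depend on both the true label and the image, matching the feature-dependent case described in the abstract.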
Related papers
- Learning Confident Classifiers in the Presence of Label Noise [5.829762367794509]
This paper proposes a probabilistic model for noisy observations that allows us to build confident classification and segmentation models.
Our experiments show that our algorithm outperforms state-of-the-art solutions for the considered classification and segmentation problems.
arXiv Detail & Related papers (2023-01-02T04:27:25Z)
- Learning advisor networks for noisy image classification [22.77447144331876]
We introduce the novel concept of advisor network to address the problem of noisy labels in image classification.
We trained it with a meta-learning strategy so that it can adapt throughout the training of the main model.
We tested our method on CIFAR10 and CIFAR100 with synthetic noise, and on Clothing1M which contains real-world noise, reporting state-of-the-art results.
arXiv Detail & Related papers (2022-11-08T11:44:08Z) - Decoupled Mixup for Generalized Visual Recognition [71.13734761715472]
We propose a novel "Decoupled-Mixup" method to train CNN models for visual recognition.
Our method decouples each image into discriminative and noise-prone regions, and then heterogeneously combines these regions to train CNN models.
Experiment results show the high generalization performance of our method on testing data that are composed of unseen contexts.
arXiv Detail & Related papers (2022-10-26T15:21:39Z) - Instance-Dependent Noisy Label Learning via Graphical Modelling [30.922188228545906]
Noisy labels are troublesome in the ecosystem of deep learning because models can easily overfit them.
We present a new graphical modelling approach called InstanceGM that combines discriminative and generative models.
arXiv Detail & Related papers (2022-09-02T09:27:37Z)
- Synergistic Network Learning and Label Correction for Noise-robust Image Classification [28.27739181560233]
Deep Neural Networks (DNNs) tend to overfit training label noise, resulting in poorer model performance in practice.
We propose a robust label correction framework combining the ideas of small loss selection and noise correction.
We demonstrate our method on both synthetic and real-world datasets with different noise types and rates.
arXiv Detail & Related papers (2022-02-27T23:06:31Z)
- Learning with Neighbor Consistency for Noisy Labels [69.83857578836769]
We present a method for learning from noisy labels that leverages similarities between training examples in feature space.
We evaluate our method on datasets with both synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, Clothing1M, mini-ImageNet-Red) noise.
arXiv Detail & Related papers (2022-02-04T15:46:27Z)
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements on robustness.
arXiv Detail & Related papers (2021-01-14T05:43:51Z)
- Noisy Labels Can Induce Good Representations [53.47668632785373]
We study how architecture affects learning with noisy labels.
We show that training with noisy labels can induce useful hidden representations, even when the model generalizes poorly.
This finding leads to a simple method to improve models trained on noisy labels.
arXiv Detail & Related papers (2020-12-23T18:58:05Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
- AdarGCN: Adaptive Aggregation GCN for Few-Shot Learning [112.95742995816367]
We propose a new few-shot few-shot learning setting termed FSFSL.
Under FSFSL, both the source and target classes have limited training samples.
We also propose a graph convolutional network (GCN)-based label denoising (LDN) method to remove irrelevant images.
arXiv Detail & Related papers (2020-02-28T10:34:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.