Generative Reasoning Integrated Label Noise Robust Deep Image
Representation Learning
- URL: http://arxiv.org/abs/2212.01261v3
- Date: Fri, 4 Aug 2023 11:50:14 GMT
- Title: Generative Reasoning Integrated Label Noise Robust Deep Image
Representation Learning
- Authors: Gencer Sumbul and Begüm Demir
- Abstract summary: We introduce a generative reasoning integrated label noise robust deep representation learning (GRID) approach.
Our approach aims to model the complementary characteristics of discriminative and generative reasoning for IRL under noisy labels.
Our approach learns discriminative image representations while preventing noisy labels from interfering with learning, independently of the selected IRL method.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The development of deep learning based image representation learning (IRL)
methods has attracted great attention for various image understanding problems.
Most of these methods require a large quantity of high-quality annotated
training images, which can be time-consuming and costly to gather.
To reduce labeling costs, crowdsourced data, automatic labeling procedures or
citizen science projects can be considered. However, such approaches increase
the risk of including label noise in the training data, which may lead to
overfitting on noisy labels when discriminative reasoning is employed. This
results in sub-optimal learning procedures and thus inaccurate characterization
of images. To address this, we introduce a generative reasoning integrated
label noise robust deep representation learning (GRID) approach. Our approach
aims to model the complementary characteristics of discriminative and
generative reasoning for IRL under noisy labels. To this end, we first
integrate generative reasoning into discriminative reasoning through a
supervised variational autoencoder. This allows GRID to automatically detect
training samples with noisy labels. Then, through our label noise robust hybrid
representation learning strategy, GRID adjusts the whole learning procedure for
IRL of these samples through generative reasoning and that of other samples
through discriminative reasoning. Our approach learns discriminative image
representations while preventing noisy labels from interfering with learning,
independently of the selected IRL method. Unlike existing methods, GRID
therefore does not depend on the type of annotation, neural network
architecture, loss function or learning task, and can be directly utilized for
various problems. Experimental results show its effectiveness compared to
state-of-the-art methods. The code of GRID is publicly available at
https://github.com/gencersumbul/GRID.
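To make the hybrid idea concrete, below is a minimal, illustrative PyTorch sketch of the mechanism described in the abstract; it is not the released GRID code. A supervised variational autoencoder couples a generative reconstruction term with a discriminative classification term, samples whose per-sample classification loss is unusually high are flagged as potentially noisy, and only the remaining samples contribute to the discriminative term. The architecture, the quantile-based detection rule and the loss weights are simplifying assumptions made for illustration; the repository linked above contains the actual implementation.

```python
# Minimal sketch of the idea described in the abstract, NOT the official GRID
# implementation: a supervised VAE whose per-sample classification losses are
# used to flag likely-noisy labels; flagged samples are trained with the
# generative (reconstruction) term only, while the rest also use the
# discriminative (classification) term. All design choices are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SupervisedVAE(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        # classify from the deterministic mean, reconstruct from the sampled code
        return self.decoder(z), self.classifier(mu), mu, logvar


def hybrid_loss(model, x, y, noise_quantile=0.8):
    """Hybrid objective: samples whose classification loss is unusually high
    are treated as noisy and excluded from the discriminative term (they still
    contribute to the generative term). The quantile rule is a heuristic."""
    x_hat, logits, mu, logvar = model(x)
    rec = F.mse_loss(x_hat, x, reduction="none").mean(dim=1)          # generative
    kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)   # VAE prior term
    ce = F.cross_entropy(logits, y, reduction="none")                 # discriminative
    threshold = torch.quantile(ce.detach(), noise_quantile)
    clean_mask = (ce.detach() <= threshold).float()                   # 1 = assumed clean
    return (rec + 1e-3 * kld + clean_mask * ce).mean()


# Usage sketch with random tensors standing in for an image dataset.
model = SupervisedVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)
y = torch.randint(0, 10, (64,))
loss = hybrid_loss(model, x, y)
opt.zero_grad()
loss.backward()
opt.step()
```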
Related papers
- ERASE: Error-Resilient Representation Learning on Graphs for Label Noise
Tolerance [53.73316938815873]
We propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE) to learn representations with error tolerance.
ERASE combines prototype pseudo-labels with propagated denoised labels and updates representations with error resilience.
Our method outperforms multiple baselines by clear margins across a broad range of noise levels and scales well.
arXiv Detail & Related papers (2023-12-13T17:59:07Z)
- Combating Label Noise With A General Surrogate Model For Sample Selection [84.61367781175984]
We propose to leverage the vision-language surrogate model CLIP to filter noisy samples automatically (an illustrative sketch of this filtering idea is given after this list).
We validate the effectiveness of our proposed method on both real-world and synthetic noisy datasets.
arXiv Detail & Related papers (2023-10-16T14:43:27Z)
- Partial Label Supervision for Agnostic Generative Noisy Label Learning [18.29334728940232]
Noisy label learning has been tackled with both discriminative and generative approaches.
We propose a novel framework for generative noisy label learning that addresses the shortcomings of existing approaches.
arXiv Detail & Related papers (2023-08-02T14:48:25Z)
- Label Noise Robust Image Representation Learning based on Supervised Variational Autoencoders in Remote Sensing [0.0]
We propose a label noise robust IRL method that aims to prevent the interference of noisy labels on IRL.
The proposed method assigns lower importance to images with noisy labels and higher importance to those with correct labels.
The code of the proposed method is publicly available at https://git.tu-berlin.de/rsim/RS-IRL-SVAE.
arXiv Detail & Related papers (2023-06-14T15:22:36Z)
- Instance-Dependent Noisy Label Learning via Graphical Modelling [30.922188228545906]
Noisy labels are troublesome in the ecosystem of deep learning because models can easily overfit them.
We present a new graphical modelling approach called InstanceGM that combines discriminative and generative models.
arXiv Detail & Related papers (2022-09-02T09:27:37Z)
- Learning to Aggregate and Refine Noisy Labels for Visual Sentiment Analysis [69.48582264712854]
We propose a robust learning method for visual sentiment analysis.
Our method relies on an external memory to aggregate and filter noisy labels during training.
We establish a benchmark for visual sentiment analysis with label noise using publicly available datasets.
arXiv Detail & Related papers (2021-09-15T18:18:28Z)
- Annotation-Efficient Learning for Medical Image Segmentation based on Noisy Pseudo Labels and Adversarial Learning [12.781598229608983]
We propose an annotation-efficient learning framework for medical image segmentation.
We use an improved Cycle-Consistent Generative Adversarial Network (GAN) to learn from a set of unpaired medical images and auxiliary masks.
We validated our framework in two situations: objects with a simple shape, such as the optic disc in fundus images and the fetal head in ultrasound images, and complex structures, such as the lung in X-ray images and the liver in CT images.
arXiv Detail & Related papers (2020-12-29T03:22:41Z)
- Noisy Labels Can Induce Good Representations [53.47668632785373]
We study how architecture affects learning with noisy labels.
We show that training with noisy labels can induce useful hidden representations, even when the model generalizes poorly.
This finding leads to a simple method to improve models trained on noisy labels.
arXiv Detail & Related papers (2020-12-23T18:58:05Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
- Data-driven Meta-set Based Fine-Grained Visual Classification [61.083706396575295]
We propose a data-driven meta-set based approach to deal with noisy web images for fine-grained recognition.
Specifically, guided by a small amount of clean meta-set, we train a selection net in a meta-learning manner to distinguish in- and out-of-distribution noisy images.
arXiv Detail & Related papers (2020-08-06T03:04:16Z)
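As referenced in the CLIP-based sample-selection entry above, the following hypothetical sketch shows one way a vision-language surrogate such as CLIP can score image-label agreement and drop suspect pairs. The checkpoint name, prompt template, label set and threshold are illustrative assumptions, not the procedure from that paper.

```python
# Illustrative sketch (not the authors' code) of CLIP-based sample selection:
# use CLIP's image-text agreement as a surrogate class posterior and keep only
# samples whose given label receives enough probability mass.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
class_names = ["cat", "dog", "car"]                    # hypothetical label set
prompts = [f"a photo of a {c}" for c in class_names]   # assumed prompt template


@torch.no_grad()
def is_probably_clean(image: Image.Image, given_label: int, keep_threshold=0.5):
    """Return True if CLIP assigns at least keep_threshold probability to the
    given label among the candidate class prompts."""
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    return probs[given_label].item() >= keep_threshold


# Usage: drop training pairs that CLIP considers inconsistent with their label.
# clean_pairs = [(img, y) for img, y in dataset if is_probably_clean(img, y)]
```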
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.