Deep Image Retrieval is not Robust to Label Noise
- URL: http://arxiv.org/abs/2205.11195v1
- Date: Mon, 23 May 2022 11:04:09 GMT
- Title: Deep Image Retrieval is not Robust to Label Noise
- Authors: Stanislav Dereka, Ivan Karpukhin, Sergey Kolesnikov
- Abstract summary: We show that image retrieval methods are less robust to label noise than image classification ones.
For the first time, we investigate different types of label noise specific to image retrieval tasks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large-scale datasets are essential for the success of deep learning in image
retrieval. However, manual assessment errors and semi-supervised annotation
techniques can lead to label noise even in popular datasets. As previous works
primarily studied annotation quality in image classification tasks, it is still
unclear how label noise affects deep learning approaches to image retrieval. In
this work, we show that image retrieval methods are less robust to label noise
than image classification ones. Furthermore, we, for the first time,
investigate different types of label noise specific to image retrieval tasks
and study their effect on model performance.
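The abstract does not spell out the noise models studied; a common baseline in the label-noise literature is symmetric (uniform) label flipping, which could be sketched as follows. The function name `flip_labels` and its signature are hypothetical, for illustration only:

```python
import random

def flip_labels(labels, noise_rate, num_classes, seed=0):
    """Symmetric label-flip noise: each label is replaced, with
    probability `noise_rate`, by a different class drawn uniformly.
    (Illustrative sketch, not the paper's actual noise model.)"""
    rng = random.Random(seed)
    noisy = []
    for y in labels:
        if rng.random() < noise_rate:
            # choose uniformly among the other classes
            candidates = [c for c in range(num_classes) if c != y]
            noisy.append(rng.choice(candidates))
        else:
            noisy.append(y)
    return noisy

clean = [0, 1, 2, 0, 1, 2, 0, 1]
noisy = flip_labels(clean, noise_rate=0.5, num_classes=3)
```

For retrieval tasks, flipping a class label corrupts every positive/negative pair involving that sample, which is one intuition for why retrieval losses can be more sensitive to noise than classification losses.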
Related papers
- Improving Medical Image Classification in Noisy Labels Using Only Self-supervised Pretraining [9.01547574908261]

Noisy labels hurt deep learning-based supervised image classification performance as the models may overfit the noise and learn corrupted feature extractors.
In this work, we explore contrastive and pretext task-based self-supervised pretraining to initialize the weights of a deep learning classification model for two medical datasets with self-induced noisy labels.
Our results show that models with pretrained weights obtained from self-supervised learning can effectively learn better features and improve robustness against noisy labels.
arXiv Detail & Related papers (2023-08-08T19:45:06Z)
- Masked Image Training for Generalizable Deep Image Denoising [53.03126421917465]
We present a novel approach to enhance the generalization performance of denoising networks.
Our method involves masking random pixels of the input image and reconstructing the missing information during training.
Our approach exhibits better generalization ability than other deep learning models and is directly applicable to real-world scenarios.
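The masking procedure described above (hide random pixels, reconstruct the missing information) can be sketched minimally with NumPy; the helper `mask_random_pixels` and the fixed mask ratio are assumptions, not the paper's exact recipe:

```python
import numpy as np

def mask_random_pixels(image, mask_ratio, rng):
    """Zero out a random fraction of pixels; return the masked image
    and the boolean mask so a loss can target the hidden pixels.
    (Illustrative sketch of random-pixel masking.)"""
    mask = rng.random(image.shape[:2]) < mask_ratio  # True = hidden
    masked = image.copy()
    masked[mask] = 0.0  # broadcast zero across channels
    return masked, mask

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3)).astype(np.float32)
masked_img, mask = mask_random_pixels(img, mask_ratio=0.75, rng=rng)
# a reconstruction loss would then be computed on the hidden pixels, e.g.
# loss = ((pred[mask] - img[mask]) ** 2).mean()
```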
arXiv Detail & Related papers (2023-03-23T09:33:44Z)
- Learning with Label Noise for Image Retrieval by Selecting Interactions [2.0881411175861726]
We propose a noise-resistant method for image retrieval named Teacher-based Selection of Interactions, T-SINT.
It selects correct positive and negative interactions to be considered in the retrieval loss by using a teacher-based training setup.
It consistently outperforms state-of-the-art methods on high noise rates across benchmark datasets with synthetic noise and more realistic noise.
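T-SINT's exact selection rule is not reproduced here; one plausible sketch of teacher-based interaction selection is to keep only labeled-positive pairs whose teacher embeddings agree, dropping suspected noisy pairs from the retrieval loss. The function `select_positive_pairs` and the cosine threshold are hypothetical:

```python
import numpy as np

def select_positive_pairs(teacher_emb, pairs, threshold=0.5):
    """Keep only labeled-positive pairs whose teacher embeddings have
    cosine similarity above `threshold`; the rest are treated as likely
    label noise and excluded. (Illustrative sketch, not T-SINT itself.)"""
    e = teacher_emb / np.linalg.norm(teacher_emb, axis=1, keepdims=True)
    kept = []
    for i, j in pairs:
        if float(e[i] @ e[j]) >= threshold:
            kept.append((i, j))
    return kept

emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
kept = select_positive_pairs(emb, [(0, 1), (0, 2)], threshold=0.5)
```

Here the pair (0, 2) would be discarded because the teacher places those samples far apart despite their shared label.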
arXiv Detail & Related papers (2021-12-20T11:27:48Z)
- Superpixel-guided Iterative Learning from Noisy Labels for Medical Image Segmentation [24.557755528031453]
We develop a robust iterative learning strategy that combines noise-aware training of segmentation network and noisy label refinement.
Experiments on two benchmarks show that our method outperforms recent state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-21T14:27:36Z)
- Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for super high-resolution images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z)
- Distilling effective supervision for robust medical image segmentation with noisy labels [21.68138582276142]
We propose a novel framework to address segmenting with noisy labels by distilling effective supervision information from both pixel and image levels.
In particular, we explicitly estimate the uncertainty of every pixel as pixel-wise noise estimation.
We present an image-level robust learning method to accommodate more information as the complements to pixel-level learning.
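The paper's pixel-wise noise estimation is not detailed in this summary; a standard proxy for per-pixel uncertainty is the predictive entropy of the class probabilities, sketched below. The helper `pixel_uncertainty` is an assumption for illustration:

```python
import numpy as np

def pixel_uncertainty(probs, eps=1e-8):
    """Per-pixel predictive entropy over the class axis; `probs` has
    shape (H, W, C) with class probabilities summing to 1.
    (A common uncertainty proxy, not necessarily the paper's estimator.)"""
    return -(probs * np.log(probs + eps)).sum(axis=-1)

# a confident (one-hot) pixel should score lower than an uncertain one
probs = np.array([[[1.0, 0.0], [0.5, 0.5]]])  # shape (1, 2, 2)
u = pixel_uncertainty(probs)
```

High-entropy pixels can then be down-weighted or excluded when computing the segmentation loss under noisy labels.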
arXiv Detail & Related papers (2021-06-21T13:33:38Z)
- Noisy Labels Can Induce Good Representations [53.47668632785373]
We study how architecture affects learning with noisy labels.
We show that training with noisy labels can induce useful hidden representations, even when the model generalizes poorly.
This finding leads to a simple method to improve models trained on noisy labels.
arXiv Detail & Related papers (2020-12-23T18:58:05Z)
- Grafit: Learning fine-grained image representations with coarse labels [114.17782143848315]
This paper tackles the problem of learning a finer representation than the one provided by training labels.
By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods.
arXiv Detail & Related papers (2020-11-25T19:06:26Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
- Data-driven Meta-set Based Fine-Grained Visual Classification [61.083706396575295]
We propose a data-driven meta-set based approach to deal with noisy web images for fine-grained recognition.
Specifically, guided by a small amount of clean meta-set, we train a selection net in a meta-learning manner to distinguish in- and out-of-distribution noisy images.
arXiv Detail & Related papers (2020-08-06T03:04:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.