Deep Learning Classification With Noisy Labels
- URL: http://arxiv.org/abs/2004.11116v1
- Date: Thu, 23 Apr 2020 13:02:45 GMT
- Title: Deep Learning Classification With Noisy Labels
- Authors: Guillaume Sanchez, Vincente Guis, Ricard Marxer, Frédéric Bouchara
- Abstract summary: We train face recognition systems for actor identification with a closed set of identities while being exposed to a significant number of perturbators.
We review recent works on how to manage noisy annotations when training deep learning classifiers, independently from our interest in face recognition.
- Score: 1.433758865948252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning systems have shown tremendous accuracy in image classification,
at the cost of big image datasets. Collecting such amounts of data can lead to
labelling errors in the training set. Indexing multimedia content for
retrieval, classification or recommendation can involve tagging or
classification based on multiple criteria. In our case, we train face
recognition systems for actor identification with a closed set of identities
while being exposed to a significant number of perturbators (actors unknown to
our database). Face classifiers are known to be sensitive to label noise. We
review recent works on how to manage noisy annotations when training deep
learning classifiers, independently from our interest in face recognition.
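For context, one family of techniques covered by such reviews replaces the standard cross entropy with a noise-robust loss. Below is a minimal PyTorch-style sketch of the Generalized Cross Entropy loss of Zhang & Sabuncu (2018); the class name and the q=0.7 default are illustrative choices, and this is not presented as the method of the paper summarized above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneralizedCrossEntropy(nn.Module):
    """Robust loss for noisy labels: L_q = (1 - p_y^q) / q.

    Interpolates between cross entropy (q -> 0) and MAE (q = 1),
    which bounds the influence of mislabeled samples.
    """
    def __init__(self, q: float = 0.7):
        super().__init__()
        self.q = q

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        probs = F.softmax(logits, dim=1)                          # (N, C)
        p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)    # prob of the (possibly noisy) label
        loss = (1.0 - p_y.clamp(min=1e-7) ** self.q) / self.q
        return loss.mean()

# usage (hypothetical model and batch names):
#   criterion = GeneralizedCrossEntropy(q=0.7)
#   loss = criterion(model(images), labels)
```

Setting q closer to 1 makes the loss behave like MAE and further limits the gradient contribution of confidently mislabeled samples, at the cost of slower convergence on clean data.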
Related papers
- Co-Learning Meets Stitch-Up for Noisy Multi-label Visual Recognition [70.00984078351927]
This paper focuses on reducing noise based on inherent properties of multi-label classification and long-tailed learning in noisy settings.
We propose a Stitch-Up augmentation to synthesize a cleaner sample, which directly reduces multi-label noise.
A Heterogeneous Co-Learning framework is further designed to leverage the inconsistency between long-tailed and balanced distributions.
arXiv Detail & Related papers (2023-07-03T09:20:28Z)
- Deep Active Learning in the Presence of Label Noise: A Survey [1.8945921149936182]
Deep active learning has emerged as a powerful tool for training deep learning models within a predefined labeling budget.
We discuss the current state of deep active learning in the presence of label noise, highlighting unique approaches, their strengths, and weaknesses.
We propose exploring contrastive learning methods to derive good image representations that can aid in selecting high-value samples for labeling.
arXiv Detail & Related papers (2023-02-22T00:27:39Z)
- Deep Image Retrieval is not Robust to Label Noise [0.0]
We show that image retrieval methods are less robust to label noise than image classification ones.
For the first time, we investigate different types of label noise specific to image retrieval tasks.
arXiv Detail & Related papers (2022-05-23T11:04:09Z)
- On Guiding Visual Attention with Language Specification [76.08326100891571]
We use high-level language specification as advice for constraining the classification evidence to task-relevant features, instead of distractors.
We show that supervising spatial attention in this way improves performance on classification tasks with biased and noisy data.
arXiv Detail & Related papers (2022-02-17T22:40:19Z)
- Noisy Labels Can Induce Good Representations [53.47668632785373]
We study how architecture affects learning with noisy labels.
We show that training with noisy labels can induce useful hidden representations, even when the model generalizes poorly.
This finding leads to a simple method to improve models trained on noisy labels.
arXiv Detail & Related papers (2020-12-23T18:58:05Z)
- A Survey on Deep Learning with Noisy Labels: How to train your model when you cannot trust on the annotations? [21.562089974755125]
Several approaches have been proposed to improve the training of deep learning models in the presence of noisy labels.
This paper presents a survey of the main techniques in the literature, in which we classify the algorithms into the following groups: robust losses, sample weighting, sample selection, meta-learning, and combined approaches.
arXiv Detail & Related papers (2020-12-05T15:45:20Z)
- Grafit: Learning fine-grained image representations with coarse labels [114.17782143848315]
This paper tackles the problem of learning a finer representation than the one provided by training labels.
By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods.
arXiv Detail & Related papers (2020-11-25T19:06:26Z)
- Using Unlabeled Data for Increasing Low-Shot Classification Accuracy of Relevant and Open-Set Irrelevant Images [0.4110108749051655]
In search, exploration, and reconnaissance tasks performed with autonomous ground vehicles, an image classification capability is needed.
We present an open-set low-shot classifier that uses, during its training, a modest number of labeled images for each relevant class.
It is capable of identifying images from the relevant classes, determining when a candidate image is irrelevant, and further recognizing categories of irrelevant images that were not included in the training.
arXiv Detail & Related papers (2020-10-01T23:11:07Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
- Data-driven Meta-set Based Fine-Grained Visual Classification [61.083706396575295]
We propose a data-driven meta-set based approach to deal with noisy web images for fine-grained recognition.
Specifically, guided by a small amount of clean meta-set, we train a selection net in a meta-learning manner to distinguish in- and out-of-distribution noisy images.
arXiv Detail & Related papers (2020-08-06T03:04:16Z)
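Several of the papers above (the noisy-label survey and the meta-set based selection work) rely on selecting or weighting probably clean samples during training. As a rough illustration of that family, the sketch below implements the generic small-loss selection heuristic in PyTorch; the function name and keep_ratio parameter are hypothetical, and this is not the meta-learning selection net described in the meta-set paper.

```python
import torch
import torch.nn.functional as F

def small_loss_selection(logits: torch.Tensor,
                         targets: torch.Tensor,
                         keep_ratio: float = 0.7) -> torch.Tensor:
    """Return indices of the keep_ratio fraction of samples with the
    smallest per-sample cross entropy, treated as 'probably clean'.

    This is the common small-loss heuristic (used e.g. in Co-teaching
    style training), not the meta-set selection net of the paper above.
    """
    per_sample_loss = F.cross_entropy(logits, targets, reduction="none")  # (N,)
    n_keep = max(1, int(keep_ratio * targets.numel()))
    return torch.argsort(per_sample_loss)[:n_keep]

# usage inside a training step (hypothetical model and batch names):
#   logits = model(images)
#   idx = small_loss_selection(logits, labels, keep_ratio=0.7)
#   loss = F.cross_entropy(logits[idx], labels[idx])
```

The keep_ratio is typically tied to an estimate of the noise rate and is often ramped down over the first training epochs, since the network memorizes noisy labels only later in training.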
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content above (including all information) and is not responsible for any consequences of its use.