ExpertNet: Adversarial Learning and Recovery Against Noisy Labels
- URL: http://arxiv.org/abs/2007.05305v2
- Date: Mon, 13 Jul 2020 09:58:33 GMT
- Title: ExpertNet: Adversarial Learning and Recovery Against Noisy Labels
- Authors: Amirmasoud Ghiassi, Robert Birke, Rui Han, Lydia Y. Chen
- Abstract summary: We propose a novel framework, composed of Amateur and Expert, which iteratively learn from each other.
The trained Amateur and Expert proactively leverage the images and their noisy labels to infer image classes.
Our empirical evaluations on noisy versions of CIFAR-10, CIFAR-100 and real-world data of Clothing1M show that the proposed model can achieve robust classification against a wide range of noise ratios.
- Score: 8.88412854076574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Today's datasets in the wild, e.g., from social media and open platforms, present tremendous opportunities and challenges for deep learning: a significant portion of the images is tagged, but often with noisy, i.e., erroneous, labels. Recent studies improve the robustness of deep models against noisy labels without knowledge of the true labels. In this paper, we advocate deriving a stronger classifier that proactively makes use of the noisy labels in addition to the original images, turning noisy labels into learning features. To this end, we propose a novel framework, ExpertNet, composed of Amateur and Expert, which iteratively learn from each other. Amateur is a regular image classifier trained with feedback from Expert, which imitates how human experts would correct Amateur's predicted labels using the noise pattern learnt from both the noisy and the ground-truth labels. The trained Amateur and Expert then proactively leverage the images and their noisy labels to infer image classes. Our empirical evaluations on noisy versions of CIFAR-10 and CIFAR-100 and on the real-world Clothing1M dataset show that the proposed model achieves robust classification across a wide range of noise ratios and with as little as 20-50% of the training data, compared to state-of-the-art deep models that solely focus on distilling the impact of noisy labels.
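To make the Amateur/Expert interplay described above concrete, here is a minimal PyTorch-style sketch of one possible training loop. It is an illustrative assumption, not the authors' implementation: the tiny MLP backbones, the way Expert consumes Amateur's softmax output together with the noisy label, the loss choices, and the availability of a small batch with known ground truth for training Expert are all placeholders.

```python
# Minimal, illustrative Amateur/Expert-style loop (assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10

# Amateur: a regular image classifier (a tiny MLP stands in for a CNN here).
amateur = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
                        nn.Linear(256, NUM_CLASSES))

# Expert: maps (Amateur's class probabilities, noisy label one-hot) to a corrected label distribution.
expert = nn.Sequential(nn.Linear(2 * NUM_CLASSES, 128), nn.ReLU(),
                       nn.Linear(128, NUM_CLASSES))

opt_a = torch.optim.Adam(amateur.parameters(), lr=1e-3)
opt_e = torch.optim.Adam(expert.parameters(), lr=1e-3)

def expert_input(amateur_logits, noisy_labels):
    probs = F.softmax(amateur_logits, dim=1)
    onehot = F.one_hot(noisy_labels, NUM_CLASSES).float()
    return torch.cat([probs, onehot], dim=1)

# Synthetic stand-ins for one batch: images, noisy labels, and (for Expert only) clean labels.
images = torch.randn(64, 3, 32, 32)
noisy_y = torch.randint(0, NUM_CLASSES, (64,))
clean_y = torch.randint(0, NUM_CLASSES, (64,))   # available only where ground truth is known

for step in range(10):
    # 1) Expert learns the noise pattern: map noisy labels toward the known clean labels.
    with torch.no_grad():
        a_logits = amateur(images)
    e_logits = expert(expert_input(a_logits, noisy_y))
    loss_e = F.cross_entropy(e_logits, clean_y)
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()

    # 2) Amateur learns from Expert's corrected (soft) labels instead of the raw noisy ones.
    a_logits = amateur(images)
    with torch.no_grad():
        targets = F.softmax(expert(expert_input(a_logits, noisy_y)), dim=1)
    loss_a = F.cross_entropy(a_logits, targets)   # soft targets need PyTorch >= 1.10
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
```

At inference, in the spirit of the abstract's last step, Amateur's prediction on an image could again be refined by Expert using that image's noisy label.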
Related papers
- Learn From All: Erasing Attention Consistency for Noisy Label Facial Expression Recognition [37.83484824542302]
We explore dealing with noisy labels from a new feature-learning perspective.
We find that FER models remember noisy samples by focusing on a part of the features that can be considered related to the noisy labels.
Inspired by that, we propose a novel Erasing Attention Consistency (EAC) method to suppress the noisy samples.
arXiv Detail & Related papers (2022-07-21T04:30:33Z)
- Deep Image Retrieval is not Robust to Label Noise [0.0]
We show that image retrieval methods are less robust to label noise than image classification ones.
For the first time, we investigate different types of label noise specific to image retrieval tasks.
arXiv Detail & Related papers (2022-05-23T11:04:09Z)
- Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations [54.400167806154535]
Existing research on learning with noisy labels mainly focuses on synthetic label noise.
This work presents two new benchmark datasets (CIFAR-10N, CIFAR-100N).
We show that real-world noisy labels follow an instance-dependent pattern rather than the classically adopted class-dependent one (a small synthetic sketch contrasting the two noise types follows this list).
arXiv Detail & Related papers (2021-10-22T22:42:11Z)
- Learning to Aggregate and Refine Noisy Labels for Visual Sentiment Analysis [69.48582264712854]
We propose a robust learning method for visual sentiment analysis with noisy labels.
Our method relies on an external memory to aggregate and filter noisy labels during training.
We establish a benchmark for visual sentiment analysis with label noise using publicly available datasets.
arXiv Detail & Related papers (2021-09-15T18:18:28Z)
- Distilling effective supervision for robust medical image segmentation with noisy labels [21.68138582276142]
We propose a novel framework to address segmenting with noisy labels by distilling effective supervision information from both pixel and image levels.
In particular, we explicitly estimate the uncertainty of every pixel as pixel-wise noise estimation.
We present an image-level robust learning method to accommodate more information as a complement to pixel-level learning.
arXiv Detail & Related papers (2021-06-21T13:33:38Z)
- Rethinking Noisy Label Models: Labeler-Dependent Noise with Adversarial Awareness [2.1930130356902207]
We propose a principled model of label noise that generalizes instance-dependent noise to multiple labelers.
Under our labeler-dependent model, label noise manifests itself under two modalities: natural error of good-faith labelers, and adversarial labels provided by malicious actors.
We present two adversarial attack vectors that more accurately reflect the label noise that may be encountered in real-world settings.
arXiv Detail & Related papers (2021-05-28T19:58:18Z)
- Exploiting Web Images for Fine-Grained Visual Recognition by Eliminating Noisy Samples and Utilizing Hard Ones [60.07027312916081]
We propose a novel approach for removing irrelevant samples from real-world web images during training.
Our approach can alleviate the harmful effects of irrelevant noisy web images and hard examples to achieve better performance.
arXiv Detail & Related papers (2021-01-23T03:58:10Z)
- Noisy Labels Can Induce Good Representations [53.47668632785373]
We study how architecture affects learning with noisy labels.
We show that training with noisy labels can induce useful hidden representations, even when the model generalizes poorly.
This finding leads to a simple method to improve models trained on noisy labels.
arXiv Detail & Related papers (2020-12-23T18:58:05Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
- Data-driven Meta-set Based Fine-Grained Visual Classification [61.083706396575295]
We propose a data-driven meta-set based approach to deal with noisy web images for fine-grained recognition.
Specifically, guided by a small clean meta-set, we train a selection net in a meta-learning manner to distinguish in- and out-of-distribution noisy images.
arXiv Detail & Related papers (2020-08-06T03:04:16Z)
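The class-dependent versus instance-dependent distinction drawn in the CIFAR-10N/100N entry above can be made concrete with a small synthetic example. The sketch below is illustrative only and uses toy features and arbitrary noise rates; it is not tied to how those datasets were actually annotated.

```python
# Two standard ways synthetic label noise is generated, contrasting the
# class-dependent assumption with an instance-dependent pattern.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES, N = 10, 1000
clean_y = rng.integers(0, NUM_CLASSES, size=N)
features = rng.normal(size=(N, 32))          # stand-in image features

# Class-dependent ("class-conditional") noise: a single transition matrix T,
# T[i, j] = P(noisy label = j | clean label = i), applied identically to every sample.
noise_rate = 0.2
T = np.full((NUM_CLASSES, NUM_CLASSES), noise_rate / (NUM_CLASSES - 1))
np.fill_diagonal(T, 1.0 - noise_rate)
class_dep_y = np.array([rng.choice(NUM_CLASSES, p=T[y]) for y in clean_y])

# Instance-dependent noise: the flip probability depends on the individual example,
# e.g., more ambiguous samples (here, a per-sample score) are mislabeled more often.
difficulty = 1 / (1 + np.exp(-features[:, 0]))           # per-sample "ambiguity" in (0, 1)
flip = rng.random(N) < 0.4 * difficulty                   # harder samples flip more often
inst_dep_y = np.where(flip, rng.integers(0, NUM_CLASSES, size=N), clean_y)

print("class-dependent noise rate:   ", np.mean(class_dep_y != clean_y))
print("instance-dependent noise rate:", np.mean(inst_dep_y != clean_y))
```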