Data-driven Meta-set Based Fine-Grained Visual Classification
- URL: http://arxiv.org/abs/2008.02438v1
- Date: Thu, 6 Aug 2020 03:04:16 GMT
- Title: Data-driven Meta-set Based Fine-Grained Visual Classification
- Authors: Chuanyi Zhang, Yazhou Yao, Xiangbo Shu, Zechao Li, Zhenmin Tang, Qi Wu
- Abstract summary: We propose a data-driven meta-set based approach to deal with noisy web images for fine-grained recognition.
Specifically, guided by a small, clean meta-set, we train a selection net in a meta-learning manner to distinguish in-distribution from out-of-distribution noisy images.
- Score: 61.083706396575295
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Constructing fine-grained image datasets typically requires domain-specific
expert knowledge, which is not always available for crowd-sourcing platform
annotators. Accordingly, learning directly from web images becomes an
alternative method for fine-grained visual recognition. However, label noise in
the web training set can severely degrade model performance. To address this,
we propose a data-driven meta-set based approach to deal with noisy web images
for fine-grained recognition. Specifically, guided by a small, clean meta-set,
we train a selection net in a meta-learning manner to distinguish between
in-distribution and out-of-distribution noisy images. To further boost the
robustness of the model, we also learn a labeling net to correct the labels of
in-distribution noisy
data. In this way, our proposed method can alleviate the harmful effects caused
by out-of-distribution noise and properly exploit the in-distribution noisy
samples for training. Extensive experiments on three commonly used fine-grained
datasets demonstrate that our approach substantially outperforms
state-of-the-art noise-robust methods.
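The abstract describes the selection net and labeling net only at a high level. Below is a minimal, hypothetical PyTorch sketch of how such a meta-learning step is commonly structured (in the spirit of learning-to-reweight approaches), not the authors' implementation; `SelectionNet`, `LabelingNet`, the learning rate, and the tensor names are assumptions for illustration.
```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def meta_step(classifier, selection_net, labeling_net,
              web_images, web_labels, meta_images, meta_labels, lr=0.01):
    """One meta-update: weight and relabel the noisy web batch so that a
    virtual SGD step on it minimises the loss on the small clean meta-set."""
    # Per-sample losses on the noisy web batch. The (hypothetical) labeling
    # net outputs soft corrected labels, used here as an extra KL target.
    logits = classifier(web_images)
    corrected = labeling_net(web_images)          # assumed to return probabilities
    per_sample = F.cross_entropy(logits, web_labels, reduction="none") \
               + F.kl_div(F.log_softmax(logits, dim=1), corrected,
                          reduction="none").sum(dim=1)

    # The (hypothetical) selection net maps each sample's loss to a weight in
    # [0, 1]; low weights should suppress out-of-distribution noise.
    weights = selection_net(per_sample.detach().unsqueeze(1)).squeeze(1)
    weighted_loss = (weights * per_sample).mean()

    # Virtual SGD step on the classifier, kept differentiable with respect to
    # the selection and labeling nets via create_graph=True.
    params = dict(classifier.named_parameters())
    grads = torch.autograd.grad(weighted_loss, list(params.values()),
                                create_graph=True)
    virtual = {k: p - lr * g for (k, p), g in zip(params.items(), grads)}

    # Evaluate the virtually updated classifier on the clean meta-set; this
    # meta-loss is backpropagated into the selection and labeling nets.
    meta_logits = functional_call(classifier, virtual, (meta_images,))
    return F.cross_entropy(meta_logits, meta_labels)
```
In such a scheme the returned meta-loss trains the selection and labeling nets, after which the classifier itself is updated on the re-weighted, re-labelled web batch.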
Related papers
- Deep Semantic Statistics Matching (D2SM) Denoising Network [70.01091467628068]
We introduce the Deep Semantic Statistics Matching (D2SM) Denoising Network.
It exploits the semantic features of pretrained classification networks and then implicitly matches the probabilistic distribution of clear images in the semantic feature space.
By learning to preserve the semantic distribution of denoised images, we empirically find our method significantly improves the denoising capabilities of networks.
arXiv Detail & Related papers (2022-07-19T14:35:42Z)
- Embedding contrastive unsupervised features to cluster in- and out-of-distribution noise in corrupted image datasets [18.19216557948184]
Using search engines for web image retrieval is a tempting alternative to manual curation when creating an image dataset.
Their main drawback remains the proportion of incorrect (noisy) samples retrieved.
We propose a two-stage algorithm, starting with a detection step that uses unsupervised contrastive feature learning.
We find that the alignment and uniformity principles of contrastive learning allow OOD samples to be linearly separated from ID samples on the unit hypersphere (a minimal sketch of this idea follows this list).
arXiv Detail & Related papers (2022-07-04T16:51:56Z)
- Robust Medical Image Classification from Noisy Labeled Data with Global and Local Representation Guided Co-training [73.60883490436956]
We propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification.
We employ a self-ensemble model with a noisy label filter to efficiently separate clean and noisy samples.
We also design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples.
arXiv Detail & Related papers (2022-05-10T07:50:08Z)
- Learning with Neighbor Consistency for Noisy Labels [69.83857578836769]
We present a method for learning from noisy labels that leverages similarities between training examples in feature space.
We evaluate our method on datasets exhibiting both synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, Clothing1M, mini-ImageNet-Red) noise.
arXiv Detail & Related papers (2022-02-04T15:46:27Z)
- Addressing out-of-distribution label noise in webly-labelled data [8.625286650577134]
Data gathering and annotation using a search engine is a simple alternative to generating a fully human-annotated dataset.
Although web crawling is very time efficient, some of the retrieved images are unavoidably noisy.
Designing robust algorithms for training on noisy data gathered from the web is an important research direction.
arXiv Detail & Related papers (2021-10-26T13:38:50Z)
- Distilling effective supervision for robust medical image segmentation with noisy labels [21.68138582276142]
We propose a novel framework to address segmentation with noisy labels by distilling effective supervision information from both the pixel and image levels.
In particular, we explicitly estimate the uncertainty of every pixel as pixel-wise noise estimation.
We present an image-level robust learning method that provides complementary information to pixel-level learning.
arXiv Detail & Related papers (2021-06-21T13:33:38Z)
- Exploiting Web Images for Fine-Grained Visual Recognition by Eliminating Noisy Samples and Utilizing Hard Ones [60.07027312916081]
We propose a novel approach for removing irrelevant samples from real-world web images during training.
Our approach alleviates the harmful effects of irrelevant noisy web images while exploiting hard examples to achieve better performance.
arXiv Detail & Related papers (2021-01-23T03:58:10Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
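For the "Embedding contrastive unsupervised features ..." entry above, the claim that OOD samples become linearly separable from ID samples on the unit hypersphere can be illustrated with a simple prototype-based check. This is a hypothetical sketch under assumed inputs, not that paper's algorithm; `flag_ood`, the threshold value, and the prototype construction are illustrative choices.
```python
import torch
import torch.nn.functional as F

def flag_ood(features, labels, num_classes, threshold=0.5):
    """features: (N, D) contrastive embeddings; labels: (N,) noisy web labels.
    Returns a boolean mask that is True for suspected OOD samples."""
    z = F.normalize(features, dim=1)     # project embeddings onto the unit hypersphere
    prototypes = torch.zeros(num_classes, z.size(1), dtype=z.dtype, device=z.device)
    for c in range(num_classes):
        members = z[labels == c]
        if len(members) > 0:
            prototypes[c] = F.normalize(members.mean(dim=0), dim=0)
    sim = z @ prototypes.t()             # cosine similarity to each class prototype
    # Keep a sample as in-distribution if it lies close enough to some class
    # prototype; otherwise treat it as out-of-distribution noise.
    return sim.max(dim=1).values < threshold
```
A threshold on prototype similarity is only the simplest linear decision rule one could apply on the hypersphere; the paper itself learns the separation from unsupervised contrastive features.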