Boosting few-shot classification with view-learnable contrastive learning
- URL: http://arxiv.org/abs/2107.09242v1
- Date: Tue, 20 Jul 2021 03:13:33 GMT
- Title: Boosting few-shot classification with view-learnable contrastive learning
- Authors: Xu Luo, Yuxuan Chen, Liangjian Wen, Lili Pan, Zenglin Xu
- Abstract summary: We introduce contrastive loss into few-shot classification for learning latent fine-grained structure in the embedding space.
We develop a learning-to-learn algorithm to automatically generate different views of the same image.
- Score: 19.801016732390064
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of few-shot classification is to classify new categories with only a few labeled examples per class. Metric-based meta-learning methods currently achieve excellent performance on this problem. However, without fine-grained labels it is very hard for these methods to discriminate fine-grained sub-categories in the embedding space, which can lead to poor generalization to those sub-categories and hurts model interpretability. To tackle this problem, we introduce a contrastive loss into few-shot classification to learn latent fine-grained structure in the embedding space. Furthermore, to overcome the drawback of the random image transformations used in current contrastive learning, which can produce noisy and inaccurate image pairs (i.e., views), we develop a learning-to-learn algorithm that automatically generates different views of the same image. Extensive experiments on standard few-shot learning benchmarks demonstrate the superiority of our method.
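To make the first idea concrete, here is a minimal sketch of how a SimCLR-style contrastive term over two views of each image can be added to an episodic loss. This is an illustration, not the authors' released code; `encoder`, `episode_loss`, the weight `lam`, and the temperature are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss over two views of the same image batch.

    z1, z2: [B, D] embeddings; (z1[i], z2[i]) is a positive pair, and every
    other embedding in the concatenated 2B batch acts as a negative.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # [2B, D]
    sim = z @ z.t() / temperature                        # cosine-similarity logits
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    n = z.shape[0]
    targets = (torch.arange(n, device=z.device) + n // 2) % n  # positive of i is i+B mod 2B
    return F.cross_entropy(sim, targets)

# Hypothetical combined objective for one episode:
# loss = episode_loss(support, query) + lam * nt_xent_loss(encoder(view_a), encoder(view_b))
```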
Related papers
- Generalization Bounds for Few-Shot Transfer Learning with Pretrained Classifiers [26.844410679685424]
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes.
We show that the few-shot error of the learned feature map on new classes is small in the case of class-feature-variability collapse.
arXiv Detail & Related papers (2022-12-23T18:46:05Z)
- A Simple Approach to Adversarial Robustness in Few-shot Image Classification [20.889464448762176]
We show that a simple transfer-learning-based approach can be used to train adversarially robust few-shot classifiers.
We also present a method for the novel-class classification task that calibrates the centroid of each few-shot category towards the base classes.
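A rough sketch of what such centroid calibration can look like: pull the naive support-set centroid toward its most similar base-class centroids. The function name, the top-k weighting, and the hyperparameters are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def calibrated_centroid(support, base_centroids, k=3, alpha=0.5):
    """Mix a novel class's support centroid with its nearest base-class centroids.

    support:        [S, D] embeddings of one novel class's support examples.
    base_centroids: [C, D] mean embeddings of the base training classes.
    """
    mu = support.mean(dim=0)                                 # naive few-shot centroid
    sims = F.cosine_similarity(mu.unsqueeze(0), base_centroids, dim=1)  # [C]
    topk = sims.topk(k)
    weights = F.softmax(topk.values, dim=0)                  # similarity-based weights
    base_mix = (weights.unsqueeze(1) * base_centroids[topk.indices]).sum(dim=0)
    return alpha * mu + (1 - alpha) * base_mix               # calibrated centroid
```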
arXiv Detail & Related papers (2022-04-11T22:46:41Z)
- Multi-Label Image Classification with Contrastive Learning [57.47567461616912]
We show that a direct application of contrastive learning yields little improvement in the multi-label case.
We propose a novel framework for multi-label classification with contrastive learning in a fully supervised setting.
arXiv Detail & Related papers (2021-07-24T15:00:47Z)
- Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for super high-resolution images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z)
- Few-Shot Learning with Part Discovery and Augmentation from Unlabeled Images [79.34600869202373]
We show that inductive bias can be learned from a flat collection of unlabeled images, and instantiated as transferable representations among seen and unseen classes.
Specifically, we propose a novel part-based self-supervised representation learning scheme to learn transferable representations.
Our method yields impressive results, outperforming the previous best unsupervised methods by 7.74% and 9.24%.
arXiv Detail & Related papers (2021-05-25T12:22:11Z)
- Improving Classification Accuracy with Graph Filtering [9.153817737157366]
We show that the proposed graph filtering methodology has the effect of reducing intra-class variance, while maintaining the mean.
While our approach applies to all classification problems in general, it is particularly useful in few-shot settings, where intra-class noise can have a huge impact due to the small sample selection.
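One plausible form of such a filter, sketched below under the assumption of a cosine kNN graph: each feature is blended with the average of its graph neighbors, which smooths away intra-class noise while the class means stay roughly fixed. Names and hyperparameters are illustrative, not the paper's exact operator.

```python
import torch
import torch.nn.functional as F

def graph_filter(X, k=10, alpha=0.5):
    """Low-pass graph filtering of sample features.

    X: [N, D] feature matrix. Builds a symmetric cosine kNN graph over the N
    samples and blends each feature with its neighborhood average.
    """
    Z = F.normalize(X, dim=1)
    sim = Z @ Z.t()                                       # [N, N] cosine similarities
    idx = sim.topk(k + 1, dim=1).indices                  # +1 since each point tops its own list
    A = torch.zeros_like(sim).scatter_(1, idx, 1.0)
    A = ((A + A.t()) > 0).float()                         # symmetrize the graph
    deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
    smoothed = (A @ X) / deg                              # average over neighbors
    return (1 - alpha) * X + alpha * smoothed             # low-pass mix
```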
arXiv Detail & Related papers (2021-01-12T22:51:55Z)
- Grafit: Learning fine-grained image representations with coarse labels [114.17782143848315]
This paper tackles the problem of learning a finer representation than the one provided by training labels.
By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods.
arXiv Detail & Related papers (2020-11-25T19:06:26Z)
- Rethinking preventing class-collapsing in metric learning with margin-based losses [81.22825616879936]
Metric learning seeks embeddings where visually similar instances are close and dissimilar instances are apart.
However, margin-based losses tend to project all samples of a class onto a single point in the embedding space.
We propose a simple modification to the embedding losses such that each sample selects its nearest same-class counterpart in a batch.
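A minimal sketch of that modification, assuming a triplet-style margin loss: each anchor pairs with its nearest same-class sample in the batch rather than an arbitrary one, so the loss stops pushing a whole class onto one point. The function name and margin value are assumptions.

```python
import torch

def nearest_positive_margin_loss(emb, labels, margin=0.2):
    """Margin loss where each anchor's positive is its closest same-class sample.

    emb: [B, D] embeddings; labels: [B] integer class ids.
    """
    dist = torch.cdist(emb, emb)                          # [B, B] pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)     # same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=emb.device)

    pos_d = dist.masked_fill(~same | eye, float("inf")).min(dim=1).values
    neg_d = dist.masked_fill(same, float("inf")).min(dim=1).values
    valid = torch.isfinite(pos_d)                         # anchors whose class has >1 sample
    return torch.relu(pos_d[valid] - neg_d[valid] + margin).mean()
```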
arXiv Detail & Related papers (2020-06-09T09:59:25Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
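One way such background invariance can be encouraged, sketched here as an assumption rather than the paper's exact pipeline: composite the (e.g., saliency-derived) foreground onto random backgrounds, so two views of one image share the object but not the scene.

```python
import torch

def swap_background(image, mask, backgrounds):
    """Paste a masked foreground onto a randomly chosen background.

    image:       [3, H, W] source image.
    mask:        [1, H, W] soft foreground mask in [0, 1].
    backgrounds: [N, 3, H, W] pool of background images of the same size.
    """
    bg = backgrounds[torch.randint(len(backgrounds), (1,)).item()]
    return mask * image + (1 - mask) * bg   # object kept, scene replaced
```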
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
- Training few-shot classification via the perspective of minibatch and pretraining [10.007569291231915]
Few-shot classification is a challenging task that aims to emulate the human ability to learn new concepts from limited prior data.
Recent progress in few-shot classification has featured meta-learning.
We propose multi-episode and cross-way training techniques, which respectively correspond to the minibatch and pretraining in classification problems.
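A minimal sketch of the multi-episode idea, under the assumption that it averages the meta-loss over several sampled episodes per optimizer step, much like enlarging a minibatch; `episode_loss` and `sample_episode` are placeholders.

```python
def multi_episode_step(episode_loss, sample_episode, optimizer, num_episodes=4):
    """One optimizer update averaged over several few-shot episodes.

    episode_loss:   callable mapping (support, query) -> scalar loss tensor.
    sample_episode: callable returning one (support, query) episode.
    """
    optimizer.zero_grad()
    total = sum(episode_loss(*sample_episode()) for _ in range(num_episodes))
    loss = total / num_episodes          # averaging mimics a larger minibatch
    loss.backward()
    optimizer.step()
    return loss.item()
```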
arXiv Detail & Related papers (2020-04-10T03:14:48Z)