Few-shot Image Classification: Just Use a Library of Pre-trained Feature
Extractors and a Simple Classifier
- URL: http://arxiv.org/abs/2101.00562v1
- Date: Sun, 3 Jan 2021 05:30:36 GMT
- Title: Few-shot Image Classification: Just Use a Library of Pre-trained Feature
Extractors and a Simple Classifier
- Authors: Arkabandhu Chowdhury, Mingchao Jiang, Chris Jermaine
- Abstract summary: We show that a library of pre-trained feature extractors combined with a simple feed-forward network learned with an L2-regularizer can be an excellent option for solving cross-domain few-shot image classification.
Our experimental results suggest that this simpler sample-efficient approach far outperforms several well-established meta-learning algorithms on a variety of few-shot tasks.
- Score: 5.782827425991282
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent papers have suggested that transfer learning can outperform
sophisticated meta-learning methods for few-shot image classification. We take
this hypothesis to its logical conclusion, and suggest the use of an ensemble
of high-quality, pre-trained feature extractors for few-shot image
classification. We show experimentally that a library of pre-trained feature
extractors combined with a simple feed-forward network learned with an
L2-regularizer can be an excellent option for solving cross-domain few-shot
image classification. Our experimental results suggest that this simpler
sample-efficient approach far outperforms several well-established
meta-learning algorithms on a variety of few-shot tasks.
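For concreteness, the following is a minimal sketch of the general recipe described in the abstract, not the authors' released code: features from a small library of ImageNet-pretrained torchvision backbones are concatenated, and a simple feed-forward classifier is fit on the few-shot support set with weight decay acting as the L2 regularizer. The choice of backbones, hidden width, and all hyperparameters below are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's released code): a "library" of
# pre-trained feature extractors plus a simple L2-regularized feed-forward classifier.
import torch
import torch.nn as nn
import torchvision.models as models


def build_library():
    # Pre-trained backbones with their classification heads removed; each one
    # ends in global average pooling, so the output is a flat feature vector.
    backbones = [
        models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1),
        models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1),
    ]
    extractors = []
    for net in backbones:
        net.eval()
        extractors.append(nn.Sequential(*list(net.children())[:-1], nn.Flatten()))
    return extractors


@torch.no_grad()
def extract_features(extractors, images):
    # images: (N, 3, 224, 224) tensors, normalized with ImageNet statistics.
    return torch.cat([f(images) for f in extractors], dim=1)


def fit_classifier(features, labels, num_classes, l2=1e-3, epochs=200):
    # Simple feed-forward network; weight_decay supplies the L2 regularizer.
    clf = nn.Sequential(
        nn.Linear(features.shape[1], 256), nn.ReLU(), nn.Linear(256, num_classes)
    )
    opt = torch.optim.Adam(clf.parameters(), lr=1e-3, weight_decay=l2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(clf(features), labels).backward()
        opt.step()
    return clf


# Example: a 5-way 5-shot support set (random tensors stand in for real crops).
support_x = torch.randn(25, 3, 224, 224)
support_y = torch.arange(5).repeat_interleave(5)
library = build_library()
classifier = fit_classifier(extract_features(library, support_x), support_y, 5)
```

At test time, query images would be passed through the same frozen library and classified by the trained head; only the small classifier is learned per task.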
Related papers
- Advancing Image Retrieval with Few-Shot Learning and Relevance Feedback [5.770351255180495]
Image Retrieval with Relevance Feedback (IRRF) involves iterative human interaction during the retrieval process.
We propose a new scheme based on a hyper-network that is tailored to the task and facilitates swift adjustment to user feedback.
We show that our method can attain SoTA results in few-shot one-class classification and reach comparable results in the binary classification task of few-shot open-set recognition.
arXiv Detail & Related papers (2023-12-18T10:20:28Z)
- PrototypeFormer: Learning to Explore Prototype Relationships for Few-shot Image Classification [19.93681871684493]
We propose PrototypeFormer, a method that aims to significantly advance traditional few-shot image classification approaches.
We utilize a transformer architecture to build a prototype extraction module, aiming to extract class representations that are more discriminative for few-shot classification.
Despite its simplicity, the method performs remarkably well, with no bells and whistles.
arXiv Detail & Related papers (2023-10-05T12:56:34Z)
- Disambiguation of One-Shot Visual Classification Tasks: A Simplex-Based Approach [8.436437583394998]
We present a strategy that aims to detect the presence of multiple objects in a given shot.
This strategy is based on identifying the corners of a simplex in a high dimensional space.
We show the ability of the proposed method to slightly, yet statistically significantly, improve accuracy in extreme settings.
arXiv Detail & Related papers (2023-01-16T11:37:05Z)
- LEAD: Self-Supervised Landmark Estimation by Aligning Distributions of Feature Similarity [49.84167231111667]
Existing works in self-supervised landmark detection are based on learning dense (pixel-level) feature representations from an image.
We introduce an approach to enhance the learning of dense equivariant representations in a self-supervised fashion.
We show that having such a prior in the feature extractor helps in landmark detection, even with a drastically limited number of annotations.
arXiv Detail & Related papers (2022-04-06T17:48:18Z)
- Matching Feature Sets for Few-Shot Image Classification [22.84472344406448]
We argue that a set-based representation intrinsically builds a richer description of images from the base classes.
Our approach, dubbed SetFeat, embeds shallow self-attention mechanisms inside existing encoder architectures.
arXiv Detail & Related papers (2022-04-02T22:42:54Z)
- Multi-Label Image Classification with Contrastive Learning [57.47567461616912]
We show that a direct application of contrastive learning can hardly improve performance in multi-label cases.
We propose a novel framework for multi-label classification with contrastive learning in a fully supervised setting.
arXiv Detail & Related papers (2021-07-24T15:00:47Z)
- Few-Shot Learning with Part Discovery and Augmentation from Unlabeled Images [79.34600869202373]
We show that inductive bias can be learned from a flat collection of unlabeled images, and instantiated as transferable representations among seen and unseen classes.
Specifically, we propose a novel part-based self-supervised representation learning scheme to learn transferable representations.
Our method yields impressive results, outperforming the previous best unsupervised methods by 7.74% and 9.24%.
arXiv Detail & Related papers (2021-05-25T12:22:11Z)
- Few-Shot Image Classification via Contrastive Self-Supervised Learning [5.878021051195956]
We propose a new paradigm of unsupervised few-shot learning to address the deficiencies of existing approaches.
We solve few-shot tasks in two phases, the first of which meta-trains a transferable feature extractor via contrastive self-supervised learning.
Our method achieves state-of-the-art performance on a variety of established few-shot tasks on standard few-shot visual classification datasets.
arXiv Detail & Related papers (2020-08-23T02:24:31Z)
- Few-shot Classification via Adaptive Attention [93.06105498633492]
We propose a novel few-shot learning method that optimizes and rapidly adapts the query sample representation based on very few reference samples.
As demonstrated experimentally, the proposed model achieves state-of-the-art classification results on various benchmark few-shot classification and fine-grained recognition datasets.
arXiv Detail & Related papers (2020-08-06T05:52:59Z)
- One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first stage Matching-FCOS network and a second stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)
- Selecting Relevant Features from a Multi-domain Representation for Few-shot Classification [91.67977602992657]
We propose a new strategy based on feature selection, which is both simpler and more effective than previous feature adaptation approaches.
We show that a simple non-parametric classifier built on top of such features produces high accuracy and generalizes to domains never seen during training.
arXiv Detail & Related papers (2020-03-20T15:44:17Z)
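Loosely in the spirit of the entry above (and of the main paper), the sketch below illustrates per-task feature selection over a multi-domain representation followed by a simple non-parametric nearest-centroid classifier. The selection criterion (support-set centroid accuracy), block layout, and all names are assumptions for illustration, not that paper's actual procedure.

```python
# Illustrative sketch only: score candidate feature blocks on the support set,
# keep the best ones, and classify queries with a nearest-centroid rule.
import torch


def centroid_accuracy(feats, labels):
    # Score a feature block by how well class centroids separate the support set.
    classes = labels.unique()
    centroids = torch.stack([feats[labels == c].mean(0) for c in classes])
    preds = torch.cdist(feats, centroids).argmin(dim=1)
    return (classes[preds] == labels).float().mean().item()


def select_and_classify(support_blocks, support_y, query_blocks, top_k=2):
    # support_blocks / query_blocks: lists of (N, d_i) feature tensors,
    # one per domain-specific extractor.
    scores = [centroid_accuracy(b, support_y) for b in support_blocks]
    keep = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:top_k]
    sup = torch.cat([support_blocks[i] for i in keep], dim=1)
    qry = torch.cat([query_blocks[i] for i in keep], dim=1)
    classes = support_y.unique()
    centroids = torch.stack([sup[support_y == c].mean(0) for c in classes])
    return classes[torch.cdist(qry, centroids).argmin(dim=1)]


# Toy usage: three hypothetical domain blocks for a 5-way 5-shot task.
sup_blocks = [torch.randn(25, 64) for _ in range(3)]
qry_blocks = [torch.randn(10, 64) for _ in range(3)]
labels = torch.arange(5).repeat_interleave(5)
preds = select_and_classify(sup_blocks, labels, qry_blocks, top_k=2)
```

Each block would come from a different domain-specific extractor; the top-scoring blocks on the support set are kept, and queries are assigned to the nearest class centroid in the concatenated feature space.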