LibFewShot: A Comprehensive Library for Few-shot Learning
- URL: http://arxiv.org/abs/2109.04898v1
- Date: Fri, 10 Sep 2021 14:12:37 GMT
- Title: LibFewShot: A Comprehensive Library for Few-shot Learning
- Authors: Wenbin Li, Chuanqi Dong, Pinzhuo Tian, Tiexin Qin, Xuesong Yang, Ziyi
Wang, Jing Huo, Yinghuan Shi, Lei Wang, Yang Gao and Jiebo Luo
- Abstract summary: Few-shot learning, especially few-shot image classification, has received increasing attention and witnessed significant advances in recent years.
Some recent studies implicitly show that many generic techniques or tricks, such as data augmentation, pre-training, knowledge distillation, and self-supervision, may greatly boost the performance of a few-shot learning method.
We propose a comprehensive library for few-shot learning (LibFewShot) by re-implementing seventeen state-of-the-art few-shot learning methods in a unified framework with the same single codebase in PyTorch.
- Score: 78.58842209282724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot learning, especially few-shot image classification, has received
increasing attention and witnessed significant advances in recent years. Some
recent studies implicitly show that many generic techniques or "tricks", such
as data augmentation, pre-training, knowledge distillation, and
self-supervision, may greatly boost the performance of a few-shot learning
method. Moreover, different works may employ different software platforms,
different training schedules, different backbone architectures and even
different input image sizes, making fair comparisons difficult and leaving
practitioners struggling with reproducibility. To address these issues, we
propose a comprehensive library for few-shot learning (LibFewShot) by
re-implementing seventeen state-of-the-art few-shot learning methods in a
unified framework with the same single codebase in PyTorch. Furthermore, based
on LibFewShot, we provide comprehensive evaluations on multiple benchmark
datasets with multiple backbone architectures to evaluate common pitfalls and
effects of different training tricks. In addition, given the recent doubts on
the necessity of meta- or episodic-training mechanism, our evaluation results
show that such kind of mechanism is still necessary especially when combined
with pre-training. We hope our work can not only lower the barriers for
beginners to work on few-shot learning but also remove the effects of the
nontrivial tricks to facilitate intrinsic research on few-shot learning. The
source code is available from https://github.com/RL-VIG/LibFewShot.
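The abstract's central object, the episodic (meta-) training mechanism, amounts to repeatedly sampling small N-way K-shot classification tasks from the training classes. The sketch below illustrates that sampling step only; it is not LibFewShot's actual API, and the function name `sample_episode` is hypothetical.

```python
import random
from collections import defaultdict

def sample_episode(labels, n_way=5, k_shot=1, q_queries=15, rng=None):
    """Sample one N-way K-shot episode from a dataset's label list.

    Illustrative sketch of episodic sampling, not LibFewShot's real API.
    Returns (support, query) as lists of (dataset_index, episode_label).
    """
    rng = rng or random.Random()
    # Group dataset indices by their class label.
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    # Pick N classes for this episode, then K support + Q query items each.
    classes = rng.sample(sorted(by_class), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        picks = rng.sample(by_class[cls], k_shot + q_queries)
        support += [(i, episode_label) for i in picks[:k_shot]]
        query += [(i, episode_label) for i in picks[k_shot:]]
    return support, query
```

A model is then trained to classify the query items given only the support items, so every gradient step rehearses the few-shot test condition.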
Related papers
- Collaboration of Pre-trained Models Makes Better Few-shot Learner [49.89134194181042]
Few-shot classification requires deep neural networks to learn generalized representations only from limited training images.
Recently, CLIP-based methods have shown promising few-shot performance benefited from the contrastive language-image pre-training.
We propose CoMo, a Collaboration of pre-trained Models that incorporates diverse prior knowledge from various pre-training paradigms for better few-shot learning.
arXiv Detail & Related papers (2022-09-25T16:23:12Z)
- What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z)
- Meta Navigator: Search for a Good Adaptation Policy for Few-shot Learning [113.05118113697111]
Few-shot learning aims to adapt knowledge learned from previous tasks to novel tasks with only a limited amount of labeled data.
Research literature on few-shot learning exhibits great diversity, while different algorithms often excel at different few-shot learning scenarios.
We present Meta Navigator, a framework that attempts to solve the limitation in few-shot learning by seeking a higher-level strategy.
arXiv Detail & Related papers (2021-09-13T07:20:01Z)
- Learning to Focus: Cascaded Feature Matching Network for Few-shot Image Recognition [38.49419948988415]
Deep networks can learn to accurately recognize objects of a category by training on a large number of images.
A meta-learning challenge, known as low-shot image recognition, arises when only a few annotated images are available for learning a recognition model for a category.
Our method, called Cascaded Feature Matching Network (CFMN), is proposed to solve this problem.
Experiments for few-shot learning on two standard datasets, miniImageNet and Omniglot, have confirmed the effectiveness of our method.
arXiv Detail & Related papers (2021-01-13T11:37:28Z)
- Few-Shot Image Classification via Contrastive Self-Supervised Learning [5.878021051195956]
We propose a new paradigm of unsupervised few-shot learning to address these deficiencies.
We solve few-shot tasks in two phases, the first of which meta-trains a transferable feature extractor via contrastive self-supervised learning.
Our method achieves state-of-the-art performance on a variety of established few-shot tasks on the standard few-shot visual classification datasets.
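The contrastive self-supervised objective this entry relies on rewards matched views of the same image for being closer in embedding space than views of different images. The following is a minimal, stdlib-only sketch of an NT-Xent-style loss over paired embeddings; it is an illustration of the general contrastive idea, not this paper's exact loss, and `nt_xent` is a hypothetical name.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent-style contrastive loss: z1[i] should match z2[i]
    among all candidates in z2 (a sketch, not the paper's loss)."""
    n = len(z1)
    loss = 0.0
    for i in range(n):
        # Temperature-scaled similarities of anchor i to every candidate.
        sims = [cosine(z1[i], z2[j]) / tau for j in range(n)]
        # Numerically stable log-sum-exp for the softmax denominator.
        m = max(sims)
        logsum = m + math.log(sum(math.exp(s - m) for s in sims))
        # Cross-entropy against the positive pair at index i.
        loss += logsum - sims[i]
    return loss / n
```

Minimizing this loss pulls matched pairs together and pushes mismatched pairs apart, which is what makes the learned extractor transferable to novel classes.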
arXiv Detail & Related papers (2020-08-23T02:24:31Z)
- Complementing Representation Deficiency in Few-shot Image Classification: A Meta-Learning Approach [27.350615059290348]
We propose a meta-learning approach with complemented representations network (MCRNet) for few-shot image classification.
In particular, we embed a latent space, where latent codes are reconstructed with extra representation information to complement the representation deficiency.
Our end-to-end framework achieves the state-of-the-art performance in image classification on three standard few-shot learning datasets.
arXiv Detail & Related papers (2020-07-21T13:25:54Z)
- Self-Augmentation: Generalizing Deep Networks to Unseen Classes for Few-Shot Learning [21.3564383157159]
Few-shot learning aims to classify unseen classes with a few training examples.
We propose self-augmentation that consolidates self-mix and self-distillation.
We present a local learner representation to further exploit a few training examples for unseen classes.
arXiv Detail & Related papers (2020-04-01T06:39:08Z)
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline: learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
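The "good embedding" baseline above evaluates a frozen representation with a lightweight classifier on each few-shot task (the paper itself fits a linear classifier; the nearest-centroid variant below is a common, even simpler stand-in shown purely for illustration, with hypothetical function names).

```python
import math
from collections import defaultdict

def build_prototypes(support):
    """Average the support embeddings per class into one centroid each.

    `support` is a list of (embedding_vector, class_label) pairs assumed
    to come from a frozen, pre-trained embedding network.
    """
    sums, counts = {}, defaultdict(int)
    for vec, lab in support:
        if lab not in sums:
            sums[lab] = list(vec)
        else:
            sums[lab] = [a + b for a, b in zip(sums[lab], vec)]
        counts[lab] += 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def predict(x, protos):
    """Classify embedding x by its nearest class centroid (Euclidean)."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(protos, key=lambda lab: dist(x, protos[lab]))
```

Because no parameters are meta-learned at test time, any accuracy this baseline achieves is attributable to the embedding alone, which is exactly the comparison the paper's findings rest on.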
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
- Learning to Compare Relation: Semantic Alignment for Few-Shot Learning [48.463122399494175]
We present a novel semantic alignment model to compare relations, which is robust to content misalignment.
We conduct extensive experiments on several few-shot learning datasets.
arXiv Detail & Related papers (2020-02-29T08:37:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.