A Study on Representation Transfer for Few-Shot Learning
- URL: http://arxiv.org/abs/2209.02073v1
- Date: Mon, 5 Sep 2022 17:56:02 GMT
- Title: A Study on Representation Transfer for Few-Shot Learning
- Authors: Chun-Nam Yu, Yi Xie
- Abstract summary: Few-shot classification aims to learn to classify new object categories well using only a few labeled examples.
In this work we perform a systematic study of various feature representations for few-shot classification.
We find that learning from more complex tasks tends to give better representations for few-shot classification.
- Score: 5.717951523323085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot classification aims to learn to classify new object categories well
using only a few labeled examples. Transferring feature representations from
other models is a popular approach for solving few-shot classification
problems. In this work we perform a systematic study of various feature
representations for few-shot classification, including representations learned
from MAML, supervised classification, and several common self-supervised tasks.
We find that learning from more complex tasks tends to give better
representations for few-shot classification, and thus we propose the use of
representations learned from multiple tasks for few-shot classification.
Coupled with new tricks on feature selection and voting to handle the issue of
small sample size, our direct transfer learning method offers performance
comparable to the state of the art on several benchmark datasets.
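A minimal sketch of the direct-transfer recipe the abstract describes: extract support and query features from several pretrained backbones (e.g. supervised, MAML, self-supervised), keep only a subset of feature dimensions to cope with the small sample size, fit a simple non-parametric classifier per representation, and combine predictions by majority vote. The selection criterion, the nearest-centroid classifier, and all function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def select_dims(X_support, y_support, k):
    """Illustrative feature selection: keep the k dimensions with the highest
    ratio of between-class to within-class variance on the support set."""
    classes = np.unique(y_support)
    means = np.stack([X_support[y_support == c].mean(axis=0) for c in classes])
    between = means.var(axis=0)
    within = np.stack([X_support[y_support == c].var(axis=0) for c in classes]).mean(axis=0)
    return np.argsort(between / (within + 1e-8))[-k:]

def nearest_centroid_predict(X_support, y_support, X_query):
    """Simple non-parametric classifier: assign each query to the class whose
    support centroid is closest in Euclidean distance."""
    classes = np.unique(y_support)
    centroids = np.stack([X_support[y_support == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(X_query[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

def transfer_with_voting(feature_sets, y_support, k=64):
    """feature_sets: list of (X_support, X_query) pairs, one per pretrained
    representation.  y_support is assumed to hold integer labels 0..C-1 so
    that the voting step below can use np.bincount."""
    per_model = []
    for X_s, X_q in feature_sets:
        dims = select_dims(X_s, y_support, min(k, X_s.shape[1]))
        per_model.append(nearest_centroid_predict(X_s[:, dims], y_support, X_q[:, dims]))
    votes = np.stack(per_model)                      # (n_models, n_query)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

Voting across independently learned representations is one simple way to reduce the variance that comes from fitting a classifier on only a handful of labeled examples.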
Related papers
- Preview-based Category Contrastive Learning for Knowledge Distillation [53.551002781828146]
We propose a novel preview-based category contrastive learning method for knowledge distillation (PCKD).
It first distills the structural knowledge of both instance-level feature correspondence and the relation between instance features and category centers.
It can explicitly optimize the category representation and explore the distinct correlation between representations of instances and categories.
arXiv Detail & Related papers (2024-10-18T03:31:00Z)
- Investigating Self-Supervised Methods for Label-Efficient Learning [27.029542823306866]
We study different self-supervised pretext tasks, namely contrastive learning, clustering, and masked image modelling, for their low-shot capabilities.
We introduce a framework involving both masked image modelling and clustering as pretext tasks, which performs better across all low-shot downstream tasks.
When testing the model on full scale datasets, we show performance gains in multi-class classification, multi-label classification and semantic segmentation.
arXiv Detail & Related papers (2024-06-25T10:56:03Z)
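The combined pretext setup above can be pictured at the objective level: a masked-patch reconstruction term and a clustering-assignment term optimized together. The toy sketch below illustrates that combination on random arrays; the loss forms, the 0.5 weighting, and the temperature are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_reconstruction_loss(patches, reconstructions, mask):
    """MIM-style term: mean squared error computed on masked patches only."""
    err = (patches - reconstructions) ** 2
    return (err * mask[:, :, None]).sum() / (mask.sum() * patches.shape[-1] + 1e-8)

def clustering_loss(embeddings, centroids, temperature=0.1):
    """Clustering-style term: cross-entropy between soft cluster assignments
    and their own hard pseudo-labels (a crude stand-in for online clustering)."""
    logits = embeddings @ centroids.T / temperature
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    pseudo = probs.argmax(axis=1)
    return -np.log(probs[np.arange(len(pseudo)), pseudo] + 1e-8).mean()

# Toy batch: 8 images, 16 patches of dimension 32, 64-dim image embeddings.
patches = rng.normal(size=(8, 16, 32))
recon = rng.normal(size=(8, 16, 32))       # stand-in for decoder output
mask = rng.random((8, 16)) < 0.6           # which patches were masked out
emb = rng.normal(size=(8, 64))
centroids = rng.normal(size=(10, 64))      # 10 cluster prototypes

total = masked_reconstruction_loss(patches, recon, mask) + 0.5 * clustering_loss(emb, centroids)
print(f"combined pretext loss: {total:.3f}")
```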
- Generalization Bounds for Few-Shot Transfer Learning with Pretrained Classifiers [26.844410679685424]
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes.
We show that the few-shot error of the learned feature map on new classes is small in the case of class-feature-variability collapse.
arXiv Detail & Related papers (2022-12-23T18:46:05Z)
- ECKPN: Explicit Class Knowledge Propagation Network for Transductive Few-shot Learning [53.09923823663554]
Class-level knowledge can be easily learned by humans from just a handful of samples.
We propose an Explicit Class Knowledge Propagation Network (ECKPN) to address this problem.
We conduct extensive experiments on four few-shot classification benchmarks, and the experimental results show that the proposed ECKPN significantly outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-06-16T02:29:43Z)
- Multi-scale Adaptive Task Attention Network for Few-Shot Learning [5.861206243996454]
The goal of few-shot learning is to classify unseen categories with few labeled samples.
This paper proposes a novel Multi-scale Adaptive Task Attention Network (MATANet) for few-shot learning.
arXiv Detail & Related papers (2020-11-30T00:36:01Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification aims to train models for new classes effectively using only a limited number of labeled examples.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- Few-shot Classification via Adaptive Attention [93.06105498633492]
We propose a novel few-shot learning method via optimizing and fast adapting the query sample representation based on very few reference samples.
As demonstrated experimentally, the proposed model achieves state-of-the-art classification results on various benchmark few-shot classification and fine-grained recognition datasets.
arXiv Detail & Related papers (2020-08-06T05:52:59Z)
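One way to picture attention-based query adaptation as described above: attend from each query embedding to the few reference (support) embeddings, blend the attended summary back into the query, and classify against class prototypes. This sketch is only one reading of the idea; the residual mixing weight, temperature, and cosine-similarity classifier are assumptions rather than the paper's exact mechanism.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def adapt_and_classify(query, support, support_labels, temperature=0.1, mix=0.5):
    """Attend from each query embedding to the support embeddings, blend the
    attended summary back into the query, then assign the nearest class
    prototype by cosine similarity."""
    attn = softmax(query @ support.T / temperature)           # (n_query, n_support)
    adapted = (1.0 - mix) * query + mix * (attn @ support)    # residual mixing
    classes = np.unique(support_labels)
    protos = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    a = adapted / np.linalg.norm(adapted, axis=1, keepdims=True)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    return classes[(a @ p.T).argmax(axis=1)]
```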
- A Few-Shot Sequential Approach for Object Counting [63.82757025821265]
We introduce a class attention mechanism that sequentially attends to objects in the image and extracts their relevant features.
The proposed technique is trained on point-level annotations and uses a novel loss function that disentangles class-dependent and class-agnostic aspects of the model.
We present our results on a variety of object-counting/detection datasets, including FSOD and MS COCO.
arXiv Detail & Related papers (2020-07-03T18:23:39Z)
- Selecting Relevant Features from a Multi-domain Representation for Few-shot Classification [91.67977602992657]
We propose a new strategy based on feature selection, which is both simpler and more effective than previous feature adaptation approaches.
We show that a simple non-parametric classifier built on top of such features produces high accuracy and generalizes to domains never seen during training.
arXiv Detail & Related papers (2020-03-20T15:44:17Z)
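A rough picture of selecting from a multi-domain representation, as in the last entry above: score each domain-specific feature block on the support set, keep the best blocks, and classify queries with a nearest-centroid rule on the concatenation. The support-set scoring rule and the fixed number of kept blocks below are illustrative assumptions; the paper's actual selection strategy may differ.

```python
import numpy as np

def block_score(X_support, y_support):
    """Crude relevance score for one feature block: nearest-centroid accuracy
    measured on the support set itself (illustrative only; it is optimistic
    and degenerates in the 1-shot case)."""
    classes = np.unique(y_support)
    cents = np.stack([X_support[y_support == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X_support[:, None, :] - cents[None, :, :], axis=-1)
    return (classes[d.argmin(axis=1)] == y_support).mean()

def select_blocks_and_classify(support_blocks, query_blocks, y_support, top_m=3):
    """support_blocks / query_blocks: lists of per-domain feature arrays, one
    backbone per training domain.  Keep the top_m highest-scoring blocks,
    concatenate them, and run a nearest-centroid classifier on the result."""
    scores = [block_score(Xb, y_support) for Xb in support_blocks]
    keep = np.argsort(scores)[-top_m:]
    X_s = np.concatenate([support_blocks[i] for i in keep], axis=1)
    X_q = np.concatenate([query_blocks[i] for i in keep], axis=1)
    classes = np.unique(y_support)
    cents = np.stack([X_s[y_support == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X_q[:, None, :] - cents[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]
```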
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.