Few-Shot Learning with Intra-Class Knowledge Transfer
- URL: http://arxiv.org/abs/2008.09892v1
- Date: Sat, 22 Aug 2020 18:15:38 GMT
- Title: Few-Shot Learning with Intra-Class Knowledge Transfer
- Authors: Vivek Roy, Yan Xu, Yu-Xiong Wang, Kris Kitani, Ruslan Salakhutdinov,
and Martial Hebert
- Abstract summary: We consider the few-shot classification task with an unbalanced dataset.
Recent works have proposed to solve this task by augmenting the training data of the few-shot classes using generative models.
We propose to leverage the intra-class knowledge from the neighbor many-shot classes with the intuition that neighbor classes share similar statistical information.
- Score: 100.87659529592223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the few-shot classification task with an unbalanced dataset, in
which some classes have sufficient training samples while other classes only
have limited training samples. Recent works have proposed to solve this task by
augmenting the training data of the few-shot classes using generative models
with the few-shot training samples as the seeds. However, due to the limited
number of the few-shot seeds, the generated samples usually have low
diversity, making it difficult to train a discriminative classifier for the
few-shot classes. To enrich the diversity of the generated samples, we propose
to leverage the intra-class knowledge from the neighbor many-shot classes with
the intuition that neighbor classes share similar statistical information. Such
intra-class information is obtained with a two-step mechanism. First, a
regressor trained only on the many-shot classes is used to estimate the
few-shot class means from only a few samples. Second, classes are clustered
into superclasses, and the statistical mean and feature variance of each
superclass are used as transferable knowledge inherited by the child
few-shot classes. Such
knowledge is then used by a generator to augment the sparse training data to
help the downstream classification tasks. Extensive experiments show that our
method achieves state-of-the-art performance across different datasets and $n$-shot
settings.
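To make the two-step mechanism above concrete, here is a minimal sketch reconstructed from the abstract alone; it is not the authors' implementation. It assumes image features have already been extracted by a backbone network, uses a ridge regressor for the class-mean estimation step, k-means over base-class means for the superclass clustering, and a plain Gaussian sampler as a stand-in for the paper's learned generator; all names and hyperparameters are illustrative assumptions.

```python
# Hedged sketch of intra-class knowledge transfer for few-shot augmentation.
# The ridge regressor, k-means superclass clustering, and Gaussian sampling are
# illustrative stand-ins inferred from the abstract, not the authors' code.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge


def fit_mean_regressor(base_features, base_labels, n_seed=5, n_draws=100, seed=0):
    """Step 1: on many-shot (base) classes, learn to map the mean of a few
    sampled features to the full class mean."""
    rng = np.random.default_rng(seed)
    inputs, targets = [], []
    for c in np.unique(base_labels):
        feats = base_features[base_labels == c]
        full_mean = feats.mean(axis=0)
        for _ in range(n_draws):
            idx = rng.choice(len(feats), size=n_seed, replace=False)
            inputs.append(feats[idx].mean(axis=0))   # few-sample mean (regressor input)
            targets.append(full_mean)                # full class mean (regressor target)
    return Ridge(alpha=1.0).fit(np.array(inputs), np.array(targets))


def superclass_statistics(base_features, base_labels, n_super=10, seed=0):
    """Step 2: cluster base-class means into superclasses and record each
    superclass's mean and per-dimension feature variance."""
    classes = np.unique(base_labels)
    class_means = np.stack([base_features[base_labels == c].mean(axis=0) for c in classes])
    km = KMeans(n_clusters=n_super, random_state=seed, n_init=10).fit(class_means)
    stats = {}
    for s in range(n_super):
        member_mask = np.isin(base_labels, classes[km.labels_ == s])
        feats = base_features[member_mask]
        stats[s] = (feats.mean(axis=0), feats.var(axis=0))
    return km, stats


def augment_few_shot(few_feats, regressor, km, stats, n_generate=200, seed=0):
    """Estimate the novel-class mean with the regressor, inherit the feature
    variance of the nearest superclass, and sample synthetic features
    (a Gaussian stand-in for the paper's learned generator)."""
    rng = np.random.default_rng(seed)
    est_mean = regressor.predict(few_feats.mean(axis=0, keepdims=True))[0]
    super_id = km.predict(est_mean[None, :])[0]
    _, super_var = stats[super_id]
    synthetic = rng.normal(est_mean, np.sqrt(super_var), size=(n_generate, len(est_mean)))
    return np.vstack([few_feats, synthetic])
```

A downstream classifier (for example, logistic regression over the combined real and synthetic features) can then be trained per few-shot episode.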
Related papers
- Liberating Seen Classes: Boosting Few-Shot and Zero-Shot Text Classification via Anchor Generation and Classification Reframing [38.84431954053434]
Few-shot and zero-shot text classification aim to recognize samples from novel classes with limited labeled samples or no labeled samples at all.
We propose a simple and effective strategy for few-shot and zero-shot text classification.
arXiv Detail & Related papers (2024-05-06T15:38:32Z) - Calibrating Higher-Order Statistics for Few-Shot Class-Incremental Learning with Pre-trained Vision Transformers [12.590571371294729]
Few-shot class-incremental learning (FSCIL) aims to adapt the model to new classes from very few data (5 samples) without forgetting the previously learned classes.
Recent works in many-shot CIL (MSCIL) exploited pre-trained models to reduce forgetting and achieve better plasticity.
We use ViT models pre-trained on large-scale datasets for few-shot settings, which face the critical issue of low plasticity.
arXiv Detail & Related papers (2024-04-09T21:12:31Z) - Less is More: On the Feature Redundancy of Pretrained Models When
Transferring to Few-shot Tasks [120.23328563831704]
Transferring a pretrained model to a downstream task can be as easy as conducting linear probing with target data.
We show that, for linear probing, the pretrained features can be extremely redundant when the downstream data is scarce.
arXiv Detail & Related papers (2023-10-05T19:00:49Z) - Generalization Bounds for Few-Shot Transfer Learning with Pretrained
Classifiers [26.844410679685424]
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes.
We show that the few-shot error of the learned feature map on new classes is small in case of class-feature-variability collapse.
arXiv Detail & Related papers (2022-12-23T18:46:05Z) - GDC- Generalized Distribution Calibration for Few-Shot Learning [5.076419064097734]
Few-shot learning is an important problem in machine learning, as large labelled datasets take considerable time and effort to assemble.
Most few-shot learning algorithms suffer from limitations such as requiring the design of sophisticated models and loss functions, which hampers interpretability.
We propose a Generalized sampling method that learns to estimate few-shot distributions for classification as weighted random variables of all large classes (a minimal sketch of this calibration idea appears after this list).
arXiv Detail & Related papers (2022-04-11T16:22:53Z) - Multi-Class Classification from Single-Class Data with Confidences [90.48669386745361]
We propose an empirical risk minimization framework that is loss-/model-/optimizer-independent.
We show that our method can be Bayes-consistent with a simple modification even if the provided confidences are highly noisy.
arXiv Detail & Related papers (2021-06-16T15:38:13Z) - Are Fewer Labels Possible for Few-shot Learning? [81.89996465197392]
Few-shot learning is challenging due to its very limited data and labels.
Recent studies in big transfer (BiT) show that few-shot learning can greatly benefit from pretraining on a large-scale labeled dataset in a different domain.
We propose eigen-finetuning to enable fewer-shot learning by leveraging the co-evolution of clustering and eigen-samples in the finetuning.
arXiv Detail & Related papers (2020-12-10T18:59:29Z) - Few-Shot Image Classification via Contrastive Self-Supervised Learning [5.878021051195956]
We propose a new paradigm of unsupervised few-shot learning to address the deficiencies of existing approaches.
We solve few-shot tasks in two phases, the first of which meta-trains a transferable feature extractor via contrastive self-supervised learning.
Our method achieves state-of-the-art performance on a variety of established few-shot tasks on standard few-shot visual classification datasets.
arXiv Detail & Related papers (2020-08-23T02:24:31Z) - Cooperative Bi-path Metric for Few-shot Learning [50.98891758059389]
We make two contributions to investigate the few-shot classification problem.
We report a simple and effective baseline trained on base classes in the manner of traditional supervised learning.
We propose a cooperative bi-path metric for classification, which leverages the correlations between base classes and novel classes to further improve the accuracy.
arXiv Detail & Related papers (2020-08-10T11:28:52Z) - M2m: Imbalanced Classification via Major-to-minor Translation [79.09018382489506]
In most real-world scenarios, labeled training datasets are highly class-imbalanced, and deep neural networks trained on them generalize poorly under a balanced testing criterion.
In this paper, we explore a novel yet simple way to alleviate this issue by augmenting less-frequent classes via translating samples from more-frequent classes.
Our experimental results on a variety of class-imbalanced datasets show that the proposed method improves the generalization on minority classes significantly compared to other existing re-sampling or re-weighting methods.
arXiv Detail & Related papers (2020-04-01T13:21:17Z)
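As referenced in the Generalized Distribution Calibration entry above, a minimal sketch of the calibration idea (estimating a few-shot class distribution as a weighted combination of all base-class statistics) might look as follows; the cosine-similarity weighting, softmax temperature, and Gaussian sampling are illustrative assumptions rather than that paper's exact formulation.

```python
# Hedged sketch of distribution calibration for a few-shot class: blend the
# statistics of all large (base) classes, weighted by similarity, then sample.
import numpy as np


def calibrate_and_sample(few_feats, base_means, base_vars, n_generate=200, tau=10.0, seed=0):
    """few_feats: (k, d) features of the few-shot class; base_means/base_vars: (C, d)
    per-class mean and per-dimension variance of the base classes."""
    rng = np.random.default_rng(seed)
    proto = few_feats.mean(axis=0)
    # cosine similarity between the few-shot prototype and every base-class mean
    sims = base_means @ proto / (
        np.linalg.norm(base_means, axis=1) * np.linalg.norm(proto) + 1e-8
    )
    weights = np.exp(tau * sims)
    weights /= weights.sum()                 # softmax over base classes
    calib_mean = weights @ base_means        # weighted combination of base means
    calib_var = weights @ base_vars          # weighted combination of base variances
    synthetic = rng.normal(calib_mean, np.sqrt(calib_var), size=(n_generate, len(proto)))
    return np.vstack([few_feats, synthetic])
```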
This list is automatically generated from the titles and abstracts of the papers in this site.