Few-shot learning via tensor hallucination
- URL: http://arxiv.org/abs/2104.09467v1
- Date: Mon, 19 Apr 2021 17:30:33 GMT
- Title: Few-shot learning via tensor hallucination
- Authors: Michalis Lazarou, Yannis Avrithis, Tania Stathaki
- Abstract summary: Few-shot classification addresses the challenge of classifying examples given only limited labeled data.
We show that using a simple loss function is more than enough for training a feature generator in the few-shot setting.
Our method sets a new state of the art, outperforming more sophisticated few-shot data augmentation methods.
- Score: 17.381648488344222
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot classification addresses the challenge of classifying examples given
only limited labeled data. A powerful approach is to go beyond data
augmentation, towards data synthesis. However, most data
augmentation/synthesis methods for few-shot classification are overly complex
and sophisticated, e.g. training a WGAN with multiple regularizers or training
a network to transfer latent diversities from known to novel classes. We make
two contributions, namely we show that: (1) using a simple loss function is
more than enough for training a feature generator in the few-shot setting; and
(2) learning to generate tensor features instead of vector features is
superior. Extensive experiments on miniImagenet, CUB and CIFAR-FS datasets show
that our method sets a new state of the art, outperforming more sophisticated
few-shot data augmentation methods.
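The two contributions can be illustrated with a toy sketch: a linear generator maps a class prototype plus noise to a tensor feature of shape (C, H, W) and is trained with a plain MSE loss. All names, shapes, and the linear generator itself are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

# Toy sketch (hypothetical names): a "hallucinator" that maps a class
# prototype plus noise to a tensor feature of shape (C, H, W), trained with
# a plain MSE loss -- illustrating the claim that a simple loss suffices.

C, H, W = 4, 3, 3          # tensor feature dimensions (not a pooled vector)
D = C * H * W
Z = 8                      # noise dimension
rng = np.random.default_rng(0)

# Stand-in "real" class features clustered around a class mean.
class_mean = 2.0 * rng.normal(size=D)
real_feats = class_mean + rng.normal(size=(32, D))
prototype = real_feats.mean(axis=0)              # class prototype

Wg = 0.01 * rng.normal(size=(D, D + Z))          # generator weights

def hallucinate(Wg, proto, z):
    """Generate one tensor feature from prototype + noise."""
    x = np.concatenate([proto, z])
    return (Wg @ x).reshape(C, H, W)

def mse_step(Wg, proto, target, z, lr=0.005):
    """One SGD step on the simple loss 0.5 * ||g(proto, z) - target||^2."""
    x = np.concatenate([proto, z])
    err = Wg @ x - target
    grad = np.outer(err, x)                      # dL/dW
    return Wg - lr * grad, 0.5 * float(err @ err)

losses = []
for _ in range(300):
    i = rng.integers(len(real_feats))
    z = rng.normal(size=Z)
    Wg, loss = mse_step(Wg, prototype, real_feats[i], z)
    losses.append(loss)

feat = hallucinate(Wg, prototype, rng.normal(size=Z))
print(feat.shape)          # (4, 3, 3): a tensor feature, not a vector
```

In this sketch the generated features keep their spatial layout (C, H, W), which is the point of generating tensor rather than vector features; the hallucinated samples would then augment the few labeled examples before fitting a classifier.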
Related papers
- Constructing Sample-to-Class Graph for Few-Shot Class-Incremental Learning [10.111587226277647]
Few-shot class-incremental learning (FSCIL) aims to build a machine learning model that can continually learn new concepts from a few data samples.
In this paper, we propose a Sample-to-Class (S2C) graph learning method for FSCIL.
arXiv Detail & Related papers (2023-10-31T08:38:14Z)
- ScoreMix: A Scalable Augmentation Strategy for Training GANs with Limited Data [93.06336507035486]
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available.
We present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks.
arXiv Detail & Related papers (2022-10-27T02:55:15Z)
- Few-Shot Non-Parametric Learning with Deep Latent Variable Model [50.746273235463754]
We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV)
NPC-LV is a learning framework for any dataset with abundant unlabeled data but very few labeled ones.
We show that NPC-LV outperforms supervised methods on all three datasets on image classification in the low-data regime.
arXiv Detail & Related papers (2022-06-23T09:35:03Z)
- Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference [74.80730361332711]
Few-shot learning is an important and topical problem in computer vision.
We show that a simple transformer-based pipeline yields surprisingly good performance on standard benchmarks.
arXiv Detail & Related papers (2022-04-15T02:55:58Z)
- Feature transforms for image data augmentation [74.12025519234153]
In image classification, many augmentation approaches utilize simple image manipulation algorithms.
In this work, we build ensembles on the data level by adding images generated by combining fourteen augmentation approaches.
Pretrained ResNet50 networks are finetuned on training sets that include images derived from each augmentation method.
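The data-level ensemble described here can be sketched as follows: one classifier per augmentation method, each fine-tuned on a training set extended by that method's images, with softmax outputs averaged at test time. The arrays below are stand-ins for the outputs of actual fine-tuned ResNet50 networks (illustrative only, not the paper's code).

```python
import numpy as np

# Hedged sketch of a data-level ensemble: average the softmax outputs of
# several models, each a stand-in for a network fine-tuned on a training set
# extended by one augmentation method.

def softmax(logits, axis=-1):
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
n_models, n_samples, n_classes = 3, 5, 4

# Stand-in per-model logits (in practice: each fine-tuned network's output).
logits = rng.normal(size=(n_models, n_samples, n_classes))

ensemble_probs = softmax(logits).mean(axis=0)   # average over the ensemble
preds = ensemble_probs.argmax(axis=1)           # one prediction per sample
print(preds.shape)
```

Averaging probabilities (rather than raw logits or hard votes) is one common choice for such ensembles; the paper's exact combination rule may differ.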
arXiv Detail & Related papers (2022-01-24T14:12:29Z)
- Ortho-Shot: Low Displacement Rank Regularization with Data Augmentation for Few-Shot Learning [23.465747123791772]
In few-shot classification, the primary goal is to learn representations that generalize well for novel classes.
We propose an efficient low displacement rank (LDR) regularization strategy termed Ortho-Shot.
arXiv Detail & Related papers (2021-10-18T14:58:36Z)
- Tensor feature hallucination for few-shot learning [17.381648488344222]
Few-shot classification addresses the challenge of classifying examples given limited supervision and limited data.
Previous works on synthetic data generation for few-shot classification focus on exploiting complex models.
We investigate how a simple and straightforward synthetic data generation method can be used effectively.
arXiv Detail & Related papers (2021-06-09T18:25:08Z)
- Few-Shot Learning with Intra-Class Knowledge Transfer [100.87659529592223]
We consider the few-shot classification task with an unbalanced dataset.
Recent works have proposed to solve this task by augmenting the training data of the few-shot classes using generative models.
We propose to leverage the intra-class knowledge from the neighbor many-shot classes with the intuition that neighbor classes share similar statistical information.
arXiv Detail & Related papers (2020-08-22T18:15:38Z)
- One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first-stage Matching-FCOS network and a second-stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)
- Improving Deep Hyperspectral Image Classification Performance with Spectral Unmixing [3.84448093764973]
We propose an abundance-based multi-HSI classification method.
First, we convert every HSI from the spectral domain to the abundance domain with a dataset-specific autoencoder.
Second, the abundance representations from multiple HSIs are collected to form an enlarged dataset.
arXiv Detail & Related papers (2020-04-01T17:14:05Z)
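The spectral-to-abundance conversion in the entry above can be sketched with classical linear unmixing: each pixel spectrum is expressed as a mixture of a few endmember spectra, and the mixing coefficients (abundances) become the new per-pixel features. A least-squares solve stands in for the paper's dataset-specific autoencoder; all names and sizes here are illustrative.

```python
import numpy as np

# Hedged sketch of abundance-domain conversion via linear unmixing: recover
# per-pixel abundances of a small set of endmember spectra. This substitutes
# a least-squares solve for the paper's autoencoder.

rng = np.random.default_rng(2)
n_bands, n_endmembers, n_pixels = 50, 4, 100

endmembers = rng.uniform(size=(n_bands, n_endmembers))       # spectral library
true_abund = rng.dirichlet(np.ones(n_endmembers), n_pixels)  # rows sum to 1
spectra = true_abund @ endmembers.T                          # linear mixing

# Recover abundances by least squares; clip to keep them nonnegative.
abund, *_ = np.linalg.lstsq(endmembers, spectra.T, rcond=None)
abund = np.clip(abund.T, 0.0, None)

print(abund.shape)   # (n_pixels, n_endmembers): abundance-domain features
```

With noiseless synthetic data the solve recovers the true abundances exactly; real HSI pixels would need a constrained solver (nonnegativity, sum-to-one) rather than a clip.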
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.