Task-Prior Conditional Variational Auto-Encoder for Few-Shot Image
Classification
- URL: http://arxiv.org/abs/2205.15014v1
- Date: Mon, 30 May 2022 11:57:57 GMT
- Title: Task-Prior Conditional Variational Auto-Encoder for Few-Shot Image
Classification
- Authors: Zaiyun Yang
- Abstract summary: We propose a Task-Prior Conditional Variational Auto-Encoder model named TP-VAE, conditioned on support shots and constrained by a task-level prior regularization.
Our method outperforms the state-of-the-art in a wide range of standard few-shot image classification scenarios.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transductive methods generally outperform inductive methods in few-shot
image classification scenarios. However, existing few-shot methods rely on a
latent assumption: that each class contains the same number of samples, which may
be unrealistic. To cope with cases where the query shots of each class are
nonuniform (i.e. nonuniform few-shot learning), we propose a Task-Prior
Conditional Variational Auto-Encoder model named TP-VAE, conditioned on support
shots and constrained by a task-level prior regularization. Our method obtains
high performance in the more challenging nonuniform few-shot scenarios.
Moreover, our method outperforms the state-of-the-art in a wide range of
standard few-shot image classification scenarios. In particular, 1-shot
accuracy improves by about 3%.
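The abstract's central idea, a conditional VAE whose latent posterior is regularized toward a task-level prior built from the support shots, can be sketched numerically. Everything below (the concatenation-based conditioning, the linear encoder, the form of the task prior, and the weight `lam`) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, support_mean, W_mu, W_logvar):
    # Condition on a support-shot summary by concatenation (an assumption;
    # the paper's exact conditioning mechanism may differ).
    h = np.concatenate([x, support_mean])
    return W_mu @ h, W_logvar @ h  # mean, log-variance of q(z | x, support)

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians.
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

# Toy dimensions: feature dim 4, latent dim 2, one query sample, 5 support shots.
d, k = 4, 2
support = rng.normal(size=(5, d))
support_mean = support.mean(axis=0)        # task-level summary of support shots
x = rng.normal(size=d)                     # a query sample
W_mu = rng.normal(size=(k, 2 * d)) * 0.1
W_logvar = rng.normal(size=(k, 2 * d)) * 0.1

mu_q, logvar_q = encode(x, support_mean, W_mu, W_logvar)

# Standard ELBO KL term against N(0, I) ...
kl_std = kl_diag_gaussians(mu_q, logvar_q, np.zeros(k), np.zeros(k))
# ... plus a task-prior regularizer pulling q(z | x, support) toward a prior
# centred on the task (here a projection of the support mean; the exact prior
# used by TP-VAE is an assumption).
mu_task = W_mu @ np.concatenate([support_mean, support_mean])
kl_task = kl_diag_gaussians(mu_q, logvar_q, mu_task, np.zeros(k))

lam = 0.1  # hypothetical weight of the task-level prior regularization
regularized_kl = kl_std + lam * kl_task
```

The full training loss would add a reconstruction term to this regularized KL; since the summary gives no decoder details, only the prior-regularization structure is sketched here.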
Related papers
- MOWA: Multiple-in-One Image Warping Model [65.73060159073644]
We propose a Multiple-in-One image warping model (named MOWA) in this work.
We mitigate the difficulty of multi-task learning by disentangling the motion estimation at both the region level and pixel level.
To our knowledge, this is the first work that solves multiple practical warping tasks in one single model.
arXiv Detail & Related papers (2024-04-16T16:50:35Z)
- Disambiguation of One-Shot Visual Classification Tasks: A Simplex-Based Approach [8.436437583394998]
We present a strategy which aims at detecting the presence of multiple objects in a given shot.
This strategy is based on identifying the corners of a simplex in a high dimensional space.
We show the ability of the proposed method to slightly, yet statistically significantly, improve accuracy in extreme settings.
arXiv Detail & Related papers (2023-01-16T11:37:05Z)
- A Simple Approach to Adversarial Robustness in Few-shot Image Classification [20.889464448762176]
We show that a simple transfer-learning based approach can be used to train adversarially robust few-shot classifiers.
We also present a method for novel classification tasks based on calibrating the centroid of the few-shot category towards the base classes.
arXiv Detail & Related papers (2022-04-11T22:46:41Z)
- One-Class Meta-Learning: Towards Generalizable Few-Shot Open-Set Classification [2.28438857884398]
We introduce two independent few-shot one-class classification methods: Meta Binary Cross-Entropy (Meta-BCE) and One-Class Meta-Learning (OCML).
Both methods can augment any existing few-shot learning method without requiring retraining to work in a few-shot multiclass open-set setting without degrading its closed-set performance.
They surpass the state-of-the-art methods in the few-shot multiclass open-set and few-shot one-class tasks.
arXiv Detail & Related papers (2021-09-14T17:52:51Z)
- Generalized and Incremental Few-Shot Learning by Explicit Learning and Calibration without Forgetting [86.56447683502951]
We propose a three-stage framework that explicitly and effectively addresses these challenges.
We evaluate the proposed framework on four challenging benchmark datasets for image and video few-shot classification.
arXiv Detail & Related papers (2021-08-18T14:21:43Z)
- A Hierarchical Transformation-Discriminating Generative Model for Few Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z)
- Few-Shot Image Classification via Contrastive Self-Supervised Learning [5.878021051195956]
We propose a new paradigm of unsupervised few-shot learning to repair the deficiencies.
We solve the few-shot tasks in two phases, beginning with meta-training a transferable feature extractor via contrastive self-supervised learning.
Our method achieves state-of-the-art performance on a variety of established few-shot tasks on the standard few-shot visual classification datasets.
arXiv Detail & Related papers (2020-08-23T02:24:31Z)
- Few-Shot Learning with Intra-Class Knowledge Transfer [100.87659529592223]
We consider the few-shot classification task with an unbalanced dataset.
Recent works have proposed to solve this task by augmenting the training data of the few-shot classes using generative models.
We propose to leverage the intra-class knowledge from the neighbor many-shot classes with the intuition that neighbor classes share similar statistical information.
arXiv Detail & Related papers (2020-08-22T18:15:38Z)
- Bayesian Few-Shot Classification with One-vs-Each Pólya-Gamma Augmented Gaussian Processes [7.6146285961466]
Few-shot classification (FSC) is an important step on the path toward human-like machine learning.
We propose a novel combination of Pólya-Gamma augmentation and the one-vs-each softmax approximation that allows us to efficiently marginalize over functions rather than model parameters.
We demonstrate improved accuracy and uncertainty quantification on both standard few-shot classification benchmarks and few-shot domain transfer tasks.
arXiv Detail & Related papers (2020-07-20T19:10:41Z)
- Diverse Image Generation via Self-Conditioned GANs [56.91974064348137]
We train a class-conditional GAN model without using manually annotated class labels.
Instead, our model is conditional on labels automatically derived from clustering in the discriminator's feature space.
Our clustering step automatically discovers diverse modes, and explicitly requires the generator to cover them.
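The clustering-as-conditioning idea in this summary can be sketched as follows. The feature dimensionality, cluster count, initialization, and the plain k-means procedure are all stand-in assumptions; the paper's actual clustering of the discriminator's feature space (which layer, how often clusters are refreshed) is not specified here:

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(feats, init_idx, n_iters=20):
    # Plain k-means on (stand-in) discriminator features.
    centroids = feats[np.asarray(init_idx)].astype(float)
    for _ in range(n_iters):
        # Squared distances from every feature vector to every centroid.
        d2 = ((feats[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for c in range(len(centroids)):
            members = feats[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return labels

# Fake "discriminator features" with two well-separated modes.
feats = np.concatenate([rng.normal(0.0, 0.1, size=(50, 8)),
                        rng.normal(3.0, 0.1, size=(50, 8))])
pseudo_labels = kmeans(feats, init_idx=[0, len(feats) - 1])
# These pseudo-labels replace manual class labels when conditioning the
# generator, and in the actual method would be refreshed as training proceeds.
```

The point of the sketch is only the data flow: features come from the discriminator, clustering assigns pseudo-labels, and the generator is conditioned on those labels instead of human annotations.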
arXiv Detail & Related papers (2020-06-18T17:56:03Z)
- Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
We show that our method provides a much more stable training phase across random restarts.
arXiv Detail & Related papers (2020-04-29T10:54:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.