Towards Zero-Shot Learning with Fewer Seen Class Examples
- URL: http://arxiv.org/abs/2011.07279v1
- Date: Sat, 14 Nov 2020 11:58:35 GMT
- Title: Towards Zero-Shot Learning with Fewer Seen Class Examples
- Authors: Vinay Kumar Verma, Ashish Mishra, Anubha Pandey, Hema A. Murthy and
Piyush Rai
- Abstract summary: We present a meta-learning based generative model for zero-shot learning (ZSL).
This setup contrasts with the conventional ZSL approaches, where training typically assumes the availability of a sufficiently large number of training examples from each of the seen classes.
We conduct extensive experiments and ablation studies on four benchmark datasets of ZSL and observe that the proposed model outperforms state-of-the-art approaches by a significant margin when the number of examples per seen class is very small.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a meta-learning based generative model for zero-shot
learning (ZSL) for a challenging setting in which the number of training
examples from each \emph{seen} class is very small. This setup contrasts with the conventional
ZSL approaches, where training typically assumes the availability of a
sufficiently large number of training examples from each of the seen classes.
The proposed approach leverages meta-learning to train a deep generative model
that integrates variational autoencoder and generative adversarial networks. We
propose a novel task distribution where meta-train and meta-validation classes
are disjoint to simulate the ZSL behaviour in training. Once trained, the model
can generate synthetic examples from seen and unseen classes. Synthesized
samples can then be used to train the ZSL framework in a supervised manner. The
meta-learner enables our model to generate high-fidelity samples using only a
small number of training examples from seen classes. We conduct extensive
experiments and ablation studies on four benchmark datasets of ZSL and observe
that the proposed model outperforms state-of-the-art approaches by a
significant margin when the number of examples per seen class is very small.
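The novel task distribution described above, in which meta-train and meta-validation classes are kept disjoint to simulate the seen/unseen split of ZSL, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name and episode sizes are illustrative assumptions.

```python
import random

def make_episode(class_ids, n_meta_train, n_meta_val, seed=None):
    """Sample one meta-learning episode whose meta-train and
    meta-validation classes are disjoint, mimicking at training time
    the split between seen and unseen classes faced at ZSL test time.
    (Illustrative sketch, not the paper's actual code.)"""
    rng = random.Random(seed)
    chosen = rng.sample(class_ids, n_meta_train + n_meta_val)
    meta_train = chosen[:n_meta_train]
    meta_val = chosen[n_meta_train:]
    # Disjoint by construction: each sampled class goes to exactly one side.
    assert not set(meta_train) & set(meta_val)
    return meta_train, meta_val

# Example: from 20 seen classes, draw an episode with 5 meta-train
# and 3 meta-validation classes.
seen_classes = list(range(20))
mt, mv = make_episode(seen_classes, 5, 3, seed=0)
```

Training the generative model on many such episodes exposes it to the "generalize to held-out classes" objective repeatedly, which is what lets it synthesize plausible features for genuinely unseen classes after training.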
Related papers
- Meta-Learned Attribute Self-Interaction Network for Continual and
Generalized Zero-Shot Learning [46.6282595346048]
Zero-shot learning (ZSL) is a promising approach to generalizing a model to categories unseen during training.
We propose a Meta-learned Attribute self-Interaction Network (MAIN) for continual ZSL.
By pairing attribute self-interaction trained using meta-learning with inverse regularization of the attribute encoder, we are able to outperform state-of-the-art results without leveraging the unseen class attributes.
arXiv Detail & Related papers (2023-12-02T16:23:01Z)
- Efficient Gaussian Process Model on Class-Imbalanced Datasets for
Generalized Zero-Shot Learning [37.00463358780726]
We propose a Neural Network model that learns a latent feature embedding and a Gaussian Process (GP) regression model that predicts latent feature prototypes of unseen classes.
Our model is trained efficiently with a simple training strategy that mitigates the impact of class-imbalanced training data.
arXiv Detail & Related papers (2022-10-11T04:57:20Z)
- An Iterative Co-Training Transductive Framework for Zero Shot Learning [24.401200814880124]
We introduce an iterative co-training framework which contains two different base ZSL models and an exchanging module.
At each iteration, the two different ZSL models are co-trained to separately predict pseudo labels for the unseen-class samples.
Our framework can gradually boost the ZSL performance by fully exploiting the potential complementarity of the two models' classification capabilities.
arXiv Detail & Related papers (2022-03-30T04:08:44Z)
- Semantics-driven Attentive Few-shot Learning over Clean and Noisy
Samples [0.0]
We aim to train meta-learner models that can leverage prior semantic knowledge about novel classes to guide the classifier synthesis process.
In particular, we propose semantically-conditioned feature attention and sample attention mechanisms that estimate the importance of representation dimensions and training instances.
arXiv Detail & Related papers (2022-01-09T16:16:23Z)
- Attribute-Modulated Generative Meta Learning for Zero-Shot
Classification [52.64680991682722]
We present the Attribute-Modulated generAtive meta-model for Zero-shot learning (AMAZ).
Our model consists of an attribute-aware modulation network and an attribute-augmented generative network.
Our empirical evaluations show that AMAZ improves state-of-the-art methods by 3.8% and 5.1% in ZSL and generalized ZSL settings, respectively.
arXiv Detail & Related papers (2021-04-22T04:16:43Z)
- Meta-Learned Attribute Self-Gating for Continual Generalized Zero-Shot
Learning [82.07273754143547]
We propose a meta-continual zero-shot learning (MCZSL) approach to generalizing a model to categories unseen during training.
By pairing self-gating of attributes and scaled class normalization with meta-learning based training, we are able to outperform state-of-the-art results.
arXiv Detail & Related papers (2021-02-23T18:36:14Z)
- Self-Supervised Learning of Graph Neural Networks: A Unified Review [50.71341657322391]
Self-supervised learning is emerging as a new paradigm for making use of large amounts of unlabeled samples.
We provide a unified review of different ways of training graph neural networks (GNNs) using SSL.
Our treatment of SSL methods for GNNs sheds light on the similarities and differences of various methods, setting the stage for developing new methods and algorithms.
arXiv Detail & Related papers (2021-02-22T03:43:45Z)
- On Data-Augmentation and Consistency-Based Semi-Supervised Learning [77.57285768500225]
Recently proposed consistency-based Semi-Supervised Learning (SSL) methods have advanced the state of the art in several SSL tasks.
Despite these advances, the understanding of these methods is still relatively limited.
arXiv Detail & Related papers (2021-01-18T10:12:31Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and
Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.