From Zero-Shot to Few-Shot Learning: A Step of Embedding-Aware
Generative Models
- URL: http://arxiv.org/abs/2302.04060v2
- Date: Thu, 9 Feb 2023 11:31:21 GMT
- Authors: Liangjun Feng, Jiancheng Zhao, Chunhui Zhao
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Embedding-aware generative model (EAGM) addresses the data insufficiency
problem for zero-shot learning (ZSL) by constructing a generator between
semantic and visual embedding spaces. Thanks to the predefined benchmark and
protocols, the number of proposed EAGMs for ZSL is increasing rapidly. We argue
that it is time to take a step back and reconsider the embedding-aware
generative paradigm. The purpose of this paper is three-fold. First, given the
fact that the current embedding features in benchmark datasets are somehow
out-of-date, we improve the performance of EAGMs for ZSL remarkably with
embarrassingly simple modifications to the embedding features. This is an
important contribution, since the results reveal that the embedding of EAGMs
deserves more attention. Second, we compare and analyze a significant number of
EAGMs in depth. Based on five benchmark datasets, we update the
state-of-the-art results for ZSL and give a strong baseline for few-shot
learning (FSL), including the classic unseen-class few-shot learning (UFSL) and
the more challenging seen-class few-shot learning (SFSL). Finally, a
comprehensive generative model repository, namely, generative any-shot learning
(GASL) repository, is provided, which contains the models, features,
parameters, and settings of EAGMs for ZSL and FSL. Any results in this paper
can be readily reproduced with only one command line based on GASL.
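The core idea of the embedding-aware generative paradigm is a generator mapping from the semantic (attribute) embedding space to the visual feature space, trained on seen classes and then used to synthesize features for unseen classes so an ordinary classifier can be trained on them. The following is a minimal numpy sketch of that idea on toy synthetic data, not code from the paper or the GASL repository: it stands in a simple ridge regression for the generator (real EAGMs use GAN or VAE generators on real visual features), synthesizes one prototype per unseen class, and classifies unseen-class samples by nearest prototype.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 5 seen classes, 2 unseen classes,
# 10-d semantic (attribute) space, 20-d visual feature space.
n_seen, n_unseen, d_sem, d_vis = 5, 2, 10, 20
A_seen = rng.normal(size=(n_seen, d_sem))      # seen-class attributes
A_unseen = rng.normal(size=(n_unseen, d_sem))  # unseen-class attributes

# Ground-truth linear map, used only to simulate visual features.
W_true = rng.normal(size=(d_sem, d_vis))

def sample_features(attrs, n_per_class):
    """Simulate noisy visual features for each class's attribute vector."""
    feats, labels = [], []
    for c, a in enumerate(attrs):
        feats.append(a @ W_true + 0.1 * rng.normal(size=(n_per_class, d_vis)))
        labels += [c] * n_per_class
    return np.vstack(feats), np.array(labels)

X_seen, y_seen = sample_features(A_seen, 30)

# "Generator": ridge regression from semantic to visual space,
# fit only on seen-class data (a linear stand-in for a GAN/VAE generator).
A_rep = A_seen[y_seen]  # per-sample attribute vectors
W = np.linalg.solve(A_rep.T @ A_rep + 1e-3 * np.eye(d_sem), A_rep.T @ X_seen)

# Synthesize visual prototypes for the unseen classes and classify
# unseen-class test samples by nearest synthesized prototype.
protos = A_unseen @ W
X_test, y_test = sample_features(A_unseen, 20)
dists = ((X_test[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
pred = np.argmin(dists, axis=1)
acc = (pred == y_test).mean()
print(f"unseen-class accuracy: {acc:.2f}")
```

Because the toy data is nearly linear, the regression "generator" recovers the semantic-to-visual map well; the paper's point is that in the GAN/VAE case both the generator architecture and, crucially, the choice of embedding features drive the final accuracy.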
Related papers
- Data-Free Generalized Zero-Shot Learning [45.86614536578522]
We propose a generic framework for data-free zero-shot learning (DFZSL).
Our framework has been evaluated on five commonly used benchmarks for generalized ZSL, as well as 11 benchmarks for the base-to-new ZSL.
arXiv Detail & Related papers (2024-01-28T13:26:47Z) - LETS-GZSL: A Latent Embedding Model for Time Series Generalized Zero
Shot Learning [1.4665304971699262]
We propose a Latent Embedding for Time Series - GZSL (LETS-GZSL) model that can solve the problem of GZSL for time series classification (TSC).
Our framework is able to achieve a harmonic mean value of at least 55% on most datasets except when the number of unseen classes is greater than 3.
arXiv Detail & Related papers (2022-07-25T09:31:22Z) - Zero-Shot Logit Adjustment [89.68803484284408]
Generalized Zero-Shot Learning (GZSL) is a semantic-descriptor-based learning technique.
In this paper, we observe that existing generation-based techniques focus on enhancing the generator's effect while neglecting improvement of the classifier, and propose a new generation-based technique to address this.
Our experiments demonstrate that the proposed technique achieves state-of-the-art when combined with the basic generator, and it can improve various generative zero-shot learning frameworks.
arXiv Detail & Related papers (2022-04-25T17:54:55Z) - A Strong Baseline for Semi-Supervised Incremental Few-Shot Learning [54.617688468341704]
Few-shot learning aims to learn models that generalize to novel classes with limited training samples.
We propose a novel paradigm containing two parts: (1) a well-designed meta-training algorithm for mitigating ambiguity between base and novel classes caused by unreliable pseudo labels and (2) a model adaptation mechanism to learn discriminative features for novel classes while preserving base knowledge using few labeled and all the unlabeled data.
arXiv Detail & Related papers (2021-10-21T13:25:52Z) - Generative Zero-Shot Learning for Semantic Segmentation of 3D Point
Cloud [79.99653758293277]
We present the first generative approach for both Zero-Shot Learning (ZSL) and Generalized ZSL (GZSL) on 3D data.
We show that it reaches or outperforms the state of the art on ModelNet40 classification for both inductive ZSL and inductive GZSL.
Our experiments show that our method outperforms strong baselines, which we additionally propose for this task.
arXiv Detail & Related papers (2021-08-13T13:29:27Z) - Attribute-Modulated Generative Meta Learning for Zero-Shot
Classification [52.64680991682722]
We present the Attribute-Modulated generAtive meta-model for Zero-shot learning (AMAZ).
Our model consists of an attribute-aware modulation network and an attribute-augmented generative network.
Our empirical evaluations show that AMAZ improves state-of-the-art methods by 3.8% and 5.1% in ZSL and generalized ZSL settings, respectively.
arXiv Detail & Related papers (2021-04-22T04:16:43Z) - Contrastive Embedding for Generalized Zero-Shot Learning [22.050109158293402]
Generalized zero-shot learning (GZSL) aims to recognize objects from both seen and unseen classes.
Recent feature generation methods learn a generative model that can synthesize the missing visual features of unseen classes.
We propose to integrate the generation model with the embedding model, yielding a hybrid GZSL framework.
arXiv Detail & Related papers (2021-03-30T08:54:03Z) - End-to-end Generative Zero-shot Learning via Few-shot Learning [76.9964261884635]
State-of-the-art approaches to Zero-Shot Learning (ZSL) train generative nets to synthesize examples conditioned on the provided metadata.
We introduce an end-to-end generative ZSL framework that uses such an approach as a backbone and feeds its synthesized output to a Few-Shot Learning algorithm.
arXiv Detail & Related papers (2021-02-08T17:35:37Z) - Generative Replay-based Continual Zero-Shot Learning [7.909034037183046]
We develop a generative replay-based continual ZSL (GRCZSL) method.
The proposed method enables traditional ZSL to learn from streaming data and acquire new knowledge without forgetting previous tasks' experience.
The proposed GRCZSL method is developed for a single-head setting of continual learning, simulating a real-world problem setting.
arXiv Detail & Related papers (2021-01-22T00:03:34Z) - Information Bottleneck Constrained Latent Bidirectional Embedding for
Zero-Shot Learning [59.58381904522967]
We propose a novel embedding based generative model with a tight visual-semantic coupling constraint.
We learn a unified latent space that calibrates the embedded parametric distributions of both visual and semantic spaces.
Our method can be easily extended to transductive ZSL setting by generating labels for unseen images.
arXiv Detail & Related papers (2020-09-16T03:54:12Z) - Leveraging Seen and Unseen Semantic Relationships for Generative
Zero-Shot Learning [14.277015352910674]
We propose a generative model that explicitly performs knowledge transfer by incorporating a novel Semantic Regularized Loss (SR-Loss).
Experiments on seven benchmark datasets demonstrate the superiority of the LsrGAN compared to previous state-of-the-art approaches.
arXiv Detail & Related papers (2020-07-19T01:25:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.