Invertible Zero-Shot Recognition Flows
- URL: http://arxiv.org/abs/2007.04873v1
- Date: Thu, 9 Jul 2020 15:21:28 GMT
- Title: Invertible Zero-Shot Recognition Flows
- Authors: Yuming Shen, Jie Qin, Lei Huang
- Abstract summary: This work incorporates a new family of generative models (i.e., flow-based models) into Zero-Shot Learning (ZSL).
The proposed Invertible Zero-shot Flow (IZF) learns factorized data embeddings with the forward pass of an invertible flow network, while the reverse pass generates data samples.
Experiments on widely-adopted ZSL benchmarks demonstrate the significant performance gain of IZF over existing methods.
- Score: 42.839333265321905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep generative models have been successfully applied to Zero-Shot Learning
(ZSL) recently. However, the underlying drawbacks of GANs and VAEs (e.g., the
hardness of training with ZSL-oriented regularizers and the limited generation
quality) hinder the existing generative ZSL models from fully bypassing the
seen-unseen bias. To tackle the above limitations, for the first time, this
work incorporates a new family of generative models (i.e., flow-based models)
into ZSL. The proposed Invertible Zero-shot Flow (IZF) learns factorized data
embeddings (i.e., the semantic factors and the non-semantic ones) with the
forward pass of an invertible flow network, while the reverse pass generates
data samples. This procedure theoretically extends conventional generative
flows to a factorized conditional scheme. To explicitly solve the bias problem,
our model enlarges the seen-unseen distributional discrepancy via a
negative-sample-based distance measure. Notably, IZF works flexibly with either a
naive Bayesian classifier or a held-out trainable one for zero-shot
recognition. Experiments on widely-adopted ZSL benchmarks demonstrate the
significant performance gain of IZF over existing methods, in both classic and
generalized settings.
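The forward/reverse factorization described in the abstract can be made concrete with a small coupling-flow sketch. The following is a minimal PyTorch illustration, not the authors' IZF architecture: the RealNVP-style coupling layers, the layer count, the 2048-d visual features, and the 85-d class attributes are all assumptions, and the ZSL-specific losses (semantic alignment, negative-sample discrepancy) are omitted.

```python
# Minimal sketch of a factorized conditional flow in the spirit of IZF.
# NOT the authors' implementation: coupling design, dimensions, and the
# absence of ZSL losses are illustrative simplifications.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling layer: half the dims condition the other half."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                       # keep scales bounded
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], dim=1), s.sum(dim=1)

    def inverse(self, z):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(z1).chunk(2, dim=1)
        s = torch.tanh(s)
        x2 = (z2 - t) * torch.exp(-s)
        return torch.cat([z1, x2], dim=1)

class FactorizedFlow(nn.Module):
    """Forward: visual feature -> [semantic factors | non-semantic factors].
    Inverse: [class attributes | Gaussian noise] -> synthetic visual feature."""
    def __init__(self, feat_dim, attr_dim, n_layers=4):
        super().__init__()
        self.attr_dim = attr_dim
        self.layers = nn.ModuleList(AffineCoupling(feat_dim) for _ in range(n_layers))

    def forward(self, x):
        logdet = x.new_zeros(x.size(0))
        for layer in self.layers:
            x, ld = layer(x)
            x = x.flip(dims=[1])                # permutation so both halves get updated
            logdet = logdet + ld
        return x[:, :self.attr_dim], x[:, self.attr_dim:], logdet

    def generate(self, attributes, noise):
        z = torch.cat([attributes, noise], dim=1)
        for layer in reversed(self.layers):
            z = z.flip(dims=[1])
            z = layer.inverse(z)
        return z

# Hypothetical sizes: 2048-d visual features, 85-d class attributes.
flow = FactorizedFlow(feat_dim=2048, attr_dim=85)
x = torch.randn(8, 2048)
sem, non_sem, logdet = flow(x)                  # forward pass: factorized embedding
x_hat = flow.generate(sem, non_sem)             # reverse pass: reconstruction
print(torch.allclose(x, x_hat, atol=1e-3))      # should print True (invertible up to float error)
```

In an actual ZSL setting, the semantic factors would additionally be tied to the class attributes (e.g., by a regression loss), and the reverse pass would be fed unseen-class attributes plus noise to synthesize features for unseen classes.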
Related papers
- Zero-Shot Logit Adjustment [89.68803484284408]
Generalized Zero-Shot Learning (GZSL) is a semantic-descriptor-based learning technique.
In this paper, we propose a new generation-based technique to enhance the generator's effect while neglecting the improvement of the classifier.
Our experiments demonstrate that the proposed technique achieves state-of-the-art when combined with the basic generator, and it can improve various generative zero-shot learning frameworks.
arXiv Detail & Related papers (2022-04-25T17:54:55Z)
- FREE: Feature Refinement for Generalized Zero-Shot Learning [86.41074134041394]
Generalized zero-shot learning (GZSL) has achieved significant progress, with many efforts dedicated to overcoming the problems of visual-semantic domain gap and seen-unseen bias.
Most existing methods directly use feature extraction models trained on ImageNet alone, ignoring the cross-dataset bias between ImageNet and GZSL benchmarks.
We propose a simple yet effective GZSL method, termed feature refinement for generalized zero-shot learning (FREE), to tackle the above problem.
arXiv Detail & Related papers (2021-07-29T08:11:01Z)
- Meta-Learned Attribute Self-Gating for Continual Generalized Zero-Shot Learning [82.07273754143547]
We propose a meta-continual zero-shot learning (MCZSL) approach to generalizing a model to categories unseen during training.
By pairing self-gating of attributes and scaled class normalization with meta-learning-based training, we outperform state-of-the-art results.
arXiv Detail & Related papers (2021-02-23T18:36:14Z)
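The MCZSL entry above names two mechanisms, self-gating of attributes and scaled class normalization. A hedged sketch of what such a module could look like follows; the gating network, the projection, and the learnable scale are illustrative guesses, and the paper's meta-learning training loop is not shown.

```python
# Hedged sketch of "self-gating of attributes" with "scaled class normalization";
# the exact formulation in the MCZSL paper may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAttributeEmbedding(nn.Module):
    def __init__(self, attr_dim, embed_dim):
        super().__init__()
        self.gate = nn.Linear(attr_dim, attr_dim)      # attributes gate themselves
        self.proj = nn.Linear(attr_dim, embed_dim)
        self.scale = nn.Parameter(torch.tensor(10.0))  # learnable temperature

    def forward(self, attrs):
        gated = attrs * torch.sigmoid(self.gate(attrs))      # self-gating
        class_emb = F.normalize(self.proj(gated), dim=-1)    # class normalization
        return self.scale * class_emb                        # scaled

# Usage: score image features against gated class embeddings (sizes are hypothetical).
emb = GatedAttributeEmbedding(attr_dim=85, embed_dim=2048)
class_attrs = torch.rand(50, 85)                 # 50 hypothetical classes
img_feats = F.normalize(torch.randn(8, 2048), dim=-1)
logits = img_feats @ emb(class_attrs).t()        # (8, 50) compatibility scores
```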
- Generative Replay-based Continual Zero-Shot Learning [7.909034037183046]
We develop a generative replay-based continual ZSL (GRCZSL) method.
The proposed method enables traditional ZSL to learn from streaming data and acquire new knowledge without forgetting the previous tasks' experience.
The proposed GRCZSL method is developed for a single-head setting of continual learning, simulating a real-world problem setting.
arXiv Detail & Related papers (2021-01-22T00:03:34Z)
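Generative replay, named in the GRCZSL entry above, has a well-known basic form: when a new task arrives, features of previously seen classes are re-synthesized from their attributes and mixed into the current batch so the classifier does not forget them. A rough sketch follows; the `generator(attrs, noise)` interface, its `noise_dim` attribute, and the plain cross-entropy objective are assumptions, not the GRCZSL specifics.

```python
# Rough sketch of generative replay for continual ZSL (interfaces are assumed).
import torch
import torch.nn.functional as F

def train_task(model, generator, optimizer, task_loader, replayed_classes):
    """One continual-learning task: real data for the current task plus
    generator-replayed features for classes seen in earlier tasks.
    `replayed_classes` is a list of (class_id, attribute_tensor) pairs."""
    for real_x, real_y in task_loader:
        x, y = real_x, real_y
        if replayed_classes:                              # empty for the very first task
            attrs = torch.stack([a for _, a in replayed_classes])
            noise = torch.randn(len(replayed_classes), generator.noise_dim)  # assumed attribute
            replay_x = generator(attrs, noise).detach()   # synthetic old-class features
            replay_y = torch.tensor([c for c, _ in replayed_classes])
            x = torch.cat([x, replay_x])
            y = torch.cat([y, replay_y])
        loss = F.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```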
- Information Bottleneck Constrained Latent Bidirectional Embedding for Zero-Shot Learning [59.58381904522967]
We propose a novel embedding-based generative model with a tight visual-semantic coupling constraint.
We learn a unified latent space that calibrates the embedded parametric distributions of both visual and semantic spaces.
Our method can be easily extended to transductive ZSL setting by generating labels for unseen images.
arXiv Detail & Related papers (2020-09-16T03:54:12Z)
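The entry above describes calibrating the visual and semantic distributions in one latent space. A loose sketch of that idea, using two Gaussian encoders and a symmetric KL alignment term, is given below; the encoder shapes and the specific loss are illustrative, and the paper's information-bottleneck constraint is not reproduced.

```python
# Hedged sketch of aligning visual and semantic encoders in a shared latent space;
# the paper's actual (information-bottleneck constrained) objective likely differs.
import torch
import torch.nn as nn
import torch.distributions as D

class GaussianEncoder(nn.Module):
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, latent_dim)
        self.logvar = nn.Linear(in_dim, latent_dim)

    def forward(self, x):
        return D.Normal(self.mu(x), torch.exp(0.5 * self.logvar(x)))

vis_enc, sem_enc = GaussianEncoder(2048, 64), GaussianEncoder(85, 64)
x, a = torch.randn(8, 2048), torch.rand(8, 85)        # paired visual features / attributes
p, q = vis_enc(x), sem_enc(a)
align_loss = (D.kl_divergence(p, q) + D.kl_divergence(q, p)).sum(-1).mean()
```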
- Generalized Zero-Shot Learning via VAE-Conditioned Generative Flow [83.27681781274406]
Generalized zero-shot learning aims to recognize both seen and unseen classes by transferring knowledge from semantic descriptions to visual representations.
Recent generative methods formulate GZSL as a missing data problem and mainly adopt GANs or VAEs to generate visual features for unseen classes.
We propose a conditional version of generative flows for GZSL, i.e., VAE-Conditioned Generative Flow (VAE-cFlow).
arXiv Detail & Related papers (2020-09-01T09:12:31Z)
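One way to read the VAE-cFlow combination above (an assumption, not the paper's exact formulation) is that a VAE encoder on class semantics supplies a class-specific Gaussian that serves as the flow's base density over visual features. A sketch of the resulting conditional likelihood follows; `flow` is assumed to be any invertible network returning (z, log|det J|), such as the coupling layers sketched after the abstract.

```python
# Sketch of a VAE-conditioned base density for a generative flow (illustrative only).
import torch
import torch.nn as nn
import torch.distributions as D

class SemanticVAEEncoder(nn.Module):
    """Maps a class attribute vector to the parameters of its latent Gaussian."""
    def __init__(self, attr_dim, latent_dim):
        super().__init__()
        self.mu = nn.Linear(attr_dim, latent_dim)
        self.logvar = nn.Linear(attr_dim, latent_dim)

    def forward(self, a):
        return self.mu(a), torch.exp(0.5 * self.logvar(a))

def conditional_nll(flow, encoder, x, attrs):
    """-log p(x | attrs) = -[ log N(f(x); mu(a), sigma(a)) + log|det J_f(x)| ].
    `flow(x)` is assumed to return (z, log_det); `encoder` is the sketch above."""
    mu, sigma = encoder(attrs)
    z, log_det = flow(x)                              # flow maps features to latent z
    log_pz = D.Normal(mu, sigma).log_prob(z).sum(dim=1)
    return -(log_pz + log_det)
```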
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.