Leveraging Seen and Unseen Semantic Relationships for Generative
Zero-Shot Learning
- URL: http://arxiv.org/abs/2007.09549v1
- Date: Sun, 19 Jul 2020 01:25:53 GMT
- Title: Leveraging Seen and Unseen Semantic Relationships for Generative
Zero-Shot Learning
- Authors: Maunil R Vyas, Hemanth Venkateswara, Sethuraman Panchanathan
- Abstract summary: We propose a generative model that explicitly performs knowledge transfer by incorporating a novel Semantic Regularized Loss (SR-Loss).
Experiments on seven benchmark datasets demonstrate the superiority of the LsrGAN compared to previous state-of-the-art approaches.
- Score: 14.277015352910674
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Zero-shot learning (ZSL) addresses the unseen class recognition problem by
leveraging semantic information to transfer knowledge from seen classes to
unseen classes. Generative models synthesize the unseen visual features and
convert ZSL into a classical supervised learning problem. These generative
models are trained using the seen classes and are expected to implicitly
transfer the knowledge from seen to unseen classes. However, their performance
is stymied by overfitting, which leads to substandard performance on
Generalized Zero-Shot learning (GZSL). To address this concern, we propose the
novel LsrGAN, a generative model that Leverages the Semantic Relationship
between seen and unseen categories and explicitly performs knowledge transfer
by incorporating a novel Semantic Regularized Loss (SR-Loss). The SR-loss
guides the LsrGAN to generate visual features that mirror the semantic
relationships between seen and unseen classes. Experiments on seven benchmark
datasets, including the challenging Wikipedia text-based CUB and NABirds
splits, and the attribute-based AWA, CUB, and SUN, demonstrate the superiority of
the LsrGAN compared to previous state-of-the-art approaches under both ZSL and
GZSL. Code is available at https://github.com/Maunil/LsrGAN
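The core SR-Loss idea can be illustrated with a minimal sketch: penalize generated visual features whose pairwise cosine similarities drift too far from the cosine similarities of the class semantic embeddings. This is a simplified illustration, not the paper's exact formulation; the hinge tolerance `eps` and the toy data below are assumptions.

```python
import numpy as np

def cosine_sim_matrix(x):
    # Pairwise cosine similarities between row vectors.
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    xn = x / np.clip(norms, 1e-8, None)
    return xn @ xn.T

def semantic_regularized_loss(visual_feats, semantic_embs, eps=0.1):
    # Hinge-style penalty: visual-feature similarities may deviate from
    # semantic similarities by at most eps before incurring a cost.
    sim_v = cosine_sim_matrix(visual_feats)
    sim_s = cosine_sim_matrix(semantic_embs)
    gap = np.abs(sim_v - sim_s) - eps
    return np.mean(np.maximum(gap, 0.0) ** 2)

# Toy example: 4 classes with 16-dim semantic embeddings; a random linear
# map stands in for a generator that roughly preserves class structure.
rng = np.random.default_rng(0)
sem = rng.normal(size=(4, 16))
vis = sem @ rng.normal(size=(16, 32))
loss = semantic_regularized_loss(vis, sem)
print(loss)  # small non-negative value; 0 when similarities match within eps
```

In training, a term like this would be added to the GAN objective so the generator's outputs for unseen classes inherit the inter-class structure encoded in the semantic space.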
Related papers
- Prompting Language-Informed Distribution for Compositional Zero-Shot Learning [73.49852821602057]
Compositional zero-shot learning (CZSL) task aims to recognize unseen compositional visual concepts.
We propose a model that prompts the language-informed distribution, a.k.a. PLID, for the task.
Experimental results on MIT-States, UT-Zappos, and C-GQA datasets show the superior performance of the PLID to the prior arts.
arXiv Detail & Related papers (2023-05-23T18:00:22Z) - Disentangled Ontology Embedding for Zero-shot Learning [39.014714187825646]
Knowledge Graph (KG) and its variant of ontology have been widely used for knowledge representation, and have shown to be quite effective in augmenting Zero-shot Learning (ZSL).
Existing ZSL methods that utilize KGs all neglect the complexity of inter-class relationships represented in KGs.
In this paper, we focus on ontologies for augmenting ZSL, and propose to learn disentangled ontology embeddings guided by semantic properties.
We also contribute a new ZSL framework named DOZSL, which contains two new ZSL solutions based on generative models and graph propagation models.
arXiv Detail & Related papers (2022-06-08T08:29:30Z) - Zero-Shot Logit Adjustment [89.68803484284408]
Generalized Zero-Shot Learning (GZSL) is a semantic-descriptor-based learning task.
In this paper, we propose a new generation-based technique to enhance the generator's effect while neglecting the improvement of the classifier.
Our experiments demonstrate that the proposed technique achieves state-of-the-art when combined with the basic generator, and it can improve various generative zero-shot learning frameworks.
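The general logit-adjustment idea referenced above can be sketched generically: subtract scaled log class-priors from the logits so that classes over-represented in training (seen classes) no longer dominate prediction. This is a standard illustration of logit adjustment, not the specific technique of the cited paper; the priors, logits, and `tau` below are assumptions.

```python
import numpy as np

def adjust_logits(logits, class_priors, tau=1.0):
    # Penalize classes with large priors (abundant seen-class data)
    # so rare (unseen) classes are competitive at inference.
    return logits - tau * np.log(class_priors)

# Toy GZSL-style setup: 3 seen classes with abundant real data,
# 2 unseen classes covered only by synthesized features.
priors = np.array([0.3, 0.3, 0.3, 0.05, 0.05])
logits = np.array([2.0, 1.5, 1.8, 1.9, 1.7])
print(np.argmax(logits))                         # 0: biased toward a seen class
print(np.argmax(adjust_logits(logits, priors)))  # 3: shifts toward an unseen class
```

The adjustment is applied only at inference, leaving the generator and classifier training unchanged.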
arXiv Detail & Related papers (2022-04-25T17:54:55Z) - FREE: Feature Refinement for Generalized Zero-Shot Learning [86.41074134041394]
Generalized zero-shot learning (GZSL) has achieved significant progress, with many efforts dedicated to overcoming the problems of visual-semantic domain gap and seen-unseen bias.
Most existing methods directly use feature extraction models trained on ImageNet alone, ignoring the cross-dataset bias between ImageNet and GZSL benchmarks.
We propose a simple yet effective GZSL method, termed feature refinement for generalized zero-shot learning (FREE) to tackle the above problem.
arXiv Detail & Related papers (2021-07-29T08:11:01Z) - Zero-shot Learning with Class Description Regularization [10.739164530098755]
We introduce a novel form of regularization that encourages generative ZSL models to pay more attention to the description of each category.
Our empirical results demonstrate improvements over the performance of multiple state-of-the-art models on the task of generalized zero-shot recognition and classification.
arXiv Detail & Related papers (2021-06-30T14:56:15Z) - Attribute-Modulated Generative Meta Learning for Zero-Shot
Classification [52.64680991682722]
We present the Attribute-Modulated generAtive meta-model for Zero-shot learning (AMAZ).
Our model consists of an attribute-aware modulation network and an attribute-augmented generative network.
Our empirical evaluations show that AMAZ improves state-of-the-art methods by 3.8% and 5.1% in ZSL and generalized ZSL settings, respectively.
arXiv Detail & Related papers (2021-04-22T04:16:43Z) - OntoZSL: Ontology-enhanced Zero-shot Learning [19.87808305218359]
Key to implementing Zero-shot Learning (ZSL) is to leverage the prior knowledge of classes which builds the semantic relationship between classes.
In this paper, we explore richer and more competitive prior knowledge to model the inter-class relationship for ZSL.
To address the data imbalance between seen classes and unseen classes, we developed a generative ZSL framework with Generative Adversarial Networks (GANs).
arXiv Detail & Related papers (2021-02-15T04:39:58Z) - Cross Knowledge-based Generative Zero-Shot Learning Approach with
Taxonomy Regularization [5.280368849852332]
We develop a generative network-based ZSL approach equipped with the proposed Cross Knowledge Learning (CKL) scheme and Taxonomy Regularization (TR).
CKL enables more relevant semantic features to be trained for semantic-to-visual feature embedding in ZSL.
TR significantly improves the overlap with unseen images by guiding the generative network to produce more generalized visual features.
arXiv Detail & Related papers (2021-01-25T04:38:18Z) - Information Bottleneck Constrained Latent Bidirectional Embedding for
Zero-Shot Learning [59.58381904522967]
We propose a novel embedding based generative model with a tight visual-semantic coupling constraint.
We learn a unified latent space that calibrates the embedded parametric distributions of both visual and semantic spaces.
Our method can be easily extended to transductive ZSL setting by generating labels for unseen images.
arXiv Detail & Related papers (2020-09-16T03:54:12Z) - Generalized Zero-Shot Learning via VAE-Conditioned Generative Flow [83.27681781274406]
Generalized zero-shot learning aims to recognize both seen and unseen classes by transferring knowledge from semantic descriptions to visual representations.
Recent generative methods formulate GZSL as a missing data problem, which mainly adopts GANs or VAEs to generate visual features for unseen classes.
We propose a conditional version of generative flows for GZSL, i.e., VAE-Conditioned Generative Flow (VAE-cFlow).
arXiv Detail & Related papers (2020-09-01T09:12:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.