Cross Knowledge-based Generative Zero-Shot Learning Approach with
Taxonomy Regularization
- URL: http://arxiv.org/abs/2101.09892v1
- Date: Mon, 25 Jan 2021 04:38:18 GMT
- Title: Cross Knowledge-based Generative Zero-Shot Learning Approach with
Taxonomy Regularization
- Authors: Cheng Xie, Hongxin Xiang, Ting Zeng, Yun Yang, Beibei Yu and Qing Liu
- Abstract summary: We develop a generative network-based ZSL approach equipped with the proposed Cross Knowledge Learning (CKL) scheme and Taxonomy Regularization (TR).
CKL enables more relevant semantic features to be trained for semantic-to-visual feature embedding in ZSL.
TR significantly improves the overlap with unseen images by making the visual features generated from the generative network more generalized.
- Score: 5.280368849852332
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although zero-shot learning (ZSL) can recognize new classes that
have never been seen before, it faces two fundamental challenges: the
cross-modality challenge and the cross-domain challenge. To alleviate these
problems, we develop a generative network-based ZSL
approach equipped with the proposed Cross Knowledge Learning (CKL) scheme and
Taxonomy Regularization (TR). In our approach, the semantic features are taken
as inputs, and the outputs are the visual features synthesized from the
corresponding semantic features. CKL enables more relevant semantic features to
be trained for the semantic-to-visual feature embedding in ZSL, while TR
improves the overlap with unseen images by making the visual features generated
from the generative network more generalized.
Extensive experiments on several benchmark datasets (i.e., AwA1, AwA2, CUB, NAB
and aPY) show that our approach is superior to state-of-the-art methods
in terms of ZSL image classification and retrieval.
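To make the described pipeline concrete, the following is a minimal sketch (not the authors' released code) of a conditional generator that synthesizes visual features from class semantic vectors, together with a hypothetical taxonomy-style regularizer. The feature dimensions, network sizes, and the exact form of the regularizer are illustrative assumptions; the abstract does not spell out the CKL and TR formulations.
```python
# Minimal sketch, not the authors' implementation: a conditional generator that
# maps class semantic vectors (plus noise) to synthesized visual features, and a
# hypothetical taxonomy-style regularizer that pulls together features whose
# classes share a parent node in a class taxonomy. Dimensions are illustrative.
import torch
import torch.nn as nn

class SemanticToVisualGenerator(nn.Module):
    def __init__(self, sem_dim=85, noise_dim=64, vis_dim=2048, hidden_dim=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sem_dim + noise_dim, hidden_dim),
            nn.LeakyReLU(0.2),
            nn.Linear(hidden_dim, vis_dim),
            nn.ReLU(),  # visual features from a CNN backbone are non-negative
        )

    def forward(self, semantic, noise):
        # Concatenate the class semantic vector with noise and map it to a visual feature.
        return self.net(torch.cat([semantic, noise], dim=1))

def taxonomy_regularizer(fake_feats, class_ids, parent_of):
    """Hypothetical TR term: penalize the spread of generated features whose
    classes share a taxonomy parent, nudging the generator toward more
    generalized features that transfer better to unseen classes."""
    parents = torch.tensor([parent_of[int(c)] for c in class_ids],
                           device=fake_feats.device)
    loss = fake_feats.new_zeros(())
    groups = 0
    for p in parents.unique():
        members = fake_feats[parents == p]
        if members.shape[0] > 1:
            loss = loss + ((members - members.mean(0, keepdim=True)) ** 2).mean()
            groups += 1
    return loss / max(groups, 1)

# Usage sketch: total = generation_loss + 0.1 * taxonomy_regularizer(fake, ids, parent_of)
```
In practice such a term would be added, with a small weight, to whatever adversarial or reconstruction objective drives the generator; the weight and grouping shown here are placeholders, not values from the paper.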
Related papers
- CREST: Cross-modal Resonance through Evidential Deep Learning for Enhanced Zero-Shot Learning [48.46511584490582]
Zero-shot learning (ZSL) enables the recognition of novel classes by leveraging semantic knowledge transfer from known to unknown categories.
Real-world challenges such as distribution imbalances and attribute co-occurrence hinder the discernment of local variances in images.
We propose a bidirectional cross-modal ZSL approach, CREST, to overcome these challenges.
arXiv Detail & Related papers (2024-04-15T10:19:39Z) - GSMFlow: Generation Shifts Mitigating Flow for Generalized Zero-Shot
Learning [55.79997930181418]
Generalized Zero-Shot Learning aims to recognize images from both the seen and unseen classes by transferring semantic knowledge from seen to unseen classes.
It is a promising solution to take advantage of generative models to hallucinate realistic unseen samples based on the knowledge learned from the seen classes.
We propose a novel flow-based generative framework that consists of multiple conditional affine coupling layers for learning unseen data generation.
arXiv Detail & Related papers (2022-07-05T04:04:37Z) - Using Representation Expressiveness and Learnability to Evaluate
Self-Supervised Learning Methods [61.49061000562676]
We introduce Cluster Learnability (CL) to assess learnability.
CL is measured in terms of the performance of a KNN trained to predict labels obtained by clustering the representations with K-means.
We find that CL better correlates with in-distribution model performance than other competing recent evaluation schemes (see the sketch after this list).
arXiv Detail & Related papers (2022-06-02T19:05:13Z) - Learning Aligned Cross-Modal Representation for Generalized Zero-Shot
Classification [17.177622259867515]
We propose an innovative autoencoder network that learns Aligned Cross-Modal Representations (dubbed ACMR) for Generalized Zero-Shot Classification (GZSC).
Specifically, we propose a novel Vision-Semantic Alignment (VSA) method to strengthen the alignment of cross-modal latent features on the latent subspaces guided by a learned classifier.
In addition, we propose a novel Information Enhancement Module (IEM) to reduce the possibility of latent variable collapse while encouraging the discriminative ability of the latent variables.
arXiv Detail & Related papers (2021-12-24T03:35:37Z) - FREE: Feature Refinement for Generalized Zero-Shot Learning [86.41074134041394]
Generalized zero-shot learning (GZSL) has achieved significant progress, with many efforts dedicated to overcoming the problems of visual-semantic domain gap and seen-unseen bias.
Most existing methods directly use feature extraction models trained on ImageNet alone, ignoring the cross-dataset bias between ImageNet and GZSL benchmarks.
We propose a simple yet effective GZSL method, termed feature refinement for generalized zero-shot learning (FREE) to tackle the above problem.
arXiv Detail & Related papers (2021-07-29T08:11:01Z) - Attribute-Modulated Generative Meta Learning for Zero-Shot
Classification [52.64680991682722]
We present the Attribute-Modulated generAtive meta-model for Zero-shot learning (AMAZ).
Our model consists of an attribute-aware modulation network and an attribute-augmented generative network.
Our empirical evaluations show that AMAZ improves state-of-the-art methods by 3.8% and 5.1% in ZSL and generalized ZSL settings, respectively.
arXiv Detail & Related papers (2021-04-22T04:16:43Z) - Zero-Shot Learning Based on Knowledge Sharing [0.0]
Zero-Shot Learning (ZSL) is an emerging research area that aims to solve classification problems with very little training data.
This paper introduces knowledge sharing (KS) to enrich the representation of semantic features.
Based on KS, we apply a generative adversarial network to generate, from semantic features, pseudo visual features that are very close to the real visual features.
arXiv Detail & Related papers (2021-02-26T06:43:29Z) - Multi-Knowledge Fusion for New Feature Generation in Generalized
Zero-Shot Learning [4.241513887019675]
We propose a novel generative ZSL method that learns more generalized features from multi-knowledge, with continuously generated new semantics, in the semantic-to-visual embedding.
We show that our approach can achieve significantly better performance compared to existing state-of-the-art methods on a large number of benchmarks for several ZSL tasks.
arXiv Detail & Related papers (2021-02-23T09:11:05Z) - Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve the performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z) - Leveraging Seen and Unseen Semantic Relationships for Generative
Zero-Shot Learning [14.277015352910674]
We propose a generative model that explicitly performs knowledge transfer by incorporating a novel Semantic Regularized Loss (SR-Loss).
Experiments on seven benchmark datasets demonstrate the superiority of the LsrGAN compared to previous state-of-the-art approaches.
arXiv Detail & Related papers (2020-07-19T01:25:53Z)
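Following the pointer in the Cluster Learnability entry above, here is a minimal sketch of how such a metric could be computed (an assumed setup, not the authors' exact protocol): K-means assigns pseudo-labels to the representations, and the held-out accuracy of a KNN trained on those pseudo-labels serves as the learnability score.
```python
# Sketch of the Cluster Learnability (CL) idea: cluster the representations with
# K-means to obtain pseudo-labels, then score a KNN that predicts those labels
# on a held-out split. The cluster count, split ratio, and K are assumed values.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def cluster_learnability(representations, n_clusters=10, n_neighbors=5, seed=0):
    pseudo_labels = KMeans(n_clusters=n_clusters, random_state=seed,
                           n_init=10).fit_predict(representations)
    x_train, x_test, y_train, y_test = train_test_split(
        representations, pseudo_labels, test_size=0.5, random_state=seed)
    knn = KNeighborsClassifier(n_neighbors=n_neighbors).fit(x_train, y_train)
    return knn.score(x_test, y_test)  # higher = more learnable cluster structure

# Toy call with random features; real use would pass encoder embeddings.
print(cluster_learnability(np.random.randn(1000, 128)))
```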