Disentangled Ontology Embedding for Zero-shot Learning
- URL: http://arxiv.org/abs/2206.03739v1
- Date: Wed, 8 Jun 2022 08:29:30 GMT
- Title: Disentangled Ontology Embedding for Zero-shot Learning
- Authors: Yuxia Geng, Jiaoyan Chen, Wen Zhang, Yajing Xu, Zhuo Chen, Jeff Z.
Pan, Yufeng Huang, Feiyu Xiong, Huajun Chen
- Abstract summary: Knowledge Graph (KG) and its variant, ontology, have been widely used for knowledge representation, and have been shown to be quite effective in augmenting Zero-shot Learning (ZSL).
Existing ZSL methods that utilize KGs all neglect the complexity of inter-class relationships represented in KGs.
In this paper, we focus on ontologies for augmenting ZSL, and propose to learn disentangled ontology embeddings guided by semantic properties.
We also contribute a new ZSL framework named DOZSL, which contains two new ZSL solutions based on generative models and graph propagation models.
- Score: 39.014714187825646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graph (KG) and its variant, ontology, have been widely used
for knowledge representation, and have been shown to be quite effective in augmenting
Zero-shot Learning (ZSL). However, existing ZSL methods that utilize KGs all
neglect the intrinsic complexity of inter-class relationships represented in
KGs. One typical feature is that a class is often related to other classes in
different semantic aspects. In this paper, we focus on ontologies for
augmenting ZSL, and propose to learn disentangled ontology embeddings guided by
ontology properties to capture and utilize more fine-grained class
relationships in different aspects. We also contribute a new ZSL framework
named DOZSL, which contains two new ZSL solutions based on generative models
and graph propagation models, respectively, for effectively utilizing the
disentangled ontology embeddings. Extensive evaluations have been conducted on
five benchmarks across zero-shot image classification (ZS-IMGC) and zero-shot
KG completion (ZS-KGC). DOZSL often achieves better performance than the
state-of-the-art, and its components have been verified by ablation studies and
case studies. Our codes and datasets are available at
https://github.com/zjukg/DOZSL.
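As a rough illustration of the core idea (not the authors' implementation — the tiny graph, shapes, and names below are invented for the example), one can disentangle a class embedding into per-property components, propagate each component over its own relation graph from the ontology, and concatenate the results:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 4 classes, 2 semantic aspects (e.g. "habitat", "appearance"),
# each aspect with its own 4x4 class-relation adjacency from the ontology.
n_classes, n_aspects, dim = 4, 2, 8
adj = rng.integers(0, 2, size=(n_aspects, n_classes, n_classes)).astype(float)
adj += np.eye(n_classes)                      # add self-loops
adj /= adj.sum(axis=-1, keepdims=True)        # row-normalize each graph

# One embedding component per aspect (the "disentangled" part).
components = rng.normal(size=(n_aspects, n_classes, dim))

# One propagation step per aspect, then concatenate the aspect-wise
# results into a single class embedding.
propagated = np.einsum('aij,ajd->aid', adj, components)
class_emb = np.concatenate([propagated[a] for a in range(n_aspects)], axis=-1)
print(class_emb.shape)  # (4, 16): 4 classes, 2 aspects x 8 dims each
```

The point of the aspect-wise split is that relations from different semantic properties no longer get averaged into one undifferentiated neighborhood, which is the inter-class complexity the abstract refers to.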
Related papers
- Mutual Balancing in State-Object Components for Compositional Zero-Shot
Learning [0.0]
Compositional Zero-Shot Learning (CZSL) aims to recognize unseen compositions from seen states and objects.
We propose a novel method called MUtual balancing in STate-object components (MUST) for CZSL, which provides a balancing inductive bias for the model.
Our approach significantly outperforms the state-of-the-art on MIT-States, UT-Zappos, and C-GQA when combined with the basic CZSL frameworks.
arXiv Detail & Related papers (2022-11-19T10:21:22Z)
- Zero-Shot Logit Adjustment [89.68803484284408]
Generalized Zero-Shot Learning (GZSL) is a semantic-descriptor-based learning technique.
In this paper, we propose a new generation-based technique that enhances the generator's effect while leaving the classifier unchanged.
Our experiments demonstrate that the proposed technique achieves state-of-the-art when combined with the basic generator, and it can improve various generative zero-shot learning frameworks.
arXiv Detail & Related papers (2022-04-25T17:54:55Z)
- FREE: Feature Refinement for Generalized Zero-Shot Learning [86.41074134041394]
Generalized zero-shot learning (GZSL) has achieved significant progress, with many efforts dedicated to overcoming the problems of visual-semantic domain gap and seen-unseen bias.
Most existing methods directly use feature extraction models trained on ImageNet alone, ignoring the cross-dataset bias between ImageNet and GZSL benchmarks.
We propose a simple yet effective GZSL method, termed feature refinement for generalized zero-shot learning (FREE), to tackle the above problem.
arXiv Detail & Related papers (2021-07-29T08:11:01Z)
- Contrastive Embedding for Generalized Zero-Shot Learning [22.050109158293402]
Generalized zero-shot learning (GZSL) aims to recognize objects from both seen and unseen classes.
Recent feature generation methods learn a generative model that can synthesize the missing visual features of unseen classes.
We propose to integrate the generation model with the embedding model, yielding a hybrid GZSL framework.
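Feature-generation methods such as this one share a common pattern: a generator maps (class semantics, noise) to synthetic visual features for unseen classes, which can then train an ordinary classifier. A minimal sketch of that pattern, with a linear "generator" and invented shapes rather than any paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)
sem_dim, noise_dim, feat_dim = 5, 3, 10

# Hypothetical semantic descriptors for two unseen classes.
unseen_semantics = rng.normal(size=(2, sem_dim))

# A stand-in "pretrained" linear generator: [semantics; noise] -> visual feature.
W = rng.normal(size=(sem_dim + noise_dim, feat_dim))

def synthesize(semantic, n_samples=50):
    """Generate n_samples synthetic visual features for one class."""
    noise = rng.normal(size=(n_samples, noise_dim))
    inputs = np.concatenate([np.tile(semantic, (n_samples, 1)), noise], axis=1)
    return inputs @ W

# Synthetic features plus labels for the unseen classes can now train
# any off-the-shelf classifier, turning ZSL into supervised learning.
feats = np.vstack([synthesize(s) for s in unseen_semantics])
labels = np.repeat(np.arange(2), 50)
print(feats.shape, labels.shape)  # (100, 10) (100,)
```

In a real GAN-based method the linear map would be a trained generator network and the noise injects intra-class variation, but the data flow is the same.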
arXiv Detail & Related papers (2021-03-30T08:54:03Z)
- Goal-Oriented Gaze Estimation for Zero-Shot Learning [62.52340838817908]
We introduce a novel goal-oriented gaze estimation module (GEM) to improve the discriminative attribute localization.
We aim to predict the actual human gaze location to get the visual attention regions for recognizing a novel object guided by attribute description.
This work implies the promising benefits of collecting human gaze dataset and automatic gaze estimation algorithms on high-level computer vision tasks.
arXiv Detail & Related papers (2021-03-05T02:14:57Z)
- OntoZSL: Ontology-enhanced Zero-shot Learning [19.87808305218359]
Key to implementing Zero-shot Learning (ZSL) is to leverage the prior knowledge of classes which builds the semantic relationship between classes.
In this paper, we explore richer and more competitive prior knowledge to model the inter-class relationship for ZSL.
To address the data imbalance between seen classes and unseen classes, we developed a generative ZSL framework with Generative Adversarial Networks (GANs).
arXiv Detail & Related papers (2021-02-15T04:39:58Z)
- End-to-end Generative Zero-shot Learning via Few-shot Learning [76.9964261884635]
State-of-the-art approaches to Zero-Shot Learning (ZSL) train generative nets to synthesize examples conditioned on the provided metadata.
We introduce an end-to-end generative ZSL framework that uses such an approach as a backbone and feeds its synthesized output to a Few-Shot Learning algorithm.
arXiv Detail & Related papers (2021-02-08T17:35:37Z)
- Information Bottleneck Constrained Latent Bidirectional Embedding for Zero-Shot Learning [59.58381904522967]
We propose a novel embedding based generative model with a tight visual-semantic coupling constraint.
We learn a unified latent space that calibrates the embedded parametric distributions of both visual and semantic spaces.
Our method can be easily extended to transductive ZSL setting by generating labels for unseen images.
arXiv Detail & Related papers (2020-09-16T03:54:12Z)
- Leveraging Seen and Unseen Semantic Relationships for Generative Zero-Shot Learning [14.277015352910674]
We propose a generative model that explicitly performs knowledge transfer by incorporating a novel Semantic Regularized Loss (SR-Loss).
Experiments on seven benchmark datasets demonstrate the superiority of the LsrGAN compared to previous state-of-the-art approaches.
arXiv Detail & Related papers (2020-07-19T01:25:53Z)
- Generative Adversarial Zero-shot Learning via Knowledge Graphs [32.42721467499858]
We introduce a new generative ZSL method named KG-GAN by incorporating rich semantics in a knowledge graph (KG) into GANs.
Specifically, we build upon Graph Neural Networks and encode KG from two views: class view and attribute view.
With well-learned semantic embeddings for each node (representing a visual category), we leverage GANs to synthesize compelling visual features for unseen classes.
arXiv Detail & Related papers (2020-04-07T03:55:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.