K-ZSL: Resources for Knowledge-driven Zero-shot Learning
- URL: http://arxiv.org/abs/2106.15047v1
- Date: Tue, 29 Jun 2021 01:22:49 GMT
- Title: K-ZSL: Resources for Knowledge-driven Zero-shot Learning
- Authors: Yuxia Geng, Jiaoyan Chen, Zhuo Chen, Jeff Z. Pan, Zonggang Yuan,
Huajun Chen
- Abstract summary: External knowledge (a.k.a. side information) plays a critical role in zero-shot learning (ZSL).
In this paper, we propose five resources for KG-based research in zero-shot image classification (ZS-IMGC) and zero-shot KG completion (ZS-KGC).
For each resource, we contribute a benchmark and its KG, with semantics ranging from text to attributes and from relational knowledge to logical expressions.
- Score: 19.142028501513362
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: External knowledge (a.k.a. side information) plays a critical role in
zero-shot learning (ZSL), which aims to make predictions for unseen classes that
have never appeared in the training data. Several kinds of external knowledge, such
as text and attributes, have been widely investigated, but on their own they are
limited by incomplete semantics. Therefore, some very recent studies propose to use
a Knowledge Graph (KG) due to its high expressivity and its capacity to represent
many kinds of knowledge. However, the ZSL community still lacks standard benchmarks
for studying and comparing different KG-based ZSL methods. In this paper, we propose
five resources for KG-based research in zero-shot image classification (ZS-IMGC) and
zero-shot KG completion (ZS-KGC). For each resource, we contribute a benchmark and
its KG, with semantics ranging from text to attributes and from relational knowledge
to logical expressions. We clearly present how the resources are constructed, their
statistics and formats, and how they can be used, with case studies on evaluating
ZSL methods' performance and explanations. Our resources are available at
https://github.com/China-UK-ZSL/Resources_for_KZSL.
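As a rough illustration of how such a resource might be consumed, the sketch below loads a small KG of triples and looks up classes related to a given class, e.g. to transfer semantics to unseen classes. The tab-separated format and the example facts are assumptions made for illustration, not the repository's actual layout or loader.

```python
# A rough, hypothetical sketch (not the repository's actual loader or format):
# a ZS-IMGC-style benchmark pairs an image-classification split with a KG whose
# triples carry class semantics (attributes, relational knowledge, etc.).
import csv, io

# Hypothetical tab-separated KG triples: head <TAB> relation <TAB> tail.
KG_TSV = """zebra\thasAttribute\tstriped
zebra\tsubClassOf\tequine
horse\tsubClassOf\tequine
"""

def load_kg(tsv_text):
    """Parse (head, relation, tail) triples from TSV text."""
    return [tuple(r) for r in csv.reader(io.StringIO(tsv_text), delimiter="\t") if len(r) == 3]

def related_classes(triples, cls):
    """Classes sharing a relation with `cls`, e.g. to transfer semantics to unseen classes."""
    out = set()
    for h, r, t in triples:
        if h == cls:
            out.add(t)
        elif t == cls:
            out.add(h)
    return out

kg = load_kg(KG_TSV)
print(len(kg), "triples;", "zebra relates to:", related_classes(kg, "zebra"))
```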
Related papers
- Knowledge Graph-Enhanced Large Language Models via Path Selection [58.228392005755026]
Large Language Models (LLMs) have shown unprecedented performance in various real-world applications.
However, LLMs are also known to generate factually inaccurate outputs, a.k.a. the hallucination problem.
We propose a principled three-stage framework, KELP, to address this problem.
arXiv Detail & Related papers (2024-06-19T21:45:20Z)
- Data-Free Generalized Zero-Shot Learning [45.86614536578522]
We propose a generic framework for data-free zero-shot learning (DFZSL).
Our framework has been evaluated on five commonly used benchmarks for generalized ZSL, as well as 11 benchmarks for base-to-new ZSL.
arXiv Detail & Related papers (2024-01-28T13:26:47Z)
- Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph Propagation [68.13453771001522]
We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings.
We conduct extensive experiments and evaluate our model on large-scale real-world data.
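A minimal sketch of the general region-to-semantics matching idea (not the paper's model): each class is scored by the cosine similarity between its semantic embedding and its best-matching image region.

```python
# Illustrative sketch only: score classes by matching image regions against
# class semantic embeddings with cosine similarity, taking the max over regions.
import numpy as np

def cosine(a, b):
    a = a / (np.linalg.norm(a, axis=-1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=-1, keepdims=True) + 1e-8)
    return a @ b.T

def score_classes(region_feats, class_embs):
    """region_feats: (R, d) visual features; class_embs: (C, d) semantic embeddings.
    Each class is scored by its best-matching region."""
    sim = cosine(region_feats, class_embs)   # (R, C) similarity matrix
    return sim.max(axis=0)                   # (C,) per-class scores

rng = np.random.default_rng(0)
regions = rng.normal(size=(9, 64))     # e.g. a 3x3 grid of region features
classes = rng.normal(size=(5, 64))     # embeddings for 5 unseen classes
print("predicted class index:", int(np.argmax(score_classes(regions, classes))))
```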
arXiv Detail & Related papers (2023-06-14T13:07:48Z)
- Prompting Language-Informed Distribution for Compositional Zero-Shot Learning [73.49852821602057]
The compositional zero-shot learning (CZSL) task aims to recognize unseen compositional visual concepts.
We propose a model that prompts the language-informed distribution, a.k.a. PLID, for the task.
Experimental results on the MIT-States, UT-Zappos, and C-GQA datasets show the superior performance of PLID over the prior arts.
arXiv Detail & Related papers (2023-05-23T18:00:22Z)
- Disentangled Ontology Embedding for Zero-shot Learning [39.014714187825646]
Knowledge Graph (KG) and its variant ontology have been widely used for knowledge representation, and have been shown to be quite effective in augmenting Zero-shot Learning (ZSL).
Existing ZSL methods that utilize KGs all neglect the complexity of inter-class relationships represented in KGs.
In this paper, we focus on ontologies for augmenting ZSL, and propose to learn disentangled ontology embeddings guided by semantic properties.
We also contribute a new ZSL framework named DOZSL, which contains two new ZSL solutions based on generative models and graph propagation models.
arXiv Detail & Related papers (2022-06-08T08:29:30Z)
- KG-SP: Knowledge Guided Simple Primitives for Open World Compositional Zero-Shot Learning [52.422873819371276]
The goal of open-world compositional zero-shot learning (OW-CZSL) is to recognize compositions of states and objects in images.
Here, we revisit a simple CZSL baseline and predict the primitives, i.e., states and objects, independently.
We estimate the feasibility of each composition through external knowledge, using this prior to remove unfeasible compositions from the output space.
Our model, Knowledge-Guided Simple Primitives (KG-SP), achieves state of the art in both OW-CZSL and pCZSL.
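A toy sketch of the underlying idea (hypothetical names and values, not the authors' code): predict state and object probabilities independently, zero out compositions that external knowledge marks as unfeasible, then take the argmax over the remaining joint scores.

```python
# Independent primitive prediction with a feasibility prior (illustrative only).
import numpy as np

def predict_composition(p_state, p_object, feasible):
    """p_state: (S,) state probabilities; p_object: (O,) object probabilities;
    feasible: (S, O) boolean mask derived from external knowledge."""
    joint = np.outer(p_state, p_object)      # independence assumption
    joint = np.where(feasible, joint, 0.0)   # prune unfeasible compositions
    s, o = np.unravel_index(np.argmax(joint), joint.shape)
    return s, o

p_state = np.array([0.7, 0.2, 0.1])          # e.g. wet / dry / sliced
p_object = np.array([0.5, 0.4, 0.1])         # e.g. apple / car / shirt
feasible = np.array([[1, 1, 1],
                     [1, 1, 1],
                     [1, 0, 0]], dtype=bool) # e.g. "sliced car" is pruned
print(predict_composition(p_state, p_object, feasible))
```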
arXiv Detail & Related papers (2022-05-13T17:18:15Z)
- Low-resource Learning with Knowledge Graphs: A Comprehensive Survey [34.18863318808325]
Machine learning methods typically rely on a large number of labeled samples for training.
Low-resource learning aims to learn robust prediction models without enough resources, e.g., with few or even zero labeled samples.
Knowledge Graphs (KGs) are becoming more and more popular for knowledge representation.
arXiv Detail & Related papers (2021-12-18T21:40:50Z)
- LeBenchmark: A Reproducible Framework for Assessing Self-Supervised Representation Learning from Speech [63.84741259993937]
Self-Supervised Learning (SSL) using huge amounts of unlabeled data has been successfully explored for image and natural language processing.
Recent works also investigated SSL from speech.
We propose LeBenchmark: a reproducible framework for assessing SSL from speech.
arXiv Detail & Related papers (2021-04-23T08:27:09Z)
- OntoZSL: Ontology-enhanced Zero-shot Learning [19.87808305218359]
The key to implementing Zero-shot Learning (ZSL) is to leverage prior knowledge of classes, which builds the semantic relationships between classes.
In this paper, we explore richer and more competitive prior knowledge to model the inter-class relationships for ZSL.
To address the data imbalance between seen and unseen classes, we developed a generative ZSL framework based on Generative Adversarial Networks (GANs).
arXiv Detail & Related papers (2021-02-15T04:39:58Z)
- KACC: A Multi-task Benchmark for Knowledge Abstraction, Concretization and Completion [99.47414073164656]
A comprehensive knowledge graph (KG) contains an instance-level entity graph and an ontology-level concept graph.
The two-view KG provides a testbed for models to "simulate" human abilities in knowledge abstraction, concretization, and completion.
We propose a unified KG benchmark by improving existing benchmarks in terms of dataset scale, task coverage, and difficulty.
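A toy illustration of what a two-view KG looks like (made-up facts, not the KACC data format): an instance-level entity graph, an ontology-level concept graph, and cross-view instanceOf links that support abstraction (entity to concept) and concretization (concept to entity).

```python
# Two-view KG as plain Python data structures (illustrative facts only).
entity_triples = [                    # instance-level entity graph
    ("Einstein", "bornIn", "Ulm"),
    ("Ulm", "locatedIn", "Germany"),
]
concept_triples = [                   # ontology-level concept graph
    ("Physicist", "subClassOf", "Scientist"),
    ("City", "subClassOf", "Place"),
]
cross_view_links = [                  # entity -> concept ("abstraction" direction)
    ("Einstein", "instanceOf", "Physicist"),
    ("Ulm", "instanceOf", "City"),
]

def concretize(concept):
    """Completion in the other direction: entities that instantiate a concept."""
    return [e for e, _, c in cross_view_links if c == concept]

print(concretize("City"))             # ['Ulm']
```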
arXiv Detail & Related papers (2020-04-28T16:21:57Z)
- Generative Adversarial Zero-shot Learning via Knowledge Graphs [32.42721467499858]
We introduce a new generative ZSL method named KG-GAN by incorporating rich semantics in a knowledge graph (KG) into GANs.
Specifically, we build upon Graph Neural Networks and encode the KG from two views: a class view and an attribute view.
With well-learned semantic embeddings for each node (representing a visual category), we leverage GANs to synthesize compelling visual features for unseen classes.
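A schematic sketch of the feature-synthesis step (not the KG-GAN implementation; layer sizes and names are assumptions): a conditional generator maps a class's KG-derived semantic embedding plus noise to a synthetic visual feature, which can then be used to train a classifier for unseen classes.

```python
# Conditional feature generator sketch in PyTorch (illustrative dimensions).
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, sem_dim=128, noise_dim=64, feat_dim=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sem_dim + noise_dim, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, feat_dim),
            nn.ReLU(),                 # CNN features are typically non-negative
        )

    def forward(self, sem_emb, noise):
        return self.net(torch.cat([sem_emb, noise], dim=-1))

gen = ConditionalGenerator()
sem = torch.randn(16, 128)             # KG-derived embeddings for 16 unseen classes
noise = torch.randn(16, 64)
fake_feats = gen(sem, noise)           # (16, 2048) synthetic visual features
print(fake_feats.shape)
```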
arXiv Detail & Related papers (2020-04-07T03:55:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.