ZeroKBC: A Comprehensive Benchmark for Zero-Shot Knowledge Base
Completion
- URL: http://arxiv.org/abs/2212.03091v1
- Date: Tue, 6 Dec 2022 16:02:09 GMT
- Title: ZeroKBC: A Comprehensive Benchmark for Zero-Shot Knowledge Base
Completion
- Authors: Pei Chen, Wenlin Yao, Hongming Zhang, Xiaoman Pan, Dian Yu, Dong Yu,
and Jianshu Chen
- Abstract summary: Knowledge base completion aims to predict the missing links in knowledge graphs.
Previous KBC tasks mainly focus on the setting where all test entities and relations have appeared in the training set.
We develop a comprehensive benchmark, ZeroKBC, that covers different possible scenarios of zero-shot KBC.
- Score: 54.898479917173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge base completion (KBC) aims to predict the missing links in
knowledge graphs. Previous KBC tasks and approaches mainly focus on the setting
where all test entities and relations have appeared in the training set.
However, there has been limited research on the zero-shot KBC settings, where
we need to deal with unseen entities and relations that emerge in a constantly
growing knowledge base. In this work, we systematically examine different
possible scenarios of zero-shot KBC and develop a comprehensive benchmark,
ZeroKBC, that covers these scenarios with diverse types of knowledge sources.
Our systematic analysis reveals several missing yet important zero-shot KBC
settings. Experimental results show that canonical and state-of-the-art KBC
systems cannot achieve satisfactory performance on this challenging benchmark.
By analyzing the strengths and weaknesses of these systems on solving ZeroKBC,
we further present several important observations and promising future
directions.
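The link-prediction task described in the abstract can be made concrete with a minimal sketch. The TransE-style scoring below is purely illustrative (ZeroKBC benchmarks many systems, not this one), and the toy entities, relation, and embeddings are invented for the example.

```python
# Illustrative sketch of embedding-based link prediction, the canonical
# KBC setup: given (head, relation, ?), rank candidate tail entities.
# TransE-style scoring is used here only as an example.
import numpy as np

rng = np.random.default_rng(0)
dim = 4
# Hypothetical toy vocabulary; names and vectors are invented.
entities = {e: rng.normal(size=dim) for e in ["Paris", "France", "Berlin", "Germany"]}
relations = {"capital_of": rng.normal(size=dim)}

def score(head: str, rel: str, tail: str) -> float:
    """TransE score: -||h + r - t||; higher means more plausible."""
    h, r, t = entities[head], relations[rel], entities[tail]
    return -float(np.linalg.norm(h + r - t))

def rank_tails(head: str, rel: str) -> list:
    """Predict the missing tail by ranking all candidate entities."""
    return sorted(entities, key=lambda t: score(head, rel, t), reverse=True)

print(rank_tails("Paris", "capital_of"))
```

In the zero-shot settings ZeroKBC targets, test-time entities or relations have no trained embeddings at all, which is exactly why canonical models of this kind struggle on the benchmark.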
Related papers
- A Learn-Then-Reason Model Towards Generalization in Knowledge Base Question Answering [17.281005999581865]
Large-scale knowledge bases (KBs) like Freebase and Wikidata house millions of structured facts.
Knowledge Base Question Answering (KBQA) provides a user-friendly way to access these valuable KBs via asking natural language questions.
This paper develops KBLLaMA, which follows a learn-then-reason framework to inject new KB knowledge into a large language model for flexible end-to-end KBQA.
arXiv Detail & Related papers (2024-06-20T22:22:41Z)
- Overcoming Pitfalls in Graph Contrastive Learning Evaluation: Toward Comprehensive Benchmarks [60.82579717007963]
We introduce an enhanced evaluation framework designed to more accurately gauge the effectiveness, consistency, and overall capability of Graph Contrastive Learning (GCL) methods.
arXiv Detail & Related papers (2024-02-24T01:47:56Z)
- Completeness, Recall, and Negation in Open-World Knowledge Bases: A Survey [15.221057217833492]
We discuss how knowledge about completeness, recall, and negation in KBs can be expressed, extracted, and inferred.
This survey is targeted at two types of audiences: (1) practitioners who are interested in tracking KB quality, focusing extraction efforts, and building quality-aware downstream applications; and (2) data management, knowledge base and semantic web researchers who wish to understand the state of the art of knowledge bases beyond the open-world assumption.
arXiv Detail & Related papers (2023-05-09T12:50:16Z)
- CBR-iKB: A Case-Based Reasoning Approach for Question Answering over Incomplete Knowledge Bases [39.45030211564547]
We propose a case-based reasoning approach, CBR-iKB, for knowledge base question answering (KBQA), with incomplete KBs as our main focus.
By design, CBR-iKB can seamlessly adapt to changes in KBs without any task-specific training or fine-tuning.
Our method achieves 100% accuracy on MetaQA and establishes a new state of the art on multiple benchmarks.
arXiv Detail & Related papers (2022-04-18T20:46:41Z)
- Knowledge Graph Question Answering Leaderboard: A Community Resource to Prevent a Replication Crisis [61.740077541531726]
We provide a new central and open leaderboard for any KGQA benchmark dataset as a focal point for the community.
Our analysis highlights existing problems during the evaluation of KGQA systems.
arXiv Detail & Related papers (2022-01-20T13:46:01Z)
- Combining Rules and Embeddings via Neuro-Symbolic AI for Knowledge Base Completion [59.093293389123424]
We show that not all rule-based Knowledge Base Completion models are the same.
We propose two distinct approaches: one learns a mixture of relations, the other a mixture of paths.
When implemented on top of neuro-symbolic AI, which learns rules by extending Boolean logic to real-valued logic, the latter model leads to superior KBC accuracy outperforming state-of-the-art rule-based KBC by 2-10% in terms of mean reciprocal rank.
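The 2-10% gain above is stated in terms of mean reciprocal rank (MRR), the standard KBC ranking metric: for each test query, take the reciprocal of the rank at which the correct entity appears, then average. A quick sketch:

```python
# Mean reciprocal rank, the ranking metric used to compare KBC systems.
def mean_reciprocal_rank(ranks):
    """ranks[i] is the 1-based rank of the correct entity for query i."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# Three queries whose correct answers were ranked 1st, 2nd, and 4th.
print(mean_reciprocal_rank([1, 2, 4]))  # (1 + 0.5 + 0.25) / 3
```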
arXiv Detail & Related papers (2021-09-16T17:54:56Z)
- BoxE: A Box Embedding Model for Knowledge Base Completion [53.57588201197374]
Knowledge base completion (KBC) aims to automatically infer missing facts by exploiting information already present in a knowledge base (KB).
Existing embedding models are each subject to at least one of several limitations.
BoxE embeds entities as points, and relations as sets of hyper-rectangles (or boxes).
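The point-in-box idea can be sketched in a few lines. This is a deliberate simplification of BoxE, which also uses translational bumps and soft (distance-based) membership rather than the hard test shown here; the 2-D coordinates are invented for illustration.

```python
# Simplified sketch of the box idea: entities are points, and a relation
# argument is satisfied when its point falls inside the relation's box.
import numpy as np

def in_box(point, low, high):
    """Hard membership test: is the point inside the hyper-rectangle?"""
    return bool(np.all(point >= low) and np.all(point <= high))

# Hypothetical 2-D example: a relation's head-argument box is the unit square.
head = np.array([0.2, 0.5])
rel_head_box = (np.array([0.0, 0.0]), np.array([1.0, 1.0]))
print(in_box(head, *rel_head_box))  # the point lies inside the box
```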
arXiv Detail & Related papers (2020-07-13T09:40:49Z)
- Faithful Embeddings for Knowledge Base Queries [97.5904298152163]
The deductive closure of an ideal knowledge base (KB) contains exactly the logical queries that the KB can answer.
In practice, KBs are both incomplete and over-specified, failing to answer some queries that have real-world answers.
We show that inserting this new QE module into a neural question-answering system leads to substantial improvements over the state-of-the-art.
arXiv Detail & Related papers (2020-04-07T19:25:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.