CBR-iKB: A Case-Based Reasoning Approach for Question Answering over
Incomplete Knowledge Bases
- URL: http://arxiv.org/abs/2204.08554v1
- Date: Mon, 18 Apr 2022 20:46:41 GMT
- Title: CBR-iKB: A Case-Based Reasoning Approach for Question Answering over
Incomplete Knowledge Bases
- Authors: Dung Thai, Srinivas Ravishankar, Ibrahim Abdelaziz, Mudit Chaudhary,
Nandana Mihindukulasooriya, Tahira Naseem, Rajarshi Das, Pavan Kapanipathi,
Achille Fokoue, Andrew McCallum
- Abstract summary: We propose a case-based reasoning approach, CBR-iKB, for knowledge base question answering (KBQA), with incomplete KBs as our main focus.
By design, CBR-iKB can seamlessly adapt to changes in KBs without any task-specific training or fine-tuning.
Our method achieves 100% accuracy on MetaQA and establishes a new state of the art on multiple benchmarks.
- Score: 39.45030211564547
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge bases (KBs) are often incomplete and constantly changing in
practice. Yet, in many question answering applications coupled with knowledge
bases, the sparse nature of KBs is often overlooked. To this end, we propose a
case-based reasoning approach, CBR-iKB, for knowledge base question answering
(KBQA), with incomplete KBs as our main focus. Our method ensembles decisions
from multiple reasoning chains with a novel nonparametric reasoning algorithm.
By design, CBR-iKB can seamlessly adapt to changes in KBs without any
task-specific training or fine-tuning. Our method achieves 100% accuracy on
MetaQA and establishes a new state of the art on multiple benchmarks. For
instance, CBR-iKB achieves an accuracy of 70% on WebQSP under the incomplete-KB
setting, outperforming the existing state-of-the-art method by 22.3%.
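The abstract describes ensembling decisions from multiple reasoning chains with a nonparametric algorithm and no task-specific training. Below is a minimal, illustrative sketch of that idea under strong assumptions: retrieve relation chains from previously solved cases, apply each chain to the question's topic entity over a (possibly incomplete) KB, and vote over the resulting candidates. The toy KB, the hard-coded chains, and the unweighted vote are assumptions for illustration, not the paper's actual components.

```python
from collections import Counter, defaultdict

# Toy KB as (head, relation, tail) triples; entities and relations are illustrative assumptions.
KB = {
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Interstellar", "directed_by", "Christopher Nolan"),
    ("Christopher Nolan", "born_in", "London"),
}

# Index the KB for relation-following: (head, relation) -> {tails}.
index = defaultdict(set)
for h, r, t in KB:
    index[(h, r)].add(t)

def follow_chain(entity, chain):
    """Apply a chain of relations starting from an entity over the (possibly incomplete) KB."""
    frontier = {entity}
    for rel in chain:
        frontier = {t for e in frontier for t in index.get((e, rel), set())}
    return frontier

# Relation chains retrieved from previously solved cases; in CBR-iKB these come from a
# case memory, here they are hard-coded for illustration.
cases = [
    ("directed_by", "born_in"),   # e.g. "Where was the director of [film] born?"
    ("directed_by",),             # a partially matching, noisier case
]

def answer(topic_entity, chains):
    """Ensemble step: apply every retrieved reasoning chain and vote over the candidates."""
    votes = Counter()
    for chain in chains:
        votes.update(follow_chain(topic_entity, chain))
    return votes.most_common()

# A real system would weight each chain by its case's similarity to the question.
print(answer("Inception", cases))
```

Because nothing above is trained, swapping in a different KB or adding new cases requires no fine-tuning, which mirrors the adaptability claim in the abstract.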
Related papers
- A Learn-Then-Reason Model Towards Generalization in Knowledge Base Question Answering [17.281005999581865]
Large-scale knowledge bases (KBs) like Freebase and Wikidata house millions of structured facts.
Knowledge Base Question Answering (KBQA) provides a user-friendly way to access these valuable KBs via asking natural language questions.
This paper develops KBLLaMA, which follows a learn-then-reason framework to inject new KB knowledge into a large language model for flexible end-to-end KBQA.
arXiv Detail & Related papers (2024-06-20T22:22:41Z)
- Two is Better Than One: Answering Complex Questions by Multiple Knowledge Sources with Generalized Links [31.941956320431217]
We formulate the novel Multi-KB-QA task, which leverages both full and partial links among multiple KBs to derive correct answers.
We propose a method for Multi-KB-QA that encodes all link relations in the KB embedding to score and rank candidate answers (sketched below).
arXiv Detail & Related papers (2023-09-11T02:31:41Z)
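The summary above only states that link relations are encoded in the KB embedding to score and rank candidate answers. The snippet below is one hedged reading of that idea, not the paper's method: a TransE-style score over a relation path that crosses a generalized link between two KBs. The embeddings, relation names, and scoring function are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Toy embeddings for entities and relations drawn from two KBs; values are random placeholders.
entity_emb = {name: rng.normal(size=DIM)
              for name in ["q_topic", "candidate_a", "candidate_b"]}
relation_emb = {name: rng.normal(size=DIM)
                for name in ["rel_in_kb1", "generalized_link", "rel_in_kb2"]}

def path_score(head, relations, tail):
    """TransE-style score: head + sum of relation vectors should land near tail (higher is better)."""
    translated = entity_emb[head] + sum(relation_emb[r] for r in relations)
    return -float(np.linalg.norm(translated - entity_emb[tail]))

# Rank candidate answers along a path that hops from KB1 into KB2 via a generalized link.
path = ["rel_in_kb1", "generalized_link", "rel_in_kb2"]
candidates = ["candidate_a", "candidate_b"]
ranked = sorted(candidates, key=lambda c: path_score("q_topic", path, c), reverse=True)
print(ranked)  # candidates ordered by how well the linked path explains them
```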
- FC-KBQA: A Fine-to-Coarse Composition Framework for Knowledge Base Question Answering [24.394908238940904]
We propose a Fine-to-Coarse Composition framework for KBQA (FC-KBQA) to ensure both the generalization ability and the executability of the generated logical expression (an executability sketch follows below).
FC-KBQA achieves new state-of-the-art performance on GrailQA and WebQSP and runs 4 times faster than the baseline.
arXiv Detail & Related papers (2023-06-26T14:19:46Z)
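The summary above mentions ensuring the executability of composed logical expressions. The sketch below shows one plausible way such a check could work, under assumptions that are not taken from the paper: enumerate candidate relation components per hop, compose them into chain-shaped expressions, and keep only compositions that return a non-empty answer set when executed against the KB.

```python
from itertools import product

# Toy KB triples; relation names and the example question are illustrative assumptions.
KB = {
    ("Inception", "film.directed_by", "Christopher Nolan"),
    ("Christopher Nolan", "person.nationality", "United Kingdom"),
}

def execute(entity, relation_chain):
    """Execute a chain-shaped logical expression against the KB; return its answer set."""
    frontier = {entity}
    for rel in relation_chain:
        frontier = {t for (h, r, t) in KB if h in frontier and r == rel}
    return frontier

# Candidate relation components retrieved per hop (e.g. by a component-level classifier).
hop1_candidates = ["film.directed_by", "film.release_year"]
hop2_candidates = ["person.nationality", "person.spouse"]

# Compose candidates into full expressions and keep only the executable ones,
# i.e. compositions that return a non-empty answer set on the KB.
executable = [
    chain for chain in product(hop1_candidates, hop2_candidates)
    if execute("Inception", chain)
]
print(executable)  # [('film.directed_by', 'person.nationality')]
```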
- QA Is the New KR: Question-Answer Pairs as Knowledge Bases [105.692569000534]
We argue that the proposed type of KB has many of the key advantages of a traditional symbolic KB.
Unlike a traditional KB, this information store is well-aligned with common user information needs.
arXiv Detail & Related papers (2022-07-01T19:09:08Z)
- Combining Rules and Embeddings via Neuro-Symbolic AI for Knowledge Base Completion [59.093293389123424]
We show that not all rule-based Knowledge Base Completion (KBC) models are the same.
We propose two distinct approaches: one learns a mixture of relations, the other a mixture of paths (a sketch of the latter follows below).
When implemented on top of neuro-symbolic AI, which learns rules by extending Boolean logic to real-valued logic, the latter model achieves superior KBC accuracy, outperforming state-of-the-art rule-based KBC by 2-10% in mean reciprocal rank.
arXiv Detail & Related papers (2021-09-16T17:54:56Z)
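The "mixture of paths" model mentioned above can be pictured as a weighted set of alternative relation paths that each provide evidence for a query relation. Below is a minimal sketch of scoring a candidate triple this way; the toy KB, the paths, and the weights are illustrative assumptions, whereas in the paper the rules and weights are learned with real-valued logic.

```python
# Toy KB and a candidate triple (head, query_relation, tail) to score.
KB = {
    ("alice", "works_at", "acme"),
    ("acme", "located_in", "berlin"),
    ("alice", "lives_in", "berlin"),
}

def path_holds(head, path, tail):
    """Check whether following the relation path from head can reach tail in the KB."""
    frontier = {head}
    for rel in path:
        frontier = {t for (h, r, t) in KB if h in frontier and r == rel}
    return tail in frontier

# A "mixture of paths" for the query relation lives_in: weighted alternative rules.
# The weights here are made up; a real model learns them.
paths_for_lives_in = [
    (("works_at", "located_in"), 0.8),  # lives_in(x, z) <- works_at(x, y) AND located_in(y, z)
    (("lives_in",), 0.2),               # the trivial one-hop path (the relation itself)
]

def score(head, tail):
    # Mixture score: weighted sum over the paths that hold for this entity pair.
    return sum(w for path, w in paths_for_lives_in if path_holds(head, path, tail))

print(score("alice", "berlin"))  # 1.0 -> both paths support lives_in(alice, berlin)
```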
- Case-based Reasoning for Natural Language Queries over Knowledge Bases [41.54465521439727]
We propose a neuro-symbolic CBR approach for question answering over large knowledge bases.
CBR-KBQA consists of two modules: a non-parametric memory that stores cases and a parametric model.
We show that CBR-KBQA can effectively derive novel combinations of relations not present in the case memory (a sketch of the case-retrieval step follows below).
arXiv Detail & Related papers (2021-04-18T07:50:31Z)
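The entry above describes a non-parametric case memory paired with a parametric model. Below is a minimal sketch of the non-parametric side only: retrieving the most similar stored cases for a new question by embedding similarity. The bag-of-words encoder, the stored cases, and cosine-similarity retrieval are stand-in assumptions; a real system would use a trained neural encoder and then recombine the retrieved logical forms with the parametric model, as the summary describes.

```python
import numpy as np

VOCAB = sorted({"who", "directed", "inception", "interstellar", "where", "was", "nolan", "born"})

def embed(text):
    """Stand-in bag-of-words encoder (a real system would use a trained neural encoder)."""
    tokens = set(text.lower().split())
    v = np.array([float(w in tokens) for w in VOCAB])
    return v / (np.linalg.norm(v) + 1e-9)

# Non-parametric case memory: solved questions paired with their logical forms.
case_memory = [
    ("who directed Inception", "SELECT ?d WHERE { :Inception :directed_by ?d }"),
    ("where was Nolan born", "SELECT ?p WHERE { :Nolan :born_in ?p }"),
]
case_vecs = np.stack([embed(q) for q, _ in case_memory])

def retrieve(question, k=1):
    # Cosine similarity between the new question and every stored case; return the top-k cases.
    sims = case_vecs @ embed(question)
    top = np.argsort(-sims)[:k]
    return [case_memory[i] for i in top]

# The retrieved cases would then be adapted to the new question by the parametric model.
print(retrieve("who directed Interstellar"))
# [('who directed Inception', 'SELECT ?d WHERE { :Inception :directed_by ?d }')]
```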
- Reasoning Over Virtual Knowledge Bases With Open Predicate Relations [85.19305347984515]
We present the Open Predicate Query Language (OPQL), a method for constructing a virtual Knowledge Base (VKB) trained entirely from text.
We demonstrate that OPQL outperforms prior VKB methods on two different KB reasoning tasks.
arXiv Detail & Related papers (2021-02-14T01:29:54Z)
- Beyond I.I.D.: Three Levels of Generalization for Question Answering on Knowledge Bases [63.43418760818188]
We release GrailQA, a new large-scale, high-quality dataset with 64,331 questions.
We propose a novel BERT-based KBQA model.
The combination of our dataset and model enables us to thoroughly examine and demonstrate, for the first time, the key role of pre-trained contextual embeddings like BERT in the generalization of KBQA.
arXiv Detail & Related papers (2020-11-16T06:36:26Z)
- Faithful Embeddings for Knowledge Base Queries [97.5904298152163]
The deductive closure of an ideal knowledge base (KB) contains exactly the logical queries that the KB can answer.
In practice, KBs are both incomplete and over-specified, failing to answer some queries that have real-world answers.
We show that inserting this new query embedding (QE) module into a neural question-answering system leads to substantial improvements over the state of the art (a sketch of the underlying relation-following operation follows after this entry).
arXiv Detail & Related papers (2020-04-07T19:25:16Z)
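Query embedding (QE) modules of the kind referenced above typically represent a set of entities as a vector and answer compositional queries with a differentiable relation-following operation. Below is a minimal sketch of that operation as multiplication against per-relation adjacency matrices; the toy KB and the 0.5 threshold are assumptions for illustration, not the specific QE module proposed in the paper.

```python
import numpy as np

# Toy KB: an entity index and one adjacency matrix per relation (1.0 where a triple exists).
entities = ["inception", "nolan", "london"]
idx = {e: i for i, e in enumerate(entities)}

def adjacency(pairs):
    M = np.zeros((len(entities), len(entities)))
    for head, tail in pairs:
        M[idx[head], idx[tail]] = 1.0
    return M

REL = {
    "directed_by": adjacency([("inception", "nolan")]),
    "born_in": adjacency([("nolan", "london")]),
}

def follow(entity_set_vec, relation):
    """Differentiable relation-following: a soft entity set times the relation's adjacency matrix."""
    return entity_set_vec @ REL[relation]

# Query "where was the director of Inception born?" as two relation-following steps.
x = np.zeros(len(entities))
x[idx["inception"]] = 1.0  # start from a one-hot "set" containing only Inception
answer_vec = follow(follow(x, "directed_by"), "born_in")
print([e for e, v in zip(entities, answer_vec) if v > 0.5])  # ['london']
```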