A Two-Stage Approach towards Generalization in Knowledge Base Question
Answering
- URL: http://arxiv.org/abs/2111.05825v1
- Date: Wed, 10 Nov 2021 17:45:33 GMT
- Title: A Two-Stage Approach towards Generalization in Knowledge Base Question
Answering
- Authors: Srinivas Ravishankar, June Thai, Ibrahim Abdelaziz, Nandana
Mihindukulasooriya, Tahira Naseem, Pavan Kapanipathi, Gaetano Rossiello,
Achille Fokoue
- Abstract summary: We introduce a KBQA framework based on a 2-stage architecture that explicitly separates semantic parsing from the knowledge base interaction.
Our approach achieves comparable or state-of-the-art performance for LC-QuAD (DBpedia), WebQSP (Freebase), SimpleQuestions (Wikidata) and MetaQA (Wikimovies-KG).
- Score: 4.802205743713997
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most existing approaches for Knowledge Base Question Answering (KBQA) focus
on a specific underlying knowledge base either because of inherent assumptions
in the approach, or because evaluating it on a different knowledge base
requires non-trivial changes. However, many popular knowledge bases share
similarities in their underlying schemas that can be leveraged to facilitate
generalization across knowledge bases. To achieve this generalization, we
introduce a KBQA framework based on a 2-stage architecture that explicitly
separates semantic parsing from the knowledge base interaction, facilitating
transfer learning across datasets and knowledge graphs. We show that
pretraining on datasets with a different underlying knowledge base can
nevertheless provide significant performance gains and reduce sample
complexity. Our approach achieves comparable or state-of-the-art performance
for LC-QuAD (DBpedia), WebQSP (Freebase), SimpleQuestions (Wikidata) and MetaQA
(Wikimovies-KG).
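The separation described above can be illustrated with a minimal sketch: stage 1 maps a question to a KB-agnostic logical form, and stage 2 grounds that form in a specific knowledge base's schema. All names here (`semantic_parse`, `kb_interact`, the toy relation maps and triples) are illustrative assumptions, not the paper's actual components; in practice stage 1 would be a trained semantic parser and stage 2 would query a real KB.

```python
# Hypothetical 2-stage KBQA sketch: semantic parsing is kept separate from
# KB interaction, so the same parser output can be grounded in different KBs.
from dataclasses import dataclass

@dataclass(frozen=True)
class LogicalForm:
    """KB-agnostic intermediate representation: (subject, relation)."""
    subject: str
    relation: str

def semantic_parse(question: str) -> LogicalForm:
    """Stage 1: question -> KB-agnostic logical form.
    (A trained seq2seq parser in practice; a toy pattern rule here.)"""
    words = question.rstrip("?").split()
    # e.g. "Who directed Inception?" -> LogicalForm("Inception", "director")
    return LogicalForm(subject=words[-1], relation="director")

def kb_interact(lf: LogicalForm, triples: set, relation_map: dict) -> list:
    """Stage 2: ground the logical form in one KB's schema and retrieve
    answers. Swapping `relation_map` and `triples` swaps the KB without
    touching stage 1 -- the source of the claimed transferability."""
    kb_relation = relation_map[lf.relation]  # schema-specific predicate
    return [o for (s, p, o) in triples if s == lf.subject and p == kb_relation]

# Two toy KBs with different schemas but overlapping content.
wikidata_like = {("Inception", "P57", "Christopher Nolan")}
dbpedia_like = {("Inception", "dbo:director", "Christopher Nolan")}

lf = semantic_parse("Who directed Inception?")
print(kb_interact(lf, wikidata_like, {"director": "P57"}))
print(kb_interact(lf, dbpedia_like, {"director": "dbo:director"}))
```

The design point is that only the small `relation_map` and the triple store are KB-specific, which is one way a parser pretrained on one KB's dataset could be reused on another.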
Related papers
- A Learn-Then-Reason Model Towards Generalization in Knowledge Base Question Answering [17.281005999581865]
Large-scale knowledge bases (KBs) like Freebase and Wikidata house millions of structured facts.
Knowledge Base Question Answering (KBQA) provides a user-friendly way to access these valuable KBs by asking natural language questions.
This paper develops KBLLaMA, which follows a learn-then-reason framework to inject new KB knowledge into a large language model for flexible end-to-end KBQA.
arXiv Detail & Related papers (2024-06-20T22:22:41Z)
- Towards Better Generalization in Open-Domain Question Answering by Mitigating Context Memorization [67.92796510359595]
Open-domain Question Answering (OpenQA) aims at answering factual questions with an external large-scale knowledge corpus.
It is still unclear how well an OpenQA model can transfer to completely new knowledge domains.
We introduce Corpus-Invariant Tuning (CIT), a simple but effective training strategy, to mitigate the knowledge over-memorization.
arXiv Detail & Related papers (2024-04-02T05:44:50Z)
- ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models [19.85526116658481]
We introduce ChatKBQA, a novel and simple generate-then-retrieve KBQA framework.
Experimental results show that ChatKBQA achieves new state-of-the-art performance on standard KBQA datasets.
This work can also be regarded as a new paradigm for combining LLMs with knowledge graphs for interpretable and knowledge-required question answering.
arXiv Detail & Related papers (2023-10-13T09:45:14Z)
- Bridged-GNN: Knowledge Bridge Learning for Effective Knowledge Transfer [65.42096702428347]
Graph Neural Networks (GNNs) aggregate information from neighboring nodes.
Knowledge Bridge Learning (KBL) learns a knowledge-enhanced posterior distribution for target domains.
Bridged-GNN includes an Adaptive Knowledge Retrieval module to build Bridged-Graph and a Graph Knowledge Transfer module.
arXiv Detail & Related papers (2023-08-18T12:14:51Z)
- Few-shot In-context Learning for Knowledge Base Question Answering [31.73274700847965]
We propose KB-BINDER, which for the first time enables few-shot in-context learning over KBQA tasks.
The experimental results on four public heterogeneous KBQA datasets show that KB-BINDER can achieve a strong performance with only a few in-context demonstrations.
arXiv Detail & Related papers (2023-05-02T19:31:55Z)
- SYGMA: System for Generalizable Modular Question Answering Over Knowledge Bases [57.89642289610301]
We present SYGMA, a modular approach facilitating generalizability across multiple knowledge bases and multiple reasoning types.
We demonstrate the effectiveness of our system by evaluating on datasets belonging to two distinct knowledge bases, DBpedia and Wikidata.
arXiv Detail & Related papers (2021-09-28T01:57:56Z)
- Incremental Knowledge Based Question Answering [52.041815783025186]
We propose a new incremental KBQA learning framework that can progressively expand learning capacity as humans do.
Specifically, it comprises a margin-distilled loss and a collaborative selection method, to overcome the catastrophic forgetting problem.
The comprehensive experiments demonstrate its effectiveness and efficiency when working with the evolving knowledge base.
arXiv Detail & Related papers (2021-01-18T09:03:38Z)
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge, the setting when the knowledge required to answer a question is not given/annotated, neither at training nor test time.
We tap into two types of knowledge representation and reasoning. First, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models.
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
- A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges [71.4531144086568]
Question Answering (QA) over Knowledge Base (KB) aims to automatically answer natural language questions.
Researchers have shifted their attention from simple questions to complex questions, which require reasoning over multiple KB triples and constraint inference.
arXiv Detail & Related papers (2020-07-26T07:13:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.