Beyond Seen Data: Improving KBQA Generalization Through Schema-Guided Logical Form Generation
- URL: http://arxiv.org/abs/2502.12737v2
- Date: Wed, 19 Feb 2025 05:32:40 GMT
- Title: Beyond Seen Data: Improving KBQA Generalization Through Schema-Guided Logical Form Generation
- Authors: Shengxiang Gao, Jey Han Lau, Jianzhong Qi
- Abstract summary: Knowledge base question answering (KBQA) aims to answer user questions in natural language using rich human knowledge stored in large KBs.
Current KBQA methods struggle with unseen knowledge base elements at test time.
We introduce SG-KBQA: a novel model that injects schema contexts into entity retrieval and logical form generation.
- Score: 36.01290160488469
- Abstract: Knowledge base question answering (KBQA) aims to answer user questions in natural language using rich human knowledge stored in large KBs. As current KBQA methods struggle with unseen knowledge base elements at test time, we introduce SG-KBQA: a novel model that injects schema contexts into entity retrieval and logical form generation to tackle this issue. It uses the richer semantics and awareness of the knowledge base structure provided by schema contexts to enhance generalizability. We show that SG-KBQA achieves strong generalizability, outperforming state-of-the-art models on two commonly used benchmark datasets across a variety of test settings. Our source code is available at https://github.com/gaosx2000/SG_KBQA.
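The abstract describes injecting schema contexts into both entity retrieval and logical form generation. As a rough illustration of the idea (not the authors' implementation; the toy schema, entity IDs, and prompt layout are invented here), one can verbalize an entity's type and relations and expose them to the generator:

```python
# A minimal sketch of schema-context injection, assuming a toy schema.
TOY_SCHEMA = {
    "m.telescope": {"type": "astronomy.telescope",
                    "relations": ["astronomy.telescope.diameter",
                                  "astronomy.telescope.location"]},
}

def schema_context(entity_id: str) -> str:
    """Verbalize an entity's type and outgoing relations from the schema."""
    info = TOY_SCHEMA[entity_id]
    return f"type: {info['type']}; relations: {', '.join(info['relations'])}"

def build_prompt(question: str, entities: list[str]) -> str:
    """Compose a generation prompt that exposes KB structure to the model,
    so relations unseen in training can still be copied from the context."""
    lines = [f"Question: {question}"]
    for e in entities:
        lines.append(f"Entity {e} | {schema_context(e)}")
    lines.append("Logical form:")
    return "\n".join(lines)

print(build_prompt("What is the diameter of the telescope?", ["m.telescope"]))
```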
Related papers
- A Learn-Then-Reason Model Towards Generalization in Knowledge Base Question Answering [17.281005999581865]
Large-scale knowledge bases (KBs) like Freebase and Wikidata house millions of structured facts.
Knowledge Base Question Answering (KBQA) provides a user-friendly way to access these valuable KBs via asking natural language questions.
This paper develops KBLLaMA, which follows a learn-then-reason framework to inject new KB knowledge into a large language model for flexible end-to-end KBQA.
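A learn-then-reason pipeline presumably begins by turning new KB facts into supervised data the LLM can absorb. The sketch below is one guess at that verbalization step; the triples and templates are made up:

```python
# Verbalizing KB triples into fine-tuning examples (illustrative only).
triples = [("Paris", "capital_of", "France"),
           ("Berlin", "capital_of", "Germany")]

def verbalize(subj: str, rel: str, obj: str) -> dict:
    """Turn a KB triple into a supervised example an LLM can be tuned on."""
    question = f"What is {subj} the {rel.replace('_', ' ')}?"
    return {"input": question, "target": f"({rel} {subj} {obj})"}

training_set = [verbalize(*t) for t in triples]
for ex in training_set:
    print(ex)
```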
arXiv Detail & Related papers (2024-06-20T22:22:41Z)
- ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models [19.85526116658481]
We introduce ChatKBQA, a novel and simple generate-then-retrieve KBQA framework.
Experimental results show that ChatKBQA achieves new state-of-the-art performance on standard KBQA datasets.
This work can also be regarded as a new paradigm for combining LLMs with knowledge graphs for interpretable and knowledge-required question answering.
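Generate-then-retrieve inverts the usual order: the logical form is drafted first and its slots are grounded against the KB afterwards. A minimal sketch, with a stubbed generator output and toy KB labels (none of this is ChatKBQA's actual code):

```python
import difflib

# Toy KB contents; real systems would retrieve over millions of items.
KB_ENTITIES = ["m.0f8l9c|France", "m.05qtj|Paris"]
KB_RELATIONS = ["location.country.capital", "people.person.nationality"]

def ground(surface: str, candidates: list[str]) -> str:
    """Retrieve the KB item whose label is closest to a generated surface form."""
    return max(candidates,
               key=lambda c: difflib.SequenceMatcher(None, surface.lower(),
                                                     c.lower()).ratio())

skeleton = {"entity": "france", "relation": "capital"}  # pretend LLM output
grounded = {"entity": ground(skeleton["entity"], KB_ENTITIES),
            "relation": ground(skeleton["relation"], KB_RELATIONS)}
print(f"(JOIN {grounded['relation']} {grounded['entity']})")
```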
arXiv Detail & Related papers (2023-10-13T09:45:14Z)
- Few-shot In-context Learning for Knowledge Base Question Answering [31.73274700847965]
We propose KB-BINDER, which for the first time enables few-shot in-context learning over KBQA tasks.
The experimental results on four public heterogeneous KBQA datasets show that KB-BINDER can achieve a strong performance with only a few in-context demonstrations.
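Few-shot in-context learning here amounts to prompting with a handful of question-to-logical-form demonstrations. A bare-bones sketch with invented demonstrations:

```python
# Few-shot prompt construction in the spirit of KB-BINDER (not its actual prompts).
DEMOS = [
    ("Who directed Inception?", "(JOIN film.film.directed_by Inception)"),
    ("Where was Ada Lovelace born?", "(JOIN people.person.place_of_birth Ada_Lovelace)"),
]

def few_shot_prompt(question: str, demos=DEMOS) -> str:
    """k demonstrations of question -> draft logical form, then the test question."""
    parts = [f"Q: {q}\nLF: {lf}" for q, lf in demos]
    parts.append(f"Q: {question}\nLF:")
    return "\n\n".join(parts)

print(few_shot_prompt("Who directed Oppenheimer?"))
# The LLM's draft LF would then be bound to real KB entities and relations.
```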
arXiv Detail & Related papers (2023-05-02T19:31:55Z)
- Uni-Parser: Unified Semantic Parser for Question Answering on Knowledge Base and Database [86.03294330305097]
We propose Uni-Parser, a unified semantic parser for question answering (QA) over both knowledge bases (KB) and databases (DB).
We introduce the primitive (relation and entity in KB, table name, column name and cell value in DB) as an essential element in our framework.
We leverage the generator to predict final logical forms by altering and composing top-ranked primitives with different operations.
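One way to picture the primitive-ranking step is a scorer over candidate primitives followed by composition of the top-ranked ones. The lexical-overlap scorer below is a stand-in for the paper's learned ranker; the primitives are toy examples:

```python
# Ranking KB and DB primitives against a question (illustrative stand-in).
PRIMITIVES = ["film.film.directed_by", "film.director.film",
              "table:movies", "column:director", "cell:Christopher Nolan"]

def score(question: str, primitive: str) -> float:
    """Crude lexical overlap between question tokens and primitive tokens."""
    q = set(question.lower().split())
    p = set(primitive.lower().replace(".", " ").replace(":", " ")
            .replace("_", " ").split())
    return len(q & p) / max(len(p), 1)

def top_k(question: str, k: int = 2) -> list[str]:
    return sorted(PRIMITIVES, key=lambda p: score(question, p), reverse=True)[:k]

# A generator would then alter/compose these into a final logical form.
print(top_k("which film was directed by christopher nolan"))
```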
arXiv Detail & Related papers (2022-11-09T19:33:27Z)
- A Two-Stage Approach towards Generalization in Knowledge Base Question Answering [4.802205743713997]
We introduce a KBQA framework based on a two-stage architecture that explicitly separates semantic parsing from the knowledge base interaction.
Our approach achieves comparable or state-of-the-art performance on LC-QuAD (DBpedia), WebQSP (Freebase), SimpleQuestions (Wikidata) and MetaQA (Wikimovies-KG).
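The key claim is that stage 1 never touches KB identifiers, so the parser transfers across KBs. A compact sketch under that reading, with both stages stubbed by toy rules:

```python
# Two-stage split: KB-agnostic parse, then KB-specific grounding (toy rules).
def semantic_parse(question: str) -> dict:
    """Stage 1: KB-independent; no entity or relation IDs appear here."""
    return {"intent": "lookup", "entity_span": "France", "relation_span": "capital"}

KB_LEXICON = {"France": "wd:Q142", "capital": "wdt:P36"}  # Wikidata-style toy map

def ground(parse: dict) -> str:
    """Stage 2: map spans to KB identifiers; only this stage touches the KB."""
    e = KB_LEXICON[parse["entity_span"]]
    r = KB_LEXICON[parse["relation_span"]]
    return f"SELECT ?x WHERE {{ {e} {r} ?x }}"

print(ground(semantic_parse("What is the capital of France?")))
```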
arXiv Detail & Related papers (2021-11-10T17:45:33Z)
- SYGMA: System for Generalizable Modular Question Answering Over Knowledge Bases [57.89642289610301]
We present SYGMA, a modular approach facilitating generalizability across multiple knowledge bases and multiple reasoning types.
We demonstrate the effectiveness of our system by evaluating on datasets belonging to two distinct knowledge bases, DBpedia and Wikidata.
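Modularity across KBs can be pictured as a shared intermediate representation plus per-KB grounding modules chosen at runtime. A toy sketch (module internals and prefixes are illustrative, not SYGMA's):

```python
# Per-KB grounding modules behind a common interface (illustrative only).
def ground_dbpedia(rep: dict) -> str:
    return f"SELECT ?x WHERE {{ dbr:{rep['entity']} dbo:{rep['relation']} ?x }}"

def ground_wikidata(rep: dict) -> str:
    return f"SELECT ?x WHERE {{ wd:{rep['entity']} wdt:{rep['relation']} ?x }}"

GROUNDERS = {"dbpedia": ground_dbpedia, "wikidata": ground_wikidata}

rep = {"entity": "France", "relation": "capital"}  # KB-agnostic representation
print(GROUNDERS["dbpedia"](rep))
```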
arXiv Detail & Related papers (2021-09-28T01:57:56Z)
- RnG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering [57.94658176442027]
We present RnG-KBQA, a Rank-and-Generate approach for KBQA.
We achieve new state-of-the-art results on GrailQA and WebQSP datasets.
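Rank-and-generate first scores enumerated candidate logical forms against the question, then lets a generator refine the top-ranked ones. A sketch with a lexical stand-in for the learned ranker:

```python
# Ranking enumerated candidate logical forms (toy scorer, toy candidates).
CANDIDATES = ["(JOIN location.country.capital France)",
              "(JOIN location.country.currency France)"]

def rank(question: str, candidates: list[str]) -> list[str]:
    q = set(question.lower().split())
    def overlap(c: str) -> int:
        tokens = (c.lower().replace(".", " ").replace("(", " ")
                  .replace(")", " ").split())
        return len(q & set(tokens))
    return sorted(candidates, key=overlap, reverse=True)

top = rank("what is the capital of france", CANDIDATES)
# A seq2seq generator would take (question, top candidates) and emit the
# final logical form, possibly fixing pieces the ranker missed.
print(top[0])
```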
arXiv Detail & Related papers (2021-09-17T17:58:28Z)
- Reasoning Over Virtual Knowledge Bases With Open Predicate Relations [85.19305347984515]
We present the Open Predicate Query Language (OPQL), a method for constructing a virtual Knowledge Base (VKB) trained entirely from text.
We demonstrate that OPQL outperforms prior VKB methods on two different KB reasoning tasks.
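A virtual KB built purely from text can be pictured as a key-value memory: relation phrases mined from sentences become keys, tail entities become values. The sketch below uses a hash-based stand-in for a learned encoder, so it only gestures at the idea:

```python
import numpy as np

def embed(text: str, dim: int = 16) -> np.ndarray:
    """Deterministic stand-in for a learned text encoder
    (stable within a single process; real systems train this)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# "Training from text": each sentence contributes a key (relation phrase)
# and a value (the tail entity), with no KB schema involved.
memory = [(embed("capital of"), "Paris"), (embed("currency of"), "Euro")]

def query(relation_phrase: str) -> str:
    q = embed(relation_phrase)
    return max(memory, key=lambda kv: float(q @ kv[0]))[1]

print(query("capital of"))  # an identical phrase retrieves its own value
```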
arXiv Detail & Related papers (2021-02-14T01:29:54Z)
- Beyond I.I.D.: Three Levels of Generalization for Question Answering on Knowledge Bases [63.43418760818188]
We release a new large-scale, high-quality dataset with 64,331 questions, GrailQA.
We propose a novel BERT-based KBQA model.
The combination of our dataset and model enables us to thoroughly examine and demonstrate, for the first time, the key role of pre-trained contextual embeddings like BERT in the generalization of KBQA.
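As I understand the setup, the three levels are i.i.d. (the full schema composition was seen in training), compositional (all schema items seen, but in a new combination), and zero-shot (at least one unseen schema item). A toy classifier along those lines, with invented training data:

```python
# Classifying a test question's generalization level by schema overlap.
TRAIN_ITEMS = {"location.country.capital", "people.person.nationality"}
TRAIN_COMPOSITIONS = {frozenset({"location.country.capital"})}

def level(schema_items: set[str]) -> str:
    if not schema_items <= TRAIN_ITEMS:
        return "zero-shot"      # some schema item never seen in training
    if frozenset(schema_items) in TRAIN_COMPOSITIONS:
        return "i.i.d."         # the whole composition was seen
    return "compositional"      # seen items, novel combination

print(level({"location.country.capital"}))                               # i.i.d.
print(level({"location.country.capital", "people.person.nationality"}))  # compositional
print(level({"sports.team.founded"}))                                    # zero-shot
```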
arXiv Detail & Related papers (2020-11-16T06:36:26Z)
- A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges [71.4531144086568]
Question Answering (QA) over Knowledge Base (KB) aims to automatically answer natural language questions.
Researchers have shifted their attention from simple questions to complex questions, which require more KB triples and constraint inference.
arXiv Detail & Related papers (2020-07-26T07:13:32Z)