FC-KBQA: A Fine-to-Coarse Composition Framework for Knowledge Base Question Answering
- URL: http://arxiv.org/abs/2306.14722v1
- Date: Mon, 26 Jun 2023 14:19:46 GMT
- Title: FC-KBQA: A Fine-to-Coarse Composition Framework for Knowledge Base Question Answering
- Authors: Lingxi Zhang, Jing Zhang, Yanling Wang, Shulin Cao, Xinmei Huang,
Cuiping Li, Hong Chen, Juanzi Li
- Abstract summary: We propose a Fine-to-Coarse Composition framework for KBQA (FC-KBQA) to ensure the generalization ability and executability of the logical expression.
FC-KBQA derives new state-of-the-art performance on GrailQA and WebQSP, and runs 4 times faster than the baseline.
- Score: 24.394908238940904
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The generalization problem in KBQA has drawn considerable attention. Existing
research suffers either from generalization issues caused by entanglement in
the coarse-grained modeling of the logical expression, or from inexecutability
issues due to the fine-grained modeling of disconnected classes and relations
in real KBs. We propose a Fine-to-Coarse Composition framework for KBQA
(FC-KBQA) to ensure both the generalization ability and the executability of the
logical expression. The main idea of FC-KBQA is to extract relevant
fine-grained knowledge components from the KB and reformulate them into
middle-grained knowledge pairs for generating the final logical expressions.
FC-KBQA achieves new state-of-the-art performance on GrailQA and WebQSP, and
runs 4 times faster than the baseline.
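A minimal sketch of the fine-to-coarse pipeline described above. All interfaces here (the retriever, generator, and KB objects and their methods) are hypothetical placeholders for illustration, not the paper's actual code:

```python
# Minimal sketch of the fine-to-coarse composition idea from the abstract.
# The retriever, generator, and kb interfaces below are hypothetical
# placeholders, not the authors' actual implementation.

def fc_kbqa(question, kb, retriever, generator):
    # Fine-grained: retrieve candidate classes and relations relevant
    # to the question, independently of each other.
    classes = retriever.top_classes(question, k=10)
    relations = retriever.top_relations(question, k=10)

    # Middle-grained: keep only (class, relation) pairs that are actually
    # connected in the KB, so the final logical expression is executable.
    pairs = [(c, r) for c in classes for r in relations
             if kb.are_connected(c, r)]

    # Coarse-grained: compose the question with the executable knowledge
    # pairs and generate the final logical expression.
    context = " ; ".join(f"{c} -- {r}" for c, r in pairs)
    logical_form = generator.generate(f"{question} | {context}")
    return kb.execute(logical_form)
```

The middle-grained filtering step is what reconciles the two goals: fine-grained candidates keep the components generalizable, while the connectivity check keeps the composed expression executable.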
Related papers
- A Learn-Then-Reason Model Towards Generalization in Knowledge Base Question Answering [17.281005999581865]
Large-scale knowledge bases (KBs) like Freebase and Wikidata house millions of structured facts.
Knowledge Base Question Answering (KBQA) provides a user-friendly way to access these valuable KBs via asking natural language questions.
This paper develops KBLLaMA, which follows a learn-then-reason framework to inject new KB knowledge into a large language model for flexible end-to-end KBQA.
arXiv Detail & Related papers (2024-06-20T22:22:41Z)
- A Knowledge-Injected Curriculum Pretraining Framework for Question Answering [70.13026036388794]
We propose a general Knowledge-Injected Curriculum Pretraining framework (KICP) to achieve comprehensive KG learning and exploitation for knowledge-based question answering tasks.
The KI module first injects knowledge into the LM by generating a KG-centered pretraining corpus, and generalizes the process into three key steps.
The KA module learns knowledge from the generated corpus with the LM equipped with an adapter, while keeping the LM's original natural language understanding ability.
The CR module follows human reasoning patterns to construct three corpora of increasing reasoning difficulty, and further trains the LM from easy to hard in a curriculum manner (see the sketch after this entry).
arXiv Detail & Related papers (2024-03-11T03:42:03Z)
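A hypothetical sketch of the KI → KA → CR flow summarized in this entry; verbalize and build_reasoning_corpus are assumed helpers, and the lm/adapter objects are placeholders rather than the paper's API:

```python
# Hypothetical sketch of the KICP flow (KI -> KA -> CR) described above.
# verbalize() and build_reasoning_corpus() are assumed helpers, and the
# lm/adapter objects are placeholders, not the paper's actual code.

def kicp_pretrain(lm, adapter, kg):
    # KI: inject knowledge by generating a KG-centered pretraining corpus,
    # e.g. by verbalizing each triple into a natural-language sentence.
    corpus = [verbalize(head, rel, tail) for head, rel, tail in kg]

    # KA: learn from the generated corpus through an adapter while the
    # base LM stays frozen, preserving its original NLU ability.
    lm.freeze()
    adapter.train(lm, corpus)

    # CR: construct three corpora of increasing reasoning difficulty and
    # train from easy to hard, curriculum style.
    for difficulty in ("easy", "medium", "hard"):
        adapter.train(lm, build_reasoning_corpus(kg, difficulty))
```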
- ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models [19.85526116658481]
We introduce ChatKBQA, a novel and simple generate-then-retrieve KBQA framework.
Experimental results show that ChatKBQA achieves new state-of-the-art performance on standard KBQA datasets.
This work can also be regarded as a new paradigm for combining LLMs with knowledge graphs for interpretable and knowledge-required question answering.
arXiv Detail & Related papers (2023-10-13T09:45:14Z)
- Two is Better Than One: Answering Complex Questions by Multiple Knowledge Sources with Generalized Links [31.941956320431217]
We formulate a novel Multi-KB-QA task that leverages both full and partial links among multiple KBs to derive correct answers.
We propose a method for Multi-KB-QA that encodes all link relations in the KB embedding to score and rank candidate answers (a toy sketch follows this entry).
arXiv Detail & Related papers (2023-09-11T02:31:41Z)
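A toy sketch, under assumed data structures, of scoring and ranking candidate answers with embeddings that fold in cross-KB link relations:

```python
# Toy sketch of link-aware answer scoring in the spirit of the summary
# above. The embedding tables and data layout are assumptions, not the
# paper's actual method.
import numpy as np

def rank_answers(question_vec, candidates, entity_emb, link_emb):
    """candidates maps each candidate answer to the cross-KB links
    (full or partial) it participates in."""
    scored = []
    for cand, links in candidates.items():
        # Fold the candidate's link relations into its representation.
        vec = entity_emb[cand].copy()
        for link in links:
            vec += link_emb[link]
        # Score by similarity to the question embedding.
        scored.append((cand, float(question_vec @ vec)))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```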
- DecAF: Joint Decoding of Answers and Logical Forms for Question Answering over Knowledge Bases [81.19499764899359]
We propose DecAF, a novel framework that jointly generates both logical forms and direct answers (sketched after this entry).
DecAF achieves new state-of-the-art accuracy on the WebQSP, FreebaseQA, and GrailQA benchmarks.
arXiv Detail & Related papers (2022-09-30T19:51:52Z)
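A simplified, hypothetical sketch of the joint-decoding idea: the model emits both a logical form and a direct answer, and execution results are preferred when available. None of these interfaces come from the paper:

```python
# Simplified, hypothetical sketch of DecAF-style joint decoding: the model
# emits both a logical form and a direct answer, and the two are combined.
# All interfaces here are illustrative assumptions, not the paper's code.

def decaf_answer(question, model, kb):
    # Jointly decode two output sequences for the same question.
    logical_form = model.generate(question, target="logical_form")
    direct_answer = model.generate(question, target="answer")

    # Prefer answers obtained by executing the logical form; fall back to
    # the directly decoded answer when execution fails or returns nothing.
    try:
        executed = kb.execute(logical_form)
    except Exception:
        executed = None
    return executed if executed else direct_answer
```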
- QA Is the New KR: Question-Answer Pairs as Knowledge Bases [105.692569000534]
We argue that the proposed type of KB has many of the key advantages of a traditional symbolic KB.
Unlike a traditional KB, this information store is well-aligned with common user information needs.
arXiv Detail & Related papers (2022-07-01T19:09:08Z)
- RnG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering [57.94658176442027]
We present RnG-KBQA, a Rank-and-Generate approach for KBQA.
We achieve new state-of-the-art results on GrailQA and WebQSP datasets.
arXiv Detail & Related papers (2021-09-17T17:58:28Z)
- A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges [71.4531144086568]
Question Answering (QA) over Knowledge Base (KB) aims to automatically answer natural language questions.
Researchers have shifted their attention from simple questions to complex questions, which require more KB triples and constraint inference.
arXiv Detail & Related papers (2020-07-26T07:13:32Z)
- Faithful Embeddings for Knowledge Base Queries [97.5904298152163]
The deductive closure of an ideal knowledge base (KB) contains exactly the logical queries that the KB can answer.
In practice, KBs are both incomplete and over-specified, failing to answer some queries that have real-world answers.
We show that inserting this new QE module into a neural question-answering system leads to substantial improvements over the state-of-the-art.
arXiv Detail & Related papers (2020-04-07T19:25:16Z)