RnG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
- URL: http://arxiv.org/abs/2109.08678v1
- Date: Fri, 17 Sep 2021 17:58:28 GMT
- Title: RnG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
- Authors: Xi Ye, Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou, Caiming Xiong
- Abstract summary: We present RnG-KBQA, a Rank-and-Generate approach for KBQA.
We achieve new state-of-the-art results on GrailQA and WebQSP datasets.
- Score: 57.94658176442027
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing KBQA approaches, despite achieving strong performance on i.i.d. test
data, often struggle in generalizing to questions involving unseen KB schema
items. Prior ranking-based approaches have shown some success in
generalization, but suffer from the coverage issue. We present RnG-KBQA, a
Rank-and-Generate approach for KBQA, which remedies the coverage issue with a
generation model while preserving a strong generalization capability. Our
approach first uses a contrastive ranker to rank a set of candidate logical
forms obtained by searching over the knowledge graph. It then introduces a
tailored generation model conditioned on the question and the top-ranked
candidates to compose the final logical form. We achieve new state-of-the-art
results on GrailQA and WebQSP datasets. In particular, our method surpasses the
prior state-of-the-art by a large margin on the GrailQA leaderboard. In
addition, RnG-KBQA outperforms all prior approaches on the popular WebQSP
benchmark, even those that use oracle entity linking. The
experimental results demonstrate the effectiveness of the interplay between
ranking and generation, which leads to the superior performance of our proposed
approach across all settings with especially strong improvements in zero-shot
generalization.
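To make the rank-and-generate interplay concrete, the following Python sketch wires a cross-encoder-style ranker to a sequence-to-sequence generator. It is a minimal illustration assuming the Hugging Face Transformers library; the model checkpoints, the prompt format, and the candidate-enumeration step are placeholders rather than the authors' released implementation.

    # Sketch of a rank-then-generate KBQA pipeline in the spirit of RnG-KBQA.
    # Checkpoints and prompt format are illustrative placeholders.
    import torch
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              AutoModelForSeq2SeqLM)

    rank_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    ranker = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=1)            # stand-in for the contrastive ranker
    gen_tok = AutoTokenizer.from_pretrained("t5-base")
    generator = AutoModelForSeq2SeqLM.from_pretrained("t5-base")  # stand-in for the generator

    def rank_candidates(question, candidates, top_k=5):
        """Score every candidate logical form against the question; keep the top-k."""
        enc = rank_tok([question] * len(candidates), candidates,
                       padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            scores = ranker(**enc).logits.squeeze(-1)
        order = torch.argsort(scores, descending=True)[:top_k].tolist()
        return [candidates[i] for i in order]

    def generate_logical_form(question, top_candidates):
        """Condition the generator on the question plus the top-ranked candidates."""
        prompt = question + " | " + " | ".join(top_candidates)
        enc = gen_tok(prompt, return_tensors="pt", truncation=True)
        with torch.no_grad():
            out = generator.generate(**enc, max_length=128, num_beams=5)
        return gen_tok.decode(out[0], skip_special_tokens=True)

    # Usage, assuming `candidates` were enumerated by searching the knowledge
    # graph around the linked entities (not implemented here):
    #   final_lf = generate_logical_form(question, rank_candidates(question, candidates))

In the paper, the ranker is trained contrastively over candidate logical forms enumerated from the knowledge graph and the generation model is a seq2seq model; this sketch mirrors only the inference-time data flow.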
Related papers
- ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models [19.85526116658481]
We introduce ChatKBQA, a novel and simple generate-then-retrieve KBQA framework.
Experimental results show that ChatKBQA achieves new state-of-the-art performance on standard KBQA datasets.
This work can also be regarded as a new paradigm for combining LLMs with knowledge graphs for interpretable and knowledge-required question answering.
arXiv Detail & Related papers (2023-10-13T09:45:14Z)
- Prompting Large Language Models with Chain-of-Thought for Few-Shot Knowledge Base Question Generation [19.327008532572645]
Question Generation over Knowledge Bases (KBQG) aims to convert a logical form into a natural language question.
We propose Chain-of-Thought prompting, which is an in-context learning strategy for reasoning.
We conduct extensive experiments over three public KBQG datasets.
arXiv Detail & Related papers (2023-10-12T15:08:14Z)
- FC-KBQA: A Fine-to-Coarse Composition Framework for Knowledge Base Question Answering [24.394908238940904]
We propose a Fine-to-Coarse Composition framework for KBQA (FC-KBQA) to ensure the generalization ability and executability of the logical expression.
FC-KBQA derives new state-of-the-art performance on GrailQA and WebQSP, and runs 4 times faster than the baseline.
arXiv Detail & Related papers (2023-06-26T14:19:46Z)
- KEPR: Knowledge Enhancement and Plausibility Ranking for Generative Commonsense Question Answering [11.537283115693432]
We propose a Knowledge Enhancement and Plausibility Ranking approach grounded on the Generate-Then-Rank pipeline architecture.
Specifically, we expand questions with Wiktionary commonsense knowledge of their keywords and reformulate them with normalized patterns.
We develop an ELECTRA-based answer ranking model, trained with logistic regression to approximate different levels of plausibility.
arXiv Detail & Related papers (2023-05-15T04:58:37Z)
- Knowledge Transfer from Answer Ranking to Answer Generation [97.38378660163414]
We propose to train a GenQA model by transferring knowledge from a trained answer sentence selection (AS2) model.
We also propose to use the AS2 model prediction scores for loss weighting and score-conditioned input/output shaping.
arXiv Detail & Related papers (2022-10-23T21:51:27Z)
- SYGMA: System for Generalizable Modular Question Answering Over Knowledge Bases [57.89642289610301]
We present SYGMA, a modular approach facilitating generalizability across multiple knowledge bases and multiple reasoning types.
We demonstrate the effectiveness of our system by evaluating on datasets belonging to two distinct knowledge bases, DBpedia and Wikidata.
arXiv Detail & Related papers (2021-09-28T01:57:56Z)
- Beyond I.I.D.: Three Levels of Generalization for Question Answering on Knowledge Bases [63.43418760818188]
We release a new large-scale, high-quality dataset with 64,331 questions, GrailQA.
We propose a novel BERT-based KBQA model.
The combination of our dataset and model enables us to thoroughly examine and demonstrate, for the first time, the key role of pre-trained contextual embeddings like BERT in the generalization of KBQA.
arXiv Detail & Related papers (2020-11-16T06:36:26Z)
- Generating Diverse and Consistent QA pairs from Contexts with Information-Maximizing Hierarchical Conditional VAEs [62.71505254770827]
We propose a hierarchical conditional variational autoencoder (HCVAE) for generating QA pairs given unstructured texts as contexts.
Our model obtains impressive performance gains over all baselines on both tasks, using only a fraction of data for training.
arXiv Detail & Related papers (2020-05-28T08:26:06Z)
- Harvesting and Refining Question-Answer Pairs for Unsupervised QA [95.9105154311491]
We introduce two approaches to improve unsupervised Question Answering (QA).
First, we harvest lexically and syntactically divergent questions from Wikipedia to automatically construct a corpus of question-answer pairs (named RefQA).
Second, we take advantage of the QA model to extract more appropriate answers, which iteratively refines data over RefQA.
arXiv Detail & Related papers (2020-05-06T15:56:06Z)
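As a rough illustration of the iterative refinement loop described in the RefQA entry above, the following hypothetical Python sketch re-answers each harvested question with the current QA model and keeps the new answer only when the model is confident. The training and prediction callables are placeholders, not the paper's actual code.

    # Hypothetical refinement loop over harvested (context, question, answer) triples.
    # train_fn(pairs) -> model and predict_fn(model, context, question) -> (answer, confidence)
    # are placeholders for a real extractive QA trainer and predictor.
    def refine_qa_pairs(pairs, train_fn, predict_fn, rounds=3, threshold=0.8):
        for _ in range(rounds):
            model = train_fn(pairs)
            refined = []
            for context, question, answer in pairs:
                new_answer, confidence = predict_fn(model, context, question)
                # Replace the harvested answer only when the model is sufficiently confident.
                refined.append((context, question,
                                new_answer if confidence >= threshold else answer))
            pairs = refined
        return pairs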
This list is automatically generated from the titles and abstracts of the papers in this site.