Make a Choice! Knowledge Base Question Answering with In-Context
Learning
- URL: http://arxiv.org/abs/2305.13972v1
- Date: Tue, 23 May 2023 11:56:03 GMT
- Title: Make a Choice! Knowledge Base Question Answering with In-Context
Learning
- Authors: Chuanyuan Tan, Yuehe Chen, Wenbiao Shao, Wenliang Chen
- Abstract summary: Question answering over knowledge bases (KBQA) aims to answer factoid questions with a given knowledge base (KB).
Due to the large scale of the KB, annotated data cannot cover all fact schemas in the KB.
We present McL-KBQA, a framework that incorporates the few-shot ability of LLMs into the KBQA method via ICL-based multiple choice.
- Score: 1.7827767384590838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Question answering over knowledge bases (KBQA) aims to answer factoid
questions with a given knowledge base (KB). Because the KB is large, annotated
data cannot cover all fact schemas in the KB, which challenges the
generalization ability of methods that require a sufficient amount of annotated
data. Recently, LLMs have shown strong few-shot performance on many NLP tasks.
We expect LLMs can help existing methods improve their generalization ability,
especially in low-resource situations. In this paper, we present McL-KBQA, a
framework that incorporates the few-shot ability of LLMs into the KBQA method
via ICL-based multiple choice, thereby improving the effectiveness of the QA
task. Experimental results on two KBQA datasets demonstrate the competitive
performance of McL-KBQA, with strong improvements in generalization. We hope to
explore a new approach to QA tasks that combines KBQA with LLMs: generating
answers normatively and correctly with strong generalization.
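To make the ICL-based multiple-choice idea concrete, the following is a minimal illustrative sketch, not the authors' released implementation: candidate answers retrieved from the KB are listed as lettered options, a few demonstrations are prepended, and the LLM is asked to return a single letter. The demonstration format and the `query_llm` callable are assumptions made purely for illustration.

```python
# Minimal sketch of ICL-based multiple-choice answering for KBQA.
# Assumptions: candidate answers are already retrieved from the KB by an
# upstream component, and `query_llm` is a placeholder for any LLM
# completion API; neither is part of the paper's released code.

from typing import Callable, Dict, List


def build_prompt(demos: List[Dict], question: str, candidates: List[str]) -> str:
    """Format few-shot demonstrations plus the target question as a
    multiple-choice prompt with candidates labeled A, B, C, ..."""
    def options(cands: List[str]) -> str:
        letters = [chr(ord("A") + i) for i in range(len(cands))]
        return "\n".join(f"{l}. {c}" for l, c in zip(letters, cands))

    parts = []
    for demo in demos:
        parts.append(
            f"Question: {demo['question']}\n{options(demo['candidates'])}\n"
            f"Answer: {demo['answer_letter']}"
        )
    parts.append(f"Question: {question}\n{options(candidates)}\nAnswer:")
    return "\n\n".join(parts)


def choose_answer(demos: List[Dict], question: str, candidates: List[str],
                  query_llm: Callable[[str], str]) -> str:
    """Ask the LLM to pick one lettered option and map it back to a candidate."""
    reply = query_llm(build_prompt(demos, question, candidates)).strip()
    if not reply:
        return candidates[0]                     # fall back to the top candidate
    index = ord(reply[0].upper()) - ord("A")     # letter -> candidate index
    return candidates[index] if 0 <= index < len(candidates) else candidates[0]
```

One plausible motivation for the multiple-choice formulation is that restricting the LLM to a fixed set of KB-derived options keeps the generated answers normative: the model selects among facts present in the KB instead of free-generating text.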
Related papers
- A Knowledge-Injected Curriculum Pretraining Framework for Question Answering [70.13026036388794]
We propose a general Knowledge-Injected Curriculum Pretraining framework (KICP) to achieve comprehensive KG learning and exploitation for Knowledge-based question answering tasks.
The KI module first injects knowledge into the LM by generating KG-centered pretraining corpus, and generalizes the process into three key steps.
The KA module learns knowledge from the generated corpus with LM equipped with an adapter as well as keeps its original natural language understanding ability.
The CR module follows human reasoning patterns to construct three corpora with increasing difficulties of reasoning, and further trains the LM from easy to hard in a curriculum manner.
arXiv Detail & Related papers (2024-03-11T03:42:03Z)
- Interactive-KBQA: Multi-Turn Interactions for Knowledge Base Question Answering with Large Language Models [7.399563588835834]
Interactive-KBQA is a framework designed to generate logical forms through direct interaction with knowledge bases (KBs).
Our method achieves competitive results on the WebQuestionsSP, ComplexWebQuestions, KQA Pro, and MetaQA datasets.
arXiv Detail & Related papers (2024-02-23T06:32:18Z)
- Few-shot Transfer Learning for Knowledge Base Question Answering: Fusing Supervised Models with In-Context Learning [20.80841972133938]
Existing Knowledge Base Question Answering (KBQA) architectures are hungry for annotated data.
We introduce the problem of few-shot transfer learning for KBQA, where the target domain offers only a few labeled examples.
We propose a novel KBQA architecture called FuSIC-KBQA that performs KB-retrieval using multiple source-trained retrievers.
arXiv Detail & Related papers (2023-11-15T11:56:56Z)
- An In-Context Schema Understanding Method for Knowledge Base Question Answering [70.87993081445127]
Large Language Models (LLMs) have shown strong capabilities in language understanding and can be used to solve the KBQA task.
Existing methods bypass the challenge of schema understanding by initially employing LLMs to generate drafts of logic forms without schema-specific details.
We propose a simple In-Context Schema Understanding (ICSU) method that enables LLMs to directly understand schemas by leveraging in-context learning.
arXiv Detail & Related papers (2023-10-22T04:19:17Z)
- ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models [19.85526116658481]
We introduce ChatKBQA, a novel and simple generate-then-retrieve KBQA framework.
Experimental results show that ChatKBQA achieves new state-of-the-art performance on standard KBQA datasets.
This work can also be regarded as a new paradigm for combining LLMs with knowledge graphs for interpretable and knowledge-required question answering.
arXiv Detail & Related papers (2023-10-13T09:45:14Z)
- FlexKBQA: A Flexible LLM-Powered Framework for Few-Shot Knowledge Base Question Answering [16.88132219032486]
We introduce FlexKBQA to mitigate the burden associated with manual annotation.
We leverage Large Language Models (LLMs) as program translators for addressing the challenges inherent in the few-shot KBQA task.
Specifically, FlexKBQA leverages automated algorithms to sample diverse programs, such as SPARQL queries, from the knowledge base.
We observe that under few-shot and even the more challenging zero-shot scenarios, FlexKBQA achieves impressive results with a few annotations (see the sketch after this entry).
arXiv Detail & Related papers (2023-08-23T11:00:36Z)
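Below is a rough sketch, under stated assumptions, of the "sample programs from the KB, then let an LLM translate them" idea described in the FlexKBQA summary above; it is not the paper's implementation. The one-hop SPARQL template, the `relations` list, and the `query_llm` callable are hypothetical stand-ins.

```python
# Rough sketch of the "sample programs, then translate" idea in the FlexKBQA
# summary above; NOT the paper's implementation. The SPARQL template, the
# `relations` list, and `query_llm` are illustrative assumptions.

import random
from typing import Callable, List, Tuple


def sample_sparql_programs(relations: List[str], n: int) -> List[str]:
    """Sample simple one-hop SPARQL queries by filling a template with
    relations drawn from the KB schema (a stand-in for the automated
    program-sampling step)."""
    template = "SELECT ?x WHERE {{ ?e <{rel}> ?x . }}"
    return [template.format(rel=random.choice(relations)) for _ in range(n)]


def translate_to_questions(programs: List[str],
                           query_llm: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Use an LLM as a 'program translator': turn each sampled SPARQL program
    into a natural-language question, yielding synthetic (question, program)
    pairs that can stand in for manual annotations."""
    pairs = []
    for program in programs:
        prompt = ("Rewrite the following SPARQL query as a natural-language "
                  f"question.\nSPARQL: {program}\nQuestion:")
        pairs.append((query_llm(prompt).strip(), program))
    return pairs
```

Synthetic (question, program) pairs produced this way can play the role of the manual annotations that few-shot KBQA otherwise lacks, which is how the framework mitigates the annotation burden.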
- Few-shot In-context Learning for Knowledge Base Question Answering [31.73274700847965]
We propose KB-BINDER, which for the first time enables few-shot in-context learning over KBQA tasks.
The experimental results on four public heterogeneous KBQA datasets show that KB-BINDER can achieve a strong performance with only a few in-context demonstrations.
arXiv Detail & Related papers (2023-05-02T19:31:55Z)
- Cross-Lingual Question Answering over Knowledge Base as Reading Comprehension [61.079852289005025]
Cross-lingual question answering over knowledge base (xKBQA) aims to answer questions in languages different from that of the provided knowledge base.
One of the major challenges facing xKBQA is the high cost of data annotation.
We propose a novel approach for xKBQA in a reading comprehension paradigm.
arXiv Detail & Related papers (2023-02-26T05:52:52Z)
- SYGMA: System for Generalizable Modular Question Answering Over Knowledge Bases [57.89642289610301]
We present SYGMA, a modular approach facilitating generalizability across multiple knowledge bases and multiple reasoning types.
We demonstrate the effectiveness of our system by evaluating on datasets belonging to two distinct knowledge bases, DBpedia and Wikidata.
arXiv Detail & Related papers (2021-09-28T01:57:56Z)
- Reasoning Over Virtual Knowledge Bases With Open Predicate Relations [85.19305347984515]
We present the Open Predicate Query Language (OPQL).
OPQL is a method for constructing a virtual Knowledge Base (VKB) trained entirely from text.
We demonstrate that OPQL outperforms prior VKB methods on two different KB reasoning tasks.
arXiv Detail & Related papers (2021-02-14T01:29:54Z)
- Beyond I.I.D.: Three Levels of Generalization for Question Answering on Knowledge Bases [63.43418760818188]
We release a new large-scale, high-quality dataset with 64,331 questions, GrailQA.
We propose a novel BERT-based KBQA model.
The combination of our dataset and model enables us to thoroughly examine and demonstrate, for the first time, the key role of pre-trained contextual embeddings like BERT in the generalization of KBQA.
arXiv Detail & Related papers (2020-11-16T06:36:26Z)