KB-Plugin: A Plug-and-play Framework for Large Language Models to Induce
Programs over Low-resourced Knowledge Bases
- URL: http://arxiv.org/abs/2402.01619v1
- Date: Fri, 2 Feb 2024 18:32:24 GMT
- Title: KB-Plugin: A Plug-and-play Framework for Large Language Models to Induce
Programs over Low-resourced Knowledge Bases
- Authors: Jiajie Zhang, Shulin Cao, Linmei Hu, Ling Feng, Lei Hou, Juanzi Li
- Abstract summary: Program induction (PI) has become a promising paradigm for using knowledge bases (KBs) to help large language models (LLMs) answer complex questions.
We propose KB-Plugin, a plug-and-play framework that enables LLMs to induce programs over any low-resourced KB.
- Score: 49.010104412978436
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Program induction (PI) has become a promising paradigm for using knowledge
bases (KBs) to help large language models (LLMs) answer complex
knowledge-intensive questions. Nonetheless, PI typically relies on a large
number of parallel question-program pairs to make the LLM aware of the schema
of the given KB, and is thus challenging for many low-resourced KBs that lack
annotated data. To this end, we propose KB-Plugin, a plug-and-play framework
that enables LLMs to induce programs over any low-resourced KB. Firstly,
KB-Plugin adopts self-supervised learning to encode the detailed schema
information of a given KB into a pluggable module, namely schema plugin.
Secondly, KB-Plugin utilizes abundant annotated data from a rich-resourced KB
to train another pluggable module, namely PI plugin, which can help the LLM
extract question-relevant schema information from the schema plugin of any KB
and utilize this information to induce programs over this KB. Experiments on
five heterogeneous KBQA datasets show that KB-Plugin achieves better or
comparable performance with 25$\times$ smaller backbone LLM compared to SoTA PI
methods for low-resourced KBs, and even approaches the performance of
supervised methods. Our code and data are available at
https://github.com/THU-KEG/KB-Plugin.
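The plug-and-play idea above can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the shapes, names, and additive low-rank adapter are assumptions standing in for the pluggable schema and PI modules attached to a frozen backbone LLM.

```python
import numpy as np

rng = np.random.default_rng(0)
D, R = 16, 2  # hidden size and adapter rank (illustrative values)

# Frozen backbone weight, shared across all KBs.
W_frozen = rng.normal(size=(D, D))

def make_plugin(rank=R):
    """A pluggable low-rank module: only A and B are trained per KB."""
    return {"A": rng.normal(size=(D, rank)) * 0.01,
            "B": rng.normal(size=(rank, D)) * 0.01}

def forward(x, plugin=None):
    """Backbone forward pass; a plugin adds a low-rank update A @ B."""
    W = W_frozen if plugin is None else W_frozen + plugin["A"] @ plugin["B"]
    return x @ W

# Adapting to a new KB means swapping plugins; the backbone never changes.
schema_plugin_kb1 = make_plugin()
schema_plugin_kb2 = make_plugin()
x = rng.normal(size=(1, D))
y1 = forward(x, schema_plugin_kb1)
y2 = forward(x, schema_plugin_kb2)
print(y1.shape)  # same backbone, KB-specific behavior per plugin
```

The point of the design is that per-KB adaptation costs only the small adapter parameters, which is why a much smaller backbone can stay competitive.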
Related papers
- KBLaM: Knowledge Base augmented Language Model [8.247901935078357]
We propose Knowledge Base augmented Language Model (KBLaM) for augmenting Large Language Models with external knowledge.
KBLaM works with a knowledge base constructed from a corpus of documents, transforming each piece of knowledge in the KB into continuous key-value vector pairs.
Experiments demonstrate KBLaM's effectiveness in various tasks, including question-answering and open-ended reasoning.
arXiv Detail & Related papers (2024-10-14T12:45:10Z)
- Few-shot Transfer Learning for Knowledge Base Question Answering: Fusing Supervised Models with In-Context Learning [20.80841972133938]
Existing Knowledge Base Question Answering (KBQA) architectures are hungry for annotated data.
We introduce the problem of few-shot transfer learning for KBQA, where the target domain offers only a few labeled examples.
We propose a novel KBQA architecture called FuSIC-KBQA that performs KB-retrieval using multiple source-trained retrievers.
arXiv Detail & Related papers (2023-11-15T11:56:56Z)
- KnowledGPT: Enhancing Large Language Models with Retrieval and Storage Access on Knowledge Bases [55.942342665806656]
KnowledGPT is a comprehensive framework to bridge large language models with various knowledge bases.
The retrieval process employs the program of thought prompting, which generates search language for KBs in code format.
KnowledGPT offers the capability to store knowledge in a personalized KB, catering to individual user demands.
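The retrieval style sketched above can be illustrated with a toy example. The search-language syntax, KB, and primitive here are invented for illustration and differ from KnowledGPT's actual interface: the LLM emits a small program in code format, which is then executed against the KB.

```python
# A toy KB and a minimal interpreter for code-format search programs.
KB = {
    ("Ada Lovelace", "field"): "mathematics",
    ("Ada Lovelace", "born"): "1815",
}

def get_relation(entity, relation):
    """Executable primitive the generated program may call (hypothetical)."""
    return KB.get((entity, relation), "unknown")

# Imagine the LLM produced this string via program-of-thought prompting.
generated_program = 'get_relation("Ada Lovelace", "field")'

# Execute the generated search program in a restricted namespace.
result = eval(generated_program, {"__builtins__": {}},
              {"get_relation": get_relation})
print(result)  # mathematics
```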
arXiv Detail & Related papers (2023-08-17T13:07:00Z)
- Make a Choice! Knowledge Base Question Answering with In-Context Learning [1.7827767384590838]
Question answering over knowledge bases (KBQA) aims to answer factoid questions over a given knowledge base (KB).
Because KBs are large, annotated data cannot cover all fact schemas in the KB.
We present McL-KBQA, a framework that incorporates the few-shot ability of LLM into the KBQA method via ICL-based multiple choice.
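The ICL-based multiple-choice step can be pictured as follows. The prompt format is invented for illustration, not McL-KBQA's actual template: candidate answers from the KB are framed as lettered options, and few-shot demonstrations show the LLM how to choose among them.

```python
def build_mc_prompt(question, candidates, demos):
    """Format few-shot demos plus a multiple-choice question for an LLM."""
    lines = []
    for q, opts, ans in demos:  # in-context demonstrations
        lines.append(f"Q: {q}")
        lines += [f"{chr(65 + i)}. {o}" for i, o in enumerate(opts)]
        lines.append(f"Answer: {ans}")
    lines.append(f"Q: {question}")
    lines += [f"{chr(65 + i)}. {c}" for i, c in enumerate(candidates)]
    lines.append("Answer:")
    return "\n".join(lines)

demos = [("Capital of France?", ["Berlin", "Paris"], "B")]
prompt = build_mc_prompt("Capital of Germany?", ["Berlin", "Paris"], demos)
print(prompt)
```

Framing KBQA as multiple choice constrains the LLM to answers actually present in the KB, which is what lets few-shot ICL work without schema-wide annotation.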
arXiv Detail & Related papers (2023-05-23T11:56:03Z)
- Cross-Lingual Question Answering over Knowledge Base as Reading Comprehension [61.079852289005025]
Cross-lingual question answering over knowledge base (xKBQA) aims to answer questions in languages different from that of the provided knowledge base.
One of the major challenges facing xKBQA is the high cost of data annotation.
We propose a novel approach for xKBQA in a reading comprehension paradigm.
arXiv Detail & Related papers (2023-02-26T05:52:52Z)
- QA Is the New KR: Question-Answer Pairs as Knowledge Bases [105.692569000534]
We argue that the proposed type of KB has many of the key advantages of a traditional symbolic KB.
Unlike a traditional KB, this information store is well-aligned with common user information needs.
arXiv Detail & Related papers (2022-07-01T19:09:08Z)
- Reasoning Over Virtual Knowledge Bases With Open Predicate Relations [85.19305347984515]
We present the Open Predicate Query Language (OPQL)
OPQL is a method for constructing a virtual Knowledge Base (VKB) trained entirely from text.
We demonstrate that OPQL outperforms prior VKB methods on two different KB reasoning tasks.
arXiv Detail & Related papers (2021-02-14T01:29:54Z)
- Learning Knowledge Bases with Parameters for Task-Oriented Dialogue Systems [79.02430277138801]
The knowledge base (KB) plays an essential role in fulfilling user requests.
End-to-end systems use the KB directly as input, but they cannot scale when the KB is larger than a few hundred entries.
We propose a method to embed the KB, of any size, directly into the model parameters.
arXiv Detail & Related papers (2020-09-28T22:13:54Z)
- Scalable Neural Methods for Reasoning With a Symbolic Knowledge Base [34.837700505583]
We describe a novel way of representing a symbolic knowledge base (KB) called a sparse-matrix reified KB.
This representation enables neural modules that are fully differentiable, faithful to the original semantics of the KB, expressive enough to model multi-hop inferences, and scalable enough to use with realistically large KBs.
arXiv Detail & Related papers (2020-02-14T16:32:19Z)
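The core operation such a representation supports, following a relation from a set of entities, reduces to a matrix-vector product. The sketch below uses dense NumPy arrays and a made-up two-relation-hop example rather than the paper's sparse reified encoding:

```python
import numpy as np

entities = ["alice", "bob", "carol"]
idx = {e: i for i, e in enumerate(entities)}
N = len(entities)

# One adjacency matrix per relation: M[i, j] = 1 iff (i, r, j) holds.
parent_of = np.zeros((N, N))
parent_of[idx["alice"], idx["bob"]] = 1   # alice parent_of bob
parent_of[idx["bob"], idx["carol"]] = 1   # bob parent_of carol

def follow(entity_vec, relation_mat):
    """Move a (weighted) set of entities along a relation in one step."""
    return entity_vec @ relation_mat

# Multi-hop inference is repeated multiplication: grandchild via two hops.
start = np.zeros(N)
start[idx["alice"]] = 1
grandchild = follow(follow(start, parent_of), parent_of)
print(entities[int(grandchild.argmax())])  # carol
```

Because each hop is a (sparse) matrix product, the whole chain is differentiable, which is what makes the representation usable inside neural modules.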
This list is automatically generated from the titles and abstracts of the papers in this site.