Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering
- URL: http://arxiv.org/abs/2306.04136v1
- Date: Wed, 7 Jun 2023 04:15:21 GMT
- Title: Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering
- Authors: Jinheon Baek, Alham Fikri Aji, Amir Saffari
- Abstract summary: Large Language Models (LLMs) are capable of performing zero-shot closed-book question answering tasks.
We propose to augment the knowledge directly in the input of LLMs.
Our framework, Knowledge-Augmented language model PromptING (KAPING), requires no model training and is thus completely zero-shot.
- Score: 7.888547093390469
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) are capable of performing zero-shot closed-book question answering tasks based on the internal knowledge stored in their parameters during pre-training. However, such internalized knowledge might be insufficient or incorrect, which could lead LLMs to generate factually wrong answers. Furthermore, fine-tuning LLMs to update their knowledge is expensive. To this end, we propose to augment the knowledge directly in the input of LLMs. Specifically, we first retrieve the facts relevant to the input question from the knowledge graph, based on semantic similarities between the question and its associated facts. After that, we prepend the retrieved facts to the input question in the form of a prompt, which is then forwarded to the LLM to generate the answer. Our framework, Knowledge-Augmented language model PromptING (KAPING), requires no model training and is thus completely zero-shot. We validate our KAPING framework on knowledge graph question answering, which aims to answer a user's question based on facts over a knowledge graph; on this task, ours outperforms relevant zero-shot baselines by up to 48% on average, across multiple LLMs of various sizes.
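The retrieve-and-prepend pipeline described in the abstract is simple enough to sketch directly. Below is a minimal illustration in Python: it assumes an off-the-shelf sentence encoder (all-mpnet-base-v2 via sentence-transformers is one plausible choice), and the helper names and prompt wording are illustrative rather than the authors' released code.

```python
# Minimal sketch of the KAPING idea: score knowledge-graph triples against
# the question by embedding similarity, keep the top-k, and prepend their
# verbalizations to the question as a zero-shot prompt. No training anywhere.
import numpy as np
from sentence_transformers import SentenceTransformer  # one possible encoder

encoder = SentenceTransformer("all-mpnet-base-v2")

def verbalize(triple):
    # ("Barack Obama", "place of birth", "Honolulu")
    #   -> "Barack Obama place of birth Honolulu"
    return " ".join(triple)

def top_k_facts(question, triples, k=10):
    texts = [verbalize(t) for t in triples]
    q_emb = encoder.encode([question], normalize_embeddings=True)
    f_emb = encoder.encode(texts, normalize_embeddings=True)
    sims = (f_emb @ q_emb.T).squeeze(-1)  # cosine similarity (unit-norm vectors)
    top = np.argsort(-sims)[:k]
    return [texts[i] for i in top]

def build_prompt(question, facts):
    # Prepend the retrieved facts to the question; this wording paraphrases
    # the paper's template rather than copying it exactly.
    context = "\n".join(facts)
    return (f"Below are facts that may be relevant to the question.\n"
            f"{context}\n\nQuestion: {question}\nAnswer:")
```

Feeding build_prompt(q, top_k_facts(q, triples)) to any instruction-following LLM completes the pipeline; since only the input changes, the method stays fully zero-shot across model sizes.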
Related papers
- Prompting Large Language Models with Knowledge Graphs for Question Answering Involving Long-tail Facts [50.06633829833144]
Large Language Models (LLMs) are effective in performing various NLP tasks, but struggle to handle tasks that require extensive, real-world knowledge.
We propose a benchmark that requires knowledge of long-tail facts for answering the involved questions.
Our experiments show that LLMs alone struggle with answering these questions, especially when the long-tail level is high or rich knowledge is required.
arXiv Detail & Related papers (2024-05-10T15:10:20Z)
- On Exploring the Reasoning Capability of Large Language Models with Knowledge Graphs [11.878708460150726]
Two research questions are formulated to investigate the accuracy of LLMs in recalling information from pre-training knowledge graphs.
To address these questions, we employ LLMs to perform four distinct knowledge graph reasoning tasks.
Our experimental results demonstrate that LLMs can successfully tackle both simple and complex knowledge graph reasoning tasks from their own memory.
arXiv Detail & Related papers (2023-12-01T05:08:47Z)
- Learn to Refuse: Making Large Language Models More Controllable and Reliable through Knowledge Scope Limitation and Refusal Mechanism [0.0]
Large language models (LLMs) have demonstrated impressive language understanding and generation capabilities.
These models are not flawless and often produce responses that contain errors or misinformation.
We propose a refusal mechanism that instructs LLMs to refuse to answer challenging questions in order to avoid errors.
arXiv Detail & Related papers (2023-11-02T07:20:49Z)
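As a rough illustration of the refusal idea in the entry above (not the paper's full design, which also limits the model to a curated knowledge scope), a prompt-level sketch might look like the following; the template wording is an assumption.

```python
# Hypothetical prompt template: instruct the model to refuse rather than
# guess when the supplied knowledge is insufficient. Not the paper's exact
# mechanism, just the prompt-level gist of "learn to refuse".
REFUSE_TEMPLATE = (
    "Answer the question using ONLY the knowledge listed below. "
    "If that knowledge is not sufficient, reply exactly: I don't know.\n\n"
    "Knowledge:\n{knowledge}\n\nQuestion: {question}\nAnswer:"
)

def build_refusal_prompt(question, knowledge_items):
    knowledge = "\n".join(f"- {item}" for item in knowledge_items)
    return REFUSE_TEMPLATE.format(knowledge=knowledge, question=question)
```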
- Self-Knowledge Guided Retrieval Augmentation for Large Language Models [59.771098292611846]
Large language models (LLMs) have shown superior performance without task-specific fine-tuning.
Retrieval-based methods can offer non-parametric world knowledge and improve the performance on tasks such as question answering.
Self-Knowledge guided Retrieval augmentation (SKR) is a simple yet effective method that lets LLMs refer to questions they have previously encountered.
arXiv Detail & Related papers (2023-10-08T04:22:33Z)
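The self-knowledge signal in SKR can be approximated with a nearest-neighbour check over previously seen questions; the threshold and data layout below are assumptions, not the paper's exact procedure.

```python
# Sketch: skip retrieval when the new question closely resembles questions
# the LLM previously answered correctly from its own memory.
import numpy as np

def needs_retrieval(q_emb, known_embs, threshold=0.8):
    """q_emb: (d,) unit-norm embedding of the new question.
    known_embs: (n, d) embeddings of questions answered well without retrieval."""
    if len(known_embs) == 0:
        return True                               # no self-knowledge record yet
    best = float((known_embs @ q_emb).max())      # cosine similarity via dot product
    return best < threshold                       # unfamiliar question -> retrieve
```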
We study the factuality of large language models (LLMs) in the context of answering questions that test current world knowledge.
We introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range of question and answer types.
We benchmark a diverse array of both closed and open-source LLMs under a two-mode evaluation procedure that allows us to measure both correctness and hallucination.
Motivated by these results, we present FreshPrompt, a simple few-shot prompting method that substantially boosts the performance of an LLM on FreshQA.
arXiv Detail & Related papers (2023-10-05T00:04:12Z)
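In the spirit of FreshPrompt as summarized above, retrieved search results can be ordered so the freshest evidence sits closest to the question; the field names and ordering here are assumptions, with the result dicts standing in for any search-engine API output.

```python
# Sketch: assemble a FreshPrompt-style input from dated search snippets plus
# a few demonstrations. Assumes each result dict has 'date' (ISO string),
# 'source', and 'snippet' keys -- illustrative, not the paper's exact format.
def build_fresh_prompt(question, search_results, demonstrations):
    # Sort oldest-first so the most recent evidence lands nearest the question.
    evidence = sorted(search_results, key=lambda r: r["date"])
    lines = [f"[{r['date']}] {r['source']}: {r['snippet']}" for r in evidence]
    demos = "\n\n".join(demonstrations)  # few-shot examples of using evidence
    return demos + "\n\n" + "\n".join(lines) + f"\n\nQuestion: {question}\nAnswer:"
```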
- Knowledge Solver: Teaching LLMs to Search for Domain Knowledge from Knowledge Graphs [19.0797968186656]
Large language models (LLMs) are versatile and can solve different tasks due to their emergent ability and generalizability.
In some previous works, additional modules like graph neural networks (GNNs) are trained on retrieved knowledge from external knowledge bases.
arXiv Detail & Related papers (2023-09-06T15:55:01Z)
- Statistical Knowledge Assessment for Large Language Models [79.07989821512128]
Given varying prompts regarding a factoid question, can a large language model (LLM) reliably generate factually correct answers?
We propose KaRR, a statistical approach to assess factual knowledge for LLMs.
Our results reveal that the knowledge in LLMs with the same backbone architecture adheres to the scaling law, while tuning on instruction-following data sometimes compromises the model's capability to generate factually correct text reliably.
arXiv Detail & Related papers (2023-05-17T18:54:37Z)
- Self-Prompting Large Language Models for Zero-Shot Open-Domain QA [67.08732962244301]
Open-Domain Question Answering (ODQA) aims to answer questions without explicitly providing background documents.
This task becomes notably challenging in a zero-shot setting where no data is available to train tailored retrieval-reader models.
We propose a Self-Prompting framework to explicitly utilize the massive knowledge encoded in the parameters of Large Language Models.
arXiv Detail & Related papers (2022-12-16T18:23:43Z)
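The Self-Prompting entry above suggests the LLM can supply its own demonstrations; a compact sketch follows, with llm standing in for any text-completion call and the prompts being assumed wording rather than the paper's.

```python
# Sketch: the model first generates pseudo (passage, QA) data from its
# parametric knowledge, then reuses selected examples as in-context
# demonstrations for the actual question. Prompts are illustrative.
def generate_pseudo_example(llm, topic):
    passage = llm(f"Write a short factual passage about {topic}.")
    qa = llm(f"Passage: {passage}\n"
             "Write one question answered by this passage and its short "
             "answer, formatted as 'Q: ... A: ...'.")
    return f"{passage}\n{qa}"

def self_prompt(llm, question, pseudo_examples):
    demos = "\n\n".join(pseudo_examples)  # e.g. chosen by similarity to the question
    return llm(f"{demos}\n\nQ: {question}\nA:")
```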
- Empowering Language Models with Knowledge Graph Reasoning for Question Answering [117.79170629640525]
We propose the knOwledge REasOning empowered Language Model (OREO-LM).
OREO-LM consists of a novel Knowledge Interaction Layer that can be flexibly plugged into existing Transformer-based LMs.
We show a significant performance gain, achieving state-of-the-art results in the Closed-Book setting.
arXiv Detail & Related papers (2022-11-15T18:26:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.