Self-Prompting Large Language Models for Zero-Shot Open-Domain QA
- URL: http://arxiv.org/abs/2212.08635v3
- Date: Thu, 28 Mar 2024 06:06:59 GMT
- Title: Self-Prompting Large Language Models for Zero-Shot Open-Domain QA
- Authors: Junlong Li, Jinyuan Wang, Zhuosheng Zhang, Hai Zhao
- Abstract summary: Open-Domain Question Answering (ODQA) aims to answer questions without explicitly providing background documents.
This task becomes notably challenging in a zero-shot setting where no data is available to train tailored retrieval-reader models.
We propose a Self-Prompting framework to explicitly utilize the massive knowledge encoded in the parameters of Large Language Models.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Open-Domain Question Answering (ODQA) aims to answer questions without explicitly providing specific background documents. This task becomes notably challenging in a zero-shot setting where no data is available to train tailored retrieval-reader models. While recent Large Language Models (LLMs) like GPT-3 have demonstrated their effectiveness in zero-shot ODQA using direct prompting methods, these methods still fall short of fully harnessing the potential of LLMs when implicitly invoked. In this paper, we propose a Self-Prompting framework to explicitly utilize the massive knowledge encoded in the parameters of LLMs and their strong instruction understanding abilities. Concretely, we prompt LLMs step by step to generate multiple pseudo QA pairs with background passages and explanations entirely from scratch. These generated elements are then utilized for in-context learning. Experimental results show that our method significantly surpasses previous state-of-the-art zero-shot methods on three widely-used ODQA datasets and even achieves comparable performance with various customized fine-tuned models on full training data. Our code is available at https://github.com/lockon-n/self-prompting.
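As a concrete illustration of the pipeline the abstract describes, below is a minimal Python sketch: the LLM first writes pseudo passages, QA pairs, and explanations entirely from scratch, then answers the real question with those elements in context. The `llm` callable, prompt wordings, and `topics` list are illustrative assumptions; the paper's actual prompts and its clustering-based demonstration selection are in the linked repository.

```python
from typing import Callable, List

def self_prompting_answer(question: str, llm: Callable[[str], str],
                          topics: List[str], k: int = 2) -> str:
    """Generate pseudo (passage, QA pair, explanation) demonstrations from
    scratch, then answer the real question with them in context."""
    demos = []
    for topic in topics[:k]:
        # Step 1: a short background passage written from the LLM's own knowledge.
        passage = llm(f"Write a short Wikipedia-style passage about {topic}.")
        # Step 2: a question answerable from that passage, plus its answer.
        qa = llm(f"Passage: {passage}\nWrite one question this passage answers "
                 f"and its answer, formatted as 'Q: ... A: ...'.")
        # Step 3: a one-sentence explanation tying the answer to the passage.
        expl = llm(f"Passage: {passage}\n{qa}\nExplain the answer in one sentence.")
        demos.append(f"Passage: {passage}\n{qa}\nExplanation: {expl}")
    # Step 4: in-context learning with the self-generated demonstrations.
    return llm("\n\n".join(demos) + f"\n\nQ: {question}\nA:").strip()
```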
Related papers
- Extract, Define, Canonicalize: An LLM-based Framework for Knowledge Graph Construction
We propose a three-phase framework named Extract-Define-Canonicalize (EDC).
EDC is flexible: it can be applied both when a pre-defined target schema is available and when it is not.
We demonstrate that EDC extracts high-quality triplets without any parameter tuning and with significantly larger schemas than prior works.
arXiv Detail & Related papers (2024-04-05T02:53:51Z)
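A minimal sketch of how the three phases could chain together with a generic `llm` callable; the function name, prompts, and phase outputs here are illustrative assumptions, not EDC's actual interface.

```python
from typing import Callable, Optional

def edc_extract(text: str, llm: Callable[[str], str],
                schema: Optional[str] = None) -> str:
    """Chain the three EDC phases with a generic LLM callable."""
    # Phase 1 - Extract: open information extraction into raw triplets.
    raw = llm(f"Extract (subject, relation, object) triplets from:\n{text}")
    # Phase 2 - Define: natural-language definitions for the relations used.
    defs = llm(f"Define each relation used in these triplets:\n{raw}")
    # Phase 3 - Canonicalize: align relations with the target schema, or with
    # a schema built from the definitions when none is given.
    target = schema if schema else "a schema constructed from the definitions"
    return llm(f"Rewrite the triplets so every relation conforms to {target}.\n"
               f"Triplets:\n{raw}\nDefinitions:\n{defs}")
```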
- Prompt-Time Symbolic Knowledge Capture with Large Language Models
Augmenting large language models (LLMs) with user-specific knowledge is crucial for real-world applications, such as personal AI assistants.
This paper investigates utilizing the existing LLM capabilities to enable prompt-driven knowledge capture.
arXiv Detail & Related papers (2024-02-01T08:15:28Z)
- Learning to Prompt with Text Only Supervision for Vision-Language Models
One branch of methods adapts CLIP by learning prompts using visual information.
An alternative approach resorts to training-free methods by generating class descriptions from large language models.
We propose to combine the strengths of both streams by learning prompts using only text data.
arXiv Detail & Related papers (2024-01-04T18:59:49Z)
- Open-source Large Language Models are Strong Zero-shot Query Likelihood Models for Document Ranking
Large language models (LLMs) have emerged as effective Query Likelihood Models (QLMs).
This paper focuses on investigating the genuine zero-shot ranking effectiveness of recent LLMs.
We introduce a novel state-of-the-art ranking system that integrates LLM-based QLMs with a hybrid zero-shot retriever.
arXiv Detail & Related papers (2023-10-20T02:54:42Z)
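The query-likelihood idea is simple enough to sketch: rank each passage by the log-probability the LLM assigns to the query conditioned on that passage. The sketch below uses Hugging Face transformers with gpt2 purely as a stand-in model; the instruction line in the prompt is an assumption rather than the paper's exact wording.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def qlm_score(model, tokenizer, passage: str, query: str) -> float:
    # Score the passage by the log-likelihood the LLM assigns to the query
    # when conditioned on the passage (the query-likelihood idea).
    prompt = f"Passage: {passage}\nPlease write a question based on this passage.\nQuestion:"
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + " " + query, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits, dim=-1)
    # The token at position t + 1 is predicted from the logits at position t,
    # so sum log-probabilities over the query tokens only.
    return sum(log_probs[0, t, full_ids[0, t + 1]].item()
               for t in range(prompt_len - 1, full_ids.shape[1] - 1))

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in for an open LLM
model = AutoModelForCausalLM.from_pretrained("gpt2")
docs = ["The Eiffel Tower is a landmark in Paris, France.",
        "Python is a widely used programming language."]
query = "Where is the Eiffel Tower located?"
print(max(docs, key=lambda d: qlm_score(model, tokenizer, d, query)))
```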
- Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering
Large Language Models (LLMs) are capable of performing zero-shot closed-book question answering tasks.
We propose to augment the knowledge directly in the input of LLMs.
Our framework, Knowledge-Augmented language model PromptING (KAPING), requires no model training and is thus completely zero-shot.
arXiv Detail & Related papers (2023-06-07T04:15:21Z)
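A minimal sketch of the KAPING recipe: verbalize knowledge-graph triples, retrieve those most similar to the question, and prepend them to the prompt. The lexical scorer below is a stand-in assumption; KAPING itself uses an off-the-shelf dense retriever, and the prompt wording here is illustrative.

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)

def verbalize(t: Triple) -> str:
    return f"({t[0]}, {t[1]}, {t[2]})"

def retrieve_triples(question: str, triples: List[Triple], k: int = 3) -> List[Triple]:
    # Stand-in lexical scorer; any similarity function slots in here.
    q = set(question.lower().split())
    return sorted(triples,
                  key=lambda t: -len(q & set(verbalize(t).lower().split())))[:k]

def kaping_prompt(question: str, triples: List[Triple]) -> str:
    # Prepend the retrieved facts to the question; no model training involved.
    facts = "\n".join(verbalize(t) for t in retrieve_triples(question, triples))
    return f"Facts:\n{facts}\nQuestion: {question}\nAnswer:"

kg = [("Paris", "capital of", "France"), ("Berlin", "capital of", "Germany")]
print(kaping_prompt("What is the capital of France?", kg))
```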
- Allies: Prompting Large Language Model with Beam Search
In this work, we propose a novel method called ALLIES.
Given an input query, ALLIES leverages LLMs to iteratively generate new queries related to the original query.
By iteratively refining and expanding the scope of the original query, ALLIES captures and utilizes hidden knowledge that may not be directly obtainable through retrieval.
arXiv Detail & Related papers (2023-05-24T06:16:44Z)
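A sketch of the iterative expand-and-answer loop described above, with an assumed `llm` completion callable. The prompts are illustrative, and the beam scoring (which ALLIES performs with the LLM itself) is simplified to truncation.

```python
from typing import Callable

def allies_answer(question: str, llm: Callable[[str], str],
                  beam_width: int = 2, depth: int = 2) -> str:
    """Iteratively expand the query, answer the expansions, and aggregate."""
    beam = [question]
    evidence = []
    for _ in range(depth):
        candidates = []
        for q in beam:
            # Ask the model for follow-up queries that would help answer q.
            expansion = llm(f"List {beam_width} short follow-up questions that "
                            f"would help answer: {q}")
            candidates += [ln.strip("- ").strip()
                           for ln in expansion.splitlines() if ln.strip()]
        # Keep a fixed-width beam (ALLIES scores candidates with the LLM).
        beam = candidates[:beam_width]
        evidence += [llm(f"Answer briefly: {q}") for q in beam]
    notes = "\n".join(evidence)
    return llm(f"Using the notes below, answer the question.\nNotes:\n{notes}\n"
               f"Question: {question}\nAnswer:")
```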
- AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators
GPT-3.5 series models have demonstrated remarkable few-shot and zero-shot ability across various NLP tasks.
We propose AnnoLLM, which adopts a two-step approach, explain-then-annotate.
We build the first conversation-based information retrieval dataset employing AnnoLLM.
arXiv Detail & Related papers (2023-03-29T17:03:21Z)
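A minimal sketch of the explain-then-annotate idea: first have the LLM justify the gold labels of a few demonstrations, then annotate new examples with those explanations in context. The function signature and prompt wordings are assumptions, not AnnoLLM's actual interface.

```python
from typing import Callable, List, Tuple

def annollm_label(example: str, task: str,
                  demos: List[Tuple[str, str]],
                  llm: Callable[[str], str]) -> str:
    """Two-step explain-then-annotate with (text, gold_label) demonstrations."""
    explained = []
    for text, label in demos:
        # Step 1: generate a rationale for each (text, gold label) pair.
        why = llm(f"Task: {task}\nText: {text}\nLabel: {label}\n"
                  f"Explain briefly why this label is correct.")
        explained.append(f"Text: {text}\nLabel: {label}\nExplanation: {why}")
    # Step 2: annotate the new example using the explained demonstrations.
    prompt = (f"Task: {task}\n\n" + "\n\n".join(explained) +
              f"\n\nText: {example}\nLabel:")
    return llm(prompt).strip()
```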
- From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models
Large language models (LLMs) have demonstrated excellent zero-shot generalization to new language tasks.
End-to-end training on vision and language data may bridge the disconnections, but is inflexible and computationally expensive.
We propose Img2Prompt, a plug-and-play module that provides prompts to bridge the aforementioned modality and task disconnections.
arXiv Detail & Related papers (2022-12-21T08:39:36Z)
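A sketch of the Img2Prompt-style flow under stated assumptions: `caption_model` and `llm` are assumed callables standing in for a frozen captioner and a frozen LLM, and the prompts are illustrative rather than the paper's.

```python
from typing import Any, Callable

def img2prompt_vqa(image: Any, question: str,
                   caption_model: Callable[[Any], str],
                   llm: Callable[[str], str]) -> str:
    """Turn the image into text, synthesize exemplar QA pairs from that text,
    and query the frozen LLM with a purely textual prompt."""
    caption = caption_model(image)  # image -> description (assumed callable)
    # Exemplar QA pairs derived from the caption show the LLM the task format.
    exemplars = llm(f"Write two question-answer pairs about this scene:\n{caption}")
    return llm(f"Context: {caption}\n{exemplars}\n"
               f"Question: {question}\nAnswer:")
```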
- Recitation-Augmented Language Models
We show that RECITE is a powerful paradigm for knowledge-intensive NLP tasks.
Specifically, we show that by utilizing recitation as the intermediate step, a recite-and-answer scheme can achieve new state-of-the-art performance.
arXiv Detail & Related papers (2022-10-04T00:49:20Z)
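The recite-and-answer scheme reduces to two LLM calls, sketched below with an assumed `llm` callable and illustrative prompt wordings.

```python
from typing import Callable

def recite_and_answer(question: str, llm: Callable[[str], str]) -> str:
    """Recite-and-answer: recall relevant text from model memory, then answer."""
    # Step 1: recite a relevant passage purely from the model's parameters.
    recitation = llm(f"Recite a short passage from memory that is relevant to "
                     f"this question.\nQuestion: {question}\nPassage:")
    # Step 2: answer conditioned on the recitation rather than retrieved text.
    return llm(f"Passage: {recitation}\nQuestion: {question}\nAnswer:")
```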