Harnessing Multi-Role Capabilities of Large Language Models for
Open-Domain Question Answering
- URL: http://arxiv.org/abs/2403.05217v1
- Date: Fri, 8 Mar 2024 11:09:13 GMT
- Title: Harnessing Multi-Role Capabilities of Large Language Models for
Open-Domain Question Answering
- Authors: Hongda Sun, Yuxuan Liu, Chengwei Wu, Haiyu Yan, Cheng Tai, Xin Gao,
Shuo Shang, Rui Yan
- Abstract summary: Open-domain question answering (ODQA) has emerged as a pivotal research spotlight in information systems.
We propose a framework that formulates the ODQA process into three basic steps: query expansion, document selection, and answer generation.
We introduce a novel prompt optimization algorithm to refine role-playing prompts and steer LLMs to produce higher-quality evidence and answers.
- Score: 40.2758450304531
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open-domain question answering (ODQA) has emerged as a pivotal research
spotlight in information systems. Existing methods follow two main paradigms to
collect evidence: (1) the retrieve-then-read paradigm retrieves
pertinent documents from an external corpus; and (2) the
generate-then-read paradigm employs large language models (LLMs) to
generate relevant documents. However, neither can fully address multifaceted
requirements for evidence. To this end, we propose LLMQA, a generalized
framework that formulates the ODQA process into three basic steps: query
expansion, document selection, and answer generation, combining the strengths
of both retrieval-based and generation-based evidence. Since LLMs exhibit
excellent capabilities across a variety of tasks, we instruct LLMs to play
multiple roles as generators, rerankers, and evaluators within our framework,
integrating them to collaborate in the ODQA process. Furthermore, we introduce
a novel prompt optimization algorithm to refine role-playing prompts and steer
LLMs to produce higher-quality evidence and answers. Extensive experimental
results on widely used benchmarks (NQ, WebQ, and TriviaQA) demonstrate that
LLMQA achieves the best performance in terms of both answer accuracy and
evidence quality, showcasing its potential for advancing ODQA research and
applications.
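
To make the pipeline concrete, here is a minimal Python sketch of the three-step, multi-role flow described above. The `llm` callable, prompts, and helper names are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of the three-step LLMQA flow, with the LLM playing
# generator, reranker, and evaluator roles. All prompts and helper names
# are illustrative assumptions, not the authors' actual implementation.

def expand_query(question: str, llm, k: int = 3) -> list[str]:
    """Role 1 (generator): produce k expanded variants of the query."""
    out = llm(f"Rewrite the following question {k} different ways, "
              f"one per line:\n{question}")
    return [line.strip() for line in out.splitlines() if line.strip()][:k]

def select_documents(question: str, candidates: list[str], llm,
                     top_n: int = 3) -> list[str]:
    """Role 2 (reranker): keep the top_n most relevant evidence documents."""
    numbered = "\n".join(f"[{i}] {d}" for i, d in enumerate(candidates))
    out = llm(f"Question: {question}\nRank these documents by relevance, "
              f"best first, as space-separated indices:\n{numbered}")
    order = [int(t) for t in out.split() if t.isdigit()]
    return [candidates[i] for i in order if i < len(candidates)][:top_n]

def generate_answer(question: str, evidence: list[str], llm) -> str:
    """Role 3 (evaluator/answerer): answer from the selected evidence."""
    ctx = "\n".join(evidence)
    return llm(f"Evidence:\n{ctx}\n\nQuestion: {question}\nAnswer concisely:")

def llmqa(question: str, llm, retrieve, generate) -> str:
    # retrieve/generate map a query to documents, combining retrieval-based
    # and generation-based evidence as the abstract proposes.
    candidates: list[str] = []
    for q in [question] + expand_query(question, llm):
        candidates += retrieve(q) + generate(q)
    evidence = select_documents(question, candidates, llm)
    return generate_answer(question, evidence, llm)
```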
Related papers
- mR$^2$AG: Multimodal Retrieval-Reflection-Augmented Generation for Knowledge-Based VQA [78.45521005703958]
Multimodal Retrieval-Augmented Generation (mRAG) is naturally introduced to provide MLLMs with comprehensive and up-to-date knowledge.
We propose a novel framework called Multimodal Retrieval-Reflection-Augmented Generation (mR$^2$AG), which achieves adaptive retrieval and useful information localization.
mR$^2$AG significantly outperforms state-of-the-art MLLMs on INFOSEEK and Encyclopedic-VQA.
arXiv Detail & Related papers (2024-11-22T16:15:50Z)
- IDEAL: Leveraging Infinite and Dynamic Characterizations of Large Language Models for Query-focused Summarization [59.06663981902496]
Query-focused summarization (QFS) aims to produce summaries that answer particular questions of interest, enabling greater user control and personalization.
We investigate two indispensable characteristics that LLM-based QFS models should harness: Lengthy Document Summarization and Efficient Fine-grained Query-LLM Alignment.
These innovations pave the way for broader application and accessibility in the field of QFS technology.
arXiv Detail & Related papers (2024-07-15T07:14:56Z)
- Peering into the Mind of Language Models: An Approach for Attribution in Contextual Question Answering [9.86691461253151]
We introduce a novel method for attribution in contextual question answering, leveraging the hidden state representations of large language models (LLMs).
Our approach bypasses the need for extensive model retraining and retrieval model overhead, offering granular attributions and preserving the quality of generated answers.
We present Verifiability-granular, an attribution dataset which has token level annotations for LLM generations in the contextual question answering setup.
arXiv Detail & Related papers (2024-05-28T09:12:44Z)
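
A rough sketch of the hidden-state attribution idea above: attribute each answer token to the context token whose last-layer hidden state it most resembles. The model choice, example, and cosine heuristic are assumptions for illustration; the paper's actual method differs in detail:

```python
# Attribute answer tokens to context tokens via hidden-state similarity.
# Model, example, and scoring heuristic are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

context = "The Eiffel Tower is located in Paris, France."
answer = " The tower is in Paris."
enc = tok(context + answer, return_tensors="pt")
n_ctx = len(tok(context)["input_ids"])  # boundary between context and answer

with torch.no_grad():
    out = model(**enc, output_hidden_states=True)
h = out.hidden_states[-1][0]                  # (seq_len, hidden_dim)
ctx_h, ans_h = h[:n_ctx], h[n_ctx:]

# Cosine similarity of every answer-token state to every context-token
# state; take the argmax context token as each answer token's attribution.
sim = torch.nn.functional.cosine_similarity(
    ans_h.unsqueeze(1), ctx_h.unsqueeze(0), dim=-1)
ids = enc["input_ids"][0]
for i, j in enumerate(sim.argmax(dim=1).tolist()):
    print(repr(tok.decode(ids[n_ctx + i])), "->", repr(tok.decode(ids[j])))
```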
- MILL: Mutual Verification with Large Language Models for Zero-Shot Query Expansion [39.24969189479343]
We propose a novel zero-shot query expansion framework utilizing large language models (LLMs) for mutual verification.
Our proposed method is fully zero-shot, and extensive experiments on three public benchmark datasets are conducted to demonstrate its effectiveness.
arXiv Detail & Related papers (2023-10-29T16:04:10Z)
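
The abstract gives few mechanistic details; as a loose reading, mutual verification can be sketched as LLM-generated and corpus-retrieved documents scoring one another, keeping only the mutually supported ones. The `embed` stub and scoring rule below are assumptions, not MILL's actual procedure:

```python
# Loose sketch of mutual verification between generated and retrieved
# documents: each side is scored by its best agreement with the other,
# and well-supported documents expand the query.
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("plug in any sentence embedder")

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mutual_verify(generated: list[str], retrieved: list[str],
                  keep: int = 2) -> list[str]:
    g_vecs = [embed(d) for d in generated]
    r_vecs = [embed(d) for d in retrieved]
    g_scores = [max(cosine(g, r) for r in r_vecs) for g in g_vecs]
    r_scores = [max(cosine(r, g) for g in g_vecs) for r in r_vecs]
    top = lambda docs, scores: [d for _, d in
                                sorted(zip(scores, docs), reverse=True)[:keep]]
    return top(generated, g_scores) + top(retrieved, r_scores)

def expand_with_verified(query: str, generated: list[str],
                         retrieved: list[str]) -> str:
    return query + " " + " ".join(mutual_verify(generated, retrieved))
```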
- Query2doc: Query Expansion with Large Language Models [69.9707552694766]
The proposed method first generates pseudo-documents by few-shot prompting large language models (LLMs).
query2doc boosts the performance of BM25 by 3% to 15% on ad-hoc IR datasets.
Our method also benefits state-of-the-art dense retrievers in terms of both in-domain and out-of-domain results.
arXiv Detail & Related papers (2023-03-14T07:27:30Z)
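
A compact sketch of the query2doc recipe: few-shot prompt an LLM for a pseudo-document, then concatenate it with the (repeated) query before sparse retrieval. The prompt, the `bm25.search` interface, and the repeat factor are illustrative assumptions:

```python
# Sketch of query2doc-style expansion for BM25 retrieval.
def query2doc(query: str, llm, bm25, n_repeat: int = 5):
    pseudo_doc = llm(
        "Write a short passage that answers the question.\n"
        "Question: who wrote hamlet\nPassage: Hamlet is a tragedy written "
        "by William Shakespeare around 1600.\n"
        f"Question: {query}\nPassage:")
    # Repeating the query keeps its terms from being drowned out by the
    # longer pseudo-document under sparse (BM25) term weighting.
    expanded = " ".join([query] * n_repeat) + " " + pseudo_doc
    return bm25.search(expanded)  # hypothetical retriever interface
```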
- Recitation-Augmented Language Models [85.30591349383849]
We show that RECITE is a powerful paradigm for knowledge-intensive NLP tasks.
Specifically, we show that by utilizing recitation as the intermediate step, a recite-and-answer scheme can achieve new state-of-the-art performance.
arXiv Detail & Related papers (2022-10-04T00:49:20Z)
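
A minimal sketch of a recite-and-answer scheme: first have the model recite a relevant passage from its parametric memory, then answer conditioned on that recitation. The prompts and `llm` callable are illustrative, not the paper's exact setup:

```python
# Recite-and-answer: recitation serves as the intermediate evidence step.
def recite_and_answer(question: str, llm) -> str:
    recitation = llm(
        "Recite a passage from your knowledge that is relevant to this "
        f"question:\n{question}\nPassage:")
    return llm(
        f"Passage: {recitation}\nQuestion: {question}\n"
        "Answer based on the passage above:")
```

Sampling several recitations and aggregating the resulting answers is a natural extension in the same spirit.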
- Generate rather than Retrieve: Large Language Models are Strong Context Generators [74.87021992611672]
We present a novel perspective for solving knowledge-intensive tasks by replacing document retrievers with large language model generators.
We call our method generate-then-read (GenRead), which first prompts a large language model to generate contextual documents based on a given question, and then reads the generated documents to produce the final answer.
arXiv Detail & Related papers (2022-09-21T01:30:59Z)
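
A minimal sketch of generate-then-read: prompt an LLM for contextual documents about the question, then read them to produce the final answer. Prompts and the `llm` callable are illustrative assumptions:

```python
# Generate-then-read: LLM-generated documents replace retrieved ones.
def generate_then_read(question: str, llm, n_docs: int = 4) -> str:
    docs = [llm(f"Generate a background document to help answer:\n{question}")
            for _ in range(n_docs)]
    context = "\n\n".join(docs)
    return llm(f"{context}\n\nReferring to the passages above, "
               f"answer:\n{question}")
```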