Joint Passage Ranking for Diverse Multi-Answer Retrieval
- URL: http://arxiv.org/abs/2104.08445v1
- Date: Sat, 17 Apr 2021 04:48:36 GMT
- Title: Joint Passage Ranking for Diverse Multi-Answer Retrieval
- Authors: Sewon Min, Kenton Lee, Ming-Wei Chang, Kristina Toutanova, Hannaneh
Hajishirzi
- Abstract summary: We study multi-answer retrieval, an under-explored problem that requires retrieving passages to cover multiple distinct answers for a question.
This task requires joint modeling of retrieved passages, as models should not repeatedly retrieve passages containing the same answer at the cost of missing a different valid answer.
In this paper, we introduce JPR, a joint passage retrieval model focusing on reranking. To model the joint probability of the retrieved passages, JPR makes use of an autoregressive reranker that selects a sequence of passages, equipped with novel training and decoding algorithms.
- Score: 56.43443577137929
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study multi-answer retrieval, an under-explored problem that requires
retrieving passages to cover multiple distinct answers for a given question.
This task requires joint modeling of retrieved passages, as models should not
repeatedly retrieve passages containing the same answer at the cost of missing
a different valid answer. Prior work focusing on single-answer retrieval is
limited as it cannot reason about the set of passages jointly. In this paper,
we introduce JPR, a joint passage retrieval model focusing on reranking. To
model the joint probability of the retrieved passages, JPR makes use of an
autoregressive reranker that selects a sequence of passages, equipped with
novel training and decoding algorithms. Compared to prior approaches, JPR
achieves significantly better answer coverage on three multi-answer datasets.
When combined with downstream question answering, the improved retrieval
enables larger answer generation models since they need to consider fewer
passages, establishing a new state-of-the-art.
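To make the joint-modeling idea concrete, below is a minimal sketch of autoregressive passage selection, assuming a scorer that conditions on the question and the passages chosen so far. The `score_next_passage` callable is a hypothetical stand-in for the paper's reranker, and the greedy loop is a simplification of the decoding algorithm, which the abstract does not specify.

```python
from typing import Callable, Sequence

def rerank_jointly(
    question: str,
    candidates: Sequence[str],
    score_next_passage: Callable[[str, Sequence[str], str], float],
    k: int = 10,
) -> list[str]:
    """Greedily decode a sequence of k passages, one conditioning step at a time."""
    selected: list[str] = []
    remaining = list(candidates)
    for _ in range(min(k, len(remaining))):
        # Score log P(p | question, selected) for every unpicked candidate;
        # conditioning on `selected` is what discourages retrieving another
        # passage that merely repeats an answer already covered.
        best = max(remaining, key=lambda p: score_next_passage(question, selected, p))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Factorizing the selection this way lets the reranker trade off relevance against redundancy across the whole set, which an independent per-passage scorer cannot do.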
Related papers
- EfficientRAG: Efficient Retriever for Multi-Hop Question Answering [52.64500643247252]
We introduce EfficientRAG, an efficient retriever for multi-hop question answering.
Experimental results demonstrate that EfficientRAG surpasses existing RAG methods on three open-domain multi-hop question-answering datasets.
arXiv Detail & Related papers (2024-08-08T06:57:49Z)
- Retrieve, Summarize, Plan: Advancing Multi-hop Question Answering with an Iterative Approach [6.549143816134531]
We propose a novel iterative RAG method called ReSP, equipped with a dual-function summarizer.
Experimental results on the multi-hop question-answering benchmarks HotpotQA and 2WikiMultihopQA demonstrate that our method significantly outperforms the state-of-the-art.
arXiv Detail & Related papers (2024-07-18T02:19:00Z)
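For intuition, here is a hedged sketch of the iterative loop the ReSP summary describes; `retrieve`, `summarize`, and `answer` are hypothetical callables, and splitting the summarizer's work into answer-support evidence and a next-hop query is an assumption about its "dual function".

```python
def iterative_rag(question, retrieve, summarize, answer, max_hops=3):
    """Sketch of iterative retrieval with a dual-function summarizer."""
    evidence = []      # summaries kept as answer support across hops
    query = question   # what to retrieve on the current hop
    for _ in range(max_hops):
        passages = retrieve(query)
        evidence.append(summarize(passages, focus=question))  # support the final answer
        query = summarize(passages, focus=query)              # reformulate the next hop
    return answer(question, evidence)
```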
- Modeling Uncertainty and Using Post-fusion as Fallback Improves Retrieval Augmented Generation with LLMs [80.74263278847063]
The integration of retrieved passages and large language models (LLMs) has significantly contributed to improving open-domain question answering.
This paper investigates different methods of combining retrieved passages with LLMs to enhance answer generation.
arXiv Detail & Related papers (2023-08-24T05:26:54Z)
- Phrase Retrieval for Open-Domain Conversational Question Answering with Conversational Dependency Modeling via Contrastive Learning [54.55643652781891]
Open-Domain Conversational Question Answering (ODConvQA) aims at answering questions through a multi-turn conversation.
We propose a method that directly predicts answers with a phrase retrieval scheme, treating an answer as a retrievable sequence of words.
arXiv Detail & Related papers (2023-06-07T09:46:38Z)
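As a bare-bones illustration of answering by phrase retrieval (a sketch under assumed encoders, not the paper's system), an answer is read off a precomputed phrase index by nearest-neighbor search; the contrastive conversational dependency modeling is a training-time concern not shown here.

```python
import numpy as np

def answer_by_phrase_retrieval(question, phrases, encode_query, phrase_matrix):
    """Return the top-scoring candidate phrase as the answer.

    phrases       : candidate word spans mined from the corpus
    phrase_matrix : (num_phrases, dim) precomputed phrase embeddings
    encode_query  : hypothetical question encoder returning a (dim,) vector
    """
    q = encode_query(question)
    scores = phrase_matrix @ q          # inner-product (MIPS-style) search
    return phrases[int(np.argmax(scores))]
```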
- Enhancing Multi-modal and Multi-hop Question Answering via Structured Knowledge and Unified Retrieval-Generation [33.56304858796142]
Multi-modal multi-hop question answering involves answering a question by reasoning over multiple input sources from different modalities.
Existing methods often retrieve evidence separately and then use a language model to generate an answer based on the retrieved evidence.
We propose a Structured Knowledge and Unified Retrieval-Generation (RG) approach to address these issues.
arXiv Detail & Related papers (2022-12-16T18:12:04Z)
- Improving Passage Retrieval with Zero-Shot Question Generation [109.11542468380331]
We propose a simple and effective re-ranking method for improving passage retrieval in open question answering.
The re-ranker re-scores retrieved passages with a zero-shot question generation model, which uses a pre-trained language model to compute the probability of the input question conditioned on a retrieved passage.
arXiv Detail & Related papers (2022-04-15T14:51:41Z)
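Because this recipe is concrete, a short sketch may help: rank each passage by the likelihood of the question conditioned on it under an off-the-shelf seq2seq language model. The model choice and prompt below are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-base")
lm = AutoModelForSeq2SeqLM.from_pretrained("t5-base").eval()

@torch.no_grad()
def question_likelihood(passage: str, question: str) -> float:
    """Average per-token log-probability of the question given the passage."""
    prompt = f"Passage: {passage} Please write a question based on this passage."
    inputs = tok(prompt, return_tensors="pt", truncation=True)
    labels = tok(question, return_tensors="pt").input_ids
    out = lm(**inputs, labels=labels)   # cross-entropy over question tokens
    return -out.loss.item()             # higher = question more likely

def rerank(question: str, passages: list[str]) -> list[str]:
    return sorted(passages, key=lambda p: question_likelihood(p, question), reverse=True)
```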
- MCR-Net: A Multi-Step Co-Interactive Relation Network for Unanswerable Questions on Machine Reading Comprehension [14.926981547759182]
We propose a Multi-Step Co-Interactive Relation Network (MCR-Net) to explicitly model the mutual interaction between the question and passage.
We show that our model achieves a remarkable improvement, outperforming BERT-style baselines from the literature.
arXiv Detail & Related papers (2021-03-08T06:38:14Z)
- Memory Augmented Sequential Paragraph Retrieval for Multi-hop Question Answering [32.69969157825044]
We propose a new architecture that models paragraphs as sequential data and considers multi-hop information retrieval as a kind of sequence labeling task.
We evaluate our method on both the full wiki and distractor subtasks of HotpotQA, a public textual multi-hop QA dataset.
arXiv Detail & Related papers (2021-02-07T08:15:51Z)
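To illustrate the sequence-labeling framing, the sketch below tags each paragraph in a sequence as select or skip, with a recurrent layer standing in for the memory that carries state across paragraphs; every module and dimension here is an assumption, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ParagraphTagger(nn.Module):
    """Label each paragraph in a sequence as skip (0) or select (1)."""
    def __init__(self, hidden: int = 768):
        super().__init__()
        self.memory = nn.GRU(hidden, hidden, batch_first=True)  # cross-paragraph state
        self.classifier = nn.Linear(hidden, 2)

    def forward(self, paragraph_embeddings: torch.Tensor) -> torch.Tensor:
        # paragraph_embeddings: (batch, num_paragraphs, hidden), e.g. [CLS] vectors
        states, _ = self.memory(paragraph_embeddings)
        return self.classifier(states)   # (batch, num_paragraphs, 2) logits

logits = ParagraphTagger()(torch.randn(1, 20, 768))
selected = logits.argmax(-1)             # per-paragraph retrieve/skip decisions
```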
- Answering Any-hop Open-domain Questions with Iterative Document Reranking [62.76025579681472]
We propose a unified QA framework to answer any-hop open-domain questions.
Our method consistently achieves performance comparable to or better than the state-of-the-art on both single-hop and multi-hop open-domain QA datasets.
arXiv Detail & Related papers (2020-09-16T04:31:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.