Reader-Guided Passage Reranking for Open-Domain Question Answering
- URL: http://arxiv.org/abs/2101.00294v1
- Date: Fri, 1 Jan 2021 18:54:19 GMT
- Title: Reader-Guided Passage Reranking for Open-Domain Question Answering
- Authors: Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao,
Jiawei Han, Weizhu Chen
- Abstract summary: We propose a simple and effective passage reranking method, Reader-guIDEd Reranker (Rider).
Rider achieves 10 to 20 absolute gains in top-1 retrieval accuracy and 1 to 4 Exact Match (EM) score gains without refining the retriever or reader.
Rider achieves 48.3 EM on the Natural Questions dataset and 66.4 on the TriviaQA dataset when only 1,024 tokens (7.8 passages on average) are used as the reader input.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current open-domain question answering (QA) systems often follow a
Retriever-Reader (R2) architecture, where the retriever first retrieves
relevant passages and the reader then reads the retrieved passages to form an
answer. In this paper, we propose a simple and effective passage reranking
method, Reader-guIDEd Reranker (Rider), which does not involve any training and
reranks the retrieved passages solely based on the top predictions of the
reader before reranking. We show that Rider, despite its simplicity, achieves
10 to 20 absolute gains in top-1 retrieval accuracy and 1 to 4 Exact Match (EM)
score gains without refining the retriever or reader. In particular, Rider
achieves 48.3 EM on the Natural Questions dataset and 66.4 on the TriviaQA
dataset when only 1,024 tokens (7.8 passages on average) are used as the reader
input.
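The core reranking idea in the abstract is simple enough to sketch: passages that contain one of the reader's top predicted answer spans are promoted ahead of those that do not, with no extra training. Below is a minimal illustration of that idea; the function name and the exact lexical-match criterion are assumptions for illustration, not the paper's precise implementation.

```python
def rider_rerank(passages, reader_predictions):
    """Promote passages that contain any of the reader's top
    predicted answers; preserve the original retrieval order
    within each group (a stable, training-free rerank)."""
    preds = [p.lower() for p in reader_predictions]
    hits, misses = [], []
    for passage in passages:
        text = passage.lower()
        if any(pred in text for pred in preds):
            hits.append(passage)
        else:
            misses.append(passage)
    return hits + misses
```

For example, if the reader's top prediction is "Berlin", a retrieved passage mentioning Berlin moves to rank 1 even if the retriever originally ranked it lower.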
Related papers
- LoRE: Logit-Ranked Retriever Ensemble for Enhancing Open-Domain Question Answering [0.0]
We propose LoRE, a novel approach that improves answer accuracy and relevance by mitigating positional bias.
LoRE employs an ensemble of diverse retrievers, such as BM25 and sentence transformers with FAISS indexing.
A key innovation is a logit-based answer ranking algorithm that combines the logit scores from a large language model with the retrieval ranks of the passages.
arXiv Detail & Related papers (2024-10-13T23:06:08Z)
- Bidirectional End-to-End Learning of Retriever-Reader Paradigm for Entity Linking [57.44361768117688]
We propose BEER$2$, a Bidirectional End-to-End training framework for Retriever and Reader.
Through our designed bidirectional end-to-end training, BEER$2$ guides the retriever and the reader to learn from each other, make progress together, and ultimately improve EL performance.
arXiv Detail & Related papers (2023-06-21T13:04:30Z)
- ReFIT: Relevance Feedback from a Reranker during Inference [109.33278799999582]
Retrieve-and-rerank is a prevalent framework in neural information retrieval.
We propose to leverage the reranker to improve recall by making it provide relevance feedback to the retriever at inference time.
arXiv Detail & Related papers (2023-05-19T15:30:33Z)
- Improving Passage Retrieval with Zero-Shot Question Generation [109.11542468380331]
We propose a simple and effective re-ranking method for improving passage retrieval in open question answering.
The re-ranker re-scores retrieved passages with a zero-shot question generation model, which uses a pre-trained language model to compute the probability of the input question conditioned on a retrieved passage.
arXiv Detail & Related papers (2022-04-15T14:51:41Z)
- End-to-End Training of Neural Retrievers for Open-Domain Question Answering [32.747113232867825]
It remains unclear how unsupervised and supervised methods can be used most effectively for neural retrievers.
We propose an approach of unsupervised pre-training with the Inverse Cloze Task and masked salient spans.
We also explore two approaches for end-to-end supervised training of the reader and retriever components in OpenQA models.
arXiv Detail & Related papers (2021-01-02T09:05:34Z)
- Distilling Knowledge from Reader to Retriever for Question Answering [16.942581590186343]
We propose a technique to learn retriever models for downstream tasks, inspired by knowledge distillation.
We evaluate our method on question answering, obtaining state-of-the-art results.
arXiv Detail & Related papers (2020-12-08T17:36:34Z)
- Is Retriever Merely an Approximator of Reader? [27.306407064073177]
We show that the reader and the retriever are complementary to each other even in terms of accuracy only.
We propose to distill the reader into the retriever so that the retriever absorbs the strength of the reader while keeping its own benefit.
arXiv Detail & Related papers (2020-10-21T13:40:15Z)
- No Answer is Better Than Wrong Answer: A Reflection Model for Document Level Machine Reading Comprehension [92.57688872599998]
We propose a novel approach to handle all answer types systematically.
In particular, we propose a novel approach called Reflection Net which leverages a two-step training procedure to identify the no-answer and wrong-answer cases.
Our approach achieved the top 1 on both long and short answer leaderboard, with F1 scores of 77.2 and 64.1, respectively.
arXiv Detail & Related papers (2020-09-25T06:57:52Z)
- Open-Domain Question Answering with Pre-Constructed Question Spaces [70.13619499853756]
Open-domain question answering aims at solving the task of locating the answers to user-generated questions in massive collections of documents.
There are two families of solutions available: retriever-readers, and knowledge-graph-based approaches.
We propose a novel algorithm with a reader-retriever structure that differs from both families.
arXiv Detail & Related papers (2020-06-02T04:31:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.