MS-Ranker: Accumulating Evidence from Potentially Correct Candidates for
Answer Selection
- URL: http://arxiv.org/abs/2010.04970v1
- Date: Sat, 10 Oct 2020 10:36:58 GMT
- Title: MS-Ranker: Accumulating Evidence from Potentially Correct Candidates for
Answer Selection
- Authors: Yingxue Zhang, Fandong Meng, Peng Li, Ping Jian, Jie Zhou
- Abstract summary: We propose a novel reinforcement learning based multi-step ranking model, named MS-Ranker.
We explicitly consider the potential correctness of candidates and update the evidence with a gating mechanism.
Our model significantly outperforms existing methods that do not rely on external resources.
- Score: 59.95429407899612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As conventional answer selection (AS) methods generally match the question
with each candidate answer independently, they suffer from the lack of matching
information between the question and the candidate. To address this problem, we
propose a novel reinforcement learning (RL) based multi-step ranking model,
named MS-Ranker, which accumulates information from potentially correct
candidate answers as extra evidence for matching the question with a candidate.
Specifically, we explicitly consider the potential correctness of candidates and
update the evidence with a gating mechanism. Moreover, as we use a listwise
ranking reward, our model learns to pay more attention to the overall
performance. Experiments on two benchmarks, namely WikiQA and SemEval-2016 CQA,
show that our model significantly outperforms existing methods that do not rely
on external resources.
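The paper does not include code; as a rough illustration of the two mechanisms the abstract describes, here is a toy sketch of gated evidence accumulation and a listwise (DCG-style) ranking reward. All function names, the scalar gate, the plain dot-product scorer, and the DCG reward are assumptions for illustration, not the authors' model, which operates on learned neural representations and is trained with reinforcement learning.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gated_update(evidence, candidate, w_gate):
    """Blend a newly selected candidate into the evidence vector.
    A gate near 1 trusts the new candidate; near 0 keeps old evidence."""
    g = sigmoid(dot(w_gate, candidate) + dot(w_gate, evidence))
    return [g * c + (1.0 - g) * e for c, e in zip(candidate, evidence)]

def rank_multi_step(question, candidates, w_gate):
    """Greedy multi-step ranking: at each step, score the remaining
    candidates against the question plus the accumulated evidence,
    emit the best one, and fold it into the evidence."""
    evidence = [0.0] * len(question)
    remaining = list(range(len(candidates)))
    order = []
    while remaining:
        scores = {i: dot(question, candidates[i]) + dot(evidence, candidates[i])
                  for i in remaining}
        best = max(remaining, key=scores.get)
        order.append(best)
        evidence = gated_update(evidence, candidates[best], w_gate)
        remaining.remove(best)
    return order

def listwise_reward(order, labels):
    """DCG-style listwise reward: correct answers ranked early earn more,
    so the reward reflects the quality of the whole ranking, not one pick."""
    return sum(labels[i] / math.log2(pos + 2) for pos, i in enumerate(order))
```

In an RL setup along these lines, the listwise reward would be computed once per episode (one full ranking) and used to update the policy that selects candidates, which is what lets the model "pay more attention to the overall performance" rather than to individual question-candidate matches.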
Related papers
- Differentiating Choices via Commonality for Multiple-Choice Question Answering [54.04315943420376]
In multiple-choice question answering, the other choices can provide valuable clues for choosing the right answer.
Existing models often rank each choice separately, overlooking the context provided by other choices.
We propose a novel model by differentiating choices through identifying and eliminating their commonality, called DCQA.
arXiv Detail & Related papers (2024-08-21T12:05:21Z)
- Multimodal Reranking for Knowledge-Intensive Visual Question Answering [77.24401833951096]
We introduce a multi-modal reranker to improve the ranking quality of knowledge candidates for answer generation.
Experiments on OK-VQA and A-OKVQA show that multi-modal reranker from distant supervision provides consistent improvements.
arXiv Detail & Related papers (2024-07-17T02:58:52Z)
- CLARINET: Augmenting Language Models to Ask Clarification Questions for Retrieval [52.134133938779776]
We present CLARINET, a system that asks informative clarification questions by choosing questions whose answers would maximize certainty in the correct candidate.
Our approach works by augmenting a large language model (LLM) to condition on a retrieval distribution, finetuning end-to-end to generate the question that would have maximized the rank of the true candidate at each turn.
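The summary above describes choosing the clarification question whose answer would maximize certainty in the correct candidate. As a hedged sketch of that selection criterion only, the toy Bayesian model below picks the question minimizing expected posterior entropy over candidates; the answer-likelihood tables and function names are hypothetical, and CLARINET itself finetunes an LLM end-to-end rather than using this closed-form computation.

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0.0)

def expected_entropy(prior, answer_likelihoods):
    """Expected posterior entropy over candidates after hearing the answer.
    answer_likelihoods[a][i] = P(answer a | candidate i is correct)."""
    exp_h = 0.0
    for likes in answer_likelihoods:
        p_ans = sum(l * p for l, p in zip(likes, prior))  # marginal P(answer a)
        if p_ans == 0.0:
            continue
        posterior = [l * p / p_ans for l, p in zip(likes, prior)]  # Bayes update
        exp_h += p_ans * entropy(posterior)
    return exp_h

def pick_question(prior, questions):
    """Choose the question whose expected answer most reduces uncertainty
    about which retrieval candidate is correct."""
    return min(questions, key=lambda q: expected_entropy(prior, questions[q]))
```

A perfectly discriminative question (each answer pins down one candidate) drives expected posterior entropy to zero, while an uninformative one leaves the prior unchanged, so the discriminative question is always preferred.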
arXiv Detail & Related papers (2024-04-28T18:21:31Z)
- Double Retrieval and Ranking for Accurate Question Answering [120.69820139008138]
We show that an answer verification step introduced in Transformer-based answer selection models can significantly improve the state of the art in Question Answering.
The results on three well-known datasets for AS2 show consistent and significant improvement of the state of the art.
arXiv Detail & Related papers (2022-01-16T06:20:07Z)
- Answer Generation for Retrieval-based Question Answering Systems [80.28727681633096]
We train a sequence to sequence transformer model to generate an answer from a candidate set.
Our tests on three English AS2 datasets show improvement up to 32 absolute points in accuracy over the state of the art.
arXiv Detail & Related papers (2021-06-02T05:45:49Z)
- A Clarifying Question Selection System from NTES_ALONG in ConvAI3 Challenge [8.656503175492375]
This paper presents the participation of NetEase Game AI Lab team for the ClariQ challenge at Search-oriented Conversational AI (SCAI) EMNLP workshop in 2020.
The challenge asks for a complete conversational information retrieval system that can understand and generate clarification questions.
We propose a clarifying question selection system which consists of response understanding, candidate question recalling and clarifying question ranking.
arXiv Detail & Related papers (2020-10-27T11:22:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.