Double Retrieval and Ranking for Accurate Question Answering
- URL: http://arxiv.org/abs/2201.05981v1
- Date: Sun, 16 Jan 2022 06:20:07 GMT
- Title: Double Retrieval and Ranking for Accurate Question Answering
- Authors: Zeyu Zhang, Thuy Vu, Alessandro Moschitti
- Abstract summary: We show that an answer verification step introduced in Transformer-based answer selection models can significantly improve the state of the art in Question Answering.
The results on three well-known datasets for AS2 show consistent and significant improvement of the state of the art.
- Score: 120.69820139008138
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has shown that an answer verification step introduced in
Transformer-based answer selection models can significantly improve the state
of the art in Question Answering. This step is performed by aggregating the
embeddings of top $k$ answer candidates to support the verification of a target
answer. Although the approach is intuitive and sound, it still shows two
limitations: (i) the supporting candidates are ranked only according to their
relevance to the question, not to the answer, and (ii) the support
provided by the other answer candidates is suboptimal as these are retrieved
independently of the target answer. In this paper, we address both drawbacks by
proposing (i) a double reranking model, which, for each target answer, selects
the best support; and (ii) a second neural retrieval stage designed to encode
the question-answer pair as the query, which finds more specific verification
information. The results on three well-known datasets for AS2 show consistent
and significant improvement of the state of the art.
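To make the two proposed components more concrete, the sketch below illustrates the general idea with an off-the-shelf sentence encoder: for each target answer, supporting candidates are retrieved using the concatenated question-answer pair as the query (the second retrieval stage), and are then reranked by their joint relevance to both the question and the target answer (the double reranking). The encoder choice, corpus format, and additive cosine scoring are illustrative assumptions; the paper's actual system relies on trained Transformer-based AS2 rerankers and an answer-verification model rather than this heuristic.

```python
# Minimal sketch of QA-pair retrieval plus double reranking (not the
# authors' implementation; cosine similarity stands in for trained
# Transformer rerankers and the verification model).
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder choice

def select_support(question, target_answer, corpus, k=10, top_j=3):
    """Retrieve and rerank supporting candidates for one target answer."""
    # (ii) Second retrieval stage: the query encodes the question-answer
    # pair, so retrieved passages are specific to this target answer.
    qa_query = f"{question} {target_answer}"
    corpus_emb = encoder.encode(corpus, convert_to_tensor=True)
    qa_emb = encoder.encode(qa_query, convert_to_tensor=True)
    hits = util.cos_sim(qa_emb, corpus_emb)[0].topk(min(k, len(corpus)))
    candidates = [corpus[i] for i in hits.indices.tolist()]

    # (i) Double reranking: score each supporting candidate against both
    # the question and the target answer, not the question alone.
    q_emb = encoder.encode(question, convert_to_tensor=True)
    a_emb = encoder.encode(target_answer, convert_to_tensor=True)
    cand_emb = encoder.encode(candidates, convert_to_tensor=True)
    joint = util.cos_sim(q_emb, cand_emb)[0] + util.cos_sim(a_emb, cand_emb)[0]
    ranked = sorted(zip(candidates, joint.tolist()), key=lambda x: -x[1])
    # The top-j supports would then be passed to the verification model.
    return ranked[:top_j]
```

In the paper, the final decision is made by a verification model that aggregates the embeddings of the selected supports together with the target answer; the additive cosine score above is only a stand-in to show where each component fits.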
Related papers
- Multimodal Reranking for Knowledge-Intensive Visual Question Answering [77.24401833951096]
We introduce a multi-modal reranker to improve the ranking quality of knowledge candidates for answer generation.
Experiments on OK-VQA and A-OKVQA show that the multi-modal reranker trained from distant supervision provides consistent improvements.
arXiv Detail & Related papers (2024-07-17T02:58:52Z) - Towards Reliable and Factual Response Generation: Detecting Unanswerable Questions in Information-Seeking Conversations [16.99952884041096]
Generative AI models face the challenge of hallucinations that can undermine users' trust in such systems.
We approach the problem of conversational information seeking as a two-step process, where relevant passages in a corpus are identified first and then summarized into a final system response.
Specifically, our proposed method employs a sentence-level classifier to detect whether the answer is present, then aggregates these predictions at the passage level, and finally across the top-ranked passages, to arrive at an overall answerability estimate (see the aggregation sketch after this list).
arXiv Detail & Related papers (2024-01-21T10:15:36Z) - Answering Ambiguous Questions via Iterative Prompting [84.3426020642704]
In open-domain question answering, due to the ambiguity of questions, multiple plausible answers may exist.
One approach is to directly predict all valid answers, but this can struggle with balancing relevance and diversity.
We present AmbigPrompt to address the imperfections of existing approaches to answering ambiguous questions.
arXiv Detail & Related papers (2023-07-08T04:32:17Z) - Reranking Overgenerated Responses for End-to-End Task-Oriented Dialogue Systems [71.33737787564966]
End-to-end (E2E) task-oriented dialogue (ToD) systems are prone to falling into the so-called 'likelihood trap'.
We propose a reranking method which aims to select high-quality items from the lists of responses initially overgenerated by the system.
Our methods improve a state-of-the-art E2E ToD system by 2.4 BLEU, 3.2 ROUGE, and 2.8 METEOR scores, achieving new peak results.
arXiv Detail & Related papers (2022-11-07T15:59:49Z) - Answer Generation for Retrieval-based Question Answering Systems [80.28727681633096]
We train a sequence-to-sequence transformer model to generate an answer from a candidate set.
Our tests on three English AS2 datasets show improvements of up to 32 absolute points in accuracy over the state of the art.
arXiv Detail & Related papers (2021-06-02T05:45:49Z) - MS-Ranker: Accumulating Evidence from Potentially Correct Candidates for Answer Selection [59.95429407899612]
We propose a novel reinforcement learning based multi-step ranking model, named MS-Ranker.
We explicitly consider the potential correctness of candidates and update the evidence with a gating mechanism.
Our model significantly outperforms existing methods that do not rely on external resources.
arXiv Detail & Related papers (2020-10-10T10:36:58Z) - Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop Question Answering [40.58976291178477]
We introduce a simple, fast, and unsupervised iterative evidence retrieval method.
Despite its simplicity, our approach outperforms all the previous methods on the evidence selection task.
When these evidence sentences are fed into a RoBERTa answer classification component, we achieve state-of-the-art QA performance.
arXiv Detail & Related papers (2020-05-04T00:19:48Z)
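As referenced in the unanswerability-detection entry above, the sketch below shows one plausible wiring of the described aggregation: sentence-level answerability predictions are aggregated per passage, and then across the top-ranked passages, to yield a single answerability estimate. The `sentence_scores` callable is a hypothetical placeholder for a trained sentence-level classifier, and the max/mean aggregation functions are assumptions rather than the cited paper's exact choices.

```python
# Hedged sketch of passage- and ranking-level answerability aggregation.
# `sentence_scores` is a hypothetical stand-in for a trained sentence-level
# answerability classifier; max/mean aggregation is an assumption.
from typing import Callable, List

def passage_answerability(question: str,
                          sentences: List[str],
                          sentence_scores: Callable[[str, List[str]], List[float]]) -> float:
    """Aggregate sentence-level predictions into one passage-level score."""
    scores = sentence_scores(question, sentences)
    # A passage is treated as answer-bearing if any of its sentences is.
    return max(scores) if scores else 0.0

def query_answerability(question: str,
                        ranked_passages: List[List[str]],
                        sentence_scores: Callable[[str, List[str]], List[float]],
                        top_n: int = 5) -> float:
    """Aggregate passage-level scores across the top-ranked passages."""
    passage_scores = [passage_answerability(question, p, sentence_scores)
                      for p in ranked_passages[:top_n]]
    # Mean over the top-ranked passages as the final answerability estimate.
    return sum(passage_scores) / len(passage_scores) if passage_scores else 0.0
```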
This list is automatically generated from the titles and abstracts of the papers in this site.