Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop
Question Answering
- URL: http://arxiv.org/abs/2005.01218v1
- Date: Mon, 4 May 2020 00:19:48 GMT
- Title: Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop
Question Answering
- Authors: Vikas Yadav, Steven Bethard and Mihai Surdeanu
- Abstract summary: We introduce a simple, fast, and unsupervised iterative evidence retrieval method.
Despite its simplicity, our approach outperforms all the previous methods on the evidence selection task.
When these evidence sentences are fed into a RoBERTa answer classification component, we achieve state-of-the-art QA performance.
- Score: 40.58976291178477
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evidence retrieval is a critical stage of question answering (QA), necessary
not only to improve performance, but also to explain the decisions of the
corresponding QA method. We introduce a simple, fast, and unsupervised
iterative evidence retrieval method, which relies on three ideas: (a) an
unsupervised alignment approach to soft-align questions and answers with
justification sentences using only GloVe embeddings, (b) an iterative process
that reformulates queries focusing on terms that are not covered by existing
justifications, and (c) a stopping criterion that terminates retrieval when
the terms in the given question and candidate answers are covered by the
retrieved justifications. Despite its simplicity, our approach outperforms all
the previous methods (including supervised methods) on the evidence selection
task on two datasets: MultiRC and QASC. When these evidence sentences are fed
into a RoBERTa answer classification component, we achieve state-of-the-art QA
performance on these two datasets.
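As a rough illustration of ideas (a)-(c), here is a minimal Python sketch; it is not the authors' implementation. It assumes pre-tokenized, lowercased question/answer/sentence inputs, GloVe vectors in the standard text format, exact lexical match for the coverage test, and an illustrative max_sents cap; all function names are hypothetical.

```python
import numpy as np

def load_glove(path):
    """Parse a standard GloVe .txt file into a {word: vector} dict."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)

def alignment_score(query_terms, sentence_terms, vectors):
    """Idea (a): soft-align each query term to its best-matching sentence term."""
    score = 0.0
    for q in query_terms:
        sims = [cosine(vectors[q], vectors[t]) for t in sentence_terms if t in vectors]
        score += max(sims, default=0.0)
    return score

def iterative_retrieval(question, answer, sentences, vectors, max_sents=5):
    """question/answer: token lists; sentences: list of token lists.
    Returns indices of the selected justification sentences."""
    query = {t for t in question + answer if t in vectors}
    selected, remaining = [], set(range(len(sentences)))
    while query and remaining and len(selected) < max_sents:
        # rank the remaining sentences by alignment with the current query
        best = max(remaining, key=lambda i: alignment_score(query, sentences[i], vectors))
        selected.append(best)
        remaining.discard(best)
        # idea (b): reformulate the query around terms not yet covered ...
        covered = {t for i in selected for t in sentences[i]}
        # ... and idea (c): the loop exits once every question/answer term is covered
        query -= covered
    return selected
```

A call would look like iterative_retrieval(q_tokens, a_tokens, sent_tokens, load_glove("glove.6B.50d.txt")); real use would also handle stopwords and out-of-vocabulary terms.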
Related papers
- Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models [17.60243337898751]
We present a Chain-of-Action (CoA) framework for multimodal and retrieval-augmented Question Answering (QA).
Compared to the literature, CoA overcomes two major challenges of current QA applications: (i) unfaithful hallucination that is inconsistent with real-time or domain facts and (ii) weak reasoning performance over compositional information.
arXiv Detail & Related papers (2024-03-26T03:51:01Z) - Mastering the ABCDs of Complex Questions: Answer-Based Claim
Decomposition for Fine-grained Self-Evaluation [9.776667356119352]
We propose answer-based claim decomposition (ABCD), a prompting strategy that decomposes questions into true/false claims.
Using the decomposed ABCD claims, we perform fine-grained self-evaluation.
We find that GPT-3.5 has some ability to determine to what extent its answer satisfies the criteria of the input question.
arXiv Detail & Related papers (2023-05-24T05:53:11Z) - Diverse Multi-Answer Retrieval with Determinantal Point Processes [11.925050407713597]
We propose a re-ranking approach based on Determinantal Point Processes (DPPs), using BERT-based kernels (see the sketch after this list).
Results demonstrate that our re-ranking technique outperforms the state-of-the-art method on the AmbigQA dataset.
arXiv Detail & Related papers (2022-11-29T08:54:05Z) - GERE: Generative Evidence Retrieval for Fact Verification [57.78768817972026]
We propose GERE, the first system that retrieves evidence in a generative fashion.
The experimental results on the FEVER dataset show that GERE achieves significant improvements over the state-of-the-art baselines.
arXiv Detail & Related papers (2022-04-12T03:49:35Z) - Double Retrieval and Ranking for Accurate Question Answering [120.69820139008138]
We show that an answer verification step introduced into Transformer-based answer selection models can significantly improve the state of the art in Question Answering.
Results on three well-known answer sentence selection (AS2) datasets show consistent and significant improvements.
arXiv Detail & Related papers (2022-01-16T06:20:07Z) - Adaptive Information Seeking for Open-Domain Question Answering [61.39330982757494]
We propose a novel adaptive information-seeking strategy for open-domain question answering, namely AISO.
Following the learned policy, AISO adaptively selects a suitable retrieval action to seek the missing evidence at each step.
AISO outperforms all baseline methods with predefined strategies in terms of both retrieval and answer evaluations.
arXiv Detail & Related papers (2021-09-14T15:08:13Z) - Open Question Answering over Tables and Text [55.8412170633547]
In open question answering (QA), the answer to a question is produced by retrieving and then analyzing documents that might contain answers to the question.
Most open QA systems have considered only retrieving information from unstructured text.
We present a new large-scale dataset, Open Table-and-Text Question Answering (OTT-QA), to evaluate performance on this task.
arXiv Detail & Related papers (2020-10-20T16:48:14Z) - Tradeoffs in Sentence Selection Techniques for Open-Domain Question
Answering [54.541952928070344]
We describe two groups of models for sentence selection: QA-based approaches, which run a full-fledged QA system to identify answer candidates, and retrieval-based models, which find parts of each passage specifically related to each question.
We show that very lightweight QA models can do well at this task, but retrieval-based models are faster still.
arXiv Detail & Related papers (2020-09-18T23:39:15Z)