Complementary Evidence Identification in Open-Domain Question Answering
- URL: http://arxiv.org/abs/2103.11643v2
- Date: Tue, 23 Mar 2021 06:35:05 GMT
- Title: Complementary Evidence Identification in Open-Domain Question Answering
- Authors: Xiangyang Mou, Mo Yu, Shiyu Chang, Yufei Feng, Li Zhang and Hui Su
- Abstract summary: We propose a new problem of complementary evidence identification for open-domain question answering (QA).
The problem aims to efficiently find a small set of passages that covers the full evidence, from multiple aspects, needed to answer a complex question.
- Score: 66.17954897343456
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a new problem of complementary evidence identification
for open-domain question answering (QA). The problem aims to efficiently find a
small set of passages that covers the full evidence, from multiple aspects, needed
to answer a complex question. To this end, we propose a method that learns vector
representations of passages and models the sufficiency and diversity within the
selected set, in addition to the relevance between the question and the passages.
Our experiments demonstrate that our method accounts for the dependencies within
the supporting evidence and significantly improves the accuracy of complementary
evidence selection in the QA domain.
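The abstract gives no formulas or code for combining relevance, sufficiency, and diversity over the learned passage vectors, so the following is only a minimal NumPy sketch of how such a set score might look, assuming cosine-similarity definitions for each term; the function name, weights, and term definitions are illustrative, not the paper's.

```python
# Illustrative sketch only (not the paper's released code): combine
# relevance, sufficiency, and diversity terms over passage embeddings.
import numpy as np

def score_set(q_vec: np.ndarray, p_vecs: np.ndarray,
              w_rel: float = 1.0, w_suf: float = 1.0, w_div: float = 1.0) -> float:
    """Score a candidate passage set for a question (all definitions assumed)."""
    q = q_vec / np.linalg.norm(q_vec)
    P = p_vecs / np.linalg.norm(p_vecs, axis=1, keepdims=True)

    # Relevance: mean cosine similarity between the question and each passage.
    relevance = float((P @ q).mean())

    # Sufficiency: does the set as a whole point in the question's direction?
    s = P.sum(axis=0)
    sufficiency = float(s @ q / (np.linalg.norm(s) + 1e-9))

    # Diversity: penalize redundancy among the selected passages.
    sims = P @ P.T
    k = len(P)
    mean_offdiag = (sims.sum() - np.trace(sims)) / max(k * (k - 1), 1)
    diversity = 1.0 - float(mean_offdiag)

    return w_rel * relevance + w_suf * sufficiency + w_div * diversity

# Toy usage: greedily grow a small complementary set from random candidates.
rng = np.random.default_rng(0)
q, cands = rng.normal(size=128), rng.normal(size=(20, 128))
chosen: list[int] = []
for _ in range(3):
    rest = [i for i in range(len(cands)) if i not in chosen]
    chosen.append(max(rest, key=lambda i: score_set(q, cands[chosen + [i]])))
print(chosen)
```

The greedy loop at the end is likewise just one plausible way to use a set-level score for selection, not the paper's procedure.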
Related papers
- Diversify-verify-adapt: Efficient and Robust Retrieval-Augmented Ambiguous Question Answering [45.154063285999015]
The retrieval-augmented generation (RAG) framework addresses ambiguity in user queries in QA systems.
RAG retrieves passages that cover all plausible interpretations and generates comprehensive responses.
However, a single retrieval process often suffers from low-quality results.
We propose a diversify-verify-adapt (DIVA) framework to address this problem.
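The summary above says DIVA first retrieves passages covering all plausible interpretations of an ambiguous question. As a hedged illustration of what such a "diversify" step can look like in general, here is a maximal-marginal-relevance-style selection sketch; MMR is a standard diversification technique and is not claimed to be DIVA's actual algorithm.

```python
# Hypothetical "diversify" retrieval step in the spirit of the summary above,
# using maximal marginal relevance (MMR); not DIVA's actual implementation.
import numpy as np

def diversify(q_vec: np.ndarray, cand_vecs: np.ndarray,
              k: int = 5, lam: float = 0.7) -> list[int]:
    """Pick k candidate indices, trading query relevance against redundancy."""
    q = q_vec / np.linalg.norm(q_vec)
    C = cand_vecs / np.linalg.norm(cand_vecs, axis=1, keepdims=True)
    rel = C @ q                       # relevance of each candidate to the query
    picked = [int(np.argmax(rel))]    # seed with the most relevant passage
    while len(picked) < min(k, len(C)):
        red = (C @ C[picked].T).max(axis=1)   # similarity to already-picked set
        mmr = lam * rel - (1 - lam) * red
        mmr[picked] = -np.inf                 # never re-pick a passage
        picked.append(int(np.argmax(mmr)))
    return picked
```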
arXiv Detail & Related papers (2024-09-04T01:14:04Z)
- Progressive Evidence Refinement for Open-domain Multimodal Retrieval Question Answering [20.59485758381809]
Current multimodal retrieval question-answering models face two main challenges.
In particular, utilizing compressed evidence features as input to the model results in the loss of fine-grained information within the evidence.
We propose a two-stage framework for evidence retrieval and question-answering to alleviate these issues.
arXiv Detail & Related papers (2023-10-15T01:18:39Z)
- Evidentiality-aware Retrieval for Overcoming Abstractiveness in Open-Domain Question Answering [29.00167886463793]
We propose Evidentiality-Aware Passage Retrieval (EADPR) to learn to discriminate evidence passages from distractors.
We conduct extensive experiments to validate the effectiveness of our proposed method on multiple abstractive ODQA tasks.
arXiv Detail & Related papers (2023-04-06T12:42:37Z)
- Open-domain Question Answering via Chain of Reasoning over Heterogeneous Knowledge [82.5582220249183]
We propose a novel open-domain question answering (ODQA) framework for answering single/multi-hop questions across heterogeneous knowledge sources.
Unlike previous methods that solely rely on the retriever for gathering all evidence in isolation, our intermediary performs a chain of reasoning over the retrieved set.
Our system achieves competitive performance on two ODQA datasets, OTT-QA and NQ, over tables and passages from Wikipedia.
arXiv Detail & Related papers (2022-10-22T03:21:32Z)
- Adaptive Information Seeking for Open-Domain Question Answering [61.39330982757494]
We propose a novel adaptive information-seeking strategy for open-domain question answering, namely AISO.
According to the learned policy, AISO can adaptively select a proper retrieval action to seek the missing evidence at each step.
AISO outperforms all baseline methods with predefined strategies in terms of both retrieval and answer evaluations.
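As a rough picture of the adaptive action selection described above, here is a sketch of a policy-driven retrieval loop; the action vocabulary, policy signature, and stopping rule are assumptions for illustration, not AISO's actual design.

```python
# Hypothetical sketch of a policy-driven retrieval loop matching the summary
# above; action names and the stopping rule are illustrative, not AISO's.
from typing import Callable

Retriever = Callable[[str, list[str]], str]

def adaptive_search(question: str,
                    policy: Callable[[str, list[str]], str],
                    actions: dict[str, Retriever],
                    max_steps: int = 5) -> list[str]:
    """Let a learned policy pick one retrieval action per step until it stops."""
    evidence: list[str] = []
    for _ in range(max_steps):
        action = policy(question, evidence)   # e.g. "sparse", "dense", "link", "stop"
        if action == "stop":                  # policy judges evidence sufficient
            break
        evidence.append(actions[action](question, evidence))
    return evidence
```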
arXiv Detail & Related papers (2021-09-14T15:08:13Z)
- Building and Evaluating Open-Domain Dialogue Corpora with Clarifying Questions [65.60888490988236]
We release a dataset focused on open-domain single- and multi-turn conversations.
We benchmark several state-of-the-art neural baselines.
We propose a pipeline consisting of offline and online steps for evaluating the quality of clarifying questions in various dialogues.
arXiv Detail & Related papers (2021-09-13T09:16:14Z)
- Multi-hop Inference for Question-driven Summarization [39.08269647808958]
We propose a novel question-driven abstractive summarization method, Multi-hop Selective Generator (MSG).
MSG incorporates multi-hop reasoning into question-driven summarization and, at the same time, provides justifications for the generated summaries.
Experimental results show that the proposed method consistently outperforms state-of-the-art methods on two non-factoid QA datasets.
arXiv Detail & Related papers (2020-10-08T02:36:39Z)
- Context Modeling with Evidence Filter for Multiple Choice Question Answering [18.154792554957595]
Multiple-Choice Question Answering (MCQA) is a challenging task in machine reading comprehension.
The main challenge is to extract "evidence" from the given context that supports the correct answer.
Existing work tackles this problem with annotated evidence or with rule-based distant supervision, both of which rely heavily on human effort.
We propose a simple yet effective approach termed evidence filtering to model the relationships between the encoded contexts.
arXiv Detail & Related papers (2020-10-06T11:53:23Z)
- Answering Any-hop Open-domain Questions with Iterative Document Reranking [62.76025579681472]
We propose a unified QA framework to answer any-hop open-domain questions.
Our method consistently achieves performance comparable to or better than the state-of-the-art on both single-hop and multi-hop open-domain QA datasets.
arXiv Detail & Related papers (2020-09-16T04:31:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.