Adaptive Information Seeking for Open-Domain Question Answering
- URL: http://arxiv.org/abs/2109.06747v1
- Date: Tue, 14 Sep 2021 15:08:13 GMT
- Title: Adaptive Information Seeking for Open-Domain Question Answering
- Authors: Yunchang Zhu, Liang Pang, Yanyan Lan, Huawei Shen, Xueqi Cheng
- Abstract summary: We propose a novel adaptive information-seeking strategy for open-domain question answering, namely AISO.
According to the learned policy, AISO could adaptively select a proper retrieval action to seek the missing evidence at each step.
AISO outperforms all baseline methods with predefined strategies in terms of both retrieval and answer evaluations.
- Score: 61.39330982757494
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Information seeking is an essential step for open-domain question answering
to efficiently gather evidence from a large corpus. Recently, iterative
approaches have been proven to be effective for complex questions, by
recursively retrieving new evidence at each step. However, almost all existing
iterative approaches use predefined strategies, either applying the same
retrieval function multiple times or fixing the order of different retrieval
functions, which cannot fulfill the diverse requirements of various questions.
In this paper, we propose a novel adaptive information-seeking strategy for
open-domain question answering, namely AISO. Specifically, the whole retrieval
and answer process is modeled as a partially observed Markov decision process,
where three types of retrieval operations (i.e., BM25, DPR, and hyperlink) and
one answer operation are defined as actions. According to the learned policy,
AISO could adaptively select a proper retrieval action to seek the missing
evidence at each step, based on the collected evidence and the reformulated
query, or directly output the answer when the evidence set is sufficient for
the question. Experiments on SQuAD Open and HotpotQA fullwiki, which serve as
single-hop and multi-hop open-domain QA benchmarks, show that AISO outperforms
all baseline methods with predefined strategies in terms of both retrieval and
answer evaluations.
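The retrieve-or-answer loop described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of the control flow only: the policy, retrievers, and reader below are illustrative stubs (`toy_policy`, `toy_retrieve`, `toy_read` are invented names), not the paper's actual trained models; only the action set (BM25, DPR, hyperlink, answer) comes from the abstract.

```python
# Hypothetical sketch of AISO's adaptive information-seeking loop:
# at each step a policy observes the collected evidence and the query,
# then either picks a retrieval action or emits the final answer.

ACTIONS = ["bm25", "dpr", "link", "answer"]

def toy_policy(evidence, query):
    # Stand-in for the learned policy: retrieve with BM25 first,
    # then DPR, and answer once two pieces of evidence are collected.
    if len(evidence) >= 2:
        return "answer"
    return "bm25" if not evidence else "dpr"

def toy_retrieve(action, query):
    # Stand-in for the three retrieval functions.
    return f"passage-from-{action}"

def toy_read(evidence, query):
    # Stand-in for the reader that extracts the answer from evidence.
    return "final answer"

def aiso_loop(query, max_steps=8):
    evidence = []
    for _ in range(max_steps):
        action = toy_policy(evidence, query)
        if action == "answer":
            return toy_read(evidence, query)
        evidence.append(toy_retrieve(action, query))
    # Fall back to answering once the step budget is exhausted.
    return toy_read(evidence, query)
```

The key property the sketch captures is that the action sequence is chosen per question at run time rather than fixed in advance, which is the contrast the abstract draws with predefined iterative strategies.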
Related papers
- Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent [102.31558123570437]
Multimodal Retrieval Augmented Generation (mRAG) plays an important role in mitigating the "hallucination" issue inherent in multimodal large language models (MLLMs).
We propose the first self-adaptive planning agent for multimodal retrieval, OmniSearch.
arXiv Detail & Related papers (2024-11-05T09:27:21Z) - UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning.
arXiv Detail & Related papers (2022-12-02T04:08:09Z) - Diverse Multi-Answer Retrieval with Determinantal Point Processes [11.925050407713597]
We propose a re-ranking-based approach using determinantal point processes with BERT-based kernels.
Results demonstrate that our re-ranking technique outperforms the state-of-the-art method on the AmbigQA dataset.
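A common way to re-rank with a determinantal point process is greedy MAP selection: repeatedly pick the candidate that most increases the determinant of the selected kernel submatrix, which trades off relevance (the diagonal) against redundancy (the off-diagonal similarities). The sketch below shows this generic greedy procedure with a toy similarity kernel; it is not the cited paper's implementation, and the BERT-derived kernel is replaced here by a hand-built matrix for illustration.

```python
import numpy as np

def greedy_dpp_rerank(kernel, k):
    """Greedily pick k items maximizing det(L[S, S]) for kernel L.

    Larger diagonal entries mean higher item quality; large
    off-diagonal entries penalize selecting similar items together.
    """
    n = kernel.shape[0]
    selected, remaining = [], list(range(n))
    for _ in range(k):
        best, best_det = None, -np.inf
        for i in remaining:
            idx = selected + [i]
            det = np.linalg.det(kernel[np.ix_(idx, idx)])
            if det > best_det:
                best, best_det = i, det
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy kernel: items 0 and 1 are duplicates, item 2 is distinct.
L = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
```

On this kernel, selecting two items skips the duplicate: after picking item 0, adding item 1 gives a singular (determinant-zero) submatrix, so item 2 is chosen instead.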
arXiv Detail & Related papers (2022-11-29T08:54:05Z) - Answering Open-Domain Questions of Varying Reasoning Steps from Text [39.48011017748654]
We develop a unified system to answer open-domain questions directly from text.
We employ a single multi-task transformer model to perform all the necessary subtasks.
We show that our model demonstrates competitive performance on both existing benchmarks and this new benchmark.
arXiv Detail & Related papers (2020-10-23T16:51:09Z) - Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval [117.07047313964773]
We propose a simple and efficient multi-hop dense retrieval approach for answering complex open-domain questions.
Our method does not require access to any corpus-specific information, such as inter-document hyperlinks or human-annotated entity markers.
Our system also yields a much better efficiency-accuracy trade-off, matching the best published accuracy on HotpotQA while being 10 times faster at inference time.
arXiv Detail & Related papers (2020-09-27T06:12:29Z) - Tradeoffs in Sentence Selection Techniques for Open-Domain Question Answering [54.541952928070344]
We describe two groups of models for sentence selection: QA-based approaches, which run a full-fledged QA system to identify answer candidates, and retrieval-based models, which find parts of each passage specifically related to each question.
We show that very lightweight QA models can do well at this task, but retrieval-based models are faster still.
arXiv Detail & Related papers (2020-09-18T23:39:15Z) - Answering Any-hop Open-domain Questions with Iterative Document Reranking [62.76025579681472]
We propose a unified QA framework to answer any-hop open-domain questions.
Our method consistently achieves performance comparable to or better than the state-of-the-art on both single-hop and multi-hop open-domain QA datasets.
arXiv Detail & Related papers (2020-09-16T04:31:38Z) - Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop Question Answering [40.58976291178477]
We introduce a simple, fast, and unsupervised iterative evidence retrieval method.
Despite its simplicity, our approach outperforms all the previous methods on the evidence selection task.
When these evidence sentences are fed into a RoBERTa answer classification component, we achieve state-of-the-art QA performance.
arXiv Detail & Related papers (2020-05-04T00:19:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.