A Simple Approach to Jointly Rank Passages and Select Relevant Sentences in the OBQA Context
- URL: http://arxiv.org/abs/2109.10497v1
- Date: Wed, 22 Sep 2021 03:11:17 GMT
- Title: A Simple Approach to Jointly Rank Passages and Select Relevant Sentences in the OBQA Context
- Authors: Man Luo, Shuguang Chen, Chitta Baral
- Abstract summary: How to select the relevant information from a large corpus is a crucial problem for reasoning and inference.
Many existing frameworks use a deep learning model to select relevant passages and then answer each question by matching a sentence in the corresponding passage.
We present a simple yet effective framework to address these problems by jointly ranking passages and selecting sentences.
- Score: 15.556928370682094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the open-book question answering (OBQA) task, how to select the relevant
information from a large corpus is a crucial problem for reasoning and
inference. Some datasets (e.g., HotpotQA) mainly focus on testing the model's
reasoning ability at the sentence level. To overcome this challenge, many
existing frameworks use a deep learning model to select relevant passages and
then answer each question by matching a sentence in the corresponding passage.
However, such frameworks require long inference time and fail to take advantage
of the relationship between passages and sentences. In this work, we present a
simple yet effective framework to address these problems by jointly ranking
passages and selecting sentences. We propose consistency and similarity
constraints to promote the correlation and interaction between passage ranking
and sentence selection. In our experiments, we demonstrate that our framework
can achieve competitive results and outperform the baseline by 28% in terms of
exact matching of relevant sentences on the HotpotQA dataset.
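The joint objective the abstract describes can be illustrated with a minimal sketch. The scoring values, the additive combination, and the consistency formulation below are illustrative assumptions for exposition, not the paper's actual model or code.

```python
# Illustrative sketch of jointly ranking passages and selecting sentences.
# Scores are assumed to come from some upstream relevance model; the
# consistency term below is a hypothetical formulation, not the paper's.

def consistency_loss(passage_score, sentence_scores):
    """Penalize disagreement between a passage's relevance score and the
    strongest sentence-level evidence inside it (assumed formulation)."""
    return abs(passage_score - max(sentence_scores))

def joint_rank(passages):
    """Rank passages by combining the passage score with its best sentence
    score, selecting the top sentence of each passage at the same time."""
    ranked = []
    for pid, (p_score, s_scores) in passages.items():
        combined = p_score + max(s_scores)            # joint objective
        best_sentence = s_scores.index(max(s_scores)) # sentence selection
        ranked.append((combined, pid, best_sentence))
    ranked.sort(reverse=True)
    return ranked

# Toy input: per-passage relevance score and per-sentence scores.
passages = {
    "p1": (0.9, [0.2, 0.8, 0.1]),
    "p2": (0.4, [0.5, 0.3]),
}
ranking = joint_rank(passages)  # passage and sentence chosen in one pass
```

Ranking passages and sentences in one pass is what lets the two signals interact: a passage whose best sentence is weak is demoted even if its passage-level score is high, which is the kind of correlation the consistency constraint promotes.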
Related papers
- QUDSELECT: Selective Decoding for Questions Under Discussion Parsing [90.92351108691014]
Question Under Discussion (QUD) is a discourse framework that uses implicit questions to reveal discourse relationships between sentences.
We introduce QUDSELECT, a joint-training framework that selectively decodes the QUD dependency structures considering the QUD criteria.
Our method outperforms the state-of-the-art baseline models by 9% in human evaluation and 4% in automatic evaluation.
arXiv Detail & Related papers (2024-08-02T06:46:08Z)
- Learning to Select the Relevant History Turns in Conversational Question Answering [27.049444003555234]
The dependency between relevant history selection and correct answer prediction is an intriguing but under-explored area.
We propose a framework, DHS-ConvQA, that first generates the context and question entities for all the history turns.
We demonstrate that selecting relevant turns works better than rewriting the original question.
arXiv Detail & Related papers (2023-08-04T12:59:39Z)
- Relational Sentence Embedding for Flexible Semantic Matching [86.21393054423355]
We present Relational Sentence Embedding (RSE), a new paradigm that further explores the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z)
- Effective FAQ Retrieval and Question Matching With Unsupervised Knowledge Injection [10.82418428209551]
We propose a contextual language model for retrieving appropriate answers to frequently asked questions.
We also explore capitalizing on domain-specific, topically relevant relations between words in an unsupervised manner.
We evaluate variants of our approach on a publicly-available Chinese FAQ dataset, and further apply and contextualize it to a large-scale question-matching task.
arXiv Detail & Related papers (2020-10-27T05:03:34Z)
- A Wrong Answer or a Wrong Question? An Intricate Relationship between Question Reformulation and Answer Selection in Conversational Question Answering [15.355557454305776]
We show that question rewriting (QR) of the conversational context helps shed more light on this phenomenon.
We present the results of this analysis on the TREC CAsT and QuAC (CANARD) datasets.
arXiv Detail & Related papers (2020-10-13T06:29:51Z)
- Context Modeling with Evidence Filter for Multiple Choice Question Answering [18.154792554957595]
Multiple-Choice Question Answering (MCQA) is a challenging task in machine reading comprehension.
The main challenge is to extract "evidence" from the given context that supports the correct answer.
Existing work tackles this problem with annotated evidence or distant supervision based on rules, both of which rely heavily on human effort.
We propose a simple yet effective approach termed evidence filtering to model the relationships between the encoded contexts.
arXiv Detail & Related papers (2020-10-06T11:53:23Z)
- Tradeoffs in Sentence Selection Techniques for Open-Domain Question Answering [54.541952928070344]
We describe two groups of models for sentence selection: QA-based approaches, which run a full-fledged QA system to identify answer candidates, and retrieval-based models, which find parts of each passage specifically related to each question.
We show that very lightweight QA models can do well at this task, but retrieval-based models are faster still.
arXiv Detail & Related papers (2020-09-18T23:39:15Z)
- Context-based Transformer Models for Answer Sentence Selection [109.96739477808134]
In this paper, we analyze the role of the contextual information in the sentence selection task.
We propose a Transformer based architecture that leverages two types of contexts, local and global.
The results show that the combination of local and global contexts in a Transformer model significantly improves the accuracy in Answer Sentence Selection.
arXiv Detail & Related papers (2020-06-01T21:52:19Z)
- Query Focused Multi-Document Summarization with Distant Supervision [88.39032981994535]
Existing work relies heavily on retrieval-style methods for estimating the relevance between queries and text segments.
We propose a coarse-to-fine modeling framework which introduces separate modules for estimating whether segments are relevant to the query.
We demonstrate that our framework outperforms strong comparison systems on standard QFS benchmarks.
arXiv Detail & Related papers (2020-04-06T22:35:19Z)
- Improving Multi-Turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting [84.9716460244444]
We consider utilizing the underlying correlation in the data resource itself to derive different kinds of supervision signals.
We conduct extensive experiments in two public datasets and obtain significant improvement in both datasets.
arXiv Detail & Related papers (2020-02-18T06:29:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.