Context-guided Triple Matching for Multiple Choice Question Answering
- URL: http://arxiv.org/abs/2109.12996v1
- Date: Mon, 27 Sep 2021 12:30:39 GMT
- Title: Context-guided Triple Matching for Multiple Choice Question Answering
- Authors: Xun Yao, Junlong Ma, Xinrong Hu, Junping Liu, Jie Yang, Wanqing Li
- Abstract summary: Multiple choice question answering (MCQA) refers to identifying a suitable answer from multiple candidates, by estimating the matching score among the triple of the passage, question and answer.
Existing methods decouple the process into several pair-wise or dual matching steps, which limits their ability to assess cases with multiple evidence sentences.
This paper introduces a novel Context-guided Triple Matching algorithm that integrates a Triple Matching (TM) module and a Contrastive Regularization (CR) term.
- Score: 13.197150032345895
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The task of multiple choice question answering (MCQA) refers to identifying a
suitable answer from multiple candidates, by estimating the matching score
among the triple of the passage, question and answer. Despite the general
research interest in this regard, existing methods decouple the process into
several pair-wise or dual matching steps, which limits their ability to assess
cases with multiple evidence sentences. To alleviate this issue, this paper
introduces a novel Context-guided Triple Matching algorithm, which is achieved
by integrating a Triple Matching (TM) module and a Contrastive Regularization
(CR). The former is designed to enumerate one component from the triple as the
background context, and estimate its semantic matching with the other two.
Additionally, a contrastive term is proposed to capture the dissimilarity
between the correct answer and the distractors. We validate the proposed
algorithm on several benchmark MCQA datasets, where it achieves competitive
performance against state-of-the-art methods.
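The paper's exact architecture is not reproduced in this abstract, but the core idea can be sketched with plain vector operations. In the sketch below, all function names are hypothetical and elementwise context weighting stands in for whatever learned attention the model would actually use:

```python
import numpy as np

def match_score(context, a, b):
    # Cosine similarity between a and b after weighting both by the
    # background context (a crude stand-in for learned attention).
    ca, cb = context * a, context * b
    return float(ca @ cb / (np.linalg.norm(ca) * np.linalg.norm(cb) + 1e-9))

def triple_matching(p, q, a):
    # Enumerate each element of (passage, question, answer) as the
    # background context and average the resulting pairwise scores.
    return (match_score(p, q, a) + match_score(q, p, a) + match_score(a, p, q)) / 3.0

def contrastive_regularizer(p, q, correct, distractors, margin=0.2):
    # Hinge-style contrastive term: the correct answer should out-score
    # every distractor by at least `margin`.
    s_pos = triple_matching(p, q, correct)
    return sum(max(0.0, margin - (s_pos - triple_matching(p, q, d)))
               for d in distractors)
```

The regularizer is zero exactly when every distractor trails the correct answer by the margin, which is the dissimilarity the CR term is meant to capture.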
Related papers
- Cross-Modal Coordination Across a Diverse Set of Input Modalities [0.0]
Cross-modal retrieval is the task of retrieving samples of a given modality by using queries of a different one.
This paper proposes two approaches to the problem: the first is based on an extension of the CLIP contrastive objective to an arbitrary number of input modalities.
The second departs from the contrastive formulation and tackles the coordination problem by regressing the cross-modal similarities towards a target.
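The first approach can be illustrated with a symmetric InfoNCE loss taken over every ordered pair of modalities. This is a sketch under the assumptions of in-batch negatives and L2-normalized embeddings, not the paper's exact formulation:

```python
import numpy as np

def multi_modal_contrastive_loss(embeddings, temperature=0.07):
    # CLIP-style InfoNCE extended to N modalities: for every ordered pair
    # of modalities, items at the same batch index are positives and all
    # other in-batch items are negatives; the pairwise losses are averaged.
    losses = []
    for i, a in enumerate(embeddings):
        for j, b in enumerate(embeddings):
            if i == j:
                continue
            logits = a @ b.T / temperature  # (batch, batch) similarity matrix
            m = logits.max(axis=1, keepdims=True)  # log-sum-exp stability shift
            log_probs = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
            losses.append(-np.mean(np.diag(log_probs)))  # targets on the diagonal
    return float(np.mean(losses))
```

With two modalities this reduces to the usual two-directional CLIP objective; with N modalities it averages over all N(N-1) directed pairs.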
arXiv Detail & Related papers (2024-01-29T17:53:25Z)
- Diverse Multi-Answer Retrieval with Determinantal Point Processes [11.925050407713597]
We propose a re-ranking approach based on Determinantal Point Processes (DPPs) with BERT-based kernels.
Results demonstrate that our re-ranking technique outperforms the state-of-the-art method on the AmbigQA dataset.
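Greedy MAP inference makes the DPP's diversity mechanism concrete. The kernel construction in the usage note (relevance-scaled similarities) is a common recipe and only an assumption about how BERT-derived scores would be plugged in:

```python
import numpy as np

def greedy_dpp_select(kernel, k):
    # Greedy MAP inference for a DPP: at each step add the item that
    # yields the largest log-determinant of the selected submatrix,
    # trading off item quality against similarity to items already chosen.
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(kernel.shape[0]):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(kernel[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_gain:
                best, best_gain = i, logdet
        if best is None:
            break
        selected.append(best)
    return selected
```

A kernel of the form L = diag(r) S diag(r), with r the per-passage relevance scores and S a pairwise similarity matrix, makes two near-duplicate passages unlikely to be selected together even when both are individually relevant.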
arXiv Detail & Related papers (2022-11-29T08:54:05Z)
- Double Retrieval and Ranking for Accurate Question Answering [120.69820139008138]
We show that an answer verification step introduced in Transformer-based answer selection models can significantly improve the state of the art in Question Answering.
The results on three well-known datasets for AS2 show consistent and significant improvement of the state of the art.
arXiv Detail & Related papers (2022-01-16T06:20:07Z)
- Deep Probabilistic Graph Matching [72.6690550634166]
We propose a deep learning-based graph matching framework that works for the original QAP without compromising on the matching constraints.
The proposed method is evaluated on three widely used benchmarks (Pascal VOC, Willow Object and SPair-71k) and outperforms all previous state-of-the-art methods on all of them.
arXiv Detail & Related papers (2022-01-05T13:37:27Z)
- A Simple Approach to Jointly Rank Passages and Select Relevant Sentences in the OBQA Context [15.556928370682094]
How to select the relevant information from a large corpus is a crucial problem for reasoning and inference.
Many existing frameworks use a deep learning model to select relevant passages and then answer each question by matching a sentence in the corresponding passage.
We present a simple yet effective framework to address these problems by jointly ranking passages and selecting sentences.
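Under the (hypothetical) simplification that both components reduce to scoring functions, joint ranking can be sketched as interpolating passage and sentence scores so the two decisions share one objective:

```python
def joint_rank(passages, score_passage, score_sentence, alpha=0.5):
    # Jointly rank (passage, sentence) pairs: each sentence's score is
    # interpolated with its passage's score, so passage ranking and
    # sentence selection optimize a single shared quantity.
    pairs = []
    for p in passages:
        ps = score_passage(p["text"])
        for s in p["sentences"]:
            pairs.append((alpha * ps + (1 - alpha) * score_sentence(s), s, p["text"]))
    pairs.sort(key=lambda t: t[0], reverse=True)
    return pairs
```

This is only an illustration of the "joint" idea; the paper's actual framework presumably learns both scorers rather than interpolating fixed ones.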
arXiv Detail & Related papers (2021-09-22T03:11:17Z)
- Joint Passage Ranking for Diverse Multi-Answer Retrieval [56.43443577137929]
We study multi-answer retrieval, an under-explored problem that requires retrieving passages to cover multiple distinct answers for a question.
This task requires joint modeling of retrieved passages, as models should not repeatedly retrieve passages containing the same answer at the cost of missing a different valid answer.
In this paper, we introduce JPR, a joint passage retrieval model focusing on reranking. To model the joint probability of the retrieved passages, JPR makes use of an autoregressive reranker that selects a sequence of passages, equipped with novel training and decoding algorithms.
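JPR's autoregressive selection can be caricatured as a greedy loop in which each candidate is scored conditioned on the sequence chosen so far. The toy `score` function below (relevance minus a penalty for answers already covered) is an illustration, not the paper's trained reranker:

```python
def autoregressive_rerank(candidates, score_fn, k):
    # Select passages one at a time; each step re-scores the remaining
    # candidates conditioned on the sequence chosen so far, so the model
    # can avoid passages that repeat an already-covered answer.
    chosen, remaining = [], list(candidates)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda c: score_fn(c, chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen

def score(candidate, chosen):
    # Toy conditional score: relevance minus a penalty for every answer
    # the already-selected passages have covered.
    covered = set().union(*(p["answers"] for p in chosen)) if chosen else set()
    return candidate["rel"] - 0.5 * len(candidate["answers"] & covered)
```

The conditioning on `chosen` is what distinguishes this from independent pointwise reranking: a slightly less relevant passage with a new answer can beat a more relevant duplicate.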
arXiv Detail & Related papers (2021-04-17T04:48:36Z)
- Generating Correct Answers for Progressive Matrices Intelligence Tests [88.78821060331582]
Raven's Progressive Matrices are multiple-choice intelligence tests, where one must complete the missing entry in a $3\times 3$ grid of abstract images.
Previous attempts to address this test have focused solely on selecting the right answer out of the multiple choices.
In this work, we focus, instead, on generating a correct answer given the grid, without seeing the choices, which is a harder task, by definition.
arXiv Detail & Related papers (2020-11-01T13:21:07Z)
- Tradeoffs in Sentence Selection Techniques for Open-Domain Question Answering [54.541952928070344]
We describe two groups of models for sentence selection: QA-based approaches, which run a full-fledged QA system to identify answer candidates, and retrieval-based models, which find parts of each passage specifically related to each question.
We show that very lightweight QA models can do well at this task, but retrieval-based models are faster still.
arXiv Detail & Related papers (2020-09-18T23:39:15Z)
- Multi-Stage Conversational Passage Retrieval: An Approach to Fusing Term Importance Estimation and Neural Query Rewriting [56.268862325167575]
We tackle conversational passage retrieval (ConvPR) with query reformulation integrated into a multi-stage ad-hoc IR system.
We propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting.
For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals.
For the latter, we reformulate conversational queries into natural, standalone, human-understandable queries with a pretrained sequence-to-sequence model.
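As an illustration of the frequency-based variant, here is a minimal sketch with a toy stopword list and raw term counts standing in for the paper's term-importance signals:

```python
from collections import Counter

def expand_query(query, context_turns, top_k=3):
    # Frequency-based conversational query expansion: append the most
    # frequent non-stopword terms from earlier turns to the current query.
    # (Toy stopword list; a real system would use a proper one.)
    stop = {"the", "a", "an", "of", "is", "what", "about", "it",
            "tell", "me", "more"}
    counts = Counter(w for turn in context_turns
                     for w in turn.lower().split() if w not in stop)
    expansion = [t for t, _ in counts.most_common(top_k)
                 if t not in query.lower().split()]
    return query + " " + " ".join(expansion)
```

For an underspecified follow-up turn like "tell me more about it", the expansion carries over the salient entities from earlier turns, which is exactly what makes the reformulated query usable by a standard ad-hoc retriever.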
arXiv Detail & Related papers (2020-05-05T14:30:20Z)
- Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop Question Answering [40.58976291178477]
We introduce a simple, fast, and unsupervised iterative evidence retrieval method.
Despite its simplicity, our approach outperforms all the previous methods on the evidence selection task.
When these evidence sentences are fed into a RoBERTa answer classification component, we achieve state-of-the-art QA performance.
arXiv Detail & Related papers (2020-05-04T00:19:48Z)
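The iterative loop described above can be sketched with token overlap standing in for the paper's embedding-based alignments: pick the best-aligned sentence, remove the query terms it covers, and repeat so later steps seek complementary evidence:

```python
def iterative_evidence_retrieval(question, sentences, max_steps=3):
    # Unsupervised iterative retrieval sketch: repeatedly pick the sentence
    # with the highest term overlap against the remaining query terms, then
    # drop the covered terms so the next step seeks complementary evidence.
    remaining = set(question.lower().split())
    chosen, pool = [], list(sentences)
    for _ in range(max_steps):
        scored = [(len(remaining & set(s.lower().split())), s) for s in pool]
        best_score, best = max(scored)
        if best_score == 0:  # nothing left aligns with the residual query
            break
        chosen.append(best)
        pool.remove(best)
        remaining -= set(best.lower().split())
    return chosen
```

Shrinking the residual query is the key multi-hop trick: it steers each subsequent step toward evidence for the still-uncovered part of the question instead of re-selecting near-duplicates of the first hit.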
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.