Context-based Transformer Models for Answer Sentence Selection
- URL: http://arxiv.org/abs/2006.01285v1
- Date: Mon, 1 Jun 2020 21:52:19 GMT
- Title: Context-based Transformer Models for Answer Sentence Selection
- Authors: Ivano Lauriola and Alessandro Moschitti
- Abstract summary: In this paper, we analyze the role of the contextual information in the sentence selection task.
We propose a Transformer-based architecture that leverages two types of contexts, local and global.
The results show that the combination of local and global contexts in a Transformer model significantly improves the accuracy in Answer Sentence Selection.
- Score: 109.96739477808134
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: An important task for the design of Question Answering systems is the
selection of the sentence containing (or constituting) the answer from
documents relevant to the asked question. Most previous work has only used the
target sentence to compute its score with the question as the models were not
powerful enough to also effectively encode additional contextual information.
In this paper, we analyze the role of the contextual information in the
sentence selection task, proposing a Transformer-based architecture that
leverages two types of contexts, local and global. The former describes the
paragraph containing the sentence, aiming at solving implicit references,
whereas the latter describes the entire document containing the candidate
sentence, providing content-based information. The results on three different
benchmarks show that the combination of local and global contexts in a
Transformer model significantly improves the accuracy in Answer Sentence
Selection.
Related papers
- Question-Context Alignment and Answer-Context Dependencies for Effective
Answer Sentence Selection [38.661155271311515]
We propose to improve the candidate scoring by explicitly incorporating the dependencies between question-context and answer-context into the final representation of a candidate.
Our proposed model achieves significant improvements on popular AS2 benchmarks, i.e., WikiQA and WDRASS, obtaining new state-of-the-art on all benchmarks.
arXiv Detail & Related papers (2023-06-03T20:59:19Z) - HPE: Answering Complex Questions over Text by Hybrid Question Parsing and
Execution [92.69684305578957]
We propose a framework of question parsing and execution on textual QA.
The proposed framework can be viewed as a top-down question parsing followed by a bottom-up answer backtracking.
Our experiments on MuSiQue, 2WikiQA, HotpotQA, and NQ show that the proposed parsing and hybrid execution framework outperforms existing approaches in supervised, few-shot, and zero-shot settings.
arXiv Detail & Related papers (2023-05-12T22:37:06Z) - Pre-training Transformer Models with Sentence-Level Objectives for
Answer Sentence Selection [99.59693674455582]
We propose three novel sentence-level transformer pre-training objectives that incorporate paragraph-level semantics within and across documents.
Our experiments on three public and one industrial AS2 datasets demonstrate the empirical superiority of our pre-trained transformers over baseline models.
arXiv Detail & Related papers (2022-05-20T22:39:00Z) - Paragraph-based Transformer Pre-training for Multi-Sentence Inference [99.59693674455582]
We show that popular pre-trained transformers perform poorly when used for fine-tuning on multi-candidate inference tasks.
We then propose a new pre-training objective that models the paragraph-level semantics across multiple input sentences.
arXiv Detail & Related papers (2022-05-02T21:41:14Z) - Question rewriting? Assessing its importance for conversational question
answering [0.6449761153631166]
This work presents a conversational question answering system designed specifically for the Search-Oriented Conversational AI (SCAI) shared task.
In particular, we considered different variations of the question rewriting module to evaluate the influence on the subsequent components.
Our system achieved the best performance in the shared task and our analysis emphasizes the importance of the conversation context representation for the overall system performance.
arXiv Detail & Related papers (2022-01-22T23:31:25Z) - Text Simplification for Comprehension-based Question-Answering [7.144235435987265]
We release Simple-SQuAD, a simplified version of the widely-used SQuAD dataset.
We benchmark the newly created corpus and perform an ablation study for examining the effect of the simplification process in the SQuAD-based question answering task.
arXiv Detail & Related papers (2021-09-28T18:48:00Z) - Modeling Context in Answer Sentence Selection Systems on a Latency
Budget [87.45819843513598]
We present an approach to efficiently incorporate contextual information in AS2 models.
For each answer candidate, we first use unsupervised similarity techniques to extract relevant sentences from its source document.
Our best approach, which leverages a multi-way attention architecture to efficiently encode context, improves 6% to 11% over the non-contextual state of the art in AS2 with minimal impact on system latency.
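The snippet above does not name the similarity measure used to pick context sentences; as a minimal sketch, assuming a simple word-overlap (Jaccard) similarity stands in for the paper's unspecified unsupervised technique, the extraction step could look like:

```python
import re

def jaccard(a, b):
    """Word-overlap (Jaccard) similarity between two sentences."""
    sa = set(re.findall(r"\w+", a.lower()))
    sb = set(re.findall(r"\w+", b.lower()))
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def extract_context(candidate, document_sentences, k=2):
    """Rank a document's sentences by similarity to the answer
    candidate and keep the top-k as its context. Jaccard similarity
    is an illustrative stand-in; the paper's actual unsupervised
    similarity technique is not specified in this listing."""
    ranked = sorted(document_sentences,
                    key=lambda s: jaccard(candidate, s),
                    reverse=True)
    return ranked[:k]

doc = [
    "The Eiffel Tower is in Paris.",
    "It was completed in 1889.",
    "Paris is the capital of France.",
]
ctx = extract_context("When was the Eiffel Tower completed?", doc, k=2)
```

Because the ranking is unsupervised, it adds no trained components to the pipeline, which is consistent with the snippet's emphasis on minimal latency impact.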
arXiv Detail & Related papers (2021-01-28T16:24:48Z) - Dynamic Context Selection for Document-level Neural Machine Translation
via Reinforcement Learning [55.18886832219127]
We propose an effective approach to select dynamic context for document-level translation.
A novel reward is proposed to encourage the selection and utilization of dynamic context sentences.
Experiments demonstrate that our approach can select adaptive context sentences for different source sentences.
arXiv Detail & Related papers (2020-10-09T01:05:32Z) - Word Embedding-based Text Processing for Comprehensive Summarization and
Distinct Information Extraction [1.552282932199974]
We propose two automated text processing frameworks specifically designed to analyze online reviews.
The first framework summarizes the reviews dataset by extracting essential sentences.
The second framework is based on a question-answering neural network model trained to extract answers to multiple different questions.
arXiv Detail & Related papers (2020-04-21T02:43:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.