Question-Context Alignment and Answer-Context Dependencies for Effective
Answer Sentence Selection
- URL: http://arxiv.org/abs/2306.02196v1
- Date: Sat, 3 Jun 2023 20:59:19 GMT
- Title: Question-Context Alignment and Answer-Context Dependencies for Effective
Answer Sentence Selection
- Authors: Minh Van Nguyen, Kishan KC, Toan Nguyen, Thien Huu Nguyen, Ankit
Chadha, Thuy Vu
- Abstract summary: We propose to improve the candidate scoring by explicitly incorporating the dependencies between question-context and answer-context into the final representation of a candidate.
Our proposed model achieves significant improvements on popular AS2 benchmarks, i.e., WikiQA and WDRASS, obtaining new state-of-the-art on all benchmarks.
- Score: 38.661155271311515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Answer sentence selection (AS2) in open-domain question answering finds
an answer to a question by ranking candidate sentences extracted from web
documents. Recent work exploits answer context, i.e., the sentences around a
candidate, by incorporating them as an additional input string to the Transformer
models to improve the correctness scoring. In this paper, we propose to improve
the candidate scoring by explicitly incorporating the dependencies between
question-context and answer-context into the final representation of a
candidate. Specifically, we use Optimal Transport to compute the question-based
dependencies among sentences in the passage from which the answer is extracted.
We then represent these dependencies as edges in a graph and use Graph
Convolutional Network to derive the representation of a candidate, a node in
the graph. Our proposed model achieves significant improvements on popular AS2
benchmarks, i.e., WikiQA and WDRASS, obtaining new state-of-the-art on all
benchmarks.
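Below is a minimal conceptual sketch, not the authors' released model, of the pipeline the abstract describes: question-conditioned dependencies between passage sentences are computed with entropic Optimal Transport, the transport plan is used as the weighted adjacency of a sentence graph, and a small Graph Convolutional Network contextualizes the candidate node. The POT library, the additive question conditioning, the cosine cost, the single GCN layer, and the random embeddings standing in for Transformer encodings are all assumptions made for brevity.

```python
# Conceptual sketch only -- NOT the paper's implementation.
# (1) OT plan over question-conditioned sentence embeddings -> graph edges,
# (2) one GCN layer -> contextualized candidate representation.
import torch
import ot  # Python Optimal Transport (POT), "pip install pot"


def ot_edge_weights(sent_emb: torch.Tensor, question_emb: torch.Tensor) -> torch.Tensor:
    """Edge weights between passage sentences, biased toward the question."""
    q_cond = sent_emb + question_emb                    # question-conditioned sentences (n, d)
    q_norm = torch.nn.functional.normalize(q_cond, dim=-1)
    cost = (1.0 - q_norm @ q_norm.t()).clamp(min=0.0)   # 1 - cosine similarity
    n = cost.size(0)
    marginal = torch.full((n,), 1.0 / n).numpy()        # uniform mass on each sentence
    plan = ot.sinkhorn(marginal, marginal, cost.detach().numpy(), reg=0.1)
    return torch.as_tensor(plan, dtype=torch.float32)   # used as a weighted adjacency


class TinyGCN(torch.nn.Module):
    """One GCN layer: ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""

    def __init__(self, dim: int):
        super().__init__()
        self.lin = torch.nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        adj = adj + torch.eye(adj.size(0))              # add self-loops
        d_inv_sqrt = adj.sum(-1).clamp(min=1e-9).pow(-0.5)
        a_hat = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
        return torch.relu(a_hat @ self.lin(x))


# Usage: contextualized representation of the candidate (node 0) for scoring.
n_sent, dim = 5, 768
sentences, question = torch.randn(n_sent, dim), torch.randn(dim)
adjacency = ot_edge_weights(sentences, question)
candidate_repr = TinyGCN(dim)(sentences, adjacency)[0]
```

In the paper itself, the sentence and question representations come from a Transformer encoder and the graph-derived representation feeds the final candidate scoring; the snippet only illustrates how an OT plan can play the role of the adjacency matrix.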
Related papers
- Answering Ambiguous Questions via Iterative Prompting [84.3426020642704]
In open-domain question answering, due to the ambiguity of questions, multiple plausible answers may exist.
One approach is to directly predict all valid answers, but this can struggle with balancing relevance and diversity.
We present AmbigPrompt to address the imperfections of existing approaches to answering ambiguous questions.
arXiv Detail & Related papers (2023-07-08T04:32:17Z)
- Open-Domain Conversational Question Answering with Historical Answers [29.756094955426597]
This paper proposes ConvADR-QA, which leverages historical answers to boost retrieval performance.
In our proposed framework, the retrievers use a teacher-student framework to reduce noise from previous turns.
Our experiments on the benchmark dataset, OR-QuAC, demonstrate that our model outperforms existing baselines in both extractive and generative reader settings.
arXiv Detail & Related papers (2022-11-17T08:20:57Z)
- Better Query Graph Selection for Knowledge Base Question Answering [2.367061689316429]
This paper presents a novel approach based on semantic parsing to improve the performance of Knowledge Base Question Answering (KBQA).
Specifically, we focus on how to select an optimal query graph from a candidate set so as to retrieve the answer from the knowledge base (KB).
arXiv Detail & Related papers (2022-04-27T01:53:06Z)
- Question-Answer Sentence Graph for Joint Modeling Answer Selection [122.29142965960138]
We train and integrate state-of-the-art (SOTA) models for computing scores between question-question, question-answer, and answer-answer pairs.
Online inference is then performed to solve the AS2 task on unseen queries.
arXiv Detail & Related papers (2022-02-16T05:59:53Z)
- In Situ Answer Sentence Selection at Web-scale [120.69820139008138]
Passage-based Extracting Answer Sentence In-place (PEASI) is a novel design for AS2 optimized for the Web-scale setting.
We train PEASI in a multi-task learning framework that encourages feature sharing between its components: a passage reranker and a passage-based answer sentence extractor.
Experiments show that PEASI outperforms the current state-of-the-art setting for AS2, i.e., a point-wise model that ranks sentences independently, by 6.51% in accuracy.
arXiv Detail & Related papers (2022-01-16T06:36:00Z)
- Answer Generation for Retrieval-based Question Answering Systems [80.28727681633096]
We train a sequence-to-sequence Transformer model to generate an answer from a candidate set.
Our tests on three English AS2 datasets show improvements of up to 32 absolute accuracy points over the state of the art.
arXiv Detail & Related papers (2021-06-02T05:45:49Z)
- Context-based Transformer Models for Answer Sentence Selection [109.96739477808134]
In this paper, we analyze the role of the contextual information in the sentence selection task.
We propose a Transformer-based architecture that leverages two types of context, local and global.
The results show that the combination of local and global contexts in a Transformer model significantly improves the accuracy in Answer Sentence Selection.
arXiv Detail & Related papers (2020-06-01T21:52:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.