Learning to Select the Relevant History Turns in Conversational Question
Answering
- URL: http://arxiv.org/abs/2308.02294v1
- Date: Fri, 4 Aug 2023 12:59:39 GMT
- Title: Learning to Select the Relevant History Turns in Conversational Question
Answering
- Authors: Munazza Zaib and Wei Emma Zhang and Quan Z. Sheng and Subhash Sagar
and Adnan Mahmood and Yang Zhang
- Abstract summary: The dependency between relevant history selection and correct answer prediction is an intriguing but under-explored area.
We propose a framework, DHS-ConvQA, that first generates the context and question entities for all the history turns.
We demonstrate that selecting relevant turns works better than rewriting the original question.
- Score: 27.049444003555234
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing demand for web-based digital assistants has rapidly raised the interest of the Information Retrieval (IR) community in the field of conversational question answering (ConvQA). One of the critical aspects of ConvQA is the effective selection of conversational history turns to answer the question at hand. The dependency between relevant history selection and correct answer prediction is an intriguing but under-explored area. Relevant context can better guide the system as to where exactly in the passage to look for an answer; irrelevant context, on the other hand, brings noise into the system and degrades the model's performance. In this paper, we propose a framework, DHS-ConvQA (Dynamic History Selection in Conversational Question Answering), that first generates the context and question entities for all the history turns, which are then pruned on the basis of the similarity they share with the question at hand. We also propose an attention-based mechanism that re-ranks the pruned terms by weights reflecting how useful they are in answering the question. Finally, we further aid the model by highlighting the terms in the re-ranked conversational history via a binary classification task, keeping the useful terms (predicted as 1) and ignoring the irrelevant ones (predicted as 0). We demonstrate the efficacy of the proposed framework with extensive experimental results on CANARD and QuAC, two widely used ConvQA datasets. We show that selecting relevant turns works better than rewriting the original question. We also investigate how adding irrelevant history turns negatively impacts the model's performance and discuss the research challenges that demand more attention from the IR community.
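To make these three stages concrete, here is a minimal, self-contained sketch of the same flow on toy data: prune history turns by entity overlap with the current question, re-rank the survivors with an attention-style similarity weight, and binary-tag their terms as useful (1) or irrelevant (0). This is an illustration under strong simplifications rather than the paper's implementation: the entity extractor is a heuristic stub, the "attention" weight is plain cosine similarity over bag-of-words counts, and all function names are hypothetical.

```python
# Toy sketch of a DHS-ConvQA-style selection pipeline (not the authors' code).
from collections import Counter
from math import sqrt

def extract_entities(text):
    """Stub entity generator: treat capitalised or long tokens as pseudo-entities."""
    tokens = [t.strip(".,?!") for t in text.split()]
    return {t.lower() for t in tokens if t and (t[:1].isupper() or len(t) > 6)}

def overlap_prune(history_turns, question, min_overlap=1):
    """Stage 1: keep only history turns whose entities overlap the question's entities."""
    q_ents = extract_entities(question)
    kept = []
    for turn in history_turns:
        shared = extract_entities(turn) & q_ents
        if len(shared) >= min_overlap:
            kept.append((turn, shared))
    return kept

def attention_rerank(pruned, question):
    """Stage 2: weight each surviving turn with a cosine 'attention' score."""
    q_vec = Counter(question.lower().split())
    scored = []
    for turn, shared in pruned:
        t_vec = Counter(turn.lower().split())
        dot = sum(q_vec[w] * t_vec[w] for w in q_vec)
        norm = sqrt(sum(v * v for v in q_vec.values())) * sqrt(sum(v * v for v in t_vec.values()))
        scored.append((dot / norm if norm else 0.0, turn, shared))
    return sorted(scored, key=lambda x: x[0], reverse=True)

def tag_terms(reranked, question):
    """Stage 3: binary-tag each term (1 = useful, 0 = irrelevant) in the re-ranked history."""
    q_words = {w.strip(".,?!").lower() for w in question.split()}
    tagged = []
    for score, turn, shared in reranked:
        labels = [(w, int(w.strip(".,?!").lower() in q_words | shared)) for w in turn.split()]
        tagged.append((score, labels))
    return tagged

if __name__ == "__main__":
    history = [
        "Who founded SpaceX?",
        "Elon Musk founded SpaceX in 2002.",
        "What is the weather like today?",
    ]
    question = "When did Musk start SpaceX?"
    reranked = attention_rerank(overlap_prune(history, question), question)
    for score, labels in tag_terms(reranked, question):
        print(round(score, 2), labels)
```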
Related papers
- Consistency Training by Synthetic Question Generation for Conversational Question Answering [14.211024633768986]
We augment historical information with synthetic questions to make the reasoning robust to irrelevant history.
This is the first instance of research using question generation as a form of data augmentation to model conversational QA settings.
arXiv Detail & Related papers (2024-04-17T06:49:14Z)
- Selecting Query-bag as Pseudo Relevance Feedback for Information-seeking Conversations [76.70349332096693]
Information-seeking dialogue systems are widely used in e-commerce.
We propose a Query-bag based Pseudo Relevance Feedback framework (QB-PRF).
It constructs a query-bag with related queries to serve as pseudo signals to guide information-seeking conversations.
arXiv Detail & Related papers (2024-03-22T08:10:32Z)
- Event Extraction as Question Generation and Answering [72.04433206754489]
Recent work on Event Extraction has reframed the task as Question Answering (QA).
We propose QGA-EE, which enables a Question Generation (QG) model to generate questions that incorporate rich contextual information instead of using fixed templates.
Experiments show that QGA-EE outperforms all prior single-task-based models on the ACE05 English dataset.
arXiv Detail & Related papers (2023-07-10T01:46:15Z)
- Open-Domain Conversational Question Answering with Historical Answers [29.756094955426597]
This paper proposes ConvADR-QA, which leverages historical answers to boost retrieval performance.
In our proposed framework, the retrievers use a teacher-student framework to reduce noise from previous turns.
Our experiments on the benchmark dataset, OR-QuAC, demonstrate that our model outperforms existing baselines in both extractive and generative reader settings.
arXiv Detail & Related papers (2022-11-17T08:20:57Z)
- CoHS-CQG: Context and History Selection for Conversational Question Generation [31.87967788600221]
We propose a two-stage CQG framework, which adopts a CoHS module to shorten the context and history of the input.
Our model achieves state-of-the-art performance on CoQA in both the answer-aware and answer-unaware settings.
arXiv Detail & Related papers (2022-09-14T13:58:52Z)
- BERT-CoQAC: BERT-based Conversational Question Answering in Context [10.811729691130349]
We introduce a framework based on the publicly available pre-trained language model BERT for incorporating history turns into the system (a minimal sketch of this kind of history concatenation follows this entry).
Experimental results reveal that our framework is comparable in performance to the state-of-the-art models on the QuAC leaderboard.
arXiv Detail & Related papers (2021-04-23T03:05:17Z)
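As an illustration of the history-incorporation idea behind BERT-CoQAC, the sketch below simply prepends the last k history turns to the current question before running an extractive BERT-style reader. It is not the authors' code: the checkpoint name, the value of k, and the plain-space separator are assumptions made for this example.

```python
# Sketch: prepend the last k history turns to the current question for an
# extractive QA reader. Checkpoint, k, and separator are illustrative choices.
from transformers import pipeline

K_TURNS = 2  # how many previous turns to keep (assumed)

def build_query(history, question, k=K_TURNS):
    """Concatenate the last k history turns with the current question."""
    selected = history[-k:] if k > 0 else []
    return " ".join(selected + [question])

def answer(history, question, passage, reader):
    query = build_query(history, question)
    return reader(question=query, context=passage)

if __name__ == "__main__":
    # Any BERT-style extractive QA checkpoint works here; this one is an example.
    reader = pipeline("question-answering", model="deepset/bert-base-cased-squad2")
    passage = ("SpaceX was founded in 2002 by Elon Musk with the goal of "
               "reducing space transportation costs.")
    history = ["Who founded SpaceX?", "Elon Musk founded it."]
    print(answer(history, "When was it founded?", passage, reader))
```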
- A Graph-guided Multi-round Retrieval Method for Conversational Open-domain Question Answering [52.041815783025186]
We propose a novel graph-guided retrieval method to model the relations among answers across conversation turns.
We also propose to incorporate the multi-round relevance feedback technique to explore the impact of the retrieval context on current question understanding.
arXiv Detail & Related papers (2021-04-17T04:39:41Z)
- Open-Retrieval Conversational Question Answering [62.11228261293487]
We introduce an open-retrieval conversational question answering (ORConvQA) setting, where we learn to retrieve evidence from a large collection before extracting answers.
We build an end-to-end system for ORConvQA, featuring a retriever, a reranker, and a reader that are all based on Transformers (a toy skeleton of this staging follows this entry).
arXiv Detail & Related papers (2020-05-22T19:39:50Z)
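The ORConvQA entry above describes an end-to-end retriever, reranker, and reader. The toy skeleton below sketches only that staging, with simple lexical overlap standing in for the Transformer components; every function here is illustrative rather than part of the actual system.

```python
# Toy skeleton of an open-retrieval ConvQA pipeline (retrieve -> rerank -> read).
def retrieve(query, collection, k=3):
    """First stage: score every passage by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(collection, key=lambda p: len(q & set(p.lower().split())), reverse=True)
    return ranked[:k]

def rerank(query, passages):
    """Reranker stage: here just a second pass with the same toy score (placeholder)."""
    q = set(query.lower().split())
    return sorted(passages, key=lambda p: len(q & set(p.lower().split())), reverse=True)

def read(query, passage):
    """Reader stage: return the sentence of the top passage that best matches the query."""
    q = set(query.lower().split())
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q & set(s.lower().split())))

def orconvqa(history, question, collection):
    query = " ".join(history + [question])  # history-aware query
    top = rerank(query, retrieve(query, collection))
    return read(query, top[0])

if __name__ == "__main__":
    docs = [
        "SpaceX was founded in 2002. It builds rockets.",
        "The Eiffel Tower is in Paris. It opened in 1889.",
    ]
    print(orconvqa(["Who founded SpaceX?"], "When was it founded?", docs))
```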
- Multi-Stage Conversational Passage Retrieval: An Approach to Fusing Term Importance Estimation and Neural Query Rewriting [56.268862325167575]
We tackle conversational passage retrieval (ConvPR) with query reformulation integrated into a multi-stage ad-hoc IR system.
We propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting.
For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals (a small sketch of this expansion follows this entry).
For the latter, we reformulate conversational queries into natural, standalone, human-understandable queries with a pretrained sequence-to-sequence model.
arXiv Detail & Related papers (2020-05-05T14:30:20Z)
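To illustrate method (1) from the entry above, frequency-based term importance estimation, here is a small self-contained sketch that scores context terms by how often they recur across previous turns and appends the top-scoring ones to the current query. The stopword list, scoring heuristic, and cutoff are assumptions of this sketch, not the paper's exact estimator.

```python
# Sketch of frequency-based conversational query expansion (illustrative only).
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "was", "it", "in", "of", "and", "to",
             "what", "when", "who", "where", "did"}

def term_importance(history):
    """Count how often each non-stopword term appears across the history turns."""
    counts = Counter()
    for turn in history:
        for tok in turn.lower().replace("?", " ").replace(".", " ").split():
            if tok not in STOPWORDS:
                counts[tok] += 1
    return counts

def expand_query(question, history, top_n=3):
    """Append the top-n most frequent context terms not already in the question."""
    q_terms = set(question.lower().split())
    importance = term_importance(history)
    expansion = [t for t, _ in importance.most_common() if t not in q_terms][:top_n]
    return question + " " + " ".join(expansion)

if __name__ == "__main__":
    history = ["Who founded SpaceX?", "Elon Musk founded SpaceX.", "Where is SpaceX based?"]
    print(expand_query("When was it founded?", history))
    # e.g. "When was it founded? spacex founded elon" (tie order may vary)
```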
- Knowledgeable Dialogue Reading Comprehension on Key Turns [84.1784903043884]
Multi-choice machine reading comprehension (MRC) requires models to choose the correct answer from candidate options given a passage and a question.
Our research focuses on dialogue-based MRC, where the passages are multi-turn dialogues.
It suffers from two challenges: the answer selection decision is made without the support of latently helpful commonsense, and the multi-turn context may hide considerable irrelevant information.
arXiv Detail & Related papers (2020-04-29T07:04:43Z)