BERT-CoQAC: BERT-based Conversational Question Answering in Context
- URL: http://arxiv.org/abs/2104.11394v1
- Date: Fri, 23 Apr 2021 03:05:17 GMT
- Title: BERT-CoQAC: BERT-based Conversational Question Answering in Context
- Authors: Munazza Zaib and Dai Hoang Tran and Subhash Sagar and Adnan Mahmood
and Wei E. Zhang and Quan Z. Sheng
- Abstract summary: We introduce a framework based on the publicly available pre-trained language model BERT for incorporating history turns into the system.
Experimental results show that our framework is comparable in performance to state-of-the-art models on the QuAC leaderboard.
- Score: 10.811729691130349
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a promising way to inquire about particular information through a dialog with a bot, question answering dialog systems have gained increasing research interest recently. Designing interactive QA systems has always been a challenging task in natural language processing and is often used as a benchmark to evaluate a machine's natural language understanding ability. However, such systems often struggle when users ask questions over multiple turns to seek more information based on what they have already learned, giving rise to a more complex setting called Conversational Question Answering (CQA). CQA systems are often criticized for not understanding or utilizing the previous context of the conversation when answering questions. To address this research gap, in this paper we explore how to integrate conversational history into a neural machine comprehension system. On the one hand, we introduce a framework based on the publicly available pre-trained language model BERT for incorporating history turns into the system. On the other hand, we propose a history selection mechanism that selects the turns that are relevant and contribute the most to answering the current question. Experimental results show that our framework is comparable in performance to state-of-the-art models on the QuAC leaderboard. We also conduct a number of experiments to show the side effects of using the entire context, which introduces unnecessary information and noise signals that result in a decline in the model's performance.
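The abstract's two ingredients, selecting the most relevant history turns and folding them into a BERT reader, can be illustrated with a minimal sketch. This is not the authors' implementation: the turn scorer below uses plain TF-IDF cosine similarity as a stand-in for the paper's history selection mechanism, and the SQuAD-finetuned BERT checkpoint is only an illustrative substitute for a model fine-tuned on QuAC-style conversational data.

```python
# Minimal sketch: select relevant history turns, then run extractive QA with BERT.
# Assumptions (not from the paper): TF-IDF cosine similarity as the relevance score,
# and a public SQuAD-finetuned checkpoint instead of a QuAC-finetuned model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import torch
from transformers import BertTokenizerFast, BertForQuestionAnswering


def select_history_turns(question, history, k=2):
    """Return the k history turns most similar to the current question (TF-IDF cosine)."""
    if not history:
        return []
    vec = TfidfVectorizer().fit(history + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(history))[0]
    top = sims.argsort()[::-1][:k]
    return [history[i] for i in sorted(top)]  # keep the original dialogue order


def answer(question, history, passage,
           model_name="bert-large-uncased-whole-word-masking-finetuned-squad"):
    """Concatenate the selected history turns with the question and extract an answer span."""
    tokenizer = BertTokenizerFast.from_pretrained(model_name)
    model = BertForQuestionAnswering.from_pretrained(model_name)
    query = " ".join(select_history_turns(question, history) + [question])
    inputs = tokenizer(query, passage, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**inputs)
    start = int(out.start_logits.argmax())
    end = int(out.end_logits.argmax())
    return tokenizer.decode(inputs["input_ids"][0][start:end + 1])
```

Swapping the TF-IDF scorer for a learned relevance model, and fine-tuning the reader on conversational QA data such as QuAC, would bring this sketch closer to the setup the paper describes.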
Related papers
- History-Aware Conversational Dense Retrieval [31.203399110612388]
We propose a History-Aware Conversational Dense Retrieval (HAConvDR) system, which incorporates two ideas: context-denoised query reformulation and automatic mining of supervision signals.
Experiments on two public conversational search datasets demonstrate the improved history modeling capability of HAConvDR.
arXiv Detail & Related papers (2024-01-30T01:24:18Z) - PICK: Polished & Informed Candidate Scoring for Knowledge-Grounded
Dialogue Systems [59.1250765143521]
Current knowledge-grounded dialogue systems often fail to align the generated responses with human-preferred qualities.
We propose Polished & Informed Candidate Scoring (PICK), a generation re-scoring framework.
We demonstrate the effectiveness of PICK in generating responses that are more faithful while keeping them relevant to the dialogue history.
arXiv Detail & Related papers (2023-09-19T08:27:09Z) - Learning to Select the Relevant History Turns in Conversational Question
Answering [27.049444003555234]
The dependency between relevant history selection and correct answer prediction is an intriguing but under-explored area.
We propose a framework, DHS-ConvQA, that first generates the context and question entities for all the history turns.
We demonstrate that selecting relevant turns works better than rewriting the original question.
arXiv Detail & Related papers (2023-08-04T12:59:39Z) - FCC: Fusing Conversation History and Candidate Provenance for Contextual
Response Ranking in Dialogue Systems [53.89014188309486]
We present a flexible neural framework that can integrate contextual information from multiple channels.
We evaluate our model on the MSDialog dataset widely used for evaluating conversational response ranking tasks.
arXiv Detail & Related papers (2023-03-31T23:58:28Z) - End-to-end Spoken Conversational Question Answering: Task, Dataset and
Model [92.18621726802726]
In spoken question answering, systems are designed to answer questions from contiguous text spans within the related speech transcripts.
We propose a new Spoken Conversational Question Answering task (SCQA), aiming at enabling systems to model complex dialogue flows.
Our main objective is to build a system that deals with conversational questions based on audio recordings, and to explore the plausibility of providing additional cues from different modalities to aid information gathering.
arXiv Detail & Related papers (2022-04-29T17:56:59Z) - Question rewriting? Assessing its importance for conversational question
answering [0.6449761153631166]
This work presents a conversational question answering system designed specifically for the Search-Oriented Conversational AI (SCAI) shared task.
In particular, we considered different variations of the question rewriting module to evaluate its influence on the subsequent components.
Our system achieved the best performance in the shared task and our analysis emphasizes the importance of the conversation context representation for the overall system performance.
arXiv Detail & Related papers (2022-01-22T23:31:25Z) - Smoothing Dialogue States for Open Conversational Machine Reading [70.83783364292438]
We propose an effective gating strategy that smooths the two dialogue states in a single decoder and bridges decision making and question generation.
Experiments on the OR-ShARC dataset show the effectiveness of our method, which achieves new state-of-the-art results.
arXiv Detail & Related papers (2021-08-28T08:04:28Z) - Towards Data Distillation for End-to-end Spoken Conversational Question
Answering [65.124088336738]
We propose a new Spoken Conversational Question Answering (SCQA) task.
SCQA aims to enable QA systems to model complex dialogue flows given speech utterances and text corpora.
Our main objective is to build a QA system that deals with conversational questions in both spoken and text forms.
arXiv Detail & Related papers (2020-10-18T05:53:39Z) - Knowledgeable Dialogue Reading Comprehension on Key Turns [84.1784903043884]
Multi-choice machine reading comprehension (MRC) requires models to choose the correct answer from candidate options given a passage and a question.
Our research focuses on dialogue-based MRC, where the passages are multi-turn dialogues.
This setting suffers from two challenges: the answer selection decision is made without the support of latently helpful commonsense, and the multi-turn context may hide considerable irrelevant information.
arXiv Detail & Related papers (2020-04-29T07:04:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.