Knowledgeable Dialogue Reading Comprehension on Key Turns
- URL: http://arxiv.org/abs/2004.13988v2
- Date: Thu, 10 Sep 2020 16:52:47 GMT
- Title: Knowledgeable Dialogue Reading Comprehension on Key Turns
- Authors: Junlong Li, Zhuosheng Zhang, Hai Zhao
- Abstract summary: Multi-choice machine reading comprehension (MRC) requires models to choose the correct answer from candidate options given a passage and a question.
Our research focuses on dialogue-based MRC, where the passages are multi-turn dialogues.
It suffers from two challenges: the answer selection decision is made without the support of latently helpful commonsense, and the multi-turn context may hide considerable irrelevant information.
- Score: 84.1784903043884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-choice machine reading comprehension (MRC) requires models to choose the correct answer from candidate options given a passage and a question. Our research focuses on dialogue-based MRC, where the passages are multi-turn dialogues. This setting suffers from two challenges: the answer selection decision is made without the support of latently helpful commonsense, and the multi-turn context may hide considerable irrelevant information. This work thus makes the first attempt to tackle these two challenges by extracting substantially important turns and utilizing external knowledge to enhance the representation of the context. In this paper, the relevance of each turn to the question is calculated in order to choose key turns. In addition, terms related to the context and the question in a knowledge graph are extracted as external knowledge. The original context, question, and external knowledge are encoded with a pre-trained language model; the resulting language representation and the key turns are then combined through a well-designed mechanism to predict the answer. Experimental results on the DREAM dataset show that our proposed model achieves significant improvements over the baselines.
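To make the described pipeline concrete, below is a minimal sketch of the key-turn idea, not the paper's implementation: TF-IDF cosine similarity stands in for the paper's turn-question relevance scoring, the knowledge-graph terms are passed in as a plain list instead of being extracted from a knowledge graph, and the function names (`select_key_turns`, `build_input`) are illustrative only.

```python
# Minimal sketch of key-turn selection for dialogue MRC (illustrative only).
# TF-IDF cosine similarity is a stand-in for the paper's relevance scoring;
# the external knowledge terms are mocked as a plain list.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def select_key_turns(turns, question, top_k=2):
    """Score each dialogue turn against the question and keep the top-k."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(turns + [question])
    turn_vecs, question_vec = matrix[:-1], matrix[-1]
    scores = cosine_similarity(turn_vecs, question_vec).ravel()
    ranked = sorted(range(len(turns)), key=lambda i: scores[i], reverse=True)
    # Keep the top-k turns in their original dialogue order.
    return [turns[i] for i in sorted(ranked[:top_k])]


def build_input(turns, question, option, knowledge_terms, top_k=2):
    """Concatenate key turns, question, candidate option, and external
    knowledge into one sequence a pre-trained language model could encode."""
    key_turns = select_key_turns(turns, question, top_k)
    return (" ".join(key_turns) + " [SEP] " + question + " " + option
            + " [SEP] " + " ".join(knowledge_terms))


if __name__ == "__main__":
    dialogue = [
        "M: Are you coming to the party tonight?",
        "W: I have to finish my report first.",
        "M: It starts at eight, so you still have time.",
    ]
    question = "What will the woman probably do before the party?"
    option = "Finish her report."
    knowledge = ["report", "deadline", "party"]  # placeholder for KG terms
    print(build_input(dialogue, question, option, knowledge))
```

In the actual model, the concatenated sequence would be fed to the pre-trained language model and the key-turn information fused with its output representation; the string concatenation above only illustrates what is selected and combined.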
Related papers
- Self-Bootstrapped Visual-Language Model for Knowledge Selection and Question Answering [11.183845003492964]
We use Dense Passage Retrieval (DPR) to retrieve related knowledge to help the model answer questions.
However, DPR conducts retrieval in natural language space, which may not ensure comprehensive acquisition of image information.
We propose a novel framework that leverages the visual-language model to select the key knowledge retrieved by DPR and answer questions.
arXiv Detail & Related papers (2024-04-22T07:44:20Z) - Question-Interlocutor Scope Realized Graph Modeling over Key Utterances
for Dialogue Reading Comprehension [61.55950233402972]
We propose a new key-utterance extraction method for dialogue reading comprehension.
It performs prediction on units formed by several contiguous utterances, which can cover more answer-containing utterances.
We then propose Question-Interlocutor Scope Realized Graph (QuISG) modeling, a graph constructed over the text of the utterances.
arXiv Detail & Related papers (2022-10-26T04:00:42Z) - Question rewriting? Assessing its importance for conversational question
answering [0.6449761153631166]
This work presents a conversational question answering system designed specifically for the Search-Oriented Conversational AI (SCAI) shared task.
In particular, we considered different variations of the question rewriting module to evaluate the influence on the subsequent components.
Our system achieved the best performance in the shared task and our analysis emphasizes the importance of the conversation context representation for the overall system performance.
arXiv Detail & Related papers (2022-01-22T23:31:25Z) - Smoothing Dialogue States for Open Conversational Machine Reading [70.83783364292438]
We propose an effective gating strategy that smooths the two dialogue states in a single decoder and bridges decision making and question generation.
Experiments on the OR-ShARC dataset show the effectiveness of our method, which achieves new state-of-the-art results.
arXiv Detail & Related papers (2021-08-28T08:04:28Z) - BERT-CoQAC: BERT-based Conversational Question Answering in Context [10.811729691130349]
We introduce a framework based on a publicly available pre-trained language model called BERT for incorporating history turns into the system.
Experimental results reveal that our framework is comparable in performance with the state-of-the-art models on the QuAC leaderboard.
arXiv Detail & Related papers (2021-04-23T03:05:17Z) - Multi-turn Dialogue Reading Comprehension with Pivot Turns and Knowledge [43.352833140317486]
Multi-turn dialogue reading comprehension aims to teach machines to read dialogue contexts and solve tasks such as response selection and answering questions.
This work makes the first attempt to tackle the two challenges noted above (missing commonsense support and irrelevant multi-turn context) by extracting substantially important turns as pivot utterances.
We propose a pivot-oriented deep selection model (PoDS) on top of the Transformer-based language models for dialogue comprehension.
arXiv Detail & Related papers (2021-02-10T15:00:12Z) - Reference Knowledgeable Network for Machine Reading Comprehension [43.352833140317486]
Multi-choice Machine Reading Comprehension (MRC) is a major and challenging MRC task.
We propose a novel reference-based knowledge enhancement model based on span extraction, called Reference Knowledgeable Network (RekNet).
In detail, RekNet refines fine-grained critical information, defines it as the Reference Span, and then retrieves external knowledge quadruples based on the co-occurrence of the Reference Span and the answer options.
arXiv Detail & Related papers (2020-12-07T14:11:33Z) - Learning an Effective Context-Response Matching Model with
Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experimental results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z) - Multi-Stage Conversational Passage Retrieval: An Approach to Fusing Term
Importance Estimation and Neural Query Rewriting [56.268862325167575]
We tackle conversational passage retrieval (ConvPR) with query reformulation integrated into a multi-stage ad-hoc IR system.
We propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting.
For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals (a toy version of this expansion is sketched after this list).
For the latter, we reformulate conversational queries into natural, standalone, human-understandable queries with a pretrained sequence-to-sequence model.
arXiv Detail & Related papers (2020-05-05T14:30:20Z)
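As a toy illustration of the frequency-based term-importance expansion mentioned in the last entry above (not the authors' implementation), the sketch below appends the most frequent non-stopword terms from earlier turns to the current query; the stopword list and the `expand_query` helper are purely illustrative.

```python
# Toy frequency-based conversational query expansion (illustrative only).
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "was", "what", "when", "how",
             "of", "to", "in", "on", "it", "does", "did", "do", "about",
             "tell", "me"}


def _tokens(text):
    """Lowercase, strip trailing punctuation, and drop stopwords."""
    for raw in text.lower().split():
        token = raw.strip(".,?!")
        if token.isalpha() and token not in STOPWORDS:
            yield token


def expand_query(current_query, previous_turns, top_k=3):
    """Append the most frequent context terms to the current query."""
    counts = Counter(t for turn in previous_turns for t in _tokens(turn))
    expansion = [term for term, _ in counts.most_common(top_k)
                 if term not in current_query.lower()]
    if not expansion:
        return current_query
    return current_query + " " + " ".join(expansion)


if __name__ == "__main__":
    history = [
        "Tell me about the Mars rover Perseverance.",
        "When did it land on Mars?",
    ]
    # Prints the ambiguous follow-up query expanded with context terms.
    print(expand_query("What instruments does it carry?", history))
```

A real system would typically rely on stronger signals (e.g., BM25 weights or a trained term classifier) and combine such expansion with the neural rewriting step described above.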