Multi-turn Dialogue Reading Comprehension with Pivot Turns and Knowledge
- URL: http://arxiv.org/abs/2102.05474v1
- Date: Wed, 10 Feb 2021 15:00:12 GMT
- Title: Multi-turn Dialogue Reading Comprehension with Pivot Turns and Knowledge
- Authors: Zhuosheng Zhang, Junlong Li, Hai Zhao
- Abstract summary: Multi-turn dialogue reading comprehension aims to teach machines to read dialogue contexts and solve tasks such as response selection and answering questions.
This work makes the first attempt to tackle the above two challenges by extracting substantially important turns as pivot utterances.
We propose a pivot-oriented deep selection model (PoDS) on top of the Transformer-based language models for dialogue comprehension.
- Score: 43.352833140317486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-turn dialogue reading comprehension aims to teach machines to read
dialogue contexts and solve tasks such as response selection and answering
questions. The major challenges are noisy history contexts and the special
prerequisite of commonsense knowledge that is unseen in the given material.
Existing works mainly focus on context-response matching approaches. This
work makes the first attempt to tackle these two challenges by
extracting substantially important turns as pivot utterances and utilizing
external knowledge to enhance the representation of context. We propose a
pivot-oriented deep selection model (PoDS) on top of the Transformer-based
language models for dialogue comprehension. In detail, our model first picks
out the pivot utterances from the conversation history according to their
semantic matching with the candidate response or question, if any. In addition,
knowledge items related to the dialogue context are extracted from a knowledge
graph as external knowledge. Then, the pivot utterances and the external
knowledge are combined with a well-designed mechanism for refining predictions.
Experimental results on four dialogue comprehension benchmark tasks show that
our proposed model achieves substantial improvements over the baselines. A series of
empirical comparisons are conducted to show how our selection strategies and
the extra knowledge injection influence the results.
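The pivot-selection step can be illustrated with a minimal sketch. PoDS scores history turns with Transformer-based semantic matching; the toy version below substitutes TF-IDF cosine similarity so it stays self-contained, and the function name, top-k cutoff, and example dialogue are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of pivot-utterance selection: rank history turns by their
# similarity to the candidate response and keep the top-k as pivots.
# PoDS uses Transformer-based matching; TF-IDF similarity stands in here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_pivots(history, candidate, k=2):
    vectorizer = TfidfVectorizer()
    # Fit on the whole dialogue so turns and candidate share one vocabulary.
    matrix = vectorizer.fit_transform(history + [candidate])
    turn_vecs, cand_vec = matrix[:-1], matrix[-1]
    scores = cosine_similarity(turn_vecs, cand_vec).ravel()
    ranked = sorted(range(len(history)), key=lambda i: scores[i], reverse=True)
    return [history[i] for i in sorted(ranked[:k])]  # keep dialogue order

history = [
    "A: I'm looking for a cheap hotel downtown.",
    "B: How many nights will you stay?",
    "A: Three nights, starting Friday.",
    "B: The weather is nice this week, by the way.",
]
candidate = "B: The Riverside Inn has budget rooms available from Friday."
print(select_pivots(history, candidate))
```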
Related papers
- Improve Retrieval-based Dialogue System via Syntax-Informed Attention [46.79601705850277]
We propose SIA, Syntax-Informed Attention, which considers both intra- and inter-sentence syntax information.
We evaluate our method on three widely used benchmarks and experimental results demonstrate the general superiority of our method on dialogue response selection.
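The abstract does not spell out SIA's attention design; the sketch below shows one generic way syntax can inform attention, by masking scores so each token attends only to itself and its dependency neighbours. The hand-written head indices and the hard-masking scheme are assumptions for illustration.

```python
# Generic syntax-informed attention sketch: restrict attention to tokens
# that are adjacent in a dependency tree. The head indices are hand-written
# for illustration; SIA's actual formulation may differ.
import numpy as np

tokens = ["she", "booked", "a", "table"]
heads = [1, -1, 3, 1]  # dependency head per token (-1 = root)

n, d = len(tokens), 8
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))

# Allow attention along dependency arcs (both directions) and self-loops.
mask = np.eye(n, dtype=bool)
for child, head in enumerate(heads):
    if head >= 0:
        mask[child, head] = mask[head, child] = True

scores = Q @ K.T / np.sqrt(d)
scores[~mask] = -1e9                      # block non-syntactic pairs
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
context = weights @ V                     # syntax-restricted attention output
print(weights.round(2))
```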
arXiv Detail & Related papers (2023-03-12T08:14:16Z)
- Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue [54.98184262897166]
We investigate how the order of the knowledge set can influence autoregressive dialogue systems' responses.
We propose a simple and novel technique to alleviate the order effect by modifying the position embeddings of knowledge input.
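The blurb does not give the exact modification; one plausible reading, sketched below, is to restart position ids for every knowledge snippet so the position embeddings no longer encode snippet order. Treat the shared-position scheme, the toy token ids, and both helper names as assumptions.

```python
# Sketch of an order-invariance trick for knowledge input: reuse the same
# starting position ids for every knowledge snippet so position embeddings
# no longer encode snippet order. Whether this matches the paper's exact
# modification is an assumption.
import torch
import torch.nn as nn

max_pos, d_model = 512, 16
pos_emb = nn.Embedding(max_pos, d_model)

snippets = [[101, 7592, 102], [101, 2088, 999, 102]]  # token ids per snippet

def shared_position_ids(snippets):
    # Every snippet restarts at position 0 instead of continuing the count.
    return torch.cat([torch.arange(len(s)) for s in snippets])

def sequential_position_ids(snippets):
    total = sum(len(s) for s in snippets)
    return torch.arange(total)

print(shared_position_ids(snippets))      # tensor([0, 1, 2, 0, 1, 2, 3])
print(sequential_position_ids(snippets))  # tensor([0, 1, 2, 3, 4, 5, 6])
# pos_emb(shared_position_ids(snippets)) would feed the encoder in place of
# the default sequential position embeddings.
```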
arXiv Detail & Related papers (2023-02-12T10:13:00Z)
- Topic-Aware Response Generation in Task-Oriented Dialogue with Unstructured Knowledge Access [20.881612071473118]
We propose Topic-Aware Response Generation (TARG) to better integrate topical information in task-oriented dialogue.
TARG incorporates multiple topic-aware attention mechanisms to derive the importance weighting scheme over dialogue utterances and external knowledge sources.
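TARG's attention mechanisms are only named here; the sketch below shows the general shape of such a weighting scheme: score each utterance embedding against a topic vector and softmax-normalise into importance weights. The dot-product scorer and random embeddings are illustrative assumptions.

```python
# Generic topic-aware weighting sketch: score each utterance embedding
# against a topic vector and softmax-normalise into importance weights.
# TARG combines several such attention mechanisms; this shows just one.
import numpy as np

rng = np.random.default_rng(1)
utterances = rng.standard_normal((5, 16))   # 5 utterance embeddings
topic = rng.standard_normal(16)             # topic representation

scores = utterances @ topic                  # relevance of each turn to topic
weights = np.exp(scores - scores.max())
weights /= weights.sum()
weighted_context = weights @ utterances      # topic-weighted dialogue summary
print(weights.round(3))
```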
arXiv Detail & Related papers (2022-12-10T22:32:28Z)
- Who says like a style of Vitamin: Towards Syntax-Aware Dialogue Summarization using Multi-task Learning [2.251583286448503]
We focus on the association between utterances from individual speakers and unique syntactic structures.
Speakers have unique textual styles that can carry linguistic information, much like a voiceprint.
We employ multi-task learning of both syntax-aware information and dialogue summarization.
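A minimal sketch of that multi-task setup: a shared encoder feeds both a summarization head and an auxiliary syntax head, and the two losses are combined with a weighting coefficient. Only the joint-training idea comes from the blurb; the layer sizes, λ, and random targets are assumptions.

```python
# Minimal multi-task sketch: a shared encoder with a summarization head and
# an auxiliary syntax head, trained on a weighted sum of both losses.
import torch
import torch.nn as nn

d_model, vocab, n_syntax_tags = 32, 100, 10
encoder = nn.GRU(d_model, d_model, batch_first=True)
summ_head = nn.Linear(d_model, vocab)            # next-token prediction
syntax_head = nn.Linear(d_model, n_syntax_tags)  # e.g. POS/dependency tags

x = torch.randn(2, 7, d_model)                   # batch of embedded dialogues
hidden, _ = encoder(x)

summ_target = torch.randint(vocab, (2, 7))
syntax_target = torch.randint(n_syntax_tags, (2, 7))

ce = nn.CrossEntropyLoss()
loss_summ = ce(summ_head(hidden).flatten(0, 1), summ_target.flatten())
loss_syntax = ce(syntax_head(hidden).flatten(0, 1), syntax_target.flatten())

lam = 0.5                                        # task-weighting coefficient
loss = loss_summ + lam * loss_syntax             # joint multi-task objective
loss.backward()
print(float(loss))
```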
arXiv Detail & Related papers (2021-09-29T05:30:39Z)
- Retrieval-Free Knowledge-Grounded Dialogue Response Generation with Adapters [52.725200145600624]
We propose KnowExpert to bypass the retrieval process by injecting prior knowledge into the pre-trained language models with lightweight adapters.
Experimental results show that KnowExpert performs comparably with the retrieval-based baselines.
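The blurb names lightweight adapters; below is a standard bottleneck adapter of the general kind such methods plug into a pretrained language model. The bottleneck width, activation, and placement are assumptions rather than KnowExpert's exact configuration.

```python
# Standard bottleneck adapter sketch: down-project, nonlinearity, up-project,
# residual connection. Width and placement are illustrative assumptions.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, d_model=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.GELU()

    def forward(self, hidden):
        # Residual keeps the frozen backbone's representation intact.
        return hidden + self.up(self.act(self.down(hidden)))

adapter = Adapter()
hidden = torch.randn(2, 10, 768)   # a Transformer layer's hidden states
print(adapter(hidden).shape)       # torch.Size([2, 10, 768])
# Only adapter parameters are trained; the pretrained language model stays
# frozen, which is what keeps knowledge injection lightweight.
```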
arXiv Detail & Related papers (2021-05-13T12:33:23Z)
- Improving Machine Reading Comprehension with Contextualized Commonsense Knowledge [62.46091695615262]
We aim to extract commonsense knowledge to improve machine reading comprehension.
We propose to represent relations implicitly by situating structured knowledge in a context.
We employ a teacher-student paradigm to inject multiple types of contextualized knowledge into a student machine reader.
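The teacher-student paradigm mentioned above can be sketched as standard soft-label distillation: the student reader matches the knowledge-enhanced teacher's answer distribution via a KL term alongside the usual supervised loss. The temperature and loss mix below are assumptions.

```python
# Soft-label distillation sketch for a teacher-student reader: the student
# is pushed toward the teacher's answer distribution with a KL term.
# Temperature and loss weights are illustrative assumptions.
import torch
import torch.nn.functional as F

teacher_logits = torch.randn(4, 4)   # teacher scores over 4 answer options
student_logits = torch.randn(4, 4, requires_grad=True)
labels = torch.tensor([0, 2, 1, 3])  # gold answers

T = 2.0  # temperature softens the teacher's distribution
kd = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * T * T
ce = F.cross_entropy(student_logits, labels)

loss = 0.5 * kd + 0.5 * ce  # blend distilled knowledge with gold supervision
loss.backward()
print(float(loss))
```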
arXiv Detail & Related papers (2020-09-12T17:20:01Z)
- Knowledgeable Dialogue Reading Comprehension on Key Turns [84.1784903043884]
Multi-choice machine reading comprehension (MRC) requires models to choose the correct answer from candidate options given a passage and a question.
Our research focuses on dialogue-based MRC, where the passages are multi-turn dialogues.
It suffers from two challenges: the answer selection decision is made without support from latently helpful commonsense, and the multi-turn context may hide considerable irrelevant information.
arXiv Detail & Related papers (2020-04-29T07:04:43Z)
- Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue [51.513276162736844]
We propose a sequential latent variable model as the first approach to this matter.
The model, named sequential knowledge transformer (SKT), can keep track of the prior and posterior distributions over knowledge.
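A compressed, single-turn slice of that bookkeeping is sketched below: the prior scores knowledge candidates from the dialogue context alone, the posterior also conditions on the response, and a KL term pulls the prior toward the posterior so it can drive selection at inference time. The linear scorers and toy dimensions are assumptions; the sequential tracking across turns is omitted.

```python
# Compressed prior/posterior knowledge-selection sketch: prior sees context
# only, posterior also sees the response, and a KL term aligns them so the
# prior works at inference. Scorers and dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

d, n_knowledge = 16, 5
context = torch.randn(1, d)
response = torch.randn(1, d)
knowledge = torch.randn(n_knowledge, d)

prior_net = nn.Linear(d, d)          # scores knowledge given context only
posterior_net = nn.Linear(2 * d, d)  # scores knowledge given context+response

prior_logits = prior_net(context) @ knowledge.T
posterior_logits = posterior_net(torch.cat([context, response], -1)) @ knowledge.T

kl = F.kl_div(
    F.log_softmax(prior_logits, -1),
    F.softmax(posterior_logits, -1),
    reduction="batchmean",
)
picked = F.softmax(posterior_logits, -1).argmax(-1)  # knowledge used in training
print(int(picked), float(kl))
```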
arXiv Detail & Related papers (2020-02-18T11:59:59Z)