Advances in Multi-turn Dialogue Comprehension: A Survey
- URL: http://arxiv.org/abs/2103.03125v1
- Date: Thu, 4 Mar 2021 15:50:17 GMT
- Title: Advances in Multi-turn Dialogue Comprehension: A Survey
- Authors: Zhuosheng Zhang and Hai Zhao
- Abstract summary: We review the previous methods from the perspective of dialogue modeling.
We discuss three typical patterns of dialogue modeling that are widely used in dialogue comprehension tasks.
- Score: 51.215629336320305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training machines to understand natural language and interact with humans is
an elusive and essential task in the field of artificial intelligence. In
recent years, a diversity of dialogue systems has been designed alongside the rapid
development of deep learning research, especially recent pre-trained
language models. Among these studies, the fundamental yet challenging part is
dialogue comprehension, whose role is to teach machines to read and
comprehend the dialogue context before responding. In this paper, we review the
previous methods from the perspective of dialogue modeling. We summarize the
characteristics and challenges of dialogue comprehension in contrast to
plain-text reading comprehension. Then, we discuss three typical patterns of
dialogue modeling that are widely used in dialogue comprehension tasks such as
response selection and conversational question answering, as well as
dialogue-related language modeling techniques to enhance pre-trained language models (PrLMs) in dialogue
scenarios. Finally, we highlight the technical advances in recent years and
point out the lessons we can learn from the empirical analysis and the
prospects toward a new frontier of research.
Related papers
- FutureTOD: Teaching Future Knowledge to Pre-trained Language Model for Task-Oriented Dialogue [20.79359173822053]
We propose a novel dialogue pre-training model, FutureTOD, which distills future knowledge to the representation of the previous dialogue context.
Our intuition is that a good dialogue representation both learns local context information and predicts future information.
arXiv Detail & Related papers (2023-06-17T10:40:07Z)
- Channel-aware Decoupling Network for Multi-turn Dialogue Comprehension [81.47133615169203]
We propose compositional learning for holistic interaction across utterances beyond the sequential contextualization from PrLMs.
We employ domain-adaptive training strategies to help the model adapt to the dialogue domains.
Experimental results show that our method substantially boosts the strong PrLM baselines in four public benchmark datasets.
arXiv Detail & Related papers (2023-01-10T13:18:25Z)
- STRUDEL: Structured Dialogue Summarization for Dialogue Comprehension [42.57581945778631]
Abstractive dialogue summarization has long been viewed as an important standalone task in natural language processing.
We propose a novel type of dialogue summarization task - STRUctured DiaLoguE Summarization.
We show that our STRUDEL dialogue comprehension model can significantly improve the dialogue comprehension performance of transformer encoder language models.
arXiv Detail & Related papers (2022-12-24T04:39:54Z)
- Advances in Multi-turn Dialogue Comprehension: A Survey [51.215629336320305]
Training machines to understand natural language and interact with humans is an elusive and essential task of artificial intelligence.
This paper reviews the previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task.
In addition, we categorize dialogue-related pre-training techniques which are employed to enhance PrLMs in dialogue scenarios.
arXiv Detail & Related papers (2021-10-11T03:52:37Z)
- "How Robust r u?": Evaluating Task-Oriented Dialogue Systems on Spoken Conversations [87.95711406978157]
This work presents a new benchmark on spoken task-oriented conversations.
We study multi-domain dialogue state tracking and knowledge-grounded dialogue modeling.
Our data set enables speech-based benchmarking of task-oriented dialogue systems.
arXiv Detail & Related papers (2021-09-28T04:51:04Z)
- DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization [19.918194137007653]
We present a pre-training framework for long dialogue understanding and summarization.
Considering the nature of long conversations, we propose a window-based denoising approach for generative pre-training.
We conduct extensive experiments on five datasets of long dialogues, covering tasks of dialogue summarization, abstractive question answering and topic segmentation.
arXiv Detail & Related papers (2021-09-06T13:55:03Z)
- Utterance-level Dialogue Understanding: An Empirical Study [43.35258958775454]
This paper explores and quantifies the role of context in different aspects of a dialogue.
Specifically, we employ various perturbations to distort the context of a given utterance.
This provides us with insights into the fundamental contextual controlling factors of different aspects of a dialogue.
arXiv Detail & Related papers (2020-09-29T09:50:21Z)
- Recent Advances and Challenges in Task-oriented Dialog System [63.82055978899631]
Task-oriented dialog systems are attracting more and more attention in academic and industrial communities.
We discuss three critical topics for task-oriented dialog systems: (1) improving data efficiency to facilitate dialog modeling in low-resource settings, (2) modeling multi-turn dynamics for dialog policy learning, and (3) integrating domain knowledge into the dialog model.
arXiv Detail & Related papers (2020-03-17T01:34:56Z)
- Masking Orchestration: Multi-task Pretraining for Multi-role Dialogue Representation Learning [50.5572111079898]
Multi-role dialogue understanding comprises a wide range of diverse tasks such as question answering, act classification, dialogue summarization etc.
While dialogue corpora are abundantly available, labeled data, for specific learning tasks, can be highly scarce and expensive.
In this work, we investigate dialogue context representation learning with various types of unsupervised pretraining tasks.
arXiv Detail & Related papers (2020-02-27T04:36:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.