Regularizing Dialogue Generation by Imitating Implicit Scenarios
- URL: http://arxiv.org/abs/2010.01893v2
- Date: Tue, 6 Oct 2020 05:51:09 GMT
- Title: Regularizing Dialogue Generation by Imitating Implicit Scenarios
- Authors: Shaoxiong Feng, Xuancheng Ren, Hongshen Chen, Bin Sun, Kan Li, Xu Sun
- Abstract summary: We improve generative dialogue systems by taking into account dialogue history and future conversation.
The conventional dialogue model that has no access to future conversations is effectively regularized.
Our approach significantly outperforms state-of-the-art baselines on diversity and relevance.
- Score: 38.22638543470511
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human dialogues are scenario-based and appropriate responses generally relate
to the latent context knowledge entailed by the specific scenario. To enable
responses that are more meaningful and context-specific, we propose to improve
generative dialogue systems from the scenario perspective, where both dialogue
history and future conversation are taken into account to implicitly
reconstruct the scenario knowledge. More importantly, the conversation
scenarios are further internalized using an imitation learning framework, where
the conventional dialogue model that has no access to future conversations is
effectively regularized by transferring the scenario knowledge contained in
hierarchical supervising signals from the scenario-based dialogue model, so
that the future conversation is not required in actual inference. Extensive
evaluations show that our approach significantly outperforms state-of-the-art
baselines on diversity and relevance, and expresses scenario-specific
knowledge.
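The core mechanism the abstract describes is a teacher-student transfer: a scenario-based model that sees both dialogue history and future turns supervises a conventional model that sees only the history, so the future context is not needed at inference time. A minimal sketch of that idea follows, using a plain KL-divergence distillation term on next-token distributions; all names, logits, and the mixing weight are illustrative assumptions, not the authors' actual code or hierarchical supervision signals.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

# Toy next-token logits over a 5-word vocabulary (illustrative numbers).
teacher_logits = np.array([2.0, 0.5, 0.1, -1.0, 0.3])  # sees past + future context
student_logits = np.array([1.2, 0.8, 0.4, -0.5, 0.1])  # sees past context only

p_teacher = softmax(teacher_logits)
p_student = softmax(student_logits)

gold_token = 0  # index of the reference response token
nll = -np.log(p_student[gold_token])           # standard generation loss
distill = kl_divergence(p_teacher, p_student)  # scenario-knowledge transfer term
lam = 0.5                                      # illustrative mixing weight

loss = nll + lam * distill
```

At inference, only the student is run, which is why the future conversation is "not required in actual inference" as the abstract states; the teacher exists purely as a training-time regularizer.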
Related papers
- FutureTOD: Teaching Future Knowledge to Pre-trained Language Model for Task-Oriented Dialogue [20.79359173822053]
We propose a novel dialogue pre-training model, FutureTOD, which distills future knowledge to the representation of the previous dialogue context.
Our intuition is that a good dialogue representation both learns local context information and predicts future information.
arXiv Detail & Related papers (2023-06-17T10:40:07Z)
- Taxonomy of Abstractive Dialogue Summarization: Scenarios, Approaches and Future Directions [14.85592662663867]
This survey provides a comprehensive investigation on existing work for abstractive dialogue summarization from scenarios.
It categorizes the task into two broad categories according to the type of input dialogues, i.e., open-domain and task-oriented.
It presents a taxonomy of existing techniques in three directions, namely, injecting dialogue features, designing auxiliary training tasks and using additional data.
arXiv Detail & Related papers (2022-10-18T14:33:03Z)
- Act-Aware Slot-Value Predicting in Multi-Domain Dialogue State Tracking [5.816391291790977]
Dialogue state tracking (DST) aims to track human-machine interactions and generate state representations for managing the dialogue.
Recent advances in machine reading comprehension enable predicting both categorical and non-categorical types of slots for dialogue state tracking.
We formulate and incorporate dialogue acts, and leverage these machine reading comprehension advances to predict both slot types.
arXiv Detail & Related papers (2022-08-04T05:18:30Z)
- Back to the Future: Bidirectional Information Decoupling Network for Multi-turn Dialogue Modeling [80.51094098799736]
We propose Bidirectional Information Decoupling Network (BiDeN) as a universal dialogue encoder.
BiDeN explicitly incorporates both the past and future contexts and can be generalized to a wide range of dialogue-related tasks.
Experimental results on datasets of different downstream tasks demonstrate the universality and effectiveness of our BiDeN.
arXiv Detail & Related papers (2022-04-18T03:51:46Z)
- Precognition in Task-oriented Dialogue Understanding: Posterior Regularization by Future Context [8.59600111891194]
We propose to jointly model historical and future information through the posterior regularization method.
We optimize the KL distance between the two resulting distributions to regularize our model during training.
Experiments on two dialogue datasets validate the effectiveness of our proposed method.
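The posterior-regularization objective this abstract sketches can be written, under a common formulation (a hedged reconstruction; the paper's exact notation may differ), as a task loss plus a KL term pulling the history-only distribution toward the posterior that also conditions on future context:

```latex
\mathcal{L} \;=\; \mathcal{L}_{\text{task}}
  \;+\; \beta \,
  \mathrm{KL}\!\left( p_{\theta}(z \mid x_{\le t},\, x_{> t})
  \,\middle\|\, q_{\phi}(z \mid x_{\le t}) \right)
```

Here $p_{\theta}$ conditions on both historical ($x_{\le t}$) and future ($x_{> t}$) information, $q_{\phi}$ on history alone, and $\beta$ weights the regularizer; only $q_{\phi}$ is evaluated at test time, so no future context is needed during inference.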
arXiv Detail & Related papers (2022-03-07T09:58:50Z)
- "How Robust r u?": Evaluating Task-Oriented Dialogue Systems on Spoken Conversations [87.95711406978157]
This work presents a new benchmark on spoken task-oriented conversations.
We study multi-domain dialogue state tracking and knowledge-grounded dialogue modeling.
Our data set enables speech-based benchmarking of task-oriented dialogue systems.
arXiv Detail & Related papers (2021-09-28T04:51:04Z)
- Advances in Multi-turn Dialogue Comprehension: A Survey [51.215629336320305]
We review the previous methods from the perspective of dialogue modeling.
We discuss three typical patterns of dialogue modeling that are widely-used in dialogue comprehension tasks.
arXiv Detail & Related papers (2021-03-04T15:50:17Z)
- Rethinking Dialogue State Tracking with Reasoning [76.0991910623001]
This paper proposes to track dialogue states gradually with reasoning over dialogue turns with the help of the back-end data.
Empirical results demonstrate that our method significantly outperforms the state-of-the-art methods by 38.6% in terms of joint belief accuracy for MultiWOZ 2.1.
arXiv Detail & Related papers (2020-05-27T02:05:33Z)
- Dialogue-Based Relation Extraction [53.2896545819799]
We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE.
We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks.
Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings.
arXiv Detail & Related papers (2020-04-17T03:51:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.