Continual Learning in Task-Oriented Dialogue Systems
- URL: http://arxiv.org/abs/2012.15504v1
- Date: Thu, 31 Dec 2020 08:44:25 GMT
- Title: Continual Learning in Task-Oriented Dialogue Systems
- Authors: Andrea Madotto, Zhaojiang Lin, Zhenpeng Zhou, Seungwhan Moon, Paul
Crook, Bing Liu, Zhou Yu, Eunjoon Cho, Zhiguang Wang
- Abstract summary: Continual learning in task-oriented dialogue systems can allow us to add new domains and functionalities through time without incurring the high cost of a whole system retraining.
We propose a continual learning benchmark for task-oriented dialogue systems with 37 domains to be learned continuously in four settings.
- Score: 49.35627673523519
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual learning in task-oriented dialogue systems can allow us to add new
domains and functionalities through time without incurring the high cost of a
whole system retraining. In this paper, we propose a continual learning
benchmark for task-oriented dialogue systems with 37 domains to be learned
continuously in four settings, namely intent recognition, state tracking,
natural language generation, and end-to-end. Moreover, we implement and compare
multiple existing continual learning baselines, and we propose a simple yet
effective architectural method based on residual adapters. Our experiments
demonstrate that the proposed architectural method and a simple replay-based
strategy perform comparably well, but both achieve inferior performance to
the multi-task learning baseline, in which all the data are shown at once,
showing that continual learning in task-oriented dialogue systems is a
challenging task. Furthermore, we reveal several trade-offs between different
continual learning methods in terms of parameter usage and memory size, which
are important in the design of a task-oriented dialogue system. The proposed
benchmark is released together with several baselines to promote more research
in this direction.
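As a concrete illustration of the two strongest methods compared in the abstract, the sketch below shows a bottleneck residual adapter (the architectural method) and a small per-domain replay buffer (the replay-based strategy). This is a minimal sketch assuming a PyTorch Transformer backbone; the class names, bottleneck size, and buffer capacity are illustrative assumptions and do not reproduce the paper's exact adapter formulation.

import random
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    # Bottleneck adapter inserted after a frozen Transformer sub-layer.
    # A new domain trains only its own adapter while the backbone stays frozen,
    # so earlier domains are not overwritten.
    def __init__(self, hidden_size, bottleneck=64):
        super().__init__()
        self.layer_norm = nn.LayerNorm(hidden_size)
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, hidden):
        # Residual connection keeps the frozen representation intact.
        return hidden + self.up(torch.relu(self.down(self.layer_norm(hidden))))

class ReplayBuffer:
    # Keeps a fixed number of examples per seen domain and mixes them into
    # the training batches of later domains.
    def __init__(self, per_domain=50):
        self.per_domain = per_domain
        self.memory = {}  # domain name -> list of (input, target) pairs

    def add(self, domain, example):
        bucket = self.memory.setdefault(domain, [])
        if len(bucket) < self.per_domain:
            bucket.append(example)

    def sample(self, k):
        pool = [ex for bucket in self.memory.values() for ex in bucket]
        return random.sample(pool, min(k, len(pool)))

Training on a new domain would then freeze the backbone, optimize only that domain's adapter, and optionally interleave a few replayed examples from earlier domains; the trade-off noted in the abstract is that adapters grow the parameter count per domain while replay grows the stored memory.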
Related papers
- Continual Dialogue State Tracking via Example-Guided Question Answering [48.31523413835549]
We propose reformulating dialogue state tracking as a bundle of granular example-guided question answering tasks.
Our approach alleviates service-specific memorization and teaches a model to contextualize the given question and example.
We find that a model with just 60M parameters can achieve a significant boost by learning to learn from in-context examples retrieved by a retriever trained to identify turns with similar dialogue state changes.
arXiv Detail & Related papers (2023-05-23T06:15:43Z) - KETOD: Knowledge-Enriched Task-Oriented Dialogue [77.59814785157877]
Existing studies in dialogue system research mostly treat task-oriented dialogue and chit-chat as separate domains.
We investigate how task-oriented dialogue and knowledge-grounded chit-chat can be effectively integrated into a single model.
arXiv Detail & Related papers (2022-05-11T16:01:03Z) - Utterance Rewriting with Contrastive Learning in Multi-turn Dialogue [22.103162555263143]
We introduce contrastive learning and multi-task learning to jointly model the problem.
Our proposed model achieves state-of-the-art performance on several public datasets.
arXiv Detail & Related papers (2022-03-22T10:13:27Z) - Continual Prompt Tuning for Dialog State Tracking [58.66412648276873]
A desirable dialog system should be able to continually learn new skills without forgetting old ones.
We present Continual Prompt Tuning, a parameter-efficient framework that not only avoids forgetting but also enables knowledge transfer between tasks (see the soft-prompt sketch after this list).
arXiv Detail & Related papers (2022-03-13T13:22:41Z) - Retrieve & Memorize: Dialog Policy Learning with Multi-Action Memory [13.469140432108151]
We propose a retrieve-and-memorize framework to enhance the learning of system actions.
We use a memory-augmented multi-decoder network to generate the system actions conditioned on the candidate actions.
Our method achieves competitive performance among several state-of-the-art models in the context-to-response generation task.
arXiv Detail & Related papers (2021-06-04T07:53:56Z) - Rethinking Supervised Learning and Reinforcement Learning in Task-Oriented Dialogue Systems [58.724629408229205]
We demonstrate how traditional supervised learning and a simulator-free adversarial learning method can be used to achieve performance comparable to state-of-the-art RL-based methods.
Our main goal is not to beat reinforcement learning with supervised learning, but to demonstrate the value of rethinking the role of reinforcement learning and supervised learning in optimizing task-oriented dialogue systems.
arXiv Detail & Related papers (2020-09-21T12:04:18Z) - Recent Advances and Challenges in Task-oriented Dialog System [63.82055978899631]
Task-oriented dialog systems are attracting more and more attention in academic and industrial communities.
We discuss three critical topics for task-oriented dialog systems: (1) improving data efficiency to facilitate dialog modeling in low-resource settings, (2) modeling multi-turn dynamics for dialog policy learning, and (3) integrating domain knowledge into the dialog model.
arXiv Detail & Related papers (2020-03-17T01:34:56Z)
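For the Continual Prompt Tuning entry above, the sketch below illustrates the underlying idea: one trainable soft prompt per task, prepended to the input embeddings of a frozen language model. The class name, prompt length, and warm-start option are illustrative assumptions, not that paper's actual API.

import torch
import torch.nn as nn

class TaskPromptPool(nn.Module):
    # One trainable soft prompt per task; the backbone language model is frozen,
    # so learning a new task cannot overwrite older ones.
    def __init__(self, hidden_size, prompt_length=20):
        super().__init__()
        self.hidden_size = hidden_size
        self.prompt_length = prompt_length
        self.prompts = nn.ParameterDict()  # task name -> (prompt_length, hidden_size)

    def add_task(self, task, init_from=None):
        # Warm-starting from an earlier task's prompt is one simple way to
        # transfer knowledge forward between tasks.
        if init_from is not None and init_from in self.prompts:
            init = self.prompts[init_from].detach().clone()
        else:
            init = torch.randn(self.prompt_length, self.hidden_size) * 0.02
        self.prompts[task] = nn.Parameter(init)

    def forward(self, task, input_embeds):
        # Prepend the task prompt: (batch, prompt_length + seq_len, hidden_size).
        prompt = self.prompts[task].unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)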
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.