On Task-Level Dialogue Composition of Generative Transformer Model
- URL: http://arxiv.org/abs/2010.04826v1
- Date: Fri, 9 Oct 2020 22:10:03 GMT
- Title: On Task-Level Dialogue Composition of Generative Transformer Model
- Authors: Prasanna Parthasarathi and Arvind Neelakantan and Sharan Narang
- Abstract summary: We study the effect of training on human-human task-oriented dialogues toward improving the ability of Transformer generative models to compose multiple tasks.
To that end, we propose and explore two solutions: (1) creating synthetic multiple task dialogue data for training from human-human single task dialogue and (2) forcing the encoder representation to be invariant to single and multiple task dialogues using an auxiliary loss.
- Score: 9.751234480029765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Task-oriented dialogue systems help users accomplish tasks such as booking a
movie ticket and ordering food via conversation. Generative models
parameterized by a deep neural network are widely used for next turn response
generation in such systems. It is natural for users of the system to want to
accomplish multiple tasks within the same conversation, but the ability of
generative models to compose multiple tasks is not well studied. In this work,
we begin by studying the effect of training on human-human task-oriented
dialogues toward improving the ability of Transformer generative models to
compose multiple tasks. To that end, we propose and explore two solutions: (1)
creating synthetic multiple task dialogue data for training from human-human
single task dialogue and (2) forcing the encoder representation to be invariant
to single and multiple task dialogues using an auxiliary loss. The results from
our experiments highlight the difficulty that even a sophisticated variant of
the Transformer model has in learning to compose multiple tasks from
single-task dialogues.
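The first proposed solution (creating synthetic multi-task training data from human-human single-task dialogues) can be sketched as follows. The dialogue representation, speaker tags, and pairwise concatenation rule here are illustrative assumptions, not the paper's exact construction:

```python
# Sketch of solution (1): build synthetic multi-task dialogues by
# concatenating single-task dialogues, simulating a user who finishes
# one task and then starts another in the same conversation.
import random

def make_synthetic_multitask(single_task_dialogues, num_samples, seed=0):
    """Concatenate pairs of distinct single-task dialogues.

    Each dialogue is a list of (speaker, utterance) turns; a synthetic
    multi-task dialogue is one dialogue's turns followed by another's.
    """
    rng = random.Random(seed)
    synthetic = []
    for _ in range(num_samples):
        first, second = rng.sample(single_task_dialogues, 2)
        synthetic.append(first + second)
    return synthetic

booking = [("user", "Book a movie ticket."), ("system", "Which movie?")]
food = [("user", "Order a pizza."), ("system", "What size?")]
multi = make_synthetic_multitask([booking, food], num_samples=1)
```

Solution (2), the auxiliary invariance loss, would additionally penalize the distance between encoder representations of single-task and multi-task inputs during training; it is omitted here for brevity.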
Related papers
- Multi-User MultiWOZ: Task-Oriented Dialogues among Multiple Users [51.34484827552774]
We release the Multi-User MultiWOZ dataset: task-oriented dialogues among two users and one agent.
These dialogues reflect interesting dynamics of collaborative decision-making in task-oriented scenarios.
We propose a novel task of multi-user contextual query rewriting: to rewrite a task-oriented chat between two users as a concise task-oriented query.
arXiv Detail & Related papers (2023-10-31T14:12:07Z)
- Unified Conversational Models with System-Initiated Transitions between Chit-Chat and Task-Oriented Dialogues [4.714297769572548]
We investigate the potential "initiative" that occurs when there is a change between dialogue modes within one dialogue.
We contribute two efficient prompt models which can proactively generate a transition sentence to trigger system-initiated transitions.
arXiv Detail & Related papers (2023-07-04T11:53:23Z)
- DialogZoo: Large-Scale Dialog-Oriented Task Learning [52.18193690394549]
We aim to build a unified foundation model which can solve massive diverse dialogue tasks.
To achieve this goal, we first collect a large-scale well-labeled dialogue dataset from 73 publicly available datasets.
arXiv Detail & Related papers (2022-05-25T11:17:16Z)
- KETOD: Knowledge-Enriched Task-Oriented Dialogue [77.59814785157877]
Existing studies in dialogue system research mostly treat task-oriented dialogue and chit-chat as separate domains.
We investigate how task-oriented dialogue and knowledge-grounded chit-chat can be effectively integrated into a single model.
arXiv Detail & Related papers (2022-05-11T16:01:03Z)
- SYNERGY: Building Task Bots at Scale Using Symbolic Knowledge and Machine Teaching [75.87418236410296]
SYNERGY is a hybrid learning framework where a task bot is developed in two steps: dialogs are first simulated from task-specific symbolic knowledge, and a pre-trained neural dialog model, SOLOIST, is fine-tuned on the simulated dialogs to build a bot for the task.
The fine-tuned neural dialog model is continually refined with a handful of real task-specific dialogs via machine teaching.
arXiv Detail & Related papers (2021-10-21T23:13:04Z)
- Retrieve & Memorize: Dialog Policy Learning with Multi-Action Memory [13.469140432108151]
We propose a retrieve-and-memorize framework to enhance the learning of system actions.
We use a memory-augmented multi-decoder network to generate the system actions conditioned on the candidate actions.
Our method achieves competitive performance among several state-of-the-art models in the context-to-response generation task.
arXiv Detail & Related papers (2021-06-04T07:53:56Z)
- SOLOIST: Building Task Bots at Scale with Transfer Learning and Machine Teaching [81.45928589522032]
We parameterize modular task-oriented dialog systems using a Transformer-based auto-regressive language model.
We pre-train, on heterogeneous dialog corpora, a task-grounded response generation model.
Experiments show that SOLOIST creates new state-of-the-art on well-studied task-oriented dialog benchmarks.
arXiv Detail & Related papers (2020-05-11T17:58:34Z)
- Multi-Domain Dialogue Acts and Response Co-Generation [34.27525685962274]
We propose a neural co-generation model that generates dialogue acts and responses concurrently.
Our model achieves very favorable improvement over several state-of-the-art models in both automatic and human evaluations.
arXiv Detail & Related papers (2020-04-26T12:21:17Z)
- TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue [113.45485470103762]
In this work, we unify nine human-human and multi-turn task-oriented dialogue datasets for language modeling.
To better model dialogue behavior during pre-training, we incorporate user and system tokens into the masked language modeling.
arXiv Detail & Related papers (2020-04-15T04:09:05Z)
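The TOD-BERT-style idea of marking each turn with a role token before masked language modeling can be illustrated with a minimal sketch. The token names `[USR]` and `[SYS]` follow the spirit of the summary above, but the whitespace tokenizer and flattening scheme are simplifying assumptions:

```python
# Sketch: flatten a task-oriented dialogue into one token sequence,
# prefixing each turn with a role token so the masked-LM objective can
# model who is speaking.
USR, SYS = "[USR]", "[SYS]"

def flatten_dialogue(turns):
    """Flatten (speaker, utterance) turns into a single token list,
    marking each turn with its role token."""
    tokens = []
    for speaker, utterance in turns:
        tokens.append(USR if speaker == "user" else SYS)
        tokens.extend(utterance.lower().split())
    return tokens

turns = [("user", "book a table"), ("system", "for how many people")]
toks = flatten_dialogue(turns)
# toks == ["[USR]", "book", "a", "table",
#          "[SYS]", "for", "how", "many", "people"]
```

In practice such role tokens would be registered as special tokens in the model's vocabulary so they are never split or masked away by the subword tokenizer.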
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.