Learning to Learn End-to-End Goal-Oriented Dialog From Related Dialog Tasks
- URL: http://arxiv.org/abs/2110.15724v1
- Date: Sun, 10 Oct 2021 15:27:45 GMT
- Title: Learning to Learn End-to-End Goal-Oriented Dialog From Related Dialog Tasks
- Authors: Janarthanan Rajendran, Jonathan K. Kummerfeld, Satinder Singh
- Abstract summary: We show that a dialog system can be trained with only a small amount of target-task data, supplemented with data from a related dialog task.
We describe a meta-learning based method that selectively learns from the related dialog task data.
- Score: 33.77022912718379
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For each goal-oriented dialog task of interest, large amounts of data need to
be collected for end-to-end learning of a neural dialog system. Collecting that
data is a costly and time-consuming process. Instead, we show that we can use
only a small amount of data, supplemented with data from a related dialog task.
Naively learning from related data fails to improve performance as the related
data can be inconsistent with the target task. We describe a meta-learning
based method that selectively learns from the related dialog task data. Our
approach leads to significant accuracy improvements in an example dialog task.
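The selective learning described in the abstract can be made concrete with meta-learned example weighting. The sketch below is an illustration in that spirit (following Ren et al., 2018, "Learning to Reweight Examples"), not the authors' exact method; the model, data, and hyperparameters are all placeholders. Each related-task example gets a weight chosen so that one weighted gradient step on the related data reduces loss on a small target-task batch.

```python
# A minimal sketch (NOT the paper's exact method) of meta-learned example
# weighting: each related-task example gets a weight chosen so that one
# weighted gradient step on the related data reduces target-task loss.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy setup: a linear intent classifier over bag-of-words features.
# Shapes, data, and the learning rate are placeholders.
num_feats, num_classes, lr = 32, 4, 0.1
w = torch.zeros(num_feats, num_classes, requires_grad=True)

def loss_fn(weight, x, y, reduction="mean"):
    return F.cross_entropy(x @ weight, y, reduction=reduction)

def meta_weighted_loss(w, rel_x, rel_y, tgt_x, tgt_y):
    # 1) Per-example weights eps, initialised to zero and tracked so that
    #    meta-gradients can flow back to them.
    eps = torch.zeros(rel_x.size(0), requires_grad=True)
    inner = (eps * loss_fn(w, rel_x, rel_y, reduction="none")).sum()
    # 2) Virtual SGD step on the weighted related-task loss; create_graph
    #    keeps the step differentiable with respect to eps.
    g = torch.autograd.grad(inner, w, create_graph=True)[0]
    w_virtual = w - lr * g
    # 3) Target-task loss after the virtual step: its gradient w.r.t. eps
    #    indicates which related examples would have helped the target task.
    meta_loss = loss_fn(w_virtual, tgt_x, tgt_y)
    eps_grad = torch.autograd.grad(meta_loss, eps)[0]
    # 4) Keep only helpful examples (negative meta-gradient), then normalise.
    weights = torch.clamp(-eps_grad, min=0.0)
    weights = weights / weights.sum().clamp(min=1e-8)
    # 5) Real training signal: related-task loss under the learned weights.
    return (weights * loss_fn(w, rel_x, rel_y, reduction="none")).sum()

# One illustrative update with random placeholder batches.
rel_x, rel_y = torch.randn(16, num_feats), torch.randint(0, num_classes, (16,))
tgt_x, tgt_y = torch.randn(8, num_feats), torch.randint(0, num_classes, (8,))
loss = meta_weighted_loss(w, rel_x, rel_y, tgt_x, tgt_y)
loss.backward()
with torch.no_grad():
    w -= lr * w.grad
print(f"weighted related-task loss: {loss.item():.4f}")
```

Under this scheme, related-task examples whose gradients conflict with the target task receive zero weight, which is one way to avoid the failure the abstract describes, where related data is inconsistent with the target task.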
Related papers
- Discovering Customer-Service Dialog System with Semi-Supervised Learning and Coarse-to-Fine Intent Detection [6.869753194843482] (2022-12-23)
Task-oriented dialog aims to assist users in achieving specific goals through multi-turn conversation.
We constructed a weakly supervised dataset based on a teacher/student paradigm.
We also built a modular dialogue system and integrated coarse-to-fine classification for user intent detection (a toy sketch of the coarse-to-fine idea appears after this list).
- CGoDial: A Large-Scale Benchmark for Chinese Goal-oriented Dialog Evaluation [75.60156479374416] (2022-11-21)
CGoDial is a new challenging and comprehensive Chinese benchmark for Goal-oriented Dialog evaluation.
It contains 96,763 dialog sessions and 574,949 dialog turns in total, covering three datasets with different knowledge sources.
To bridge the gap between academic benchmarks and spoken dialog scenarios, we either collect data from real conversations or add spoken features to existing datasets via crowd-sourcing.
- Weakly Supervised Data Augmentation Through Prompting for Dialogue Understanding [103.94325597273316] (2022-10-25)
We present a novel approach that iterates on augmentation quality by applying weakly-supervised filters.
We evaluate our methods on the emotion and act classification tasks in DailyDialog and the intent classification task in Facebook Multilingual Task-Oriented Dialogue.
For DailyDialog specifically, using only 10% of the ground-truth data, we outperform the current state-of-the-art model, which uses 100% of the data (a toy sketch of such weakly-supervised filtering appears after this list).
- SPACE-2: Tree-Structured Semi-Supervised Contrastive Pre-training for Task-Oriented Dialog Understanding [68.94808536012371] (2022-09-14)
We propose a tree-structured pre-trained conversation model, which learns dialog representations from limited labeled dialogs and large-scale unlabeled dialog corpora.
Our method can achieve new state-of-the-art results on the DialoGLUE benchmark consisting of seven datasets and four popular dialog understanding tasks.
- DialogZoo: Large-Scale Dialog-Oriented Task Learning [52.18193690394549] (2022-05-25)
We aim to build a unified foundation model which can solve massive diverse dialogue tasks.
To achieve this goal, we first collect a large-scale well-labeled dialogue dataset from 73 publicly available datasets.
- KETOD: Knowledge-Enriched Task-Oriented Dialogue [77.59814785157877] (2022-05-11)
Existing studies in dialogue system research mostly treat task-oriented dialogue and chit-chat as separate domains.
We investigate how task-oriented dialogue and knowledge-grounded chit-chat can be effectively integrated into a single model.
- Reasoning in Dialog: Improving Response Generation by Context Reading Comprehension [49.92173751203827] (2020-12-14)
In multi-turn dialog, utterances do not always take the full form of sentences.
We propose to improve the response generation performance by examining the model's ability to answer a reading comprehension question.
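Two techniques named in the list above are simple enough to sketch. First, coarse-to-fine intent detection from the customer-service paper: the toy below is an illustration, not that paper's system; the taxonomy, keyword scorer, and labels are all invented.

```python
# A toy sketch of coarse-to-fine intent detection; the two-level taxonomy
# and keyword scorer below are invented placeholders, not the paper's system.
from typing import Callable

TAXONOMY = {  # hypothetical coarse categories -> fine-grained intents
    "billing": ["refund_request", "invoice_copy", "payment_failed"],
    "shipping": ["track_order", "change_address", "report_damage"],
}

KEYWORDS = {  # stand-in for a trained classifier's scores
    "billing": {"refund", "invoice", "payment"},
    "shipping": {"track", "order", "address", "damaged"},
    "refund_request": {"refund", "money"},
    "invoice_copy": {"invoice", "copy"},
    "payment_failed": {"payment", "failed", "declined"},
    "track_order": {"track", "order"},
    "change_address": {"change", "address"},
    "report_damage": {"damaged", "broken"},
}

def keyword_score(utterance: str, label: str) -> float:
    return len(set(utterance.lower().split()) & KEYWORDS[label])

def classify(utterance: str, score: Callable[[str, str], float]) -> tuple[str, str]:
    # Stage 1 (coarse): pick the best top-level category.
    coarse = max(TAXONOMY, key=lambda c: score(utterance, c))
    # Stage 2 (fine): classify only among that category's children,
    # so each stage searches a much smaller label space.
    fine = max(TAXONOMY[coarse], key=lambda f: score(utterance, f))
    return coarse, fine

print(classify("i need to track my order", keyword_score))
# -> ('shipping', 'track_order')
```

The design point is that each stage discriminates among only a handful of labels, which is easier to learn from limited or weakly supervised data than one flat classifier over all fine-grained intents.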
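Second, the weakly-supervised filtering used to iterate on augmentation quality: again an invented stand-in for the paper's actual generator, filters, and thresholds, meant only to make the generate-filter-iterate shape concrete.

```python
# A toy sketch of weakly-supervised filtering for prompted augmentation;
# the generator, weak labeler, and threshold are invented stand-ins for
# the paper's models, shown only to make the filtering loop concrete.

def generate_candidates(seed_text: str, label: str) -> list[tuple[str, str]]:
    # Placeholder for prompting an LM: trivial surface variations here.
    templates = ["{}", "well, {}", "{} please", "so {}"]
    return [(t.format(seed_text), label) for t in templates]

def weak_confidence(text: str, label: str) -> float:
    # Placeholder weak labeler (keyword votes), standing in for a small
    # classifier trained on the limited gold data.
    votes = {"happy": {"great", "thanks", "love"}, "angry": {"terrible", "worst"}}
    keywords = votes.get(label, set())
    hits = len(set(text.lower().split()) & keywords)
    return hits / max(len(keywords), 1)

def filter_round(candidates: list[tuple[str, str]], threshold: float = 0.3):
    # Drop candidates the weak labeler is not confident about; the kept
    # examples can seed the next augmentation round.
    return [(t, y) for t, y in candidates if weak_confidence(t, y) >= threshold]

cands = generate_candidates("thanks that was great", "happy")
kept = filter_round(cands)
print(f"kept {len(kept)}/{len(cands)} augmented examples")
```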