Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog
- URL: http://arxiv.org/abs/2004.11019v3
- Date: Thu, 11 Jun 2020 13:20:43 GMT
- Title: Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog
- Authors: Libo Qin, Xiao Xu, Wanxiang Che, Yue Zhang, Ting Liu
- Abstract summary: We propose a novel Dynamic Fusion Network (DF-Net) which automatically exploits the relevance between the target domain and each domain.
With little training data, we show its transferability by outperforming the prior best model by 13.9% on average.
- Score: 70.79442700890843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have shown remarkable success in end-to-end task-oriented
dialog systems. However, most neural models rely on large amounts of training data,
which are only available for a certain number of task domains, such as navigation and
scheduling.
This makes it difficult to scale to a new domain with limited labeled
data. However, there has been relatively little research on how to effectively
use data from all domains to improve the performance of each domain and also
unseen domains. To this end, we investigate methods that can make explicit use
of domain knowledge and introduce a shared-private network to learn shared and
specific knowledge. In addition, we propose a novel Dynamic Fusion Network
(DF-Net) which automatically exploits the relevance between the target domain
and each domain. Results show that our model outperforms existing methods on
multi-domain dialogue, giving the state-of-the-art in the literature. Besides,
with little training data, we show its transferability by outperforming the prior
best model by 13.9% on average.
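To make the idea concrete, below is a minimal PyTorch-style sketch of a shared-private encoder with a dynamic fusion gate in the spirit of the abstract: one encoder is shared across all domains, each domain keeps a private encoder, and a gating network weights every domain's representation by its predicted relevance to the input. The layer choices, dimensions, and the name SharedPrivateFusion are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedPrivateFusion(nn.Module):
    """Sketch of a shared-private encoder with dynamic fusion.

    One GRU is shared across domains; each domain also has a private GRU.
    A gate scores the relevance of every domain's private representation
    to the current utterance and fuses them with the shared one.
    Hyper-parameters here are illustrative, not the paper's configuration.
    """

    def __init__(self, vocab_size, emb_dim, hid_dim, num_domains):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.shared_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.private_encs = nn.ModuleList(
            [nn.GRU(emb_dim, hid_dim, batch_first=True) for _ in range(num_domains)]
        )
        # Gate predicting a relevance weight for each domain
        self.domain_gate = nn.Linear(hid_dim, num_domains)
        self.fuse = nn.Linear(2 * hid_dim, hid_dim)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids
        emb = self.embedding(tokens)
        _, shared_h = self.shared_enc(emb)            # (1, batch, hid)
        shared_h = shared_h.squeeze(0)                # (batch, hid)

        # Encode the utterance with every domain-private encoder
        private_hs = []
        for enc in self.private_encs:
            _, h = enc(emb)
            private_hs.append(h.squeeze(0))
        private_hs = torch.stack(private_hs, dim=1)   # (batch, num_domains, hid)

        # Dynamic fusion: weight each domain by its predicted relevance
        weights = F.softmax(self.domain_gate(shared_h), dim=-1)
        fused_private = (weights.unsqueeze(-1) * private_hs).sum(dim=1)

        return torch.tanh(self.fuse(torch.cat([shared_h, fused_private], dim=-1)))


if __name__ == "__main__":
    model = SharedPrivateFusion(vocab_size=100, emb_dim=32, hid_dim=64, num_domains=3)
    dummy = torch.randint(1, 100, (2, 10))
    print(model(dummy).shape)  # torch.Size([2, 64])
```

In the full model the fused representation would feed a dialogue decoder and knowledge-base lookup; this sketch only returns it.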
Related papers
- Deep Domain Specialisation for single-model multi-domain learning to rank [1.534667887016089]
Multiple models are more costly to train, maintain, and update than a single model responsible for all domains.
We propose a novel architecture of Deep Domain Specialisation (DDS) to consolidate multiple domains into a single model.
arXiv Detail & Related papers (2024-07-01T08:19:19Z) - A Unified Data Augmentation Framework for Low-Resource Multi-Domain Dialogue Generation [52.0964459842176]
Current state-of-the-art dialogue systems heavily rely on extensive training datasets.
We propose a novel data Augmentation framework for Multi-Domain Dialogue Generation, referred to as AMD$^2$G.
The AMD$^2$G framework consists of a data augmentation process and a two-stage training approach: domain-agnostic training and domain adaptation training.
arXiv Detail & Related papers (2024-06-14T09:52:27Z) - Exploiting Graph Structured Cross-Domain Representation for Multi-Domain
Recommendation [71.45854187886088]
Multi-domain recommender systems benefit from cross-domain representation learning and positive knowledge transfer.
We use temporal intra- and inter-domain interactions as contextual information for our method called MAGRec.
We perform experiments on publicly available datasets in different scenarios where MAGRec consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-12T19:51:32Z) - Multi-Modal Cross-Domain Alignment Network for Video Moment Retrieval [55.122020263319634]
Video moment retrieval (VMR) aims to localize the target moment from an untrimmed video according to a given language query.
In this paper, we focus on a novel task: cross-domain VMR, where fully-annotated datasets are available in one domain but the domain of interest only contains unannotated datasets.
We propose a novel Multi-Modal Cross-Domain Alignment network to transfer the annotation knowledge from the source domain to the target domain.
arXiv Detail & Related papers (2022-09-23T12:58:20Z) - Multi-Domain Incremental Learning for Semantic Segmentation [42.30646442211311]
We propose a dynamic architecture that assigns universally shared, domain-invariant parameters to capture homogeneous semantic features.
We demonstrate the effectiveness of our proposed solution on domain incremental settings pertaining to real-world driving scenes from roads of Germany (Cityscapes), the United States (BDD100k), and India (IDD).
arXiv Detail & Related papers (2021-10-23T12:21:42Z) - A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z) - Latent Domain Learning with Dynamic Residual Adapters [26.018759356470767]
A practical shortcoming of deep neural networks is their specialization to a single task and domain.
Here we focus on a less explored, but more realistic case: learning from data from multiple domains, without access to domain annotations.
We address this limitation via dynamic residual adapters, an adaptive gating mechanism that helps account for latent domains.
arXiv Detail & Related papers (2020-06-01T15:00:11Z) - Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware
Parameterization [78.93669377251396]
Spoken language understanding has been addressed as a supervised learning problem, where a set of training data is available for each domain.
One existing approach solves the problem by conducting multi-domain learning, using shared parameters for joint training across domains.
We propose to improve the parameterization of this method by using domain-specific and task-specific model parameters.
arXiv Detail & Related papers (2020-04-30T15:15:40Z)
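As a rough illustration of the shared versus domain- and task-specific parameterization described in the last entry, the sketch below pairs one jointly trained encoder with separate intent and slot heads per domain. The class name, pooling strategy, and head layout are assumptions for illustration, not the cited paper's actual model.

```python
import torch
import torch.nn as nn


class DomainTaskAwareSLU(nn.Module):
    """Illustrative sketch: a shared utterance encoder plus per-domain,
    per-task parameters, realized as separate intent and slot heads."""

    def __init__(self, vocab_size, emb_dim, hid_dim, domains, num_intents, num_slots):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Shared parameters: one encoder jointly trained on all domains
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        # Domain- and task-specific parameters: one head per (domain, task)
        self.intent_heads = nn.ModuleDict(
            {d: nn.Linear(2 * hid_dim, num_intents) for d in domains})
        self.slot_heads = nn.ModuleDict(
            {d: nn.Linear(2 * hid_dim, num_slots) for d in domains})

    def forward(self, tokens, domain):
        emb = self.embedding(tokens)                  # (batch, seq, emb)
        outputs, _ = self.encoder(emb)                # (batch, seq, 2*hid)
        utterance = outputs.mean(dim=1)               # simple mean pooling
        intent_logits = self.intent_heads[domain](utterance)   # (batch, num_intents)
        slot_logits = self.slot_heads[domain](outputs)          # (batch, seq, num_slots)
        return intent_logits, slot_logits


if __name__ == "__main__":
    model = DomainTaskAwareSLU(100, 32, 64, ["navigation", "schedule"], 5, 12)
    intent, slots = model(torch.randint(1, 100, (2, 10)), "navigation")
    print(intent.shape, slots.shape)  # torch.Size([2, 5]) torch.Size([2, 10, 12])
```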
This list is automatically generated from the titles and abstracts of the papers on this site.