Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware
Parameterization
- URL: http://arxiv.org/abs/2004.14871v2
- Date: Sun, 28 Nov 2021 13:57:51 GMT
- Title: Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware
Parameterization
- Authors: Libo Qin, Minheng Ni, Yue Zhang, Wanxiang Che, Yangming Li, Ting Liu
- Abstract summary: Spoken language understanding has been addressed as a supervised learning problem, where a set of training data is available for each domain.
One existing approach solves the problem by conducting multi-domain learning, using shared parameters for joint training across domains.
We propose to improve the parameterization of this method by using domain-specific and task-specific model parameters.
- Score: 78.93669377251396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spoken language understanding has been addressed as a supervised learning
problem, where a set of training data is available for each domain. However,
annotating data for each domain is both financially costly and non-scalable, so
we should fully utilize information across all domains. One existing approach
solves the problem by conducting multi-domain learning, using shared parameters
for joint training across domains. We propose to improve the parameterization
of this method by using domain-specific and task-specific model parameters to
improve knowledge learning and transfer. Experiments on 5 domains show that our
model is more effective for multi-domain SLU and obtains the best results. In
addition, we show its transferability by outperforming the prior best model by
12.4% when adapting to a new domain with little data.
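
To make the parameterization idea more concrete, the sketch below shows one way shared, domain-specific, and task-specific parameters can be combined for the two SLU tasks (intent detection and slot filling). This is not the authors' implementation: the module layout, hidden sizes, additive feature combination, and example domain names are illustrative assumptions.

# Minimal sketch of domain- and task-aware parameterization for multi-domain SLU.
# Assumptions (not from the paper): shared and per-domain GRU encoders whose
# outputs are concatenated, plus per-task heads, with toy sizes throughout.
import torch
import torch.nn as nn

class DomainTaskAwareSLU(nn.Module):
    def __init__(self, vocab_size, hidden, domains, n_intents, n_slots):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        # Shared parameters: used for every domain and task.
        self.shared = nn.GRU(hidden, hidden, batch_first=True)
        # Domain-specific parameters: one encoder per domain.
        self.domain_enc = nn.ModuleDict(
            {d: nn.GRU(hidden, hidden, batch_first=True) for d in domains})
        # Task-specific heads: intent detection (utterance-level) and
        # slot filling (token-level).
        self.intent_head = nn.Linear(2 * hidden, n_intents)
        self.slot_head = nn.Linear(2 * hidden, n_slots)

    def forward(self, tokens, domain):
        x = self.embed(tokens)                        # (B, T, H)
        shared_out, _ = self.shared(x)                # shared features
        dom_out, _ = self.domain_enc[domain](x)       # domain-specific features
        feats = torch.cat([shared_out, dom_out], -1)  # combine both views
        intent_logits = self.intent_head(feats.mean(dim=1))  # (B, n_intents)
        slot_logits = self.slot_head(feats)                  # (B, T, n_slots)
        return intent_logits, slot_logits

# Toy usage with made-up domain names and random token ids.
model = DomainTaskAwareSLU(vocab_size=1000, hidden=64,
                           domains=["schedule", "weather", "navigate"],
                           n_intents=5, n_slots=12)
intent_logits, slot_logits = model(torch.randint(0, 1000, (2, 10)), "weather")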
Related papers
- A Collaborative Transfer Learning Framework for Cross-domain
Recommendation [12.880177078884927]
In recommendation systems, there are multiple business domains to meet the diverse interests and needs of users.
We propose the Collaborative Cross-Domain Transfer Learning Framework (CCTL) to overcome these challenges.
CCTL evaluates the information gain of the source domain on the target domain using a symmetric companion network.
arXiv Detail & Related papers (2023-06-26T09:43:58Z)
- Exploiting Graph Structured Cross-Domain Representation for Multi-Domain
Recommendation [71.45854187886088]
Multi-domain recommender systems benefit from cross-domain representation learning and positive knowledge transfer.
We use temporal intra- and inter-domain interactions as contextual information for our method, MAGRec.
We perform experiments on publicly available datasets in different scenarios where MAGRec consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-12T19:51:32Z)
- Using Language to Extend to Unseen Domains [81.37175826824625]
It is expensive to collect training data for every possible domain that a vision model may encounter when deployed.
We consider how simply verbalizing the training domain as well as domains we want to extend to but do not have data for can improve robustness.
Using a multimodal model with a joint image and language embedding space, our method LADS learns a transformation of the image embeddings from the training domain to each unseen test domain.
arXiv Detail & Related papers (2022-10-18T01:14:02Z)
- TAL: Two-stream Adaptive Learning for Generalizable Person
Re-identification [115.31432027711202]
We argue that both domain-specific and domain-invariant features are crucial for improving the generalization ability of re-id models.
We propose a two-stream adaptive learning (TAL) framework to simultaneously model these two kinds of information.
Our framework can be applied to both single-source and multi-source domain generalization tasks.
arXiv Detail & Related papers (2021-11-29T01:27:42Z)
- Multi-Domain Incremental Learning for Semantic Segmentation [42.30646442211311]
We propose a dynamic architecture that assigns universally shared, domain-invariant parameters to capture homogeneous semantic features.
We demonstrate the effectiveness of our proposed solution on domain-incremental settings pertaining to real-world driving scenes from roads of Germany (Cityscapes), the United States (BDD100k), and India (IDD).
arXiv Detail & Related papers (2021-10-23T12:21:42Z)
- Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog [70.79442700890843]
We propose a novel Dynamic Fusion Network (DF-Net) which automatically exploits the relevance between the target domain and each domain.
With little training data, we show its transferability by outperforming the prior best model by 13.9% on average.
arXiv Detail & Related papers (2020-04-23T08:17:22Z)
- Not all domains are equally complex: Adaptive Multi-Domain Learning [98.25886129591974]
We propose an adaptive parameterization approach to deep neural networks for multi-domain learning.
The proposed approach performs on par with the original approach while requiring far fewer parameters.
arXiv Detail & Related papers (2020-03-25T17:16:00Z)
- Unified Multi-Domain Learning and Data Imputation using Adversarial
Autoencoder [5.933303832684138]
We present a novel framework that combines multi-domain learning (MDL), data imputation (DI), and multi-task learning (MTL).
The core of our method is an adversarial autoencoder that can: (1) learn to produce domain-invariant embeddings to reduce the differences between domains; and (2) learn the data distribution for each domain and correctly perform data imputation on missing data (a rough sketch of this idea appears after this entry).
arXiv Detail & Related papers (2020-03-15T19:55:07Z)
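
The sketch referenced above is a generic illustration, not the paper's code: an adversarial autoencoder encodes masked inputs, reconstructs them (the reconstruction of missing entries serves as the imputation), and is trained against a domain discriminator so the embeddings become domain-invariant. All sizes, the loss weight, and the two-step update scheme are assumptions.

# Generic sketch of an adversarial autoencoder for multi-domain learning with
# data imputation. Everything here is illustrative, not the paper's method.
import torch
import torch.nn as nn

FEAT, HID, N_DOMAINS = 32, 16, 3

encoder = nn.Sequential(nn.Linear(FEAT, HID), nn.ReLU(), nn.Linear(HID, HID))
decoder = nn.Sequential(nn.Linear(HID, HID), nn.ReLU(), nn.Linear(HID, FEAT))
discriminator = nn.Sequential(nn.Linear(HID, HID), nn.ReLU(), nn.Linear(HID, N_DOMAINS))

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
recon_loss = nn.MSELoss()
dom_loss = nn.CrossEntropyLoss()

def train_step(x, mask, domain_labels):
    """x: (B, FEAT) features; mask: 1 where observed, 0 where missing;
    domain_labels: (B,) integer domain ids."""
    x_in = x * mask  # zero out missing entries before encoding

    # 1) Train the discriminator to recognize the source domain of each embedding.
    z = encoder(x_in).detach()
    d_loss = dom_loss(discriminator(z), domain_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the autoencoder: reconstruct observed entries (imputation signal)
    #    while fooling the discriminator (domain-invariance signal).
    z = encoder(x_in)
    x_hat = decoder(z)
    rec = recon_loss(x_hat * mask, x * mask)          # only observed entries
    adv = -dom_loss(discriminator(z), domain_labels)  # push toward domain confusion
    ae_loss = rec + 0.1 * adv
    opt_ae.zero_grad(); ae_loss.backward(); opt_ae.step()
    return x_hat  # missing entries of x_hat are the imputed values

# Toy usage with random features, a random observation mask, and random domains.
x = torch.randn(8, FEAT)
mask = (torch.rand(8, FEAT) > 0.2).float()
domains = torch.randint(0, N_DOMAINS, (8,))
imputed = train_step(x, mask, domains)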
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.