Domain Adaptation in Dialogue Systems using Transfer and Meta-Learning
- URL: http://arxiv.org/abs/2102.11146v1
- Date: Mon, 22 Feb 2021 16:16:57 GMT
- Title: Domain Adaptation in Dialogue Systems using Transfer and Meta-Learning
- Authors: Rui Ribeiro, Alberto Abad and José Lopes
- Abstract summary: Current generative-based dialogue systems fail to adapt to new unseen domains when only a small amount of target data is available.
We propose a method that adapts to unseen domains by combining both transfer and meta-learning.
- Score: 12.64591916699374
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Current generative-based dialogue systems are data-hungry and fail to adapt
to new unseen domains when only a small amount of target data is available.
Additionally, in real-world applications, most domains are underrepresented, so
there is a need to create a system capable of generalizing to these domains
using minimal data. In this paper, we propose a method that adapts to unseen
domains by combining both transfer and meta-learning (DATML). DATML improves
the previous state-of-the-art dialogue model, DiKTNet, by introducing a
different learning technique: meta-learning. We use Reptile, a first-order
optimization-based meta-learning algorithm, as our improved training method. We
evaluated our model on the MultiWOZ dataset and outperformed DiKTNet in both
BLEU and Entity F1 scores when the same amount of data is available.
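The listing contains no code, but the abstract names Reptile as the meta-learning component. As a rough illustration only, here is a minimal PyTorch sketch of a single Reptile outer update; the linear model and synthetic batches are hypothetical stand-ins for the DiKTNet generator and per-domain dialogue data, and the function name `reptile_step` is illustrative, not from the paper.

```python
import copy
import torch
import torch.nn as nn

def reptile_step(model, task_batches, inner_lr=1e-3, meta_lr=0.1):
    """One Reptile meta-update: adapt a copy of the model to one sampled
    task (here, one dialogue domain) with plain SGD, then move the shared
    initialization a fraction of the way toward the adapted weights."""
    adapted = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    loss_fn = nn.CrossEntropyLoss()
    for x, y in task_batches:                 # k inner SGD steps on the task
        inner_opt.zero_grad()
        loss_fn(adapted(x), y).backward()
        inner_opt.step()
    # Outer (first-order) update: theta <- theta + eps * (theta' - theta)
    with torch.no_grad():
        for p, p_adapted in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (p_adapted - p))

# Toy usage: a linear "model" and one synthetic five-batch task.
model = nn.Linear(16, 4)
task = [(torch.randn(8, 16), torch.randint(0, 4, (8,))) for _ in range(5)]
for _ in range(100):                          # sample a task and meta-update
    reptile_step(model, task)
```

In a setup like DATML's, each outer step would sample a different source domain, so the shared initialization drifts toward weights that fine-tune quickly on any domain, including an unseen one with minimal data.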
Related papers
- Learning to Generalize Unseen Domains via Multi-Source Meta Learning for Text Classification [71.08024880298613]
We study multi-source domain generalization for text classification.
We propose a framework to use multiple seen domains to train a model that can achieve high accuracy in an unseen domain.
arXiv Detail & Related papers (2024-09-20T07:46:21Z)
- Low Resource Style Transfer via Domain Adaptive Meta Learning [30.323491061441857]
We propose DAML-ATM (Domain Adaptive Meta-Learning with Adversarial Transfer Model), which consists of two parts: DAML and ATM.
DAML is a domain adaptive meta-learning approach to learn general knowledge in multiple heterogeneous source domains, capable of adapting to new unseen domains with a small amount of data.
We also propose a new unsupervised TST approach, the Adversarial Transfer Model (ATM), which combines a sequence-to-sequence pre-trained language model with adversarial style training for better content preservation and style transfer.
arXiv Detail & Related papers (2022-05-25T03:58:24Z)
- Semi-supervised Meta-learning with Disentanglement for Domain-generalised Medical Image Segmentation [15.351113774542839]
Generalising models to new data from new centres (termed here domains) remains a challenge.
We propose a novel semi-supervised meta-learning framework with disentanglement.
We show that the proposed method is robust on different segmentation tasks and achieves state-of-the-art generalisation performance on two public benchmarks.
arXiv Detail & Related papers (2021-06-24T19:50:07Z)
- A Student-Teacher Architecture for Dialog Domain Adaptation under the Meta-Learning Setting [42.80034363734555]
It is essential to develop algorithms that can adapt to different domains efficiently when building data-driven dialog models.
We propose an efficient domain adaptive task-oriented dialog system model, which incorporates a meta-teacher model.
We evaluate our model on two multi-domain datasets, MultiWOZ and Google-Guided Dialogue, and achieve state-of-the-art performance.
arXiv Detail & Related papers (2021-04-06T17:31:28Z)
- Learning to Generalize Unseen Domains via Memory-based Multi-Source Meta-Learning for Person Re-Identification [59.326456778057384]
We propose the Memory-based Multi-Source Meta-Learning framework to train a generalizable model for unseen domains.
We also present a meta batch normalization layer (MetaBN) to diversify meta-test features.
Experiments demonstrate that our M³L can effectively enhance the generalization ability of the model for unseen domains.
arXiv Detail & Related papers (2020-12-01T11:38:16Z)
- Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware Parameterization [78.93669377251396]
Spoken language understanding has been addressed as a supervised learning problem, where a set of training data is available for each domain.
One existing approach solves the problem by conducting multi-domain learning, using shared parameters for joint training across domains.
We propose to improve the parameterization of this method by using domain-specific and task-specific model parameters.
arXiv Detail & Related papers (2020-04-30T15:15:40Z)
- Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog [70.79442700890843]
We propose a novel Dynamic Fusion Network (DF-Net) which automatically exploits the relevance between the target domain and each source domain.
With little training data, we show its transferability by outperforming the prior best model by 13.9% on average.
arXiv Detail & Related papers (2020-04-23T08:17:22Z)
- Hybrid Generative-Retrieval Transformers for Dialogue Domain Adaptation [77.62366712130196]
We present the winning entry at the fast domain adaptation task of DSTC8, a hybrid generative-retrieval model based on GPT-2 fine-tuned to the multi-domain MetaLWOz dataset.
Our model uses retrieval logic as a fallback; it is SoTA on MetaLWOz in human evaluation (>4% improvement over the 2nd-place system) and attains competitive generalization performance in adaptation to the unseen MultiWOZ dataset.
arXiv Detail & Related papers (2020-03-03T18:07:42Z)
- A Simple Baseline to Semi-Supervised Domain Adaptation for Machine Translation [73.3550140511458]
State-of-the-art neural machine translation (NMT) systems are data-hungry and perform poorly on new domains with no supervised data.
We propose a simple but effective approach to the semi-supervised domain adaptation scenario of NMT.
This approach iteratively trains a Transformer-based NMT model via three training objectives: language modeling, back-translation, and supervised translation (sketched below, after this list).
arXiv Detail & Related papers (2020-01-22T16:42:06Z)
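The last entry above describes an iterative schedule over three objectives. The following is a purely schematic, hypothetical sketch of such a loop: a toy linear module and random tensors stand in for the paper's Transformer NMT model and real monolingual/parallel corpora, and all function names are illustrative.

```python
import torch
import torch.nn as nn

# Toy stand-ins for a Transformer NMT model and its training corpora.
model = nn.Linear(8, 8)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
mse = nn.MSELoss()

def lm_loss(mono_tgt):
    # Language modeling on target-side monolingual text (reconstruction stand-in).
    return mse(model(mono_tgt), mono_tgt)

def bt_loss(mono_tgt):
    # Back-translation: generate a synthetic source (no gradient through
    # generation), then train to recover the monolingual target from it.
    synthetic_src = model(mono_tgt).detach()
    return mse(model(synthetic_src), mono_tgt)

def mt_loss(src, tgt):
    # Supervised translation on the small in-domain parallel set.
    return mse(model(src), tgt)

for step in range(300):                     # iterate over the three objectives
    mono = torch.randn(4, 8)
    src, tgt = torch.randn(4, 8), torch.randn(4, 8)
    loss = lm_loss(mono) + bt_loss(mono) + mt_loss(src, tgt)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the structure is the interleaving: each iteration touches unsupervised target-side data twice (language modeling and back-translation) and the scarce in-domain parallel data once, which is what lets the approach work with little supervised data.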