Leveraging Slot Descriptions for Zero-Shot Cross-Domain Dialogue State
Tracking
- URL: http://arxiv.org/abs/2105.04222v1
- Date: Mon, 10 May 2021 09:34:01 GMT
- Title: Leveraging Slot Descriptions for Zero-Shot Cross-Domain Dialogue State
Tracking
- Authors: Zhaojiang Lin, Bing Liu, Seungwhan Moon, Paul Crook, Zhenpeng Zhou,
Zhiguang Wang, Zhou Yu, Andrea Madotto, Eunjoon Cho, Rajen Subba
- Abstract summary: Zero-shot cross-domain dialogue state tracking (DST) enables us to handle task-oriented dialogue in unseen domains without the expense of collecting in-domain data.
We propose a slot description enhanced generative approach for zero-shot cross-domain DST.
- Score: 50.04597636485369
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Zero-shot cross-domain dialogue state tracking (DST) enables us to handle
task-oriented dialogue in unseen domains without the expense of collecting
in-domain data. In this paper, we propose a slot description enhanced
generative approach for zero-shot cross-domain DST. Specifically, our model
first encodes dialogue context and slots with a pre-trained self-attentive
encoder, and generates slot values in an auto-regressive manner. In addition,
we incorporate Slot Type Informed Descriptions that capture the shared
information across slots to facilitate cross-domain knowledge transfer.
Experimental results on the MultiWOZ dataset show that our proposed method
significantly improves existing state-of-the-art results in the zero-shot
cross-domain setting.
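The abstract describes pairing each slot with a Slot Type Informed Description and encoding it together with the dialogue context before decoding the value auto-regressively. The following is a minimal sketch of that input construction only; the slot names, the description wording, and the `[slot]` separator token are illustrative assumptions, not the paper's exact prompts, and the actual encoder/decoder model is omitted.

```python
# Hypothetical sketch: build one encoder input per slot by concatenating the
# dialogue context with a slot-type-informed description. The shared type
# wording (e.g. "time", "day of the week") is what the paper argues enables
# zero-shot transfer to unseen domains. All names below are illustrative.

SLOT_TYPE_DESCRIPTIONS = {
    "hotel-area": "area of the hotel. location",
    "hotel-book day": "day of the hotel booking. day of the week",
    "train-leave at": "leaving time of the train. time",
}

def build_encoder_inputs(dialogue_history, slots):
    """Return one encoder input string per requested slot.

    Each input is the flattened dialogue context followed by an assumed
    '[slot]' separator and the slot's type-informed description; a
    generative model would then decode the slot value from this input.
    """
    context = " ".join(dialogue_history)
    return {
        slot: f"{context} [slot] {SLOT_TYPE_DESCRIPTIONS[slot]}"
        for slot in slots
    }

inputs = build_encoder_inputs(
    ["user: i need a train leaving after 9:00."],
    ["train-leave at"],
)
```

At zero-shot time, the same construction applies unchanged to slots of an unseen domain, provided their descriptions reuse the shared type vocabulary.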
Related papers
- Transforming Slot Schema Induction with Generative Dialogue State Inference [14.06505399101404]
Slot Schema Induction (SSI) aims to automatically induce slots from unlabeled dialogue data.
Our SSI method discovers high-quality candidate information for representing dialogue state.
Experimental comparisons on the MultiWOZ and SGD datasets demonstrate that Generative Dialogue State Inference (GenDSI) outperforms the previous state-of-the-art.
arXiv Detail & Related papers (2024-08-03T02:41:10Z)
- Exploring User Retrieval Integration towards Large Language Models for Cross-Domain Sequential Recommendation [66.72195610471624]
Cross-Domain Sequential Recommendation aims to mine and transfer users' sequential preferences across different domains.
We propose a novel framework named URLLM, which aims to improve the CDSR performance by exploring the User Retrieval approach.
arXiv Detail & Related papers (2024-06-05T09:19:54Z)
- UNO-DST: Leveraging Unlabelled Data in Zero-Shot Dialogue State Tracking [54.51316566989655]
Previous zero-shot dialogue state tracking (DST) methods only apply transfer learning, ignoring unlabelled data in the target domain.
We transform zero-shot DST into few-shot DST by utilising such unlabelled data via joint and self-training methods.
We demonstrate this method's effectiveness on general language models in zero-shot scenarios, improving average joint goal accuracy by 8% across all domains in MultiWOZ.
arXiv Detail & Related papers (2023-10-16T15:16:16Z)
- HierarchicalContrast: A Coarse-to-Fine Contrastive Learning Framework for Cross-Domain Zero-Shot Slot Filling [4.1940152307593515]
Cross-domain zero-shot slot filling plays a vital role in leveraging source domain knowledge to learn a model.
Existing state-of-the-art zero-shot slot filling methods have limited generalization ability in target domain.
We present a novel Hierarchical Contrastive Learning Framework (HiCL) for zero-shot slot filling.
arXiv Detail & Related papers (2023-10-13T14:23:33Z)
- RestNet: Boosting Cross-Domain Few-Shot Segmentation with Residual Transformation Network [4.232614032390374]
Cross-domain few-shot segmentation (CD-FSS) aims to achieve semantic segmentation in previously unseen domains with a limited number of annotated samples.
We propose a novel residual transformation network (RestNet) that facilitates knowledge transfer while retaining the intra-domain support-query feature information.
arXiv Detail & Related papers (2023-08-25T16:13:22Z)
- DORIC: Domain Robust Fine-Tuning for Open Intent Clustering through Dependency Parsing [14.709084509818474]
DSTC11-Track2 aims to provide a benchmark for zero-shot, cross-domain, intent-set induction.
We leveraged a multi-domain dialogue dataset to fine-tune the language model and proposed extracting Verb-Object pairs.
Our approach achieved 3rd place in the precision score and showed higher accuracy and normalized mutual information (NMI) scores than the baseline model.
arXiv Detail & Related papers (2023-03-17T08:12:36Z)
- Unsupervised Domain Adaptation via Style-Aware Self-intermediate Domain [52.783709712318405]
Unsupervised domain adaptation (UDA) has attracted considerable attention, which transfers knowledge from a label-rich source domain to a related but unlabeled target domain.
We propose a novel style-aware feature fusion method (SAFF) to bridge the large domain gap and transfer knowledge while alleviating the loss of class-discriminative information.
arXiv Detail & Related papers (2022-09-05T10:06:03Z)
- Zero-Shot Dialogue State Tracking via Cross-Task Transfer [69.70718906395182]
We propose to transfer the cross-task knowledge from general question answering (QA) corpora for the zero-shot dialogue state tracking task.
Specifically, we propose TransferQA, a transferable generative QA model that seamlessly combines extractive QA and multi-choice QA.
In addition, we introduce two effective ways to construct unanswerable questions, namely, negative question sampling and context truncation.
arXiv Detail & Related papers (2021-09-10T03:57:56Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.