LUNA: Learning Slot-Turn Alignment for Dialogue State Tracking
- URL: http://arxiv.org/abs/2205.02550v1
- Date: Thu, 5 May 2022 10:18:23 GMT
- Title: LUNA: Learning Slot-Turn Alignment for Dialogue State Tracking
- Authors: Yifan Wang, Jing Zhao, Junwei Bao, Chaoqun Duan, Youzheng Wu, Xiaodong He
- Abstract summary: Dialogue state tracking (DST) aims to predict the current dialogue state given the dialogue history.
Existing methods generally exploit the utterances of all dialogue turns to assign a value to each slot.
We propose LUNA, a sLot-tUrN Alignment enhanced approach.
It first explicitly aligns each slot with its most relevant utterance, then further predicts the corresponding value based on this aligned utterance instead of all dialogue utterances.
- Score: 21.80577241399013
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Dialogue state tracking (DST) aims to predict the current dialogue state
given the dialogue history. Existing methods generally exploit the utterances
of all dialogue turns to assign a value to each slot. This can lead to
suboptimal results because irrelevant utterances in the dialogue history
introduce information that is at best useless and at worst confusing. To
address this problem, we propose LUNA, a sLot-tUrN Alignment enhanced approach.
It first explicitly aligns each slot with its most relevant utterance, then
further predicts the corresponding value based on this aligned utterance
instead of all dialogue utterances. Furthermore, we design a slot ranking
auxiliary task to learn the temporal correlation among slots which could
facilitate the alignment. Comprehensive experiments are conducted on
multi-domain task-oriented dialogue datasets, i.e., MultiWOZ 2.0, MultiWOZ 2.1,
and MultiWOZ 2.2. The results show that LUNA achieves new state-of-the-art
results on these datasets.
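The abstract describes the mechanism only at a high level. As a rough illustration, the sketch below scores each slot against every turn with a toy bag-of-words similarity, picks the most relevant utterance, and fills the slot from that single utterance; the similarity function, slot names, and "value reader" are hypothetical stand-ins for LUNA's learned encoders and value predictor, and the slot ranking auxiliary task is omitted.

```python
# Minimal sketch of slot-turn alignment for DST. The bag-of-words similarity,
# slot names, and "evidence" output below are hypothetical placeholders; LUNA
# uses learned encoders and a trained value predictor instead.
from collections import Counter
import math

def relevance(slot: str, utterance: str) -> float:
    """Toy slot-turn score: cosine similarity over bag-of-words counts."""
    a, b = Counter(slot.lower().split()), Counter(utterance.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def track_state(slots, turns):
    """Align each slot to its most relevant turn, then fill the slot from that
    single aligned utterance instead of the whole dialogue history."""
    state = {}
    for slot in slots:
        best = max(range(len(turns)), key=lambda i: relevance(slot.replace("-", " "), turns[i]))
        state[slot] = {"aligned_turn": best, "evidence": turns[best]}
    return state

turns = [
    "I need a cheap hotel in the north of town.",
    "Can you also book a taxi to the hotel at 7 pm?",
]
print(track_state(["hotel-price range", "taxi-leave at"], turns))
```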
Related papers
- Enhancing Visual Dialog State Tracking through Iterative Object-Entity Alignment in Multi-Round Conversations [3.784841749866846]
We introduce the Multi-round Dialogue State Tracking model (MDST).
MDST captures each round of dialog history, constructing internal dialogue state representations defined as 2-tuples of vision-language representations.
Experimental results on the VisDial v1.0 dataset demonstrate that MDST achieves a new state-of-the-art performance in the generative setting.
arXiv Detail & Related papers (2024-08-13T08:36:15Z)
- CGoDial: A Large-Scale Benchmark for Chinese Goal-oriented Dialog Evaluation [75.60156479374416]
CGoDial is a new challenging and comprehensive Chinese benchmark for Goal-oriented Dialog evaluation.
It contains 96,763 dialog sessions and 574,949 dialog turns in total, covering three datasets with different knowledge sources.
To bridge the gap between academic benchmarks and spoken dialog scenarios, we either collect data from real conversations or add spoken features to existing datasets via crowd-sourcing.
arXiv Detail & Related papers (2022-11-21T16:21:41Z)
- SPACE-2: Tree-Structured Semi-Supervised Contrastive Pre-training for Task-Oriented Dialog Understanding [68.94808536012371]
We propose a tree-structured pre-trained conversation model, which learns dialog representations from limited labeled dialogs and large-scale unlabeled dialog corpora.
Our method can achieve new state-of-the-art results on the DialoGLUE benchmark consisting of seven datasets and four popular dialog understanding tasks.
arXiv Detail & Related papers (2022-09-14T13:42:50Z)
- Beyond the Granularity: Multi-Perspective Dialogue Collaborative Selection for Dialogue State Tracking [18.172993687706708]
In dialogue state tracking, dialogue history is crucial material, and how it is used varies across models.
We propose DiCoS-DST to dynamically select the relevant dialogue contents corresponding to each slot for state updating.
Our approach achieves new state-of-the-art performance on MultiWOZ 2.1 and MultiWOZ 2.2, and achieves superior performance on multiple mainstream benchmark datasets.
arXiv Detail & Related papers (2022-05-20T10:08:45Z)
- Back to the Future: Bidirectional Information Decoupling Network for Multi-turn Dialogue Modeling [80.51094098799736]
We propose Bidirectional Information Decoupling Network (BiDeN) as a universal dialogue encoder.
BiDeN explicitly incorporates both the past and future contexts and can be generalized to a wide range of dialogue-related tasks.
Experimental results on datasets of different downstream tasks demonstrate the universality and effectiveness of our BiDeN.
arXiv Detail & Related papers (2022-04-18T03:51:46Z)
- Dialogue State Tracking with Multi-Level Fusion of Predicted Dialogue States and Conversations [2.6529642559155944]
We propose the Dialogue State Tracking with Multi-Level Fusion of Predicted Dialogue States and Conversations network.
This model extracts information from each dialogue turn by modeling interactions among each turn utterance, the corresponding last dialogue states, and dialogue slots.
arXiv Detail & Related papers (2021-07-12T02:30:30Z)
- Slot Self-Attentive Dialogue State Tracking [22.187581131353948]
We propose a slot self-attention mechanism that can learn the slot correlations automatically.
We conduct comprehensive experiments on two multi-domain task-oriented dialogue datasets.
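The summary names the mechanism but not its form; the minimal numpy sketch below shows single-head self-attention applied across slot representations, assuming random toy embeddings and projections (in the paper these are learned, and the contextualized slot vectors feed value prediction).

```python
# Minimal numpy sketch of self-attention over slot representations.
# The embeddings and projection matrices are random toy values, not learned ones.
import numpy as np

rng = np.random.default_rng(0)
num_slots, dim = 4, 8                      # e.g. hotel-area, hotel-price, taxi-destination, ...
slots = rng.normal(size=(num_slots, dim))  # one vector per slot

W_q, W_k, W_v = (rng.normal(size=(dim, dim)) for _ in range(3))
Q, K, V = slots @ W_q, slots @ W_k, slots @ W_v

scores = Q @ K.T / np.sqrt(dim)                       # slot-to-slot affinities
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)        # softmax over slots
contextualized = weights @ V                          # each slot attends to correlated slots

print(weights.round(2))   # the learned version of this matrix encodes slot correlations
```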
arXiv Detail & Related papers (2021-01-22T22:48:51Z)
- Point or Generate Dialogue State Tracker [0.0]
We propose the Point-Or-Generate Dialogue State Tracker (POGD).
POGD points out explicitly expressed slot values from the user's utterance, and generates implicitly expressed ones based on slot-specific contexts.
Experiments show that POGD not only obtains state-of-the-art results on both WoZ 2.0 and MultiWoZ 2.0 datasets but also has good generalization on unseen values and new slots.
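The abstract gives the point-versus-generate split only at a high level; the toy function below illustrates that control flow with simple string matching and a hypothetical fallback generator, not POGD's actual pointer and slot-specific decoders.

```python
# Toy illustration of the point-or-generate decision for a single slot.
# The matching rule and fallback generator are hypothetical stand-ins for
# POGD's learned pointer and slot-specific generation networks.

def fill_slot(utterance, candidate_values, generate_fn):
    tokens = utterance.lower().split()
    for value in candidate_values:
        if value.lower() in tokens:                  # explicitly expressed -> point (copy) it
            return value, "pointed"
    return generate_fn(utterance), "generated"       # implicitly expressed -> generate it

# Hypothetical generator standing in for a slot-specific decoder.
def guess_area(utterance):
    return "centre" if "downtown" in utterance.lower() else "dontcare"

print(fill_slot("Somewhere in the north please", ["north", "south", "east", "west"], guess_area))
print(fill_slot("I want a hotel downtown", ["north", "south", "east", "west"], guess_area))
```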
arXiv Detail & Related papers (2020-08-08T02:15:25Z)
- A Contextual Hierarchical Attention Network with Adaptive Objective for Dialogue State Tracking [63.94927237189888]
We propose to enhance dialogue state tracking (DST) by employing a contextual hierarchical attention network.
We also propose an adaptive objective to alleviate the slot imbalance problem by dynamically adjusting weights of different slots during training.
Experimental results show that our approach reaches 52.68% and 58.55% joint accuracy on MultiWOZ 2.0 and MultiWOZ 2.1 datasets.
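The adaptive objective is described only as dynamically re-weighting slots; one plausible reading, sketched below with made-up numbers, is to scale each slot's loss by its recent error rate. This is an illustrative guess, not necessarily the paper's exact formulation.

```python
# Hedged sketch of a slot-imbalance-aware objective: harder slots (lower recent
# accuracy) receive larger loss weights. All numbers below are made up.

def adaptive_slot_weights(slot_accuracy):
    errors = {slot: 1.0 - acc for slot, acc in slot_accuracy.items()}
    total = sum(errors.values()) or 1.0
    # Normalize so the average weight stays around 1.
    return {slot: len(errors) * err / total for slot, err in errors.items()}

slot_accuracy = {"hotel-name": 0.70, "hotel-area": 0.95, "train-day": 0.99}
per_slot_loss = {"hotel-name": 0.80, "hotel-area": 0.30, "train-day": 0.10}

weights = adaptive_slot_weights(slot_accuracy)
total_loss = sum(weights[s] * per_slot_loss[s] for s in per_slot_loss)
print({s: round(w, 2) for s, w in weights.items()}, round(total_loss, 3))
```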
arXiv Detail & Related papers (2020-06-02T12:25:44Z)
- Rethinking Dialogue State Tracking with Reasoning [76.0991910623001]
This paper proposes to track dialogue states gradually, reasoning over dialogue turns with the help of back-end data.
Empirical results demonstrate that our method significantly outperforms the state-of-the-art methods by 38.6% in terms of joint belief accuracy for MultiWOZ 2.1.
arXiv Detail & Related papers (2020-05-27T02:05:33Z)
- Non-Autoregressive Dialog State Tracking [122.2328875457225]
We propose a novel framework, Non-Autoregressive Dialog State Tracking (NADST).
NADST can factor in potential dependencies among domains and slots to optimize the models towards better prediction of dialogue states as a complete set rather than separate slots.
Our results show that our model achieves the state-of-the-art joint accuracy across all domains on the MultiWOZ 2.1 corpus.
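The summary stresses decoding the state as a complete set rather than slot by slot; the stub below only contrasts the two interfaces with placeholder predictors, and is not NADST's actual learned parallel decoder.

```python
# Interface-level contrast between slot-by-slot (autoregressive) and set-level
# (non-autoregressive) state prediction. The predictors are stubs; NADST's real
# decoder is a learned network that emits all slot values in one pass.

SLOTS = ["hotel-area", "hotel-pricerange", "taxi-destination"]

def predict_one(slot, history, partial_state):          # stub
    return "dontcare"

def predict_all(slots, history):                        # stub: one joint pass
    return {slot: "dontcare" for slot in slots}

def autoregressive_dst(history):
    state = {}
    for slot in SLOTS:                                  # sequential: each slot can
        state[slot] = predict_one(slot, history, state) # condition on earlier ones
    return state

def non_autoregressive_dst(history):
    return predict_all(SLOTS, history)                  # whole state in one shot

history = ["I need a cheap hotel in the centre."]
print(non_autoregressive_dst(history))
```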
arXiv Detail & Related papers (2020-02-19T06:39:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.