Coreference Augmentation for Multi-Domain Task-Oriented Dialogue State Tracking
- URL: http://arxiv.org/abs/2106.08723v1
- Date: Wed, 16 Jun 2021 11:47:29 GMT
- Title: Coreference Augmentation for Multi-Domain Task-Oriented Dialogue State Tracking
- Authors: Ting Han, Chongxuan Huang, Wei Peng
- Abstract summary: We propose Coreference Dialogue State Tracker (CDST) that explicitly models the coreference feature.
Experimental results on the MultiWOZ 2.1 dataset show that the proposed model achieves a state-of-the-art joint goal accuracy of 56.47%.
- Score: 3.34618986084988
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dialogue State Tracking (DST), which is the process of inferring user goals
by estimating belief states given the dialogue history, plays a critical role
in task-oriented dialogue systems. The coreference phenomenon observed in
multi-turn conversations is not addressed by existing DST models, leading to
sub-optimal performance. In this paper, we propose the Coreference Dialogue State
Tracker (CDST) that explicitly models the coreference feature. In particular,
at each turn, the proposed model jointly predicts the coreferred domain-slot
pair and extracts the coreference values from the dialogue context.
Experimental results on the MultiWOZ 2.1 dataset show that the proposed model
achieves the state-of-the-art joint goal accuracy of 56.47%.
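To make the coreference idea concrete, the following is a minimal toy sketch (not the authors' CDST implementation) of how a coreference-aware belief-state update could work: when an utterance refers back to an earlier slot (e.g. "a taxi to the hotel"), the value is copied from the coreferred domain-slot pair rather than extracted anew. The `COREF_CUES` table and all names here are hypothetical stand-ins for what a real model would predict from the dialogue context.

```python
from typing import Dict, Optional, Tuple

# Belief state maps (domain, slot) pairs to values.
BeliefState = Dict[Tuple[str, str], str]

# Hypothetical coreference cues: a surface phrase and the domain-slot
# pair it corefers to. A trained model would predict this pair jointly
# with the target slot instead of using a lookup table.
COREF_CUES: Dict[str, Tuple[str, str]] = {
    "the hotel": ("hotel", "name"),
    "the restaurant": ("restaurant", "name"),
}

def update_with_coreference(state: BeliefState,
                            target: Tuple[str, str],
                            utterance: str) -> Optional[str]:
    """If the utterance corefers to an already-filled slot, copy its
    value into the target domain-slot pair; otherwise return None."""
    for phrase, source in COREF_CUES.items():
        if phrase in utterance and source in state:
            state[target] = state[source]
            return state[target]
    return None

state: BeliefState = {("hotel", "name"): "Acorn Guest House"}
value = update_with_coreference(state, ("taxi", "destination"),
                                "I need a taxi to the hotel")
print(value)  # Acorn Guest House
```

The point of the sketch is the copy mechanism: the taxi destination is resolved from the hotel's name already in the belief state, which is the kind of cross-domain coreference a slot-by-slot extractor misses.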
Related papers
- Common Ground Tracking in Multimodal Dialogue [13.763043173931024]
We present a method for automatically identifying the current set of shared beliefs and "questions under discussion" (QUDs) of a group with a shared goal.
We annotate a dataset of multimodal interactions in a shared physical space with speech transcriptions, prosodic features, gestures, actions, and facets of collaboration.
These are cascaded into a set of formal closure rules derived from situated evidence and belief axioms, together with update operations.
arXiv Detail & Related papers (2024-03-26T00:25:01Z)
- Dialogue State Distillation Network with Inter-Slot Contrastive Learning for Dialogue State Tracking [25.722458066685046]
Dialogue State Tracking (DST) aims to extract users' intentions from the dialogue history.
Currently, most existing approaches suffer from error propagation and are unable to dynamically select relevant information.
We propose a Dialogue State Distillation Network (DSDN) to utilize relevant information of previous dialogue states.
arXiv Detail & Related papers (2023-02-16T11:05:24Z)
- DIONYSUS: A Pre-trained Model for Low-Resource Dialogue Summarization [127.714919036388]
DIONYSUS is a pre-trained encoder-decoder model for summarizing dialogues in any new domain.
Our experiments show that DIONYSUS outperforms existing methods on six datasets.
arXiv Detail & Related papers (2022-12-20T06:21:21Z)
- Dialogue State Tracking with Multi-Level Fusion of Predicted Dialogue States and Conversations [2.6529642559155944]
We propose the Dialogue State Tracking with Multi-Level Fusion of Predicted Dialogue States and Conversations network.
This model extracts information of each dialogue turn by modeling interactions among each turn utterance, the corresponding last dialogue states, and dialogue slots.
arXiv Detail & Related papers (2021-07-12T02:30:30Z)
- Is this Dialogue Coherent? Learning from Dialogue Acts and Entities [82.44143808977209]
We create the Switchboard Coherence (SWBD-Coh) corpus, a dataset of human-human spoken dialogues annotated with turn coherence ratings.
Our statistical analysis of the corpus indicates how turn coherence perception is affected by patterns of distribution of entities.
We find that models combining both DA and entity information yield the best performances both for response selection and turn coherence rating.
arXiv Detail & Related papers (2020-06-17T21:02:40Z)
- Rethinking Dialogue State Tracking with Reasoning [76.0991910623001]
This paper proposes to track dialogue states gradually by reasoning over dialogue turns with the help of back-end data.
Empirical results demonstrate that our method significantly outperforms the state-of-the-art methods by 38.6% in terms of joint belief accuracy for MultiWOZ 2.1.
arXiv Detail & Related papers (2020-05-27T02:05:33Z)
- Modeling Long Context for Task-Oriented Dialogue State Generation [51.044300192906995]
We propose a multi-task learning model with a simple yet effective utterance tagging technique and a bidirectional language model.
Our approach addresses the problem that the baseline's performance drops significantly when the input dialogue context sequence is long.
In our experiments, our proposed model achieves a 7.03% relative improvement over the baseline, establishing a new state-of-the-art joint goal accuracy of 52.04% on the MultiWOZ 2.0 dataset.
arXiv Detail & Related papers (2020-04-29T11:02:25Z)
- Dialogue-Based Relation Extraction [53.2896545819799]
We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE.
We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks.
Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings.
arXiv Detail & Related papers (2020-04-17T03:51:57Z)
- Non-Autoregressive Dialog State Tracking [122.2328875457225]
We propose a novel framework of Non-Autoregressive Dialog State Tracking (NADST).
NADST can factor in potential dependencies among domains and slots to optimize the models towards better prediction of dialogue states as a complete set rather than separate slots.
Our results show that our model achieves the state-of-the-art joint accuracy across all domains on the MultiWOZ 2.1 corpus.
arXiv Detail & Related papers (2020-02-19T06:39:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.