Non-Autoregressive Dialog State Tracking
- URL: http://arxiv.org/abs/2002.08024v1
- Date: Wed, 19 Feb 2020 06:39:26 GMT
- Title: Non-Autoregressive Dialog State Tracking
- Authors: Hung Le, Richard Socher, Steven C.H. Hoi
- Abstract summary: We propose a novel framework of Non-Autoregressive Dialog State Tracking (NADST).
NADST can factor in potential dependencies among domains and slots to optimize the models towards better prediction of dialogue states as a complete set rather than separate slots.
Our results show that our model achieves the state-of-the-art joint accuracy across all domains on the MultiWOZ 2.1 corpus.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent efforts in Dialogue State Tracking (DST) for task-oriented dialogues
have progressed toward open-vocabulary or generation-based approaches where the
models can generate slot value candidates from the dialogue history itself.
These approaches have shown good performance gains, especially in complicated
dialogue domains with dynamic slot values. However, they fall short in two
aspects: (1) they do not allow models to explicitly learn signals across
domains and slots to detect potential dependencies among (domain, slot) pairs;
and (2) existing models follow auto-regressive approaches which incur high time
cost when the dialogue evolves over multiple domains and multiple turns. In
this paper, we propose a novel framework of Non-Autoregressive Dialog State
Tracking (NADST) which can factor in potential dependencies among domains and
slots to optimize the models towards better prediction of dialogue states as a
complete set rather than separate slots. In particular, the non-autoregressive
nature of our method not only enables decoding in parallel to significantly
reduce the latency of DST for real-time dialogue response generation, but also
detects dependencies among slots at the token level in addition to the slot
and domain levels. Our empirical results show that our model achieves state-of-the-art
joint accuracy across all domains on the MultiWOZ 2.1 corpus, and the latency
of our model is an order of magnitude lower than the previous state of the art
as the dialogue history extends over time.
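The non-autoregressive decoding idea can be made concrete with a short sketch. The toy PyTorch model below decodes every (domain, slot) value in a single forward pass, with unmasked self-attention among slot queries standing in for cross-slot dependency modeling; it is a minimal illustration of the general technique, not the authors' exact architecture (NADST additionally predicts per-slot fertilities before generating value tokens), and the vocabulary, sizes, and inputs are toy assumptions.

```python
# Minimal sketch of non-autoregressive state decoding (illustrative only):
# all (domain, slot) values are predicted in one parallel pass.
import torch
import torch.nn as nn

class ParallelStateDecoder(nn.Module):
    def __init__(self, vocab_size, num_slots, d_model=64):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.slot_emb = nn.Embedding(num_slots, d_model)  # one query per (domain, slot)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, history_ids, slot_ids):
        memory = self.token_emb(history_ids)          # (B, T, d): encode history once
        queries = self.slot_emb(slot_ids)             # (B, S, d): one query per slot
        # No causal mask and no step-by-step loop: every slot attends to the
        # history and to every other slot simultaneously, so dependencies
        # among (domain, slot) pairs are modeled while decoding in parallel.
        h = self.decoder(tgt=queries, memory=memory)  # (B, S, d)
        return self.out(h)                            # (B, S, vocab) logits

model = ParallelStateDecoder(vocab_size=100, num_slots=5)
history = torch.randint(0, 100, (2, 12))              # toy dialogue-history token ids
slots = torch.arange(5).unsqueeze(0).repeat(2, 1)     # decode all slots per example
logits = model(history, slots)                        # a single forward pass
print(logits.shape)  # torch.Size([2, 5, 100])
```

Because no token waits on a previously decoded token, latency stays essentially flat as slots and turns accumulate, which is where the claimed order-of-magnitude speedup over autoregressive decoders comes from.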
Related papers
- TOD-Flow: Modeling the Structure of Task-Oriented Dialogues [77.15457469745364]
We propose a novel approach focusing on inferring the TOD-Flow graph from dialogue data annotated with dialog acts.
The inferred TOD-Flow graph can be easily integrated with any dialogue model to improve its prediction performance, transparency, and controllability.
arXiv Detail & Related papers (2023-12-07T20:06:23Z)
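As a rough illustration of the kind of structure a TOD-Flow graph captures, the sketch below estimates a "can-follow" relation over dialog acts by counting transitions in act-annotated dialogues. The sample dialogues, the threshold, and the counting heuristic are assumptions for illustration; the paper's actual graph inference is more involved.

```python
# Toy stand-in for inferring a dialog-act "can-follow" graph: count observed
# transitions and keep edges whose empirical probability clears a threshold.
from collections import Counter, defaultdict

def infer_flow_graph(dialogues, min_prob=0.1):
    counts = defaultdict(Counter)
    for acts in dialogues:                       # each dialogue: a list of dialog acts
        for prev, nxt in zip(acts, acts[1:]):
            counts[prev][nxt] += 1
    graph = {}
    for prev, nxts in counts.items():
        total = sum(nxts.values())
        graph[prev] = {a for a, c in nxts.items() if c / total >= min_prob}
    return graph

dialogues = [
    ["greet", "request_area", "inform_area", "offer", "accept"],
    ["greet", "request_area", "inform_area", "request_price", "inform_price", "offer"],
]
graph = infer_flow_graph(dialogues)
print(graph["inform_area"])   # acts allowed to follow "inform_area"
```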
- Dialogue State Distillation Network with Inter-Slot Contrastive Learning for Dialogue State Tracking [25.722458066685046]
Dialogue State Tracking (DST) aims to extract users' intentions from the dialogue history.
Currently, most existing approaches suffer from error propagation and are unable to dynamically select relevant information.
We propose a Dialogue State Distillation Network (DSDN) to utilize relevant information of previous dialogue states.
arXiv Detail & Related papers (2023-02-16T11:05:24Z)
- Stabilized In-Context Learning with Pre-trained Language Models for Few Shot Dialogue State Tracking [57.92608483099916]
Large pre-trained language models (PLMs) have shown impressive unaided performance across many NLP tasks.
For more complex tasks such as dialogue state tracking (DST), designing prompts that reliably convey the desired intent is nontrivial.
We introduce a saliency model to limit dialogue text length, allowing us to include more exemplars per query.
arXiv Detail & Related papers (2023-02-12T15:05:10Z)
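The budget trade-off behind the saliency model can be sketched as follows: drop low-saliency turns from the dialogue so that more exemplars fit in the same prompt. The keyword-overlap scorer and the token budget below are crude stand-ins for the learned saliency model described in the paper.

```python
# Toy illustration: keep only the highest-saliency turns under a token
# budget, freeing prompt space for extra in-context exemplars.
def saliency(turn, slot_keywords=("hotel", "price", "area", "book", "train")):
    words = turn.lower().split()
    return sum(w in words for w in slot_keywords)   # crude stand-in scorer

def compress_dialogue(turns, token_budget):
    ranked = sorted(range(len(turns)), key=lambda i: saliency(turns[i]), reverse=True)
    kept, used = set(), 0
    for i in ranked:
        cost = len(turns[i].split())                # whitespace tokens as a proxy
        if used + cost <= token_budget:
            kept.add(i)
            used += cost
    return [turns[i] for i in sorted(kept)]         # preserve original turn order

turns = [
    "hello there",
    "i need a cheap hotel in the north area",
    "sure , any other preferences ?",
    "yes please book it for two nights",
]
print(compress_dialogue(turns, token_budget=18))
```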
- DiSTRICT: Dialogue State Tracking with Retriever Driven In-Context Tuning [7.5700317050237365]
We propose DiSTRICT, a generalizable in-context tuning approach for Dialogue State Tracking (DST).
DiSTRICT retrieves highly relevant training examples for a given dialogue to fine-tune the model without any hand-crafted templates.
Experiments with the MultiWOZ benchmark datasets show that DiSTRICT outperforms existing approaches in various zero-shot and few-shot settings.
arXiv Detail & Related papers (2022-12-06T09:40:15Z)
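A minimal sketch of the retrieval step DiSTRICT relies on, under the assumption that examples are compared in an embedding space: the toy bag-of-words encoder below stands in for whatever retriever the paper actually uses, and the example pool is invented.

```python
# Retrieve the training examples most similar to the current dialogue,
# using cosine similarity over toy bag-of-words embeddings.
import numpy as np

def embed(text, dim=64):
    v = np.zeros(dim)
    for w in text.lower().split():
        v[hash(w) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v                    # unit-normalized vector

def retrieve(query, pool, k=2):
    q = embed(query)
    scored = [(float(embed(ex["dialogue"]) @ q), ex) for ex in pool]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [ex for _, ex in scored[:k]]             # top-k most similar examples

pool = [
    {"dialogue": "i want a cheap hotel in the north", "state": "hotel-price=cheap; hotel-area=north"},
    {"dialogue": "book a train to cambridge on friday", "state": "train-dest=cambridge; train-day=friday"},
    {"dialogue": "find an expensive restaurant downtown", "state": "restaurant-price=expensive"},
]
print(retrieve("looking for a cheap place to stay", pool, k=1))
```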
- In-Context Learning for Few-Shot Dialogue State Tracking [55.91832381893181]
We propose an in-context (IC) learning framework for few-shot dialogue state tracking (DST).
A large pre-trained language model (LM) takes a test instance and a few annotated examples as input, and directly decodes the dialogue states without any parameter updates.
This makes the LM more flexible and scalable compared to prior few-shot DST work when adapting to new domains and scenarios.
arXiv Detail & Related papers (2022-03-16T11:58:24Z)
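The framework amounts to prompt construction plus frozen-LM decoding; the sketch below shows one plausible serialization. The prompt format and the lm_generate placeholder are assumptions, not the paper's exact setup.

```python
# Serialize a few annotated examples plus the test dialogue into one prompt
# and let a frozen LM decode the state, with no gradient updates.
def build_prompt(exemplars, test_dialogue):
    parts = []
    for ex in exemplars:
        parts.append(f"Dialogue: {ex['dialogue']}\nState: {ex['state']}")
    parts.append(f"Dialogue: {test_dialogue}\nState:")
    return "\n\n".join(parts)

exemplars = [
    {"dialogue": "i need a cheap hotel", "state": "hotel-price=cheap"},
    {"dialogue": "book a table for two at 7pm", "state": "restaurant-people=2; restaurant-time=7pm"},
]
prompt = build_prompt(exemplars, "find me an expensive restaurant in the centre")
print(prompt)
# state = lm_generate(prompt)   # placeholder for any pre-trained LM's generate
#                               # call; adapting to a new domain only means
#                               # swapping the exemplars, not retraining.
```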
- Meta Dialogue Policy Learning [58.045067703675095]
We propose Deep Transferable Q-Network (DTQN) to utilize shareable low-level signals between domains.
We decompose the state and action representation space into feature subspaces corresponding to these low-level components.
In experiments, our model outperforms baseline models in terms of both success rate and dialogue efficiency.
arXiv Detail & Related papers (2020-06-03T23:53:06Z)
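The decomposition can be sketched as separate encoders per feature subspace feeding a shared Q-head; the two subspaces and layer sizes below are illustrative assumptions rather than the paper's configuration.

```python
# Each low-level feature subspace (e.g., slot-status vs. act features) gets
# its own small encoder; the encoders are the shareable, transferable parts.
import torch
import torch.nn as nn

class SubspaceQNetwork(nn.Module):
    def __init__(self, slot_dim, act_dim, num_actions, hidden=32):
        super().__init__()
        self.slot_enc = nn.Sequential(nn.Linear(slot_dim, hidden), nn.ReLU())
        self.act_enc = nn.Sequential(nn.Linear(act_dim, hidden), nn.ReLU())
        self.q_head = nn.Linear(2 * hidden, num_actions)

    def forward(self, slot_feats, act_feats):
        h = torch.cat([self.slot_enc(slot_feats), self.act_enc(act_feats)], dim=-1)
        return self.q_head(h)                        # one Q-value per dialogue action

net = SubspaceQNetwork(slot_dim=10, act_dim=6, num_actions=4)
q = net(torch.rand(3, 10), torch.rand(3, 6))
print(q.shape)  # torch.Size([3, 4])
```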
- Modeling Long Context for Task-Oriented Dialogue State Generation [51.044300192906995]
We propose a multi-task learning model with a simple yet effective utterance tagging technique and a bidirectional language model.
Our approach addresses the sharp performance drop of the baseline when the input dialogue context sequence is long.
In our experiments, our proposed model achieves a 7.03% relative improvement over the baseline, establishing a new state-of-the-art joint goal accuracy of 52.04% on the MultiWOZ 2.0 dataset.
arXiv Detail & Related papers (2020-04-29T11:02:25Z)
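One simple reading of the utterance tagging technique: mark each utterance with a speaker and turn tag before concatenation, so the model can localize information in a long context. The tag format below is an assumption; the paper pairs tagging with a bidirectional language model in a multi-task setup.

```python
# Prefix every utterance with a speaker/turn tag before concatenating the
# history into a single long input sequence.
def tag_history(turns):
    tagged = []
    for i, (speaker, text) in enumerate(turns):
        tagged.append(f"[{speaker}:{i}] {text}")
    return " ".join(tagged)

history = [
    ("USR", "i want a cheap hotel in the north"),
    ("SYS", "the lovell lodge matches , shall i book it ?"),
    ("USR", "yes , for two nights please"),
]
print(tag_history(history))
```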
- Efficient Context and Schema Fusion Networks for Multi-Domain Dialogue State Tracking [32.36259992245]
Dialogue state tracking (DST) aims at estimating the current dialogue state given all the preceding conversation.
For multi-domain DST, the data sparsity problem is a major obstacle due to increased numbers of state candidates and dialogue lengths.
We utilize the previously predicted dialogue state and the current dialogue utterance as the input for DST.
arXiv Detail & Related papers (2020-04-07T13:46:39Z)
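The input construction can be sketched directly: serialize the previously predicted state and append the current utterance, replacing the full history with a compact summary. The separator tokens below are assumptions, not the paper's format.

```python
# Build a DST input from the previous predicted state plus the current
# utterance, so the model sees a short, fixed-size summary of the past.
def build_dst_input(prev_state, utterance):
    state_str = " ; ".join(f"{slot} = {val}" for slot, val in sorted(prev_state.items()))
    return f"[STATE] {state_str} [UTT] {utterance}"

prev_state = {"hotel-area": "north", "hotel-price": "cheap"}
print(build_dst_input(prev_state, "actually make that two nights from friday"))
```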
- Domain-Aware Dialogue State Tracker for Multi-Domain Dialogue Systems [2.3859169601259347]
In task-oriented dialogue systems, the dialogue state tracker (DST) component is responsible for predicting the state of the dialogue based on the dialogue history.
We propose a domain-aware dialogue state tracker that is completely data-driven and designed to predict over dynamic service schemas.
arXiv Detail & Related papers (2020-01-21T13:41:09Z)
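A hedged sketch of how a tracker can stay schema-agnostic: score natural-language slot descriptions from the active service schema against an encoding of the dialogue, so unseen schemas need no new parameters. The toy bag-of-words encoder below is a stand-in for a learned one, and the sample schema is invented.

```python
# Score each schema slot description against the dialogue encoding; the
# schema dict can change at run time without changing the model.
import numpy as np

def encode(text, dim=64):
    v = np.zeros(dim)
    for w in text.lower().split():
        v[hash(w) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def score_slots(dialogue, schema):
    d = encode(dialogue)
    return {slot: float(encode(desc) @ d) for slot, desc in schema.items()}

schema = {
    "hotel-area": "the area or part of town of the hotel",
    "hotel-price": "the price range of the hotel",
}
print(score_slots("i want a cheap hotel in the north", schema))
```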