Grounding Description-Driven Dialogue State Trackers with
Knowledge-Seeking Turns
- URL: http://arxiv.org/abs/2309.13448v1
- Date: Sat, 23 Sep 2023 18:33:02 GMT
- Authors: Alexandru Coca, Bo-Hsiang Tseng, Jinghong Chen, Weizhe Lin, Weixuan
Zhang, Tisha Anders, Bill Byrne
- Score: 54.56871462068126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Schema-guided dialogue state trackers can generalise to new domains without
further training, yet they are sensitive to the writing style of the schemata.
Augmenting the training set with human or synthetic schema paraphrases improves
the model robustness to these variations but can be either costly or difficult
to control. We propose to circumvent these issues by grounding the state
tracking model in knowledge-seeking turns collected from the dialogue corpus as
well as the schema. Including these turns in prompts during finetuning and
inference leads to marked improvements in model robustness, as demonstrated by
large average joint goal accuracy and schema sensitivity improvements on SGD
and SGD-X.
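The grounding idea described in the abstract can be pictured as prompt assembly: the slot's schema description is paired with knowledge-seeking turns mined from the dialogue corpus before the dialogue history. Below is a minimal illustrative sketch, not the authors' code; the function name, marker tokens, and example strings are all hypothetical.

```python
# Hypothetical sketch of grounding a schema-guided DST prompt in
# knowledge-seeking turns. Marker tokens like [slot] and [ks] are
# illustrative, not taken from the paper.

def build_grounded_prompt(slot_name, description, knowledge_turns, history):
    """Concatenate the schema description, corpus-mined knowledge-seeking
    turns, and the dialogue history into a single model input string."""
    grounding = " ".join(f"[ks] {turn}" for turn in knowledge_turns)
    return (
        f"[slot] {slot_name} [desc] {description} "
        f"{grounding} [dialogue] {history}"
    )

prompt = build_grounded_prompt(
    "restaurant-price_range",
    "Price range of the restaurant.",
    ["Is this place expensive?", "Do they have cheap options?"],
    "user: find me an affordable italian restaurant",
)
print(prompt)
```

Under this reading, the same prompt format is used during both finetuning and inference, so the model sees corpus-grounded paraphrases of each slot rather than relying on the schema's writing style alone.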
Related papers
- Schema Augmentation for Zero-Shot Domain Adaptation in Dialogue State Tracking [16.67185296899117]
Current large language model approaches for zero-shot domain adaptation rely on prompting to introduce knowledge pertaining to the target domains.
In this work, we devise a novel data augmentation approach, Schema Augmentation, that improves the zero-shot domain adaptation of language models through fine-tuning.
Experiments on MultiWOZ and SpokenWOZ showed that the proposed approach resulted in a substantial improvement over the baseline.
arXiv Detail & Related papers (2024-10-31T18:57:59Z)
- More Robust Schema-Guided Dialogue State Tracking via Tree-Based Paraphrase Ranking [0.0]
Fine-tuned language models excel at schema-guided dialogue state tracking (DST).
We propose a framework for generating synthetic schemas which uses tree-based ranking to jointly optimise diversity and semantic faithfulness.
arXiv Detail & Related papers (2023-03-17T11:43:08Z)
- Stabilized In-Context Learning with Pre-trained Language Models for Few-Shot Dialogue State Tracking [57.92608483099916]
Large pre-trained language models (PLMs) have shown impressive unaided performance across many NLP tasks.
For more complex tasks such as dialogue state tracking (DST), designing prompts that reliably convey the desired intent is nontrivial.
We introduce a saliency model to limit dialogue text length, allowing us to include more exemplars per query.
arXiv Detail & Related papers (2023-02-12T15:05:10Z)
- CheckDST: Measuring Real-World Generalization of Dialogue State Tracking Performance [18.936466253481363]
We design a collection of metrics called CheckDST to test well-known weaknesses with augmented test sets.
We find that span-based classification models are resilient to unseen named entities but not robust to language variety.
Due to their respective weaknesses, neither approach is yet suitable for real-world deployment.
arXiv Detail & Related papers (2021-12-15T18:10:54Z)
- SafeAMC: Adversarial training for robust modulation recognition models [53.391095789289736]
In communication systems, many tasks, such as modulation recognition, rely on Deep Neural Network (DNN) models.
These models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification.
We propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation recognition models.
arXiv Detail & Related papers (2021-05-28T11:29:04Z)
- On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z)
- I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling [104.09033240889106]
We introduce the DialoguE COntradiction DEtection task (DECODE) and a new conversational dataset containing both human-human and human-bot contradictory dialogues.
We then compare a structured utterance-based approach of using pre-trained Transformer models for contradiction detection with the typical unstructured approach.
arXiv Detail & Related papers (2020-12-24T18:47:49Z)
- A Fast and Robust BERT-based Dialogue State Tracker for Schema-Guided Dialogue Dataset [8.990035371365408]
We introduce FastSGT, a fast and robust BERT-based model for state tracking in goal-oriented dialogue systems.
The proposed model is designed for the Schema-Guided Dialogue dataset, which contains natural language descriptions of intents and slots.
Our model remains efficient in computation and memory consumption while significantly improving accuracy.
arXiv Detail & Related papers (2020-08-27T18:51:18Z)
- Non-Autoregressive Dialog State Tracking [122.2328875457225]
We propose Non-Autoregressive Dialog State Tracking (NADST), a novel framework that factors in potential dependencies among domains and slots to optimize the model towards predicting dialogue states as a complete set rather than as separate slots.
Our results show that our model achieves state-of-the-art joint accuracy across all domains on the MultiWOZ 2.1 corpus.
arXiv Detail & Related papers (2020-02-19T06:39:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.