More Robust Schema-Guided Dialogue State Tracking via Tree-Based
Paraphrase Ranking
- URL: http://arxiv.org/abs/2303.09905v1
- Date: Fri, 17 Mar 2023 11:43:08 GMT
- Authors: A. Coca, B.H. Tseng, W. Lin, B. Byrne
- Abstract summary: Fine-tuned language models excel at schema-guided dialogue state tracking (DST) but are sensitive to the writing style of the schemas.
We propose a framework for generating synthetic schemas which uses tree-based ranking to jointly optimise lexical diversity and semantic faithfulness.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The schema-guided paradigm overcomes scalability issues inherent in building
task-oriented dialogue (TOD) agents with static ontologies. Instead of
operating on dialogue context alone, agents have access to hierarchical schemas
containing task-relevant natural language descriptions. Fine-tuned language
models excel at schema-guided dialogue state tracking (DST) but are sensitive
to the writing style of the schemas. We explore methods for improving the
robustness of DST models. We propose a framework for generating synthetic
schemas which uses tree-based ranking to jointly optimise lexical diversity and
semantic faithfulness. The generalisation of strong baselines is improved when
augmenting their training data with prompts generated by our framework, as
demonstrated by marked improvements in average joint goal accuracy (JGA) and
schema sensitivity (SS) on the SGD-X benchmark.
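The abstract's core idea is ranking schema paraphrases by jointly optimising lexical diversity and semantic faithfulness. A minimal sketch of such a joint scoring step is below; the scoring functions (token-level Jaccard proxies) and the `alpha` weighting are illustrative stand-ins, not the paper's actual tree-based ranker.

```python
# Hedged sketch: rank candidate schema paraphrases by a weighted
# combination of lexical diversity (reward different wording) and
# semantic faithfulness (reward preserved meaning). Both scorers here
# are crude token-overlap proxies chosen for self-containedness; a
# real system would use an embedding- or model-based similarity.

def lexical_diversity(original: str, paraphrase: str) -> float:
    """Token-level Jaccard distance: 1 minus overlap over union."""
    a, b = set(original.lower().split()), set(paraphrase.lower().split())
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def semantic_faithfulness(original: str, paraphrase: str) -> float:
    """Placeholder for semantic similarity: shared tokens over the
    smaller description's length."""
    a, b = set(original.lower().split()), set(paraphrase.lower().split())
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

def rank_paraphrases(original: str, candidates: list[str],
                     alpha: float = 0.5) -> list[str]:
    """Return candidates best-first under the combined score."""
    scored = [
        (alpha * lexical_diversity(original, c)
         + (1 - alpha) * semantic_faithfulness(original, c), c)
        for c in candidates
    ]
    return [c for _, c in sorted(scored, reverse=True)]
```

Under this scoring, a paraphrase that rewords the description while keeping some anchor terms outranks both a verbatim copy (zero diversity) and an unrelated sentence (zero faithfulness), which is the trade-off the framework is designed to balance.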
Related papers
- LangSuitE: Planning, Controlling and Interacting with Large Language Models in Embodied Text Environments [70.91258869156353]
We introduce LangSuitE, a versatile and simulation-free testbed featuring 6 representative embodied tasks in textual embodied worlds.
Compared with previous LLM-based testbeds, LangSuitE offers adaptability to diverse environments without multiple simulation engines.
We devise a novel chain-of-thought (CoT) schema, EmMem, which summarizes embodied states w.r.t. history information.
arXiv Detail & Related papers (2024-06-24T03:36:29Z)
- Dynamic Syntax Mapping: A New Approach to Unsupervised Syntax Parsing [0.0]
This study investigates the premise that language models, specifically their attention distributions, can encapsulate syntactic dependencies.
We introduce Dynamic Syntax Mapping (DSM), an innovative approach for the induction of these structures.
Our findings reveal that the use of an increasing array of substitutions notably enhances parsing precision on natural language data.
arXiv Detail & Related papers (2023-12-18T10:34:29Z)
- TOD-Flow: Modeling the Structure of Task-Oriented Dialogues [77.15457469745364]
We propose a novel approach focusing on inferring the TOD-Flow graph from dialogue data annotated with dialog acts.
The inferred TOD-Flow graph can be easily integrated with any dialogue model to improve its prediction performance, transparency, and controllability.
arXiv Detail & Related papers (2023-12-07T20:06:23Z)
- Grounding Description-Driven Dialogue State Trackers with Knowledge-Seeking Turns [54.56871462068126]
Augmenting the training set with human or synthetic schema paraphrases improves the model robustness to these variations but can be either costly or difficult to control.
We propose to circumvent these issues by grounding the state tracking model in knowledge-seeking turns collected from the dialogue corpus as well as the schema.
arXiv Detail & Related papers (2023-09-23T18:33:02Z)
- Span-Selective Linear Attention Transformers for Effective and Robust Schema-Guided Dialogue State Tracking [7.176787451868171]
We introduce SPLAT, a novel architecture which achieves better generalization and efficiency than prior approaches.
We demonstrate the effectiveness of our model on the Schema-Guided Dialogue (SGD) and MultiWOZ datasets.
arXiv Detail & Related papers (2023-06-15T17:59:31Z)
- Stabilized In-Context Learning with Pre-trained Language Models for Few-Shot Dialogue State Tracking [57.92608483099916]
Large pre-trained language models (PLMs) have shown impressive unaided performance across many NLP tasks.
For more complex tasks such as dialogue state tracking (DST), designing prompts that reliably convey the desired intent is nontrivial.
We introduce a saliency model to limit dialogue text length, allowing us to include more exemplars per query.
arXiv Detail & Related papers (2023-02-12T15:05:10Z)
- Schema-Guided Semantic Accuracy: Faithfulness in Task-Oriented Dialogue Response Generation [12.165005406799134]
We propose Schema-Guided Semantic Accuracy (SGSAcc) to evaluate utterances generated from both categorical and non-categorical slots.
We show that SGSAcc can be applied to evaluate utterances generated from a wide range of dialogue actions with good agreement with human judgment.
We also identify a previously overlooked weakness in generating faithful utterances from categorical slots in unseen domains.
arXiv Detail & Related papers (2023-01-29T22:32:48Z)
- SGD-X: A Benchmark for Robust Generalization in Schema-Guided Dialogue Systems [26.14268488547028]
We release SGD-X, a benchmark for measuring robustness of dialogue systems to linguistic variations in schemas.
We evaluate two dialogue state tracking models on SGD-X and observe that neither generalizes well across schema variations.
We present a simple model-agnostic data augmentation method to improve schema robustness and zero-shot generalization to unseen services.
arXiv Detail & Related papers (2021-10-13T15:38:29Z)
- I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling [104.09033240889106]
We introduce the DialoguE COntradiction DEtection task (DECODE) and a new conversational dataset containing both human-human and human-bot contradictory dialogues.
We then compare a structured utterance-based approach of using pre-trained Transformer models for contradiction detection with the typical unstructured approach.
arXiv Detail & Related papers (2020-12-24T18:47:49Z)
- Variational Hierarchical Dialog Autoencoder for Dialog State Tracking Data Augmentation [59.174903564894954]
In this work, we extend this approach to the task of dialog state tracking for goal-oriented dialogs.
We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling the complete aspects of goal-oriented dialogs.
Experiments on various dialog datasets show that our model improves the downstream dialog trackers' robustness via generative data augmentation.
arXiv Detail & Related papers (2020-01-23T15:34:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.