Structure Extraction in Task-Oriented Dialogues with Slot Clustering
- URL: http://arxiv.org/abs/2203.00073v1
- Date: Mon, 28 Feb 2022 20:18:12 GMT
- Title: Structure Extraction in Task-Oriented Dialogues with Slot Clustering
- Authors: Liang Qiu, Chien-Sheng Wu, Wenhao Liu, Caiming Xiong
- Abstract summary: In task-oriented dialogues, dialogue structure has often been considered as transition graphs among dialogue states.
We propose a simple yet effective approach for structure extraction in task-oriented dialogues.
- Score: 94.27806592467537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extracting structure information from dialogue data can help us better
understand user and system behaviors. In task-oriented dialogues, dialogue
structure has often been considered as transition graphs among dialogue states.
However, annotating dialogue states manually is expensive and time-consuming.
In this paper, we propose a simple yet effective approach for structure
extraction in task-oriented dialogues. We first detect and cluster possible
slot tokens with a pre-trained model to approximate dialogue ontology for a
target domain. Then we track the status of each identified token group and
derive a state transition structure. Empirical results show that our approach
outperforms unsupervised baseline models by far in dialogue structure
extraction. In addition, we show that data augmentation based on extracted
structures enriches the surface formats of training data and can achieve a
significant performance boost in dialogue response generation.
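Below is a minimal sketch, in Python, of the pipeline the abstract describes: embed candidate slot tokens with a pre-trained encoder, cluster them to approximate a slot ontology for the target domain, then track which clusters appear turn by turn and read off a state-transition graph. This is not the authors' released code; the encoder choice (bert-base-uncased), the token filter, the cluster count, and the definition of a dialogue state are illustrative assumptions.

```python
# Illustrative sketch only -- not the paper's implementation.
from collections import defaultdict

import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed encoder
encoder = AutoModel.from_pretrained("bert-base-uncased")


def embed_tokens(utterance: str):
    """Return (token, contextual embedding) pairs for one utterance."""
    inputs = tokenizer(utterance, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state[0]        # (seq_len, dim)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    # Crude candidate filter: keep alphabetic tokens, drop sub-word pieces/specials.
    return [(tok, vec) for tok, vec in zip(tokens, hidden) if tok.isalpha()]


def extract_structure(dialogues, n_slots: int = 10):
    """Cluster token embeddings into pseudo-slots and derive state transitions.

    dialogues: list of dialogues, each a list of utterance strings.
    """
    vectors, index = [], []                                    # index[i] = (dialogue, turn)
    for d, turns in enumerate(dialogues):
        for t, utt in enumerate(turns):
            for _, vec in embed_tokens(utt):
                vectors.append(vec.numpy())
                index.append((d, t))

    # Approximate the slot ontology by clustering contextual token embeddings.
    labels = KMeans(n_clusters=n_slots, n_init=10).fit_predict(vectors)

    clusters_per_turn = defaultdict(set)
    for (d, t), label in zip(index, labels):
        clusters_per_turn[(d, t)].add(label)

    # A "state" here is the set of slot clusters mentioned so far in a dialogue;
    # count transitions between consecutive states to form the structure graph.
    transitions = defaultdict(int)
    for d, turns in enumerate(dialogues):
        state = frozenset()
        for t in range(len(turns)):
            new_state = frozenset(state | clusters_per_turn[(d, t)])
            transitions[(state, new_state)] += 1
            state = new_state
    return transitions
```

Sampling frequent paths from the resulting transition counts is one plausible way to drive the structure-based data augmentation mentioned in the abstract.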
Related papers
- SuperDialseg: A Large-scale Dataset for Supervised Dialogue Segmentation [55.82577086422923]
We provide a feasible definition of dialogue segmentation points with the help of document-grounded dialogues.
We release a large-scale supervised dataset called SuperDialseg, containing 9,478 dialogues.
We also provide a benchmark including 18 models across five categories for the dialogue segmentation task.
arXiv Detail & Related papers (2023-05-15T06:08:01Z)
- CTRLStruct: Dialogue Structure Learning for Open-Domain Response Generation [38.60073402817218]
Well-structured topic flow can leverage background information and predict future topics to help generate controllable and explainable responses.
We present a new framework for dialogue structure learning to effectively explore topic-level dialogue clusters as well as their transitions with unlabelled information.
Experiments on two popular open-domain dialogue datasets show our model can generate more coherent responses compared to some excellent dialogue models.
arXiv Detail & Related papers (2023-03-02T09:27:11Z)
- SPACE-2: Tree-Structured Semi-Supervised Contrastive Pre-training for Task-Oriented Dialog Understanding [68.94808536012371]
We propose a tree-structured pre-trained conversation model, which learns dialog representations from limited labeled dialogs and large-scale unlabeled dialog corpora.
Our method can achieve new state-of-the-art results on the DialoGLUE benchmark consisting of seven datasets and four popular dialog understanding tasks.
arXiv Detail & Related papers (2022-09-14T13:42:50Z)
- Manual-Guided Dialogue for Flexible Conversational Agents [84.46598430403886]
How to build and use dialogue data efficiently, and how to deploy models across different domains at scale, are critical issues in building a task-oriented dialogue system.
We propose a novel manual-guided dialogue scheme, where the agent learns the tasks from both dialogue and manuals.
Our proposed scheme reduces the dependence of dialogue models on fine-grained domain ontology, and makes them more flexible to adapt to various domains.
arXiv Detail & Related papers (2022-08-16T08:21:12Z)
- Alexa Conversations: An Extensible Data-driven Approach for Building Task-oriented Dialogue Systems [21.98135285833616]
Traditional goal-oriented dialogue systems rely on various components such as natural language understanding, dialogue state tracking, policy learning and response generation.
We present a new approach for building goal-oriented dialogue systems that is scalable, as well as data efficient.
arXiv Detail & Related papers (2021-04-19T07:09:27Z)
- Structured Attention for Unsupervised Dialogue Structure Induction [110.12561786644122]
We propose to incorporate structured attention layers into a Variational Recurrent Neural Network (VRNN) model with discrete latent states to learn dialogue structure in an unsupervised fashion.
Compared to a vanilla VRNN, structured attention enables a model to focus on different parts of the source sentence embeddings while enforcing a structural inductive bias.
arXiv Detail & Related papers (2020-09-17T23:07:03Z)
- Local Contextual Attention with Hierarchical Structure for Dialogue Act Recognition [14.81680798372891]
We design a hierarchical model based on self-attention to capture intra-sentence and inter-sentence information.
Based on the finding that dialog length affects performance, we introduce a new dialog segmentation mechanism.
arXiv Detail & Related papers (2020-03-12T22:26:11Z)
- Variational Hierarchical Dialog Autoencoder for Dialog State Tracking Data Augmentation [59.174903564894954]
In this work, we extend generative data augmentation to the task of dialog state tracking for goal-oriented dialogs.
We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling the complete aspects of goal-oriented dialogs.
Experiments on various dialog datasets show that our model improves the downstream dialog trackers' robustness via generative data augmentation.
arXiv Detail & Related papers (2020-01-23T15:34:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.