TOD-Flow: Modeling the Structure of Task-Oriented Dialogues
- URL: http://arxiv.org/abs/2312.04668v1
- Date: Thu, 7 Dec 2023 20:06:23 GMT
- Title: TOD-Flow: Modeling the Structure of Task-Oriented Dialogues
- Authors: Sungryull Sohn, Yiwei Lyu, Anthony Liu, Lajanugen Logeswaran, Dong-Ki
Kim, Dongsub Shim, Honglak Lee
- Abstract summary: We propose a novel approach focusing on inferring the TOD-Flow graph from dialogue data annotated with dialog acts.
The inferred TOD-Flow graph can be easily integrated with any dialogue model to improve its prediction performance, transparency, and controllability.
- Score: 77.15457469745364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Task-Oriented Dialogue (TOD) systems have become crucial components in
interactive artificial intelligence applications. While recent advances have
capitalized on pre-trained language models (PLMs), they exhibit limitations
regarding transparency and controllability. To address these challenges, we
propose a novel approach focusing on inferring the TOD-Flow graph from dialogue
data annotated with dialog acts, uncovering the underlying task structure in
the form of a graph. The inferred TOD-Flow graph can be easily integrated with
any dialogue model to improve its prediction performance, transparency, and
controllability. Our TOD-Flow graph learns what a model can, should, and should
not predict, effectively reducing the search space and providing a rationale
for the model's prediction. We show that the proposed TOD-Flow graph better
resembles human-annotated graphs compared to prior approaches. Furthermore,
when combined with several dialogue policies and end-to-end dialogue models, we
demonstrate that our approach significantly improves dialog act classification
and end-to-end response generation performance in the MultiWOZ and SGD
benchmarks. Code available at: https://github.com/srsohn/TOD-Flow
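The "can, should, and should not" constraints described above lend themselves to a simple masking view. Below is a minimal, hypothetical sketch of how a graph of precondition and exclusion edges over dialog acts could restrict a dialogue model's next-act prediction; it is not the authors' implementation (see the linked repository for the real TOD-Flow code), and all class, function, and act names are assumptions made for illustration.

```python
# Minimal illustrative sketch, NOT the authors' implementation (see the linked
# repository for the real TOD-Flow code). It shows how "can" / "should not"
# style conditions over dialog acts could mask a model's next-act scores,
# shrinking the search space as the abstract describes. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class TodFlowGraph:
    # preconditions[act]: acts that must already appear in the history ("can")
    preconditions: dict = field(default_factory=dict)
    # exclusions[act]: acts whose presence in the history rules this act out ("should not")
    exclusions: dict = field(default_factory=dict)

    def allowed(self, act: str, history: set) -> bool:
        needs = self.preconditions.get(act, set())
        blocks = self.exclusions.get(act, set())
        return needs <= history and not (blocks & history)


def mask_scores(scores: dict, graph: TodFlowGraph, history: set) -> dict:
    """Zero out dialog acts that the graph forbids given the dialogue so far."""
    return {act: (score if graph.allowed(act, history) else 0.0)
            for act, score in scores.items()}


# Toy usage: the system can only inform about a hotel after the user requested one.
graph = TodFlowGraph(preconditions={"inform_hotel": {"request_hotel"}})
model_scores = {"inform_hotel": 0.7, "request_hotel": 0.2, "greet": 0.1}
print(mask_scores(model_scores, graph, history=set()))
# -> {'inform_hotel': 0.0, 'request_hotel': 0.2, 'greet': 0.1}
print(mask_scores(model_scores, graph, history={"request_hotel"}))
# -> {'inform_hotel': 0.7, 'request_hotel': 0.2, 'greet': 0.1}
```

In the paper, such constraints are inferred from dialog-act annotations and combined with dialogue policies and end-to-end models; the masking above only illustrates how a graph of this kind can reduce the prediction search space and provide a rationale for each prediction.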
Related papers
- Simulating Task-Oriented Dialogues with State Transition Graphs and Large Language Models [16.94819621353007]
SynTOD is a new synthetic data generation approach for developing end-to-end Task-Oriented Dialogue (TOD) systems.
It generates diverse, structured conversations through random walks and response simulation using large language models (a minimal sketch of such a graph-guided walk appears after this list).
In our experiments, using graph-guided response simulations leads to significant improvements in intent classification, slot filling and response relevance.
arXiv Detail & Related papers (2024-04-23T06:23:34Z)
- Turning Flowchart into Dialog: Augmenting Flowchart-grounded Troubleshooting Dialogs via Synthetic Data Generation [50.06143883455979]
Flowchart-grounded troubleshooting dialogue (FTD) systems follow the instructions of a flowchart to diagnose users' problems in specific domains.
We propose a plan-based synthetic data generation approach that generates diverse synthetic dialog data at scale.
arXiv Detail & Related papers (2023-05-02T11:08:27Z)
- Stabilized In-Context Learning with Pre-trained Language Models for Few-Shot Dialogue State Tracking [57.92608483099916]
Large pre-trained language models (PLMs) have shown impressive unaided performance across many NLP tasks.
For more complex tasks such as dialogue state tracking (DST), designing prompts that reliably convey the desired intent is nontrivial.
We introduce a saliency model to limit dialogue text length, allowing us to include more exemplars per query.
arXiv Detail & Related papers (2023-02-12T15:05:10Z)
- Conversations Are Not Flat: Modeling the Dynamic Information Flow across Dialogue Utterances [28.255324166852535]
Open-domain dialogue models can generate acceptable responses according to the historical context.
We propose the DialoFlow model, in which we introduce a dynamic flow mechanism to model the context flow.
Code and pre-trained models will be made public.
arXiv Detail & Related papers (2021-06-04T03:04:06Z)
- Dialogue Discourse-Aware Graph Convolutional Networks for Abstractive Meeting Summarization [24.646506847760822]
We develop Dialogue Discourse-Aware Graph Convolutional Networks (DDA-GCN) for meeting summarization.
We first transform the entire meeting text with dialogue discourse relations into a discourse graph and then use DDA-GCN to encode the semantic representation of the graph.
Finally, we employ a Recurrent Neural Network to generate the summary.
arXiv Detail & Related papers (2020-12-07T07:51:38Z)
- Task-Oriented Dialogue as Dataflow Synthesis [158.77123205487334]
We describe an approach to task-oriented dialogue in which dialogue state is represented as a dataflow graph.
A dialogue agent maps each user utterance to a program that extends this graph.
We introduce a new dataset, SMCalFlow, featuring complex dialogues about events, weather, places, and people.
arXiv Detail & Related papers (2020-09-24T00:35:26Z)
- Dialogue Relation Extraction with Document-level Heterogeneous Graph Attention Networks [21.409522845011907]
Dialogue relation extraction (DRE) aims to detect the relation between two entities mentioned in a multi-party dialogue.
We present a graph attention network-based method for DRE where a graph contains meaningfully connected speaker, entity, entity-type, and utterance nodes.
We empirically show that this graph-based approach effectively captures the relations between different entity pairs in a dialogue, outperforming state-of-the-art approaches.
arXiv Detail & Related papers (2020-09-10T18:51:48Z)
- Non-Autoregressive Dialog State Tracking [122.2328875457225]
We propose a novel framework, Non-Autoregressive Dialog State Tracking (NADST).
NADST can factor in potential dependencies among domains and slots to optimize the models towards better prediction of dialogue states as a complete set rather than separate slots.
Our results show that our model achieves the state-of-the-art joint accuracy across all domains on the MultiWOZ 2.1 corpus.
arXiv Detail & Related papers (2020-02-19T06:39:26Z)
- Variational Hierarchical Dialog Autoencoder for Dialog State Tracking Data Augmentation [59.174903564894954]
In this work, we extend generative data augmentation to the task of dialog state tracking for goal-oriented dialogs.
We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling the complete aspects of goal-oriented dialogs.
Experiments on various dialog datasets show that our model improves the downstream dialog trackers' robustness via generative data augmentation.
arXiv Detail & Related papers (2020-01-23T15:34:56Z)
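Referring back to the SynTOD entry above, the following is a minimal sketch of what a graph-guided random walk over a state-transition graph could look like for synthetic dialogue generation. It is not SynTOD's actual pipeline; the transition graph, state names, and the stubbed response generator are all assumptions made for illustration.

```python
# Minimal sketch of graph-guided random-walk dialogue simulation, in the spirit
# of the SynTOD entry above. The transition graph, state names, and the stubbed
# "LLM" response generator are hypothetical, not SynTOD's actual pipeline.
import random

# Hypothetical state-transition graph: state -> possible next states.
TRANSITIONS = {
    "start": ["ask_preferences"],
    "ask_preferences": ["search_items"],
    "search_items": ["present_results", "no_results"],
    "present_results": ["book_item", "ask_preferences"],
    "no_results": ["ask_preferences", "end"],
    "book_item": ["confirm_booking"],
    "confirm_booking": ["end"],
}


def simulate_response(state: str) -> str:
    """Stand-in for a large language model that verbalizes the current state."""
    return f"<utterance for state '{state}'>"


def random_walk_dialogue(max_turns: int = 12, seed: int = 0) -> list:
    """Walk the transition graph once, producing one synthetic dialogue."""
    rng = random.Random(seed)
    state, dialogue = "start", []
    for _ in range(max_turns):
        next_states = TRANSITIONS.get(state, [])
        if not next_states:  # reached a terminal state such as "end"
            break
        state = rng.choice(next_states)
        dialogue.append((state, simulate_response(state)))
    return dialogue


for act, utterance in random_walk_dialogue():
    print(act, utterance)
```

Varying the seed yields structurally different but graph-consistent dialogues, which is the property the SynTOD entry credits for its gains in intent classification, slot filling, and response relevance.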
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.