Dialogue Summaries as Dialogue States (DS2), Template-Guided
Summarization for Few-shot Dialogue State Tracking
- URL: http://arxiv.org/abs/2203.01552v1
- Date: Thu, 3 Mar 2022 07:54:09 GMT
- Title: Dialogue Summaries as Dialogue States (DS2), Template-Guided
Summarization for Few-shot Dialogue State Tracking
- Authors: Jamin Shin, Hangyeol Yu, Hyeongdon Moon, Andrea Madotto, Juneyoung
Park
- Abstract summary: Few-shot dialogue state tracking (DST) is a realistic solution to the expensive annotation of task-oriented dialogues.
We propose to reformulate dialogue state tracking as a dialogue summarization problem.
- Score: 16.07100713414678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Annotating task-oriented dialogues is notorious for the expensive and
difficult data collection process. Few-shot dialogue state tracking (DST) is a
realistic solution to this problem. In this paper, we hypothesize that dialogue
summaries are essentially unstructured dialogue states; hence, we propose to
reformulate dialogue state tracking as a dialogue summarization problem. To
elaborate, we train a text-to-text language model with synthetic template-based
dialogue summaries, generated by a set of rules from the dialogue states. Then,
the dialogue states can be recovered by inversely applying the summary
generation rules. We empirically show that our method DS2 outperforms previous
works on few-shot DST in MultiWOZ 2.0 and 2.1, in both cross-domain and
multi-domain settings. Our method also exhibits vast speedup during both
training and inference as it can generate all states at once. Finally, based on
our analysis, we discover that the naturalness of the summary templates plays a
key role in successful training.
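To make the template round trip concrete, here is a minimal sketch of the idea in the abstract: a rule renders a dialogue state as a natural-sounding summary (the training target for the text-to-text LM), and the same rule, inverted, parses a generated summary back into a state. The template wording, slot names, and function names below are illustrative assumptions for a single MultiWOZ-style restaurant slot pair, not the paper's actual rules.

```python
import re

# Hypothetical template for one (domain, slot) pair; DS2's real rules cover
# every MultiWOZ domain and slot, but the round-trip logic is the same.
TEMPLATE = "The user is looking for a {food} restaurant in the {area} of town."
PATTERN = re.compile(
    r"The user is looking for a (?P<food>.+) restaurant in the (?P<area>.+) of town\."
)

def state_to_summary(state: dict) -> str:
    """Forward rule: render a dialogue state as a templated summary."""
    return TEMPLATE.format(food=state["restaurant-food"], area=state["restaurant-area"])

def summary_to_state(summary: str) -> dict:
    """Inverse rule: parse a (model-generated) summary back into a state."""
    match = PATTERN.match(summary)
    if match is None:
        return {}  # the generated summary drifted from the template
    return {"restaurant-food": match["food"], "restaurant-area": match["area"]}

state = {"restaurant-food": "italian", "restaurant-area": "centre"}
summary = state_to_summary(state)          # training target for the LM
assert summary_to_state(summary) == state  # the rules are exactly invertible
```

Because a single generated summary encodes the full state, one decoding pass recovers all slots at once, which is consistent with the training- and inference-time speedup claimed in the abstract.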
Related papers
- Enhancing Visual Dialog State Tracking through Iterative Object-Entity Alignment in Multi-Round Conversations [3.784841749866846]
We introduce the Multi-round Dialogue State Tracking model (MDST).
MDST captures each round of dialogue history, constructing internal dialogue state representations defined as 2-tuples of vision-language representations.
Experimental results on the VisDial v1.0 dataset demonstrate that MDST achieves new state-of-the-art performance in the generative setting.
arXiv Detail & Related papers (2024-08-13T08:36:15Z) - DIONYSUS: A Pre-trained Model for Low-Resource Dialogue Summarization [127.714919036388]
DIONYSUS is a pre-trained encoder-decoder model for summarizing dialogues in any new domain.
Our experiments show that DIONYSUS outperforms existing methods on six datasets.
arXiv Detail & Related papers (2022-12-20T06:21:21Z) - Manual-Guided Dialogue for Flexible Conversational Agents [84.46598430403886]
How to build and use dialogue data efficiently, and how to deploy models across domains at scale, are critical issues in building a task-oriented dialogue system.
We propose a novel manual-guided dialogue scheme, where the agent learns the tasks from both dialogue and manuals.
Our proposed scheme reduces the dependence of dialogue models on fine-grained domain ontology, and makes them more flexible to adapt to various domains.
arXiv Detail & Related papers (2022-08-16T08:21:12Z) - HybriDialogue: An Information-Seeking Dialogue Dataset Grounded on
Tabular and Textual Data [87.67278915655712]
We present a new dialogue dataset, HybriDialogue, which consists of crowdsourced natural conversations grounded on both Wikipedia text and tables.
The conversations are created by decomposing complex multi-hop questions into simple, realistic multi-turn dialogue interactions.
arXiv Detail & Related papers (2022-04-28T00:52:16Z) - Back to the Future: Bidirectional Information Decoupling Network for
Multi-turn Dialogue Modeling [80.51094098799736]
We propose the Bidirectional Information Decoupling Network (BiDeN) as a universal dialogue encoder.
BiDeN explicitly incorporates both the past and future contexts and can be generalized to a wide range of dialogue-related tasks.
Experimental results on datasets for different downstream tasks demonstrate the universality and effectiveness of BiDeN.
arXiv Detail & Related papers (2022-04-18T03:51:46Z) - In-Context Learning for Few-Shot Dialogue State Tracking [55.91832381893181]
We propose an in-context (IC) learning framework for few-shot dialogue state tracking (DST).
A large pre-trained language model (LM) takes a test instance and a few annotated examples as input, and directly decodes the dialogue states without any parameter updates.
This makes the LM more flexible and scalable than prior few-shot DST work when adapting to new domains and scenarios; a toy sketch of this setup follows this entry.
arXiv Detail & Related papers (2022-03-16T11:58:24Z) - Dialogue State Tracking with Multi-Level Fusion of Predicted Dialogue
States and Conversations [2.6529642559155944]
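As a rough illustration of the in-context setup above, the snippet below assembles a prompt from a few annotated exemplars and leaves decoding to a frozen LM. The prompt layout, the slot-value serialization, and the frozen_lm.generate call are hypothetical placeholders, not the paper's actual exemplar retrieval or prompt design.

```python
# Hedged sketch: a frozen LM sees (dialogue, state) exemplars plus the test
# dialogue and decodes the state as plain text, with no parameter updates.
def build_prompt(exemplars, test_dialogue):
    parts = [f"Dialogue: {d}\nState: {s}" for d, s in exemplars]
    parts.append(f"Dialogue: {test_dialogue}\nState:")
    return "\n\n".join(parts)

exemplars = [
    ("User: I need a cheap hotel in the north.",
     "hotel-pricerange=cheap; hotel-area=north"),
    ("User: Find an Italian place in the centre.",
     "restaurant-food=italian; restaurant-area=centre"),
]
prompt = build_prompt(exemplars, "User: Book an expensive restaurant downtown.")
# state_text = frozen_lm.generate(prompt)  # placeholder call to any frozen LM
```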
- Dialogue State Tracking with Multi-Level Fusion of Predicted Dialogue States and Conversations [2.6529642559155944]
We propose the Dialogue State Tracking with Multi-Level Fusion of Predicted Dialogue States and Conversations network.
This model extracts information from each dialogue turn by modeling interactions among the turn's utterance, the corresponding previous dialogue states, and the dialogue slots.
arXiv Detail & Related papers (2021-07-12T02:30:30Z) - CREDIT: Coarse-to-Fine Sequence Generation for Dialogue State Tracking [44.38388988238695]
A dialogue state tracker aims to accurately find a compact representation of the current dialogue state.
We employ a structured state representation and cast dialogue state tracking as a sequence generation problem.
Experiments demonstrate that our tracker achieves encouraging joint goal accuracy for the five domains in the MultiWOZ 2.0 and MultiWOZ 2.1 datasets.
arXiv Detail & Related papers (2020-09-22T10:27:18Z) - Rethinking Dialogue State Tracking with Reasoning [76.0991910623001]
This paper proposes to track dialogue states gradually, reasoning over dialogue turns with the help of back-end data.
Empirical results demonstrate that our method significantly outperforms the state-of-the-art methods by 38.6% in terms of joint belief accuracy for MultiWOZ 2.1.
arXiv Detail & Related papers (2020-05-27T02:05:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.