A Simple But Effective Approach to n-shot Task-Oriented Dialogue
Augmentation
- URL: http://arxiv.org/abs/2103.00293v2
- Date: Tue, 2 Mar 2021 19:07:10 GMT
- Title: A Simple But Effective Approach to n-shot Task-Oriented Dialogue
Augmentation
- Authors: Taha Aksu and Nancy F. Chen and Min-Yen Kan and Zhengyuan Liu
- Abstract summary: We introduce a framework that creates synthetic task-oriented dialogues in a fully automatic manner.
Our framework uses the simple idea that each turn-pair in a task-oriented dialogue has a certain function.
We observe significant improvements in the fine-tuning scenarios in several domains.
- Score: 32.43362825854633
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The collection and annotation of task-oriented conversational data
is a costly and time-consuming process. Many augmentation techniques have been
proposed to improve the performance of state-of-the-art (SOTA) systems in new
domains that lack the necessary amount of data for training. However, these
augmentation techniques (e.g., paraphrasing) themselves require a moderate
amount of data, since they rely on learning-based approaches. This makes using SOTA systems
in emerging low-resource domains infeasible. To tackle this problem, we
introduce a framework that creates synthetic task-oriented dialogues in a
fully automatic manner and operates with inputs as small as a few
dialogues. Our framework uses the simple idea that each turn-pair in a
task-oriented dialogue serves a certain function, and exploits this idea to
mix turn-pairs, creating new dialogues. We evaluate our framework within a low-resource
setting by integrating it with the SOTA model TRADE on the dialogue state
tracking task and observe significant improvements in the fine-tuning scenarios
in several domains. We conclude that this end-to-end dialogue augmentation
framework can be a crucial tool for natural language understanding performance
in emerging task-oriented dialogue domains.
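The core idea of the framework, that every turn-pair plays a function and that turn-pairs with the same function can be recombined across dialogues to form new ones, can be sketched roughly as follows. This is an illustrative toy under stated assumptions, not the authors' implementation: the function labels, the data layout, and the skeleton-based mixing strategy are all assumptions made for the sketch.

```python
import random

def mix_dialogues(dialogues, num_synthetic, seed=0):
    """Create synthetic dialogues by recombining turn-pairs that share a function.

    `dialogues` is a list of dialogues; each dialogue is a list of
    (function_label, user_utterance, system_utterance) tuples.
    The labels and grouping scheme here are hypothetical.
    """
    rng = random.Random(seed)

    # Pool all observed turn-pairs by the function they serve.
    pools = {}
    for dialogue in dialogues:
        for pair in dialogue:
            pools.setdefault(pair[0], []).append(pair)

    synthetic = []
    for _ in range(num_synthetic):
        # Keep a real dialogue's sequence of functions as a skeleton,
        # then fill each slot with a random turn-pair of that function.
        skeleton = rng.choice(dialogues)
        synthetic.append([rng.choice(pools[label]) for (label, _, _) in skeleton])
    return synthetic

# Two tiny seed dialogues (invented examples) are enough to start mixing.
seed_dialogues = [
    [("greet", "Hi, I need a hotel.", "Sure, which area?"),
     ("inform", "In the centre, please.", "The Gonville Hotel is central.")],
    [("greet", "Hello, looking for a restaurant.", "Of course, any cuisine?"),
     ("inform", "Italian food.", "Clowns Cafe serves Italian.")],
]
new_dialogues = mix_dialogues(seed_dialogues, num_synthetic=3)
```

Because the mixer only samples and recombines existing annotated turn-pairs, it needs no trained model, which is what lets it operate on inputs as small as a few dialogues.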
Related papers
- Unsupervised Extraction of Dialogue Policies from Conversations [3.102576158218633]
We show how Large Language Models can be instrumental in extracting dialogue policies from datasets.
We then propose a novel method for generating dialogue policies utilizing a controllable and interpretable graph-based methodology.
arXiv Detail & Related papers (2024-06-21T14:57:25Z)
- Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models [52.24756457516834]
We propose a novel "Self-Explanation" prompting strategy to enhance the comprehension abilities of Large Language Models (LLMs)
This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks.
Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts.
arXiv Detail & Related papers (2023-09-22T15:41:34Z)
- Zero-Shot Generalizable End-to-End Task-Oriented Dialog System using Context Summarization and Domain Schema [2.7178968279054936]
State-of-the-art approaches in task-oriented dialog systems formulate the problem as a conditional sequence generation task.
This requires labeled training data for each new domain or task.
We introduce a novel Zero-Shot generalizable end-to-end Task-oriented Dialog system, ZS-ToD.
arXiv Detail & Related papers (2023-03-28T18:56:31Z)
- DiSTRICT: Dialogue State Tracking with Retriever Driven In-Context Tuning [7.5700317050237365]
We propose DiSTRICT, a generalizable in-context tuning approach for Dialogue State Tracking (DST).
DiSTRICT retrieves highly relevant training examples for a given dialogue to fine-tune the model without any hand-crafted templates.
Experiments with the MultiWOZ benchmark datasets show that DiSTRICT outperforms existing approaches in various zero-shot and few-shot settings.
arXiv Detail & Related papers (2022-12-06T09:40:15Z)
- KETOD: Knowledge-Enriched Task-Oriented Dialogue [77.59814785157877]
Existing studies in dialogue system research mostly treat task-oriented dialogue and chit-chat as separate domains.
We investigate how task-oriented dialogue and knowledge-grounded chit-chat can be effectively integrated into a single model.
arXiv Detail & Related papers (2022-05-11T16:01:03Z)
- Alexa Conversations: An Extensible Data-driven Approach for Building Task-oriented Dialogue Systems [21.98135285833616]
Traditional goal-oriented dialogue systems rely on various components such as natural language understanding, dialogue state tracking, policy learning and response generation.
We present a new approach for building goal-oriented dialogue systems that is scalable, as well as data efficient.
arXiv Detail & Related papers (2021-04-19T07:09:27Z)
- Topic-Oriented Spoken Dialogue Summarization for Customer Service with Saliency-Aware Topic Modeling [61.67321200994117]
In a customer service system, dialogue summarization can boost service efficiency by creating summaries for long spoken dialogues.
In this work, we focus on topic-oriented dialogue summarization, which generates highly abstractive summaries.
We propose a novel topic-augmented two-stage dialogue summarizer (TDS) jointly with a saliency-aware neural topic model (SATM) for topic-oriented summarization of customer service dialogues.
arXiv Detail & Related papers (2020-12-14T07:50:25Z)
- Modeling Long Context for Task-Oriented Dialogue State Generation [51.044300192906995]
We propose a multi-task learning model with a simple yet effective utterance tagging technique and a bidirectional language model.
Our approaches attempt to solve the problem that the performance of the baseline significantly drops when the input dialogue context sequence is long.
In our experiments, our proposed model achieves a 7.03% relative improvement over the baseline, establishing a new state-of-the-art joint goal accuracy of 52.04% on the MultiWOZ 2.0 dataset.
arXiv Detail & Related papers (2020-04-29T11:02:25Z)
- TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue [113.45485470103762]
In this work, we unify nine human-human and multi-turn task-oriented dialogue datasets for language modeling.
To better model dialogue behavior during pre-training, we incorporate user and system tokens into the masked language modeling.
arXiv Detail & Related papers (2020-04-15T04:09:05Z)
- Recent Advances and Challenges in Task-oriented Dialog System [63.82055978899631]
Task-oriented dialog systems are attracting more and more attention in academic and industrial communities.
We discuss three critical topics for task-oriented dialog systems: (1) improving data efficiency to facilitate dialog modeling in low-resource settings, (2) modeling multi-turn dynamics for dialog policy learning, and (3) integrating domain knowledge into the dialog model.
arXiv Detail & Related papers (2020-03-17T01:34:56Z)