Semi-Supervised Knowledge-Grounded Pre-training for Task-Oriented Dialog Systems
- URL: http://arxiv.org/abs/2210.08873v1
- Date: Mon, 17 Oct 2022 09:10:03 GMT
- Title: Semi-Supervised Knowledge-Grounded Pre-training for Task-Oriented Dialog Systems
- Authors: Weihao Zeng, Keqing He, Zechen Wang, Dayuan Fu, Guanting Dong, Ruotong Geng, Pei Wang, Jingang Wang, Chaobo Sun, Wei Wu, Weiran Xu
- Abstract summary: We present our models for Track 2 of the SereTOD 2022 challenge, the first challenge on building semi-supervised and reinforced TOD systems.
We build a knowledge-grounded dialog model that takes the dialog history and a local KB as input and predicts the system response.
We perform semi-supervised pre-training on both labeled and unlabeled data.
- Score: 25.164042288343683
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in neural approaches have greatly improved task-oriented dialogue (TOD) systems, which assist users in accomplishing their goals. However, such systems rely on costly manually labeled dialogs that are not available in practical scenarios. In this paper, we present our models for Track 2 of the SereTOD 2022 challenge, the first challenge on building semi-supervised and reinforced TOD systems over MobileCS, a large-scale real-world Chinese TOD dataset. We build a knowledge-grounded dialog model that takes the dialog history and a local KB as input and predicts the system response, and we perform semi-supervised pre-training on both labeled and unlabeled data. Our system achieves first place in both the automatic evaluation and human interaction, with notably higher BLEU (+7.64) and Success (+13.6%) than the second-place system.
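As a minimal sketch of the knowledge-grounded input formulation the abstract describes, the dialog history and local KB entries can be flattened into a single sequence for a seq2seq response generator. The special tokens and field names below are illustrative assumptions, not the authors' exact scheme.

```python
# Hypothetical serialization of dialog history + local KB into one model
# input string, as a knowledge-grounded seq2seq TOD model might consume.
# Token format ([user], [kb], [system]) is an assumption for illustration.

def serialize_turn(history, kb_entries):
    """Flatten dialog history and local KB into a single input sequence."""
    history_part = " ".join(
        f"[{speaker}] {utterance}" for speaker, utterance in history
    )
    kb_part = " ".join(
        f"[kb] {key} = {value}" for key, value in kb_entries.items()
    )
    # The trailing [system] token marks where the response is generated.
    return f"{history_part} {kb_part} [system]"

example = serialize_turn(
    [("user", "I want to check my data plan.")],
    {"plan_name": "20GB monthly"},
)
print(example)
```

The same serialization could feed both labeled and unlabeled dialogs during semi-supervised pre-training, with supervision applied only where annotations exist.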
Related papers
- Natural Language Task-Oriented Dialog System 2.0 [2.757798192967912]
Task-oriented dialog (TOD) systems play a crucial role in facilitating efficient interactions between users and machines.
These systems traditionally rely on manually annotated metadata, such as dialog states and policy annotations.
We introduce Natural Language Task Oriented Dialog System (NL-ToD), a novel model that removes the dependency on manually annotated turn-wise data.
arXiv Detail & Related papers (2024-07-21T04:52:38Z)
- Enhancing Large Language Model Induced Task-Oriented Dialogue Systems Through Look-Forward Motivated Goals [76.69419538047813]
The ProToD approach anticipates future dialogue actions and incorporates a goal-oriented reward signal to enhance ToD systems.
We present a novel evaluation method that assesses ToD systems based on goal-driven dialogue simulations.
Empirical experiments conducted on the MultiWoZ 2.1 dataset demonstrate that our model can achieve superior performance using only 10% of the data.
arXiv Detail & Related papers (2023-09-16T10:56:00Z)
- Enhancing Performance on Seen and Unseen Dialogue Scenarios using Retrieval-Augmented End-to-End Task-Oriented System [89.40590076430297]
This work gives TOD systems more flexibility through a simple cache.
We train end-to-end TOD models that can refer to and ground on both dialogue history and retrieved information during TOD generation.
Experiments demonstrate the superior performance of our framework, with a notable improvement in non-empty joint goal accuracy by 6.7% compared to strong baselines.
arXiv Detail & Related papers (2023-08-16T06:52:10Z)
- Knowledge-Retrieval Task-Oriented Dialog Systems with Semi-Supervision [22.249113574918034]
Most existing task-oriented dialog (TOD) systems track dialog states in terms of slots and values and use them to query a database to get relevant knowledge to generate responses.
In real-life applications, user utterances are noisier, making it more difficult to accurately track dialog states and retrieve relevant knowledge.
We propose a retrieval-based method to enhance knowledge selection in TOD systems, which outperforms the traditional database query method on real-life dialogs.
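To make the retrieval-based idea concrete, here is a deliberately simple stand-in: rank candidate knowledge entries by token overlap with the dialog context and keep the best match. The scoring function is an assumption for illustration; the paper's actual retriever is not specified here.

```python
# Illustrative retrieval-based knowledge selection: score each knowledge
# entry by lexical overlap with the dialog context and return the top-k.
# A real system would use a learned retriever; this is a toy stand-in.

def select_knowledge(context, entries, top_k=1):
    ctx_tokens = set(context.lower().split())
    scored = sorted(
        entries,
        key=lambda e: len(ctx_tokens & set(e.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

docs = [
    "refund policy for prepaid plans",
    "data plan upgrade options and pricing",
]
best = select_knowledge("how do I upgrade my data plan", docs)
print(best)
```

Unlike a database query keyed on predicted slot values, soft retrieval of this kind degrades gracefully when the dialog state is tracked imperfectly.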
arXiv Detail & Related papers (2023-05-22T16:29:20Z)
- Zero-Shot Generalizable End-to-End Task-Oriented Dialog System using Context Summarization and Domain Schema [2.7178968279054936]
State-of-the-art approaches in task-oriented dialog systems formulate the problem as a conditional sequence generation task.
This requires labeled training data for each new domain or task.
We introduce a novel Zero-Shot generalizable end-to-end Task-oriented Dialog system, ZS-ToD.
arXiv Detail & Related papers (2023-03-28T18:56:31Z)
- KETOD: Knowledge-Enriched Task-Oriented Dialogue [77.59814785157877]
Existing studies in dialogue system research mostly treat task-oriented dialogue and chit-chat as separate domains.
We investigate how task-oriented dialogue and knowledge-grounded chit-chat can be effectively integrated into a single model.
arXiv Detail & Related papers (2022-05-11T16:01:03Z)
- Quick Starting Dialog Systems with Paraphrase Generation [0.0]
We propose a method to reduce the cost and effort of creating new conversational agents by artificially generating more data from existing examples.
Our proposed approach can kick-start a dialog system with little human effort and raises its performance to a level sufficient for actual interactions with real end-users.
arXiv Detail & Related papers (2022-04-06T02:35:59Z)
- EVA2.0: Investigating Open-Domain Chinese Dialogue Systems with Large-Scale Pre-Training [73.98154158068134]
We propose EVA2.0, a large-scale pre-trained open-domain Chinese dialogue model with 2.8 billion parameters, and will make our models and code publicly available.
arXiv Detail & Related papers (2022-03-17T13:33:17Z)
- UniDS: A Unified Dialogue System for Chit-Chat and Task-oriented Dialogues [59.499965460525694]
We propose a unified dialogue system (UniDS) with the two aforementioned skills.
We design a unified dialogue data schema compatible with both chit-chat and task-oriented dialogues.
We train UniDS with mixed dialogue data from a pretrained chit-chat dialogue model.
arXiv Detail & Related papers (2021-10-15T11:56:47Z)
- MinTL: Minimalist Transfer Learning for Task-Oriented Dialogue Systems [75.43457658815943]
We propose Minimalist Transfer Learning (MinTL) to simplify the system design process of task-oriented dialogue systems.
MinTL is a simple yet effective transfer learning framework, which allows us to plug-and-play pre-trained seq2seq models.
We instantiate our learning framework with two pre-trained backbones: T5 and BART, and evaluate them on MultiWOZ.
arXiv Detail & Related papers (2020-09-25T02:19:13Z)
- Hierarchical Context Enhanced Multi-Domain Dialogue System for Multi-domain Task Completion [17.66372217976539]
This paper describes our submitted solution, the Hierarchical Context Enhanced Dialogue System (HCEDS).
The main motivation of our system is to comprehensively explore the potential of hierarchical context for understanding complex dialogues.
Results on the leaderboard show that our system achieves first place in automatic evaluation and second place in human evaluation.
arXiv Detail & Related papers (2020-03-03T05:10:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences.