SYNERGY: Building Task Bots at Scale Using Symbolic Knowledge and
Machine Teaching
- URL: http://arxiv.org/abs/2110.11514v1
- Date: Thu, 21 Oct 2021 23:13:04 GMT
- Title: SYNERGY: Building Task Bots at Scale Using Symbolic Knowledge and
Machine Teaching
- Authors: Baolin Peng, Chunyuan Li, Zhu Zhang, Jinchao Li, Chenguang Zhu,
Jianfeng Gao
- Abstract summary: SYNERGY is a hybrid learning framework where a task bot is developed in two steps.
A pre-trained neural dialog model, SOLOIST, is fine-tuned on the simulated dialogs to build a bot for the task.
The fine-tuned neural dialog model is continually refined with a handful of real task-specific dialogs via machine teaching.
- Score: 75.87418236410296
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we explore the use of symbolic knowledge and machine teaching
to reduce human data labeling efforts in building neural task bots. We propose
SYNERGY, a hybrid learning framework where a task bot is developed in two
steps: (i) Symbolic knowledge to neural networks: Large amounts of simulated
dialog sessions are generated based on task-specific symbolic knowledge which
is represented as a task schema consisting of dialog flows and task-oriented
databases. Then a pre-trained neural dialog model, SOLOIST, is fine-tuned on
the simulated dialogs to build a bot for the task. (ii) Neural learning: The
fine-tuned neural dialog model is continually refined with a handful of real
task-specific dialogs via machine teaching, where training samples are
generated by human teachers interacting with the task bot. We validate SYNERGY
on four dialog tasks. Experimental results show that SYNERGY maps task-specific
knowledge into neural dialog models, achieving greater diversity and coverage of
dialog flows, and continually improves model performance with machine teaching,
thus demonstrating strong synergistic effects of symbolic knowledge and machine
teaching.
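The two-step pipeline described above can be sketched as follows. This is a minimal illustration of step (i), symbolic knowledge to neural networks: a task schema (dialog flows plus a task-oriented database) is expanded into simulated dialog sessions. The schema contents, templates, and function names below are hypothetical, not taken from the paper; step (ii), fine-tuning SOLOIST and refining it via machine teaching, is only indicated in a comment.

```python
import random

# Hypothetical task schema: dialog flows plus a task-oriented database,
# mirroring SYNERGY's symbolic-knowledge representation (illustrative only).
SCHEMA = {
    "flows": [
        ["greet", "request_cuisine", "inform_restaurant", "goodbye"],
        ["greet", "request_area", "request_cuisine", "inform_restaurant", "goodbye"],
    ],
    "database": [
        {"name": "Curry House", "cuisine": "indian", "area": "north"},
        {"name": "Pasta Place", "cuisine": "italian", "area": "centre"},
    ],
}

# Surface templates for each dialog act (also illustrative).
TEMPLATES = {
    "greet": "Hello, how can I help you?",
    "request_cuisine": "What kind of food would you like?",
    "request_area": "Which part of town?",
    "inform_restaurant": "{name} serves {cuisine} food in the {area}.",
    "goodbye": "Goodbye!",
}

def simulate_dialogs(schema, n, seed=0):
    """Step (i): expand dialog flows against the database into simulated sessions."""
    rng = random.Random(seed)
    dialogs = []
    for _ in range(n):
        flow = rng.choice(schema["flows"])          # pick a dialog flow
        entity = rng.choice(schema["database"])     # ground it in a database entry
        turns = [TEMPLATES[act].format(**entity) for act in flow]
        dialogs.append(turns)
    return dialogs

# Step (ii) would fine-tune a pre-trained dialog model (SOLOIST) on these
# simulated dialogs, then refine it with real dialogs via machine teaching.
dialogs = simulate_dialogs(SCHEMA, n=100)
```

Even this toy generator shows why the approach scales: every flow-by-entity combination yields a distinct session, so a compact schema covers many dialog trajectories without human labeling.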
Related papers
- KETOD: Knowledge-Enriched Task-Oriented Dialogue [77.59814785157877]
Existing studies in dialogue system research mostly treat task-oriented dialogue and chit-chat as separate domains.
We investigate how task-oriented dialogue and knowledge-grounded chit-chat can be effectively integrated into a single model.
arXiv Detail & Related papers (2022-05-11T16:01:03Z)
- Few-Shot Bot: Prompt-Based Learning for Dialogue Systems [58.27337673451943]
Learning to converse using only a few examples is a great challenge in conversational AI.
The current best conversational models are either good chit-chatters (e.g., BlenderBot) or goal-oriented systems (e.g., MinTL).
We propose prompt-based few-shot learning which does not require gradient-based fine-tuning but instead uses a few examples as the only source of learning.
arXiv Detail & Related papers (2021-10-15T14:36:45Z)
- Every time I fire a conversational designer, the performance of the dialog system goes down [0.07696728525672149]
We investigate how the use of explicit domain knowledge of conversational designers affects the performance of neural-based dialogue systems.
We propose the Conversational-Logic-Injection-in-Neural-Network system (CLINN) where explicit knowledge is coded in semi-logical rules.
arXiv Detail & Related papers (2021-09-27T13:05:31Z)
- Reasoning in Dialog: Improving Response Generation by Context Reading Comprehension [49.92173751203827]
In multi-turn dialog, utterances do not always take the full form of sentences.
We propose to improve the response generation performance by examining the model's ability to answer a reading comprehension question.
arXiv Detail & Related papers (2020-12-14T10:58:01Z)
- On Task-Level Dialogue Composition of Generative Transformer Model [9.751234480029765]
We study the effect of training on human-human task-oriented dialogues on the ability of Transformer generative models to compose multiple tasks.
To that end, we propose and explore two solutions: (1) creating synthetic multiple task dialogue data for training from human-human single task dialogue and (2) forcing the encoder representation to be invariant to single and multiple task dialogues using an auxiliary loss.
arXiv Detail & Related papers (2020-10-09T22:10:03Z)
- SOLOIST: Building Task Bots at Scale with Transfer Learning and Machine Teaching [81.45928589522032]
We parameterize modular task-oriented dialog systems using a Transformer-based auto-regressive language model.
We pre-train, on heterogeneous dialog corpora, a task-grounded response generation model.
Experiments show that SOLOIST creates new state-of-the-art on well-studied task-oriented dialog benchmarks.
arXiv Detail & Related papers (2020-05-11T17:58:34Z) - TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented
Dialogue [113.45485470103762]
In this work, we unify nine human-human and multi-turn task-oriented dialogue datasets for language modeling.
To better model dialogue behavior during pre-training, we incorporate user and system tokens into the masked language modeling.
arXiv Detail & Related papers (2020-04-15T04:09:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.