Diversifying Task-oriented Dialogue Response Generation with Prototype
Guided Paraphrasing
- URL: http://arxiv.org/abs/2008.03391v1
- Date: Fri, 7 Aug 2020 22:25:36 GMT
- Title: Diversifying Task-oriented Dialogue Response Generation with Prototype
Guided Paraphrasing
- Authors: Phillip Lippe, Pengjie Ren, Hinda Haned, Bart Voorn, and Maarten de
Rijke
- Abstract summary: Existing methods for Dialogue Response Generation (DRG) in Task-oriented Dialogue Systems (TDSs) can be grouped into two categories: template-based and corpus-based.
We propose a prototype-based, paraphrasing neural network, called P2-Net, which aims to enhance the quality of responses in terms of both precision and diversity.
- Score: 52.71007876803418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing methods for Dialogue Response Generation (DRG) in Task-oriented
Dialogue Systems (TDSs) can be grouped into two categories: template-based and
corpus-based. The former prepare a collection of response templates in advance
and fill the slots with system actions to produce system responses at runtime.
The latter generate system responses token by token by taking system actions
into account. While template-based DRG provides high precision and highly
predictable responses, it usually falls short of (neural) corpus-based
approaches in generating diverse and natural responses.
Conversely, while corpus-based DRG methods are able to generate natural
responses, we cannot guarantee their precision or predictability. Moreover, the
diversity of responses produced by today's corpus-based DRG methods is still
limited. We propose to combine the merits of template-based and corpus-based
DRGs by introducing a prototype-based, paraphrasing neural network, called
P2-Net, which aims to enhance the quality of responses in terms of both
precision and diversity. Instead of generating a response from scratch, P2-Net
generates system responses by paraphrasing template-based responses. To
guarantee the precision of responses, P2-Net learns to separate a response into
its semantics, context influence, and paraphrasing noise, and to keep the
semantics unchanged during paraphrasing. To introduce diversity, P2-Net
randomly samples previous conversational utterances as prototypes, from which
the model can then extract speaking style information. We conduct extensive
experiments on the MultiWOZ dataset with both automatic and human evaluations.
The results show that P2-Net achieves a significant improvement in diversity
while preserving the semantics of responses.
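To make the two-stage idea in the abstract concrete, here is a minimal Python sketch of the pipeline: a template-based response is produced by slot filling, a previous utterance is randomly sampled as a style prototype, and a paraphraser rewrites the surface form while keeping the slot semantics fixed. The function names (fill_template, sample_prototype, paraphrase) and the example template are illustrative assumptions, not the paper's actual API; in P2-Net the paraphraser is a neural model trained on MultiWOZ that disentangles semantics, context influence, and paraphrasing noise.

```python
# Hypothetical sketch of prototype-guided paraphrasing; names are illustrative,
# not taken from the P2-Net codebase.
import random

def fill_template(template: str, system_action: dict) -> str:
    """Step 1 (template-based DRG): fill prepared slots with system-action values,
    giving a precise but rigid response."""
    response = template
    for slot, value in system_action.items():
        response = response.replace(f"[{slot}]", value)
    return response

def sample_prototype(dialogue_history: list[str]) -> str:
    """Step 2: randomly sample a previous conversational utterance as a
    style prototype, as the abstract describes."""
    return random.choice(dialogue_history)

def paraphrase(template_response: str, prototype: str) -> str:
    """Step 3: placeholder for the neural paraphraser. The real model would
    separate the response into semantics, context influence, and paraphrasing
    noise, then rewrite it in the prototype's speaking style while keeping the
    semantics (slot values) unchanged. Here we return the input unchanged so
    the sketch stays runnable."""
    _style = prototype  # style features would be extracted here
    return template_response

if __name__ == "__main__":
    history = [
        "Sure, I can help with that!",
        "No problem at all, let me check for you.",
    ]
    action = {"name": "Pizza Hut Fen Ditton", "area": "east"}
    template = "[name] is located in the [area] part of town."
    filled = fill_template(template, action)
    print(paraphrase(filled, sample_prototype(history)))
```

The key design point the sketch tries to convey is the separation of concerns: precision comes from the filled template, while diversity comes from conditioning the rewrite on a sampled prototype rather than generating from scratch.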
Related papers
- UniMS-RAG: A Unified Multi-source Retrieval-Augmented Generation for
Personalized Dialogue Systems [44.893215129952395]
Large Language Models (LLMs) have shown exceptional capabilities in many natural language understanding and generation tasks.
We decompose the use of multiple sources in generating personalized response into three sub-tasks: Knowledge Source Selection, Knowledge Retrieval, and Response Generation.
We propose a novel Unified Multi-Source Retrieval-Augmented Generation system (UniMS-RAG).
arXiv Detail & Related papers (2024-01-24T06:50:20Z)
- Diverse and Faithful Knowledge-Grounded Dialogue Generation via Sequential Posterior Inference [82.28542500317445]
We present an end-to-end learning framework, termed Sequential Posterior Inference (SPI), capable of selecting knowledge and generating dialogues.
Unlike other methods, SPI does not require the inference network or assume a simple geometry of the posterior distribution.
arXiv Detail & Related papers (2023-06-01T21:23:13Z)
- EM Pre-training for Multi-party Dialogue Response Generation [86.25289241604199]
In multi-party dialogues, the addressee of a response utterance should be specified before it is generated.
We propose an Expectation-Maximization (EM) approach that iteratively performs the expectation steps to generate addressee labels.
arXiv Detail & Related papers (2023-05-21T09:22:41Z)
- Reranking Overgenerated Responses for End-to-End Task-Oriented Dialogue Systems [71.33737787564966]
End-to-end (E2E) task-oriented dialogue (ToD) systems are prone to falling into the so-called 'likelihood trap'.
We propose a reranking method which aims to select high-quality items from the lists of responses initially overgenerated by the system.
Our methods improve a state-of-the-art E2E ToD system by 2.4 BLEU, 3.2 ROUGE, and 2.8 METEOR scores, achieving new peak results.
arXiv Detail & Related papers (2022-11-07T15:59:49Z)
- A Template-guided Hybrid Pointer Network for Knowledge-based Task-oriented Dialogue Systems [15.654119998970499]
We propose a template-guided hybrid pointer network for the knowledge-based task-oriented dialogue system.
We design a memory pointer network model with a gating mechanism to fully exploit the semantic correlation between the retrieved answers and the ground-truth response.
arXiv Detail & Related papers (2021-06-10T15:49:26Z)
- Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
- Multi-Domain Dialogue Acts and Response Co-Generation [34.27525685962274]
We propose a neural co-generation model that generates dialogue acts and responses concurrently.
Our model achieves favorable improvements over several state-of-the-art models in both automatic and human evaluations.
arXiv Detail & Related papers (2020-04-26T12:21:17Z)