Prototype-to-Style: Dialogue Generation with Style-Aware Editing on
Retrieval Memory
- URL: http://arxiv.org/abs/2004.02214v1
- Date: Sun, 5 Apr 2020 14:36:15 GMT
- Title: Prototype-to-Style: Dialogue Generation with Style-Aware Editing on
Retrieval Memory
- Authors: Yixuan Su, Yan Wang, Simon Baker, Deng Cai, Xiaojiang Liu, Anna
Korhonen, Nigel Collier
- Abstract summary: We introduce a new prototype-to-style framework to tackle the challenge of stylistic dialogue generation.
The framework uses an Information Retrieval (IR) system and extracts a response prototype from the retrieved response.
A stylistic response generator then takes the prototype and the desired language style as model input to obtain a high-quality and stylistic response.
- Score: 65.98002918470543
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability of a dialog system to express prespecified language style during
conversations has a direct, positive impact on its usability and on user
satisfaction. We introduce a new prototype-to-style (PS) framework to tackle
the challenge of stylistic dialogue generation. The framework uses an
Information Retrieval (IR) system and extracts a response prototype from the
retrieved response. A stylistic response generator then takes the prototype and
the desired language style as model input to obtain a high-quality and
stylistic response. To effectively train the proposed model, we propose a new
style-aware learning objective as well as a de-noising learning strategy.
Results on three benchmark datasets from two languages demonstrate that the
proposed approach significantly outperforms existing baselines in both
in-domain and cross-domain evaluations.
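The abstract above describes a two-step pipeline: retrieve a response, edit it into a prototype, then generate conditioned on the prototype and a style label. A minimal, hypothetical sketch of that flow is below; the masking heuristic, the style-word list, and the token names are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the prototype-to-style pipeline:
# retrieve -> mask style-bearing words into a prototype -> condition on style.

STYLE_WORDS = {"awesome", "dude", "lol"}  # assumed style-bearing vocabulary


def extract_prototype(retrieved_response: str, mask_token: str = "[MASK]") -> str:
    """Mask style-specific words in a retrieved response, leaving a
    content prototype for the stylistic generator to edit."""
    tokens = retrieved_response.lower().split()
    return " ".join(mask_token if t in STYLE_WORDS else t for t in tokens)


def build_generator_input(prototype: str, style: str) -> str:
    """Prefix the prototype with the desired style label, mimicking how a
    conditional generator might receive both inputs."""
    return f"<style={style}> {prototype}"


retrieved = "awesome idea dude let us meet tomorrow"
prototype = extract_prototype(retrieved)
print(prototype)                                  # style words masked out
print(build_generator_input(prototype, "formal"))  # generator input string
```

In the actual model, the generator would be a trained neural network; here the point is only the data flow from retrieved response to prototype to style-conditioned input.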
Related papers
- SimOAP: Improve Coherence and Consistency in Persona-based Dialogue
Generation via Over-sampling and Post-evaluation [54.66399120084227]
Language models trained on large-scale corpora can generate remarkably fluent results in open-domain dialogue.
For the persona-based dialogue generation task, consistency and coherence are great challenges for language models.
A two-stage SimOAP strategy is proposed, i.e., over-sampling and post-evaluation.
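The two-stage strategy summarized above can be sketched as a decode loop: over-sample many candidate responses, then post-evaluate and keep the best. Both the sampler and the scorer below are toy stand-ins, not SimOAP's actual model or evaluators.

```python
# Illustrative two-stage over-sampling + post-evaluation loop.

def sample_candidates(context: str, n: int = 8) -> list:
    # Stand-in for sampling n responses from a dialogue model.
    return [f"response-{i} to '{context}'" for i in range(n)]


def score(context: str, response: str) -> float:
    # Stand-in for coherence + persona-consistency scoring
    # (toy heuristic: prefer responses of similar length to the context).
    return -abs(len(response) - len(context))


def simoap_style_decode(context: str, n: int = 8) -> str:
    candidates = sample_candidates(context, n)               # stage 1: over-sample
    return max(candidates, key=lambda r: score(context, r))  # stage 2: post-evaluate


print(simoap_style_decode("hi there, how are you?"))
```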
arXiv Detail & Related papers (2023-05-18T17:23:00Z)
- ABINet++: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Spotting [121.11880210592497]
We argue that the limited capacity of language models comes from 1) implicit language modeling; 2) unidirectional feature representation; and 3) a language model operating on noisy input.
We propose an autonomous, bidirectional and iterative ABINet++ for scene text spotting.
arXiv Detail & Related papers (2022-11-19T03:50:33Z)
- Context Matters in Semantically Controlled Language Generation for Task-oriented Dialogue Systems [6.1478669848771546]
This work combines information about the dialogue history encoded by pre-trained model with a meaning representation of the current system utterance to realize contextual language generation in task-oriented dialogues.
We utilize the pre-trained multi-context ConveRT model for context representation in a model trained from scratch; and leverage the immediate preceding user utterance for context generation in a model adapted from the pre-trained GPT-2.
arXiv Detail & Related papers (2021-11-28T11:48:02Z) - Stylistic Retrieval-based Dialogue System with Unparallel Training Data [19.777894827625275]
We propose a flexible framework that adapts a generic retrieval-based dialogue system to mimic the language style of a specified persona without any parallel data.
Our approach automatically generates stylized data by learning the persona's use of jargon, and then rewrites generic conversations into stylized ones by incorporating that jargon.
arXiv Detail & Related papers (2021-09-12T09:56:24Z) - Language Model as an Annotator: Exploring DialoGPT for Dialogue
Summarization [29.887562761942114]
We show how DialoGPT, a pre-trained model for conversational response generation, can be developed as an unsupervised dialogue annotator.
We apply DialoGPT to label three types of features on two dialogue summarization datasets, SAMSum and AMI, and employ both pre-trained and non-pre-trained models as our summarizers.
arXiv Detail & Related papers (2021-05-26T13:50:13Z) - StyleDGPT: Stylized Response Generation with Pre-trained Language Models [39.526613595499356]
We introduce a KL loss and a style classifier to steer response generation toward the target style at both the word level and the sentence level.
Our model can significantly outperform state-of-the-art methods in terms of both style consistency and contextual coherence.
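The objective summarized above combines a word-level term (pulling the generator's next-token distribution toward a style language model) with a sentence-level term (a style classifier's judgment). The sketch below illustrates such a weighted combination; the distributions, weights, and function names are toy assumptions rather than StyleDGPT's actual loss.

```python
import math

# Toy combination of a word-level KL term and a sentence-level classifier term.


def kl_divergence(p, q):
    """KL(p || q) between two discrete distributions over the vocabulary."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)


def style_loss(word_probs_model, word_probs_style_lm,
               classifier_style_prob, alpha=0.5, beta=0.5):
    # Word level: distance between the generator's next-token distribution
    # and a style-specific language model's distribution.
    word_term = kl_divergence(word_probs_model, word_probs_style_lm)
    # Sentence level: negative log-probability that a style classifier
    # assigns the generated sentence to the target style.
    sent_term = -math.log(classifier_style_prob)
    return alpha * word_term + beta * sent_term


loss = style_loss([0.7, 0.2, 0.1], [0.5, 0.3, 0.2], classifier_style_prob=0.8)
print(round(loss, 4))
```

In training, both terms would be computed from model outputs and backpropagated; here they are scalars only to show how the two levels of supervision combine.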
arXiv Detail & Related papers (2020-10-06T09:29:50Z) - Stylized Dialogue Response Generation Using Stylized Unpaired Texts [63.69880979112312]
This paper proposes a stylized dialogue generation method that can capture stylistic features embedded in unpaired texts.
Our method can produce dialogue responses that are both coherent to the given context and conform to the target style.
arXiv Detail & Related papers (2020-09-27T01:04:06Z) - Controlling Dialogue Generation with Semantic Exemplars [55.460082747572734]
We present an Exemplar-based Dialogue Generation model, EDGE, that uses the semantic frames present in exemplar responses to guide generation.
We show that controlling dialogue generation based on the semantic frames of exemplars, rather than words in the exemplar itself, improves the coherence of generated responses.
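The idea of conditioning on an exemplar's semantic frames rather than its surface words can be illustrated with a toy frame lookup; the lexicon and frame names below are stand-ins for a real frame-semantic parser, not EDGE's actual components.

```python
# Toy word -> semantic-frame mapping (a real system would use a
# frame-semantic parser rather than a hand-written lexicon).
FRAME_LEXICON = {
    "buy": "Commerce_buy", "purchase": "Commerce_buy",
    "eat": "Ingestion", "dinner": "Ingestion",
}


def frames_of(sentence: str) -> list:
    """Map an exemplar response to its sequence of semantic frames,
    discarding the exemplar's surface wording."""
    return [FRAME_LEXICON[w] for w in sentence.lower().split()
            if w in FRAME_LEXICON]


# The generator would then condition on the frames, not the exemplar's words:
exemplar = "I want to buy dinner"
print(frames_of(exemplar))
```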
arXiv Detail & Related papers (2020-08-20T17:02:37Z) - Modelling Hierarchical Structure between Dialogue Policy and Natural
Language Generator with Option Framework for Task-oriented Dialogue System [49.39150449455407]
HDNO is an option framework for designing latent dialogue acts to avoid designing specific dialogue act representations.
We test HDNO on MultiWoz 2.0 and MultiWoz 2.1, two multi-domain dialogue datasets, in comparison with a word-level E2E model trained with RL, LaRL, and HDSA.
arXiv Detail & Related papers (2020-06-11T20:55:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.