Generate, Delete and Rewrite: A Three-Stage Framework for Improving
Persona Consistency of Dialogue Generation
- URL: http://arxiv.org/abs/2004.07672v4
- Date: Thu, 30 Apr 2020 06:53:44 GMT
- Title: Generate, Delete and Rewrite: A Three-Stage Framework for Improving
Persona Consistency of Dialogue Generation
- Authors: Haoyu Song, Yan Wang, Wei-Nan Zhang, Xiaojiang Liu, Ting Liu
- Abstract summary: Maintaining a consistent personality in conversations is quite natural for human beings, but is still a non-trivial task for machines.
We introduce a three-stage framework that employs a generate-delete-rewrite mechanism to delete inconsistent words from a generated response prototype and rewrite it into a personality-consistent response.
Experiments on the Persona-Chat dataset, with both human and automatic evaluations, show that our approach achieves good performance.
- Score: 39.89370224448933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Maintaining a consistent personality in conversations is quite natural for
human beings, but it is still a non-trivial task for machines. The persona-based
dialogue generation task is thus introduced to tackle the personality
inconsistency problem by incorporating explicit persona text into dialogue
generation models. Despite the success of existing persona-based models at
generating human-like responses, their one-stage decoding framework can hardly
avoid generating persona-inconsistent words. In this work, we introduce a
three-stage framework that employs a generate-delete-rewrite mechanism to
delete inconsistent words from a generated response prototype and further
rewrite it into a personality-consistent one. We carry out evaluations with
both human and automatic metrics. Experiments on the Persona-Chat dataset show
that our approach achieves good performance.
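The generate-delete-rewrite mechanism can be pictured as a three-step pipeline. The sketch below is only a minimal illustration under assumed interfaces: generate, consistency_score, and rewrite are hypothetical stand-ins for the paper's trained generator, matching model, and rewriting decoder, and word-level deletion is approximated by a per-token consistency check.

```python
# A minimal, hypothetical sketch of a generate-delete-rewrite pipeline.
# The callables passed in (generate, consistency_score, rewrite) stand in for
# trained neural components; only the three-stage control flow is shown.

from typing import Callable, List


def delete_inconsistent_words(
    prototype: List[str],
    persona: str,
    consistency_score: Callable[[str, str], float],
    margin: float = 0.1,
) -> List[str]:
    """Stage 2: replace persona-inconsistent words with [MASK] placeholders.

    A word is treated as inconsistent when removing it makes the prototype
    noticeably more consistent with the persona text.
    """
    full_score = consistency_score(" ".join(prototype), persona)
    masked = []
    for i, word in enumerate(prototype):
        rest = " ".join(prototype[:i] + prototype[i + 1:])
        if consistency_score(rest, persona) > full_score + margin:
            masked.append("[MASK]")
        else:
            masked.append(word)
    return masked


def generate_delete_rewrite(
    context: str,
    persona: str,
    generate: Callable[[str, str], str],
    consistency_score: Callable[[str, str], float],
    rewrite: Callable[[List[str], str, str], str],
) -> str:
    prototype = generate(context, persona).split()           # Stage 1: prototype response
    masked = delete_inconsistent_words(prototype, persona,   # Stage 2: delete inconsistencies
                                       consistency_score)
    return rewrite(masked, context, persona)                 # Stage 3: rewrite into a consistent response
```

In the paper itself the three stages are realized with trained neural models; the skeleton above only conveys the data flow between them.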
Related papers
- Self-Directed Turing Test for Large Language Models [56.64615470513102]
The Turing test examines whether AIs can exhibit human-like behaviour in natural language conversations.
Traditional Turing tests adopt a rigid dialogue format where each participant sends only one message at a time.
This paper proposes the Self-Directed Turing Test, which extends the original test with a burst dialogue format.
arXiv Detail & Related papers (2024-08-19T09:57:28Z)
- Persona Extraction Through Semantic Similarity for Emotional Support Conversation Generation [45.21373213960324]
We propose PESS (Persona Extraction through Semantic Similarity), a novel framework that can automatically infer informative and consistent persona from dialogues.
Our experimental results demonstrate that high-quality persona information inferred by PESS is effective in generating emotionally supportive responses.
arXiv Detail & Related papers (2024-03-07T04:33:11Z)
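As a rough illustration of the semantic-similarity idea in the PESS entry above, the sketch below keeps candidate persona sentences that are close to some dialogue utterance and skips near-duplicates. The bag-of-words encoder and the thresholds are toy stand-ins, not the encoder or criteria the paper actually uses.

```python
# Illustrative sketch of similarity-based persona selection. The bag-of-words
# encoder below is a toy stand-in for a learned sentence encoder.

import math
from collections import Counter
from typing import Dict, List


def embed(sentence: str) -> Dict[str, float]:
    """Toy encoder: lower-cased bag-of-words counts."""
    return dict(Counter(sentence.lower().split()))


def cosine(a: Dict[str, float], b: Dict[str, float]) -> float:
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def extract_persona(dialogue: List[str], candidates: List[str],
                    threshold: float = 0.3) -> List[str]:
    """Keep candidate persona sentences supported by at least one dialogue
    utterance, skipping near-duplicates so the persona stays consistent."""
    utt_vecs = [embed(u) for u in dialogue]
    selected: List[str] = []
    for cand in candidates:
        vec = embed(cand)
        supported = any(cosine(vec, u) >= threshold for u in utt_vecs)
        redundant = any(cosine(vec, embed(s)) > 0.9 for s in selected)
        if supported and not redundant:
            selected.append(cand)
    return selected


if __name__ == "__main__":
    history = ["I spend my weekends hiking with my dog",
               "work has been stressful lately"]
    persona_pool = ["I own a dog", "I love hiking", "I am a pilot"]
    print(extract_persona(history, persona_pool))  # -> ['I own a dog', 'I love hiking']
```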
- Controllable Mixed-Initiative Dialogue Generation through Prompting [50.03458333265885]
Mixed-initiative dialogue tasks involve repeated exchanges of information and conversational control.
Agents gain control by generating responses that follow particular dialogue intents or strategies, prescribed by a policy planner.
The standard approach has been to fine-tune pre-trained language models to perform generation conditioned on these intents.
We instead prompt large language models as a drop-in replacement for fine-tuning on conditional generation.
arXiv Detail & Related papers (2023-05-06T23:11:25Z)
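The prompting approach in the mixed-initiative entry above, conditioning generation on an intent chosen by a policy planner rather than fine-tuning for it, reduces to prompt construction. In this hypothetical sketch the call_llm function, the intent names, and the instruction templates are all assumptions, not interfaces from the paper.

```python
# Hedged sketch: prompt an LLM to follow a dialogue intent chosen by a policy
# planner, as a drop-in alternative to intent-conditioned fine-tuning.
# `call_llm` is a hypothetical placeholder for whatever completion API is used.

from typing import Callable, List

INTENT_INSTRUCTIONS = {
    # Hypothetical intent set; a real policy planner would define its own.
    "ask_clarifying_question": "Ask one concise question that clarifies the user's need.",
    "provide_information": "Answer directly using the information in the conversation.",
    "recommend": "Suggest one concrete next step and briefly justify it.",
}


def build_prompt(history: List[str], intent: str) -> str:
    instruction = INTENT_INSTRUCTIONS[intent]
    turns = "\n".join(history)
    return (
        "You are a dialogue agent in a mixed-initiative conversation.\n"
        f"Conversation so far:\n{turns}\n\n"
        f"Dialogue intent: {intent}\n"
        f"Instruction: {instruction}\n"
        "Agent:"
    )


def respond(history: List[str], intent: str,
            call_llm: Callable[[str], str]) -> str:
    return call_llm(build_prompt(history, intent))
```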
- MCP: Self-supervised Pre-training for Personalized Chatbots with Multi-level Contrastive Sampling [18.40883902610959]
We propose a self-supervised learning framework for capturing better representations from users' dialogue history for personalized chatbots.
Specifically, we apply contrastive sampling methods to leverage the supervision signals hidden in the user's dialogue history.
Experimental results on two real-world datasets show a significant improvement in our proposed model MCP compared with the existing methods.
arXiv Detail & Related papers (2022-10-17T05:16:23Z)
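Contrastive learning over a user's dialogue history, as in the MCP entry above, is commonly implemented with an InfoNCE-style objective. The following is a generic sketch of such a loss, not the exact MCP formulation; the encoders and the multi-level sampling strategy are left out.

```python
# Illustrative InfoNCE-style contrastive loss over user-history representations.
# This is a generic sketch of contrastive sampling, not the exact MCP objective.

import torch
import torch.nn.functional as F


def info_nce(anchors: torch.Tensor, positives: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """anchors, positives: (batch, dim) representations of two views of the
    same user's dialogue history; other rows in the batch act as negatives."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature            # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)     # match each anchor to its own positive


if __name__ == "__main__":
    torch.manual_seed(0)
    anchors = torch.randn(8, 128)
    positives = anchors + 0.05 * torch.randn(8, 128)   # slightly perturbed views
    print(float(info_nce(anchors, positives)))
```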
- A Model-Agnostic Data Manipulation Method for Persona-based Dialogue Generation [107.82729587882397]
It is expensive to scale up current persona-based dialogue datasets.
Each data sample in this task is more complex to learn with than conventional dialogue data.
We propose a data manipulation method that is model-agnostic and can be paired with any persona-based dialogue generation model.
arXiv Detail & Related papers (2022-04-21T03:49:54Z)
- Less is More: Learning to Refine Dialogue History for Personalized Dialogue Generation [57.73547958927826]
We propose to refine the user dialogue history on a large scale; on this basis, the model can handle more dialogue history and obtain more accurate persona information.
Specifically, we design an MSP model which consists of three personal information refiners and a personalized response generator.
arXiv Detail & Related papers (2022-04-18T02:02:56Z)
- Unsupervised Enrichment of Persona-grounded Dialog with Background Stories [27.52543925693796]
We equip dialog models with 'background stories' related to a persona by leveraging fictional narratives from existing story datasets.
We perform an unsupervised adaptation of a retrieved story for generating a dialog response using a gradient-based rewriting technique.
Our method can generate responses that are more diverse and are rated as more engaging and human-like by human evaluators.
arXiv Detail & Related papers (2021-06-15T18:20:27Z)
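Gradient-based rewriting, mentioned in the background-stories entry above, can be pictured as relaxing the response into soft tokens and optimizing them against differentiable objectives. Everything in this toy sketch (vocabulary, embeddings, objectives) is a stand-in chosen only to make the mechanics concrete, not the paper's actual procedure.

```python
# Toy sketch of gradient-based rewriting: relax the response to a matrix of
# per-position token logits, then take gradient steps on an objective that
# balances "stay close to the retrieved story" against "move toward the
# dialogue context". Vocabulary, embeddings, and objectives are stand-ins.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB = ["i", "love", "hiking", "mountains", "my", "dog", "weekends", "the"]
EMB = torch.randn(len(VOCAB), 16)   # toy word embeddings


def ids(words):
    return torch.tensor([VOCAB.index(w) for w in words])


def rewrite(story_words, context_words, steps=200, lr=0.1, anchor_weight=1.0):
    story = ids(story_words)
    context_vec = EMB[ids(context_words)].mean(dim=0)

    # Relax the retrieved story into per-position logits over the vocabulary,
    # initialized to peak at the original story tokens.
    init = torch.zeros(len(story_words), len(VOCAB))
    init[torch.arange(len(story_words)), story] = 5.0
    logits = init.clone().requires_grad_(True)

    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        probs = F.softmax(logits, dim=-1)              # soft tokens
        soft_emb = probs @ EMB                         # expected embedding per position
        relevance = F.cosine_similarity(soft_emb.mean(dim=0), context_vec, dim=0)
        stay_close = F.cross_entropy(logits, story)    # keep the draft near the original story
        loss = -relevance + anchor_weight * stay_close
        opt.zero_grad()
        loss.backward()
        opt.step()

    return [VOCAB[i] for i in logits.argmax(dim=-1).tolist()]


if __name__ == "__main__":
    print(rewrite(["i", "love", "the", "mountains"], ["my", "dog", "hiking"]))
```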
- Bilateral Personalized Dialogue Generation with Dynamic Persona-Aware Fusion [3.5433229509828155]
We propose a bilateral personalized dialogue generation (BPDG) method with dynamic persona-aware fusion via multi-task transfer learning.
The experimental results show that the proposed method outperforms several state-of-the-art methods in terms of both automatic and manual evaluations.
arXiv Detail & Related papers (2021-06-15T03:21:19Z)
- Will I Sound Like Me? Improving Persona Consistency in Dialogues through Pragmatic Self-Consciousness [62.55060760615656]
Recent models tackling consistency often train with additional Natural Language Inference (NLI) labels or attach trained extra modules to the generative agent for maintaining consistency.
Inspired by social cognition and pragmatics, we endow existing dialogue agents with public self-consciousness on the fly through an imaginary listener.
Our approach, based on the Rational Speech Acts framework, can make dialogue agents refrain from uttering contradictions.
arXiv Detail & Related papers (2020-04-13T08:16:16Z)
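The Rational Speech Acts view in the last entry can be sketched as reranking: an imagined literal listener estimates how strongly each candidate response points to the agent's own persona rather than a distractor, and the speaker combines that with the base generation score. The word-overlap scoring and the example personas below are toy stand-ins for the learned models.

```python
# Toy sketch of RSA-style self-conscious reranking: the pragmatic speaker
# prefers candidates that an imagined literal listener would attribute to the
# agent's own persona rather than to distractor personas. Word-overlap scoring
# stands in for the learned base speaker and listener.

import math
from typing import List


def overlap(utterance: str, persona: str) -> float:
    u, p = set(utterance.lower().split()), set(persona.lower().split())
    return len(u & p) / max(len(u), 1)


def literal_listener(utterance: str, personas: List[str]) -> List[float]:
    """P_L0(persona | utterance): softmax of overlap scores over candidate personas."""
    scores = [math.exp(overlap(utterance, p)) for p in personas]
    z = sum(scores)
    return [s / z for s in scores]


def pragmatic_rerank(candidates: List[str], base_logprobs: List[float],
                     own_persona: str, distractors: List[str],
                     alpha: float = 4.0) -> str:
    """Pick argmax of log S0(u) + alpha * log P_L0(own persona | u)."""
    worlds = [own_persona] + distractors
    best, best_score = candidates[0], -float("inf")
    for cand, lp in zip(candidates, base_logprobs):
        listener_prob = literal_listener(cand, worlds)[0]  # probability of the agent's own persona
        score = lp + alpha * math.log(listener_prob)
        if score > best_score:
            best, best_score = cand, score
    return best


if __name__ == "__main__":
    persona = "i have two dogs and enjoy hiking"
    distractor = "i am a chef who hates the outdoors"
    candidates = ["i love cooking fancy dinners", "i often go hiking with my dogs"]
    # Prefers the hiking response, which the imagined listener attributes to
    # the agent's own persona rather than the distractor.
    print(pragmatic_rerank(candidates, [-1.0, -1.2], persona, [distractor]))
```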
This list is automatically generated from the titles and abstracts of the papers on this site.