Towards Robust Personalized Dialogue Generation via Order-Insensitive
Representation Regularization
- URL: http://arxiv.org/abs/2305.12782v1
- Date: Mon, 22 May 2023 07:24:29 GMT
- Title: Towards Robust Personalized Dialogue Generation via Order-Insensitive
Representation Regularization
- Authors: Liang Chen, Hongru Wang, Yang Deng, Wai-Chung Kwan, Zezhong Wang and
Kam-Fai Wong
- Abstract summary: We propose a model-agnostic framework, ORder Insensitive Generation (ORIG), to mitigate the order sensitivity problem.
Experiments on the Persona-Chat dataset justify the effectiveness and superiority of our method.
- Score: 20.722098595079945
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating persona-consistent dialogue responses is important for developing
an intelligent conversational agent. Recent works typically fine-tune
large-scale pre-trained models on this task by concatenating persona texts and
dialogue history as a single input sequence to generate the target response.
While simple and effective, our analysis shows that this popular practice is
seriously affected by order sensitivity, where different input orders of persona
sentences significantly impact the quality and consistency of the generated
responses, resulting in severe performance fluctuations (i.e., 29.4% on GPT2 and
83.2% on BART). To mitigate the order sensitivity problem, we propose a
model-agnostic framework, ORder Insensitive Generation (ORIG), which enables
dialogue models to learn robust representations under different persona orders
and improve the consistency of response generation. Experiments on the
Persona-Chat dataset justify the effectiveness and superiority of our method
with two dominant pre-trained models (GPT2 and BART).
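The abstract does not spell out the ORIG training objective, so the following is a minimal sketch of one natural reading of "learning robust representations under different persona orders": feed two random permutations of the same persona sentences through the model and penalize divergence between the two predicted response distributions, on top of the usual language-modeling loss. The helper names (build_input, orig_style_loss), the symmetric-KL term, the weight alpha, and the last-n approximation of the response span are illustrative assumptions, not the paper's actual formulation.
```python
# Minimal sketch of order-insensitive consistency regularization (assumptions noted above).
import random
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def build_input(persona_sents, history, response):
    """Concatenate persona sentences, dialogue history, and the target response."""
    text = " ".join(persona_sents) + " " + history + " " + response
    return tokenizer(text, return_tensors="pt")

def orig_style_loss(persona_sents, history, response, alpha=1.0):
    # Two random orders of the same persona sentences.
    order_a = random.sample(persona_sents, len(persona_sents))
    order_b = random.sample(persona_sents, len(persona_sents))

    enc_a = build_input(order_a, history, response)
    enc_b = build_input(order_b, history, response)

    out_a = model(**enc_a, labels=enc_a["input_ids"])
    out_b = model(**enc_b, labels=enc_b["input_ids"])

    # Compare next-token distributions on the response span only; the last n
    # positions are used here as a simple stand-in for the response tokens.
    n = len(tokenizer(response)["input_ids"])
    log_p_a = F.log_softmax(out_a.logits[:, -n:, :], dim=-1)
    log_p_b = F.log_softmax(out_b.logits[:, -n:, :], dim=-1)

    # Symmetric KL pushes the model toward the same response distribution
    # regardless of persona order.
    consistency = 0.5 * (
        F.kl_div(log_p_a, log_p_b, log_target=True, reduction="batchmean")
        + F.kl_div(log_p_b, log_p_a, log_target=True, reduction="batchmean")
    )
    lm_loss = 0.5 * (out_a.loss + out_b.loss)
    return lm_loss + alpha * consistency
```
Under this kind of training signal, the model is encouraged to produce the same response no matter how the persona sentences happen to be ordered at inference time, which is the behavior the abstract attributes to ORIG.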
Related papers
- Enhancing Personality Recognition in Dialogue by Data Augmentation and Heterogeneous Conversational Graph Networks [30.33718960981521]
Personality recognition is useful for enhancing robots' ability to tailor user-adaptive responses.
One of the challenges in this task is the limited number of speakers in existing dialogue corpora.
arXiv Detail & Related papers (2024-01-11T12:27:33Z)
- PICK: Polished & Informed Candidate Scoring for Knowledge-Grounded Dialogue Systems [59.1250765143521]
Current knowledge-grounded dialogue systems often fail to align the generated responses with human-preferred qualities.
We propose Polished & Informed Candidate Scoring (PICK), a generation re-scoring framework.
We demonstrate the effectiveness of PICK in generating responses that are more faithful while remaining relevant to the dialogue history.
arXiv Detail & Related papers (2023-09-19T08:27:09Z)
- WHAT, WHEN, and HOW to Ground: Designing User Persona-Aware Conversational Agents for Engaging Dialogue [4.328280329592151]
We present a method for building a personalized open-domain dialogue system that addresses the WWH problem for natural response generation in a commercial setting.
The proposed approach involves weighted dataset blending, negative persona information augmentation, and the design of personalized conversation datasets.
Our work effectively balances dialogue fluency and the tendency to ground, while also introducing a response-type label to improve the controllability and explainability of grounded responses.
arXiv Detail & Related papers (2023-06-06T02:28:38Z)
- SimOAP: Improve Coherence and Consistency in Persona-based Dialogue Generation via Over-sampling and Post-evaluation [54.66399120084227]
Language models trained on large-scale corpora can generate remarkably fluent results in open-domain dialogue.
For the persona-based dialogue generation task, however, consistency and coherence remain great challenges for language models.
A two-stage strategy, SimOAP, is proposed: over-sampling followed by post-evaluation (a minimal sketch of this two-stage idea appears after this list).
arXiv Detail & Related papers (2023-05-18T17:23:00Z)
- Controllable Mixed-Initiative Dialogue Generation through Prompting [50.03458333265885]
Mixed-initiative dialogue tasks involve repeated exchanges of information and conversational control.
Agents gain control by generating responses that follow particular dialogue intents or strategies, prescribed by a policy planner.
The standard approach has been to fine-tune pre-trained language models to perform generation conditioned on these intents.
We instead prompt large language models as a drop-in replacement for fine-tuning on conditional generation.
arXiv Detail & Related papers (2023-05-06T23:11:25Z)
- A Model-Agnostic Data Manipulation Method for Persona-based Dialogue Generation [107.82729587882397]
It is expensive to scale up current persona-based dialogue datasets.
Each data sample in this task is also more complex to learn from than conventional dialogue data.
We propose a data manipulation method that is model-agnostic and can be paired with any persona-based dialogue generation model.
arXiv Detail & Related papers (2022-04-21T03:49:54Z)
- Dual Task Framework for Debiasing Persona-grounded Dialogue Dataset [17.403065663306567]
We introduce a data-centric approach to improving persona-conditioned dialogue agents.
Specifically, we augment relevant personas to improve the dialogue dataset and agent by leveraging the primal-dual structure of the two tasks.
Experiments on Persona-Chat show that our approach outperforms pre-trained LMs by 11.7 points in accuracy.
arXiv Detail & Related papers (2022-02-11T04:08:46Z)
- Bilateral Personalized Dialogue Generation with Dynamic Persona-Aware Fusion [3.5433229509828155]
We propose a bilateral personalized dialogue generation (BPDG) method with dynamic persona-aware fusion via multi-task transfer learning.
The experimental results show that the proposed method outperforms several state-of-the-art methods in both automatic and manual evaluations.
arXiv Detail & Related papers (2021-06-15T03:21:19Z)
- Partner Matters! An Empirical Study on Fusing Personas for Personalized Response Selection in Retrieval-Based Chatbots [51.091235903442715]
This paper explores the impact of utilizing personas that describe either the self or the partner speaker on the task of response selection.
Four persona fusion strategies are designed, which assume personas interact with contexts or responses in different ways.
Empirical studies on the Persona-Chat dataset show that partner personas can improve the accuracy of response selection.
arXiv Detail & Related papers (2021-05-19T10:32:30Z)
- You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
Research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework that aims to explicitly model understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)
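The SimOAP entry above describes a two-stage over-sampling and post-evaluation strategy. Below is a minimal sketch of that generate-then-rerank pattern, assuming a generic GPT-2 generator; the score_candidate post-evaluator is a hypothetical placeholder (here, negative length-normalized loss), since the paper's actual coherence and consistency scorers are not described in the summary above.
```python
# Minimal sketch of over-sampling followed by post-evaluation (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def score_candidate(context: str, candidate: str) -> float:
    """Hypothetical post-evaluation score standing in for coherence/consistency metrics."""
    enc = tokenizer(context + " " + candidate, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return -loss.item()

def oversample_and_rerank(context: str, num_candidates: int = 16) -> str:
    # Stage 1: over-sample many candidate responses.
    enc = tokenizer(context, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(
            **enc,
            do_sample=True,
            top_p=0.9,
            max_new_tokens=40,
            num_return_sequences=num_candidates,
            pad_token_id=tokenizer.eos_token_id,
        )
    prompt_len = enc["input_ids"].shape[1]
    candidates = [
        tokenizer.decode(seq[prompt_len:], skip_special_tokens=True)
        for seq in outputs
    ]
    # Stage 2: post-evaluate the candidates and keep the best-scoring one.
    return max(candidates, key=lambda c: score_candidate(context, c))
```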