PersonalityChat: Conversation Distillation for Personalized Dialog
Modeling with Facts and Traits
- URL: http://arxiv.org/abs/2401.07363v1
- Date: Sun, 14 Jan 2024 20:35:33 GMT
- Authors: Ehsan Lotfi, Maxime De Bruyn, Jeska Buhmann, Walter Daelemans
- Abstract summary: PersonalityChat is a synthetic conversational dataset based upon the popular PersonaChat dataset.
We show that the personality trait labels can be used for trait-based personalization of generative dialogue models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The new wave of Large Language Models (LLMs) has offered an efficient
tool to curate sizeable conversational datasets. So far, studies have mainly
focused on task-oriented or generic open-domain dialogs, and have not fully
explored the ability of LLMs to follow complicated prompts. In this work, we
focus on personalization and employ LLMs to curate a dataset that is difficult
and costly to crowd-source: PersonalityChat is a synthetic conversational
dataset based on the popular PersonaChat dataset, but conditioned on both
personas and (Big-5) personality traits. Evaluating models fine-tuned on this
dataset, we show that the personality trait labels can be used for trait-based
personalization of generative dialogue models. We also perform a head-to-head
comparison between PersonalityChat and PersonaChat, and show that training on
the distilled dataset results in more fluent and coherent dialog agents in the
small-model regime.
Related papers
- Doing Personal LAPS: LLM-Augmented Dialogue Construction for Personalized Multi-Session Conversational Search [9.243535345193711]
Our method uses large language models to guide a single human worker in generating personalized dialogues.
LAPS can collect large-scale, human-written, multi-session, and multi-domain conversations.
Our results show that responses generated explicitly using extracted preferences better match the user's actual preferences.
arXiv Detail & Related papers (2024-05-06T13:53:03Z) - PSYDIAL: Personality-based Synthetic Dialogue Generation using Large Language Models [4.283022729693451]
We present a novel end-to-end personality-based synthetic dialogue data generation pipeline, specifically designed to elicit responses from large language models via prompting.
We introduce PSYDIAL, the first Korean dialogue dataset focused on personality-based dialogues, curated using our proposed pipeline.
Experimental results indicate that while pre-trained models and those fine-tuned with a chit-chat dataset struggle to generate responses reflecting personality, models trained with PSYDIAL show significant improvements.
arXiv Detail & Related papers (2024-04-01T05:19:34Z) - LLM vs Small Model? Large Language Model Based Text Augmentation Enhanced
Personality Detection Model [58.887561071010985]
Personality detection aims to identify a person's personality traits from their social media posts.
Most existing methods learn post features directly by fine-tuning the pre-trained language models.
We propose a large language model (LLM) based text augmentation enhanced personality detection model.
arXiv Detail & Related papers (2024-03-12T12:10:18Z) - Faithful Persona-based Conversational Dataset Generation with Large
Language Models [10.506653172302222]
High-quality conversational datasets are essential for developing AI models that can communicate with users.
We propose a Generator-Critic architecture framework to expand the initial dataset, while improving the quality of its conversations.
We release Synthetic-Persona-Chat, consisting of 20k conversations seeded from Persona-Chat.
arXiv Detail & Related papers (2023-12-15T18:23:50Z) - Enhancing Chat Language Models by Scaling High-quality Instructional
Conversations [91.98516412612739]
We first provide a systematically designed, diverse, informative, large-scale dataset of instructional conversations, UltraChat.
Our objective is to capture the breadth of interactions that a human might have with an AI assistant.
We fine-tune a LLaMA model to create a powerful conversational model, UltraLLaMA.
arXiv Detail & Related papers (2023-05-23T16:49:14Z) - Enhancing Personalized Dialogue Generation with Contrastive Latent
Variables: Combining Sparse and Dense Persona [16.90863217077699]
Existing personalized dialogue agents model persona profiles from three resources: sparse or dense persona descriptions and dialogue histories.
We combine the advantages of the three resources to obtain a richer and more accurate persona.
Experimental results on Chinese and English datasets demonstrate our model's superiority in personalization.
arXiv Detail & Related papers (2023-05-19T07:24:27Z) - Weakly Supervised Data Augmentation Through Prompting for Dialogue
Understanding [103.94325597273316]
We present a novel approach that iterates on augmentation quality by applying weakly-supervised filters.
We evaluate our methods on the emotion and act classification tasks in DailyDialog and the intent classification task in Facebook Multilingual Task-Oriented Dialogue.
For DailyDialog specifically, using 10% of the ground truth data we outperform the current state-of-the-art model which uses 100% of the data.
arXiv Detail & Related papers (2022-10-25T17:01:30Z) - DialogZoo: Large-Scale Dialog-Oriented Task Learning [52.18193690394549]
We aim to build a unified foundation model which can solve massive diverse dialogue tasks.
To achieve this goal, we first collect a large-scale well-labeled dialogue dataset from 73 publicly available datasets.
arXiv Detail & Related papers (2022-05-25T11:17:16Z) - A Model-Agnostic Data Manipulation Method for Persona-based Dialogue
Generation [107.82729587882397]
It is expensive to scale up current persona-based dialogue datasets.
Each data sample in this task is more complex to learn with than conventional dialogue data.
We propose a data manipulation method, which is model-agnostic to be packed with any persona-based dialogue generation model.
arXiv Detail & Related papers (2022-04-21T03:49:54Z) - Dual Task Framework for Debiasing Persona-grounded Dialogue Dataset [17.403065663306567]
We introduce a data-centric approach for the task of improving persona-conditioned dialogue agents.
Specifically, we augment relevant personas to improve dialogue dataset/agent, by leveraging the primal-dual structure of the two tasks.
Experiments on Persona-Chat show that our approach outperforms pre-trained LMs by 11.7 points in accuracy.
arXiv Detail & Related papers (2022-02-11T04:08:46Z) - Pchatbot: A Large-Scale Dataset for Personalized Chatbot [49.16746174238548]
We introduce Pchatbot, a large-scale dialogue dataset that contains two subsets collected from Weibo and Judicial forums respectively.
To adapt the raw dataset to dialogue systems, we carefully normalize it through processes such as anonymization.
The scale of Pchatbot is significantly larger than that of existing Chinese datasets, which may benefit data-driven models.
arXiv Detail & Related papers (2020-09-28T12:49:07Z)