Capturing Minds, Not Just Words: Enhancing Role-Playing Language Models with Personality-Indicative Data
- URL: http://arxiv.org/abs/2406.18921v2
- Date: Sat, 29 Jun 2024 05:58:28 GMT
- Title: Capturing Minds, Not Just Words: Enhancing Role-Playing Language Models with Personality-Indicative Data
- Authors: Yiting Ran, Xintao Wang, Rui Xu, Xinfeng Yuan, Jiaqing Liang, Yanghua Xiao, Deqing Yang
- Abstract summary: We propose to enhance role-playing language models (RPLMs) via personality-indicative data.
Specifically, we leverage questions from psychological scales and distill advanced RPAs to generate dialogues that grasp the minds of characters.
Experimental results validate that RPLMs trained with our dataset exhibit advanced role-playing capabilities for both general and personality-related evaluations.
- Score: 58.92110996840019
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Role-playing agents (RPAs) have been a popular application area for large language models (LLMs), attracting significant interest from both industry and academia. While existing RPAs portray characters' knowledge and tones well, they face challenges in capturing their minds, especially for small role-playing language models (RPLMs). In this paper, we propose to enhance RPLMs via personality-indicative data. Specifically, we leverage questions from psychological scales and distill advanced RPAs to generate dialogues that grasp the minds of characters. Experimental results validate that RPLMs trained with our dataset exhibit advanced role-playing capabilities in both general and personality-related evaluations. Code and data are available at \href{https://github.com/alienet1109/RolePersonality}{this URL}.
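The abstract's pipeline (wrap psychological-scale questions as in-character interview turns, then collect an advanced role-playing agent's answers as training dialogues) can be sketched as below. This is a minimal illustration, not the authors' implementation: the scale items are paraphrased BFI-style placeholders, and `rpa_generate` stands in for whatever LLM call a real system would make.

```python
# Hypothetical sketch of a personality-indicative data pipeline:
# scale items -> in-character interview prompts -> distilled dialogues.

# A few Big Five (BFI-style) items, paraphrased as interview questions.
SCALE_ITEMS = [
    ("openness", "Do you see yourself as someone with a vivid imagination?"),
    ("extraversion", "Do you see yourself as outgoing and sociable?"),
    ("neuroticism", "Do you often worry or get nervous easily?"),
]

def build_interview_prompt(character: str, question: str) -> str:
    """Wrap a single scale item as an in-character interview turn."""
    return (
        f"You are role-playing as {character}. Stay in character and "
        f"answer the interviewer in your own voice.\n"
        f"Interviewer: {question}\n{character}:"
    )

def distill_dialogues(character: str, rpa_generate) -> list:
    """Query a role-playing agent (any callable prompt -> str) on each
    scale item and collect (dimension, question, answer) records."""
    records = []
    for dimension, question in SCALE_ITEMS:
        prompt = build_interview_prompt(character, question)
        answer = rpa_generate(prompt)  # in practice, an LLM API call
        records.append({"dimension": dimension,
                        "question": question,
                        "answer": answer})
    return records

# Usage with a stub agent standing in for a real RPA:
stub_agent = lambda prompt: "(in-character reply)"
dataset = distill_dialogues("Hermione Granger", stub_agent)
```

The resulting records pair each personality dimension with an in-character answer, which is the shape of supervision the paper's fine-tuning setup would consume.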
Related papers
- Prompt Framework for Role-playing: Generation and Evaluation [3.2845546753303867]
Large language models (LLMs) have demonstrated remarkable abilities in generating natural language, understanding user instructions, and mimicking human language use.
We introduce a framework that uses prompts to leverage the state-of-the-art (SOTA) LLMs to construct role-playing dialogue datasets and evaluate the role-playing performance.
arXiv Detail & Related papers (2024-06-02T06:09:56Z) - From Persona to Personalization: A Survey on Role-Playing Language Agents [52.783043059715546]
Recent advancements in large language models (LLMs) have fueled the rise of Role-Playing Language Agents (RPLAs).
RPLAs achieve a remarkable sense of human likeness and vivid role-playing performance.
They have catalyzed numerous AI applications, such as emotional companions, interactive video games, personalized assistants and copilots.
arXiv Detail & Related papers (2024-04-28T15:56:41Z) - PSYDIAL: Personality-based Synthetic Dialogue Generation using Large Language Models [4.283022729693451]
We present a novel end-to-end personality-based synthetic dialogue data generation pipeline, specifically designed to elicit responses from large language models via prompting.
We introduce PSYDIAL, the first Korean dialogue dataset focused on personality-based dialogues, curated using our proposed pipeline.
Experimental results indicate that while pre-trained models and those fine-tuned with a chit-chat dataset struggle to generate responses reflecting personality, models trained with PSYDIAL show significant improvements.
arXiv Detail & Related papers (2024-04-01T05:19:34Z) - RoleInteract: Evaluating the Social Interaction of Role-Playing Agents [85.6641890712617]
We introduce the first benchmark designed to evaluate the sociality of role-playing conversational agents at both individual and group levels of social interactions.
The benchmark is constructed from a variety of sources and covers a wide range of 500 characters and over 6,000 question prompts.
We find that agents excelling at the individual level do not necessarily excel at the group level.
arXiv Detail & Related papers (2024-03-20T15:38:36Z) - Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment [62.898963074989766]
We introduce Ditto, a self-alignment method for role-play.
This method creates a role-play training set comprising 4,000 characters, surpassing the scale of currently available datasets by tenfold.
We present the first comprehensive cross-supervision alignment experiment in the role-play domain.
arXiv Detail & Related papers (2024-01-23T03:56:22Z) - RoleCraft-GLM: Advancing Personalized Role-Playing in Large Language Models [6.753588449962107]
RoleCraft-GLM is an innovative framework aimed at enhancing personalized role-playing with Large Language Models (LLMs).
We contribute a unique conversational dataset that shifts from conventional celebrity-centric characters to diverse, non-celebrity personas.
Our approach includes meticulous character development, ensuring dialogues are both realistic and emotionally resonant.
arXiv Detail & Related papers (2023-12-17T17:57:50Z) - InCharacter: Evaluating Personality Fidelity in Role-Playing Agents through Psychological Interviews [57.04431594769461]
This paper introduces a novel perspective to evaluate the personality fidelity of RPAs with psychological scales.
Experiments include various types of RPAs and LLMs, covering 32 distinct characters on 14 widely used psychological scales.
With InCharacter, we show that state-of-the-art RPAs exhibit personalities highly aligned with the human-perceived personalities of the characters, achieving an accuracy of up to 80.7%.
arXiv Detail & Related papers (2023-10-27T08:42:18Z) - Character-LLM: A Trainable Agent for Role-Playing [67.35139167985008]
Large language models (LLMs) can serve as agents that simulate human behaviors.
We introduce Character-LLM, which teaches LLMs to act as specific people such as Beethoven, Queen Cleopatra, Julius Caesar, etc.
arXiv Detail & Related papers (2023-10-16T07:58:56Z)