Crafting Customisable Characters with LLMs: Introducing SimsChat, a Persona-Driven Role-Playing Agent Framework
- URL: http://arxiv.org/abs/2406.17962v3
- Date: Fri, 16 Aug 2024 08:48:26 GMT
- Title: Crafting Customisable Characters with LLMs: Introducing SimsChat, a Persona-Driven Role-Playing Agent Framework
- Authors: Bohao Yang, Dong Liu, Chen Tang, Chenghao Xiao, Kun Zhao, Chao Li, Lin Yuan, Guang Yang, Lanxiao Huang, Chenghua Lin
- Abstract summary: Large Language Models (LLMs) can comprehend human instructions and generate high-quality text.
We introduce the Customisable Conversation Agent Framework, which leverages LLMs to simulate real-world characters.
We present SimsChat, a freely customisable role-playing agent.
- Score: 29.166067413153353
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) demonstrate a remarkable ability to comprehend human instructions and generate high-quality text. This capability allows LLMs to function as agents that can emulate human beings at a more sophisticated level, beyond the mere replication of basic human behaviours. However, there has been little exploration of leveraging LLMs to craft characters from diverse aspects. In this work, we introduce the Customisable Conversation Agent Framework, which leverages LLMs to simulate real-world characters that can be freely customised according to various user preferences. This adaptable framework is beneficial for the design of customisable characters and role-playing agents aligned with human preferences. We propose the SimsConv dataset, which encompasses 68 different customised characters, 1,360 multi-turn role-playing dialogues, and a total of 13,971 interaction dialogues. The characters are created from several real-world elements, such as career, aspiration, trait, and skill. Building upon these foundations, we present SimsChat, a freely customisable role-playing agent. It incorporates diverse real-world scenes and topic-specific character interaction dialogues, thereby simulating characters' life experiences in various scenarios and topic-specific interactions with specific emotions. Experimental results indicate that our proposed framework achieves desirable performance and provides a valuable guideline for the construction of more accurate human simulacra in the future. Our data and code are publicly available at https://github.com/Bernard-Yang/SimsChat.
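The sketch below illustrates, in Python, what a customisable character specification and its role-playing prompt could look like under a framework of this kind. Only the real-world elements named in the abstract (career, aspiration, trait, skill) come from the paper; the Character class, the build_system_prompt helper, the extra fields (name, scene, emotion), and the prompt wording are assumptions for illustration, not the authors' actual schema. The official data and code are at https://github.com/Bernard-Yang/SimsChat.

```python
# Illustrative sketch only: the field names "career", "aspiration", "trait", and
# "skill" are taken from the abstract; everything else (the Character class, the
# prompt wording, and the name/scene/emotion fields) is a hypothetical assumption,
# not the authors' actual schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Character:
    """A freely customisable persona built from real-world elements."""
    name: str
    career: str
    aspiration: str
    traits: List[str] = field(default_factory=list)
    skills: List[str] = field(default_factory=list)


def build_system_prompt(character: Character, scene: str, emotion: str) -> str:
    """Render a role-playing system prompt from a character spec, a real-world
    scene, and the emotion the character should express."""
    return (
        f"You are role-playing as {character.name}, a {character.career} "
        f"whose aspiration is {character.aspiration}. "
        f"Personality traits: {', '.join(character.traits)}. "
        f"Skills: {', '.join(character.skills)}. "
        f"Current scene: {scene}. "
        f"Stay in character and respond with a {emotion} tone."
    )


if __name__ == "__main__":
    ada = Character(
        name="Ada",
        career="violinist",
        aspiration="to compose a symphony",
        traits=["ambitious", "warm-hearted"],
        skills=["violin", "music theory"],
    )
    # The rendered prompt would be supplied to an LLM as the system message
    # before a multi-turn, topic-specific role-playing dialogue.
    print(build_system_prompt(ada, scene="a rainy rehearsal hall", emotion="hopeful"))
```

In this reading, customisation amounts to editing the persona fields and the scene/emotion arguments; the LLM then conditions every turn of the dialogue on the rendered prompt.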
Related papers
- Beyond Profile: From Surface-Level Facts to Deep Persona Simulation in LLMs [50.0874045899661]
We introduce CharacterBot, a model designed to replicate both the linguistic patterns and distinctive thought processes of a character.
Using Lu Xun as a case study, we propose four training tasks derived from his 17 essay collections.
These include a pre-training task focused on mastering external linguistic structures and knowledge, as well as three fine-tuning tasks.
We evaluate CharacterBot on three tasks for linguistic accuracy and opinion comprehension, demonstrating that it significantly outperforms the baselines on our adapted metrics.
arXiv Detail & Related papers (2025-02-18T16:11:54Z) - CharacterBox: Evaluating the Role-Playing Capabilities of LLMs in Text-Based Virtual Worlds [74.02480671181685]
Role-playing is a crucial capability of Large Language Models (LLMs).
Current evaluation methods fall short of adequately capturing the nuanced character traits and behaviors essential for authentic role-playing.
We propose CharacterBox, a simulation sandbox designed to generate situational fine-grained character behavior trajectories.
arXiv Detail & Related papers (2024-12-07T12:09:35Z) - Orca: Enhancing Role-Playing Abilities of Large Language Models by Integrating Personality Traits [4.092862870428798]
We propose Orca, a framework for data processing and training LLMs of custom characters by integrating personality traits.
Orca comprises four stages; in the first, personality trait inference, LLMs are leveraged to infer users' Big Five personality trait reports and scores.
Our experiments demonstrate that our proposed model achieves superior performance on this benchmark.
arXiv Detail & Related papers (2024-11-15T07:35:47Z) - Capturing Minds, Not Just Words: Enhancing Role-Playing Language Models with Personality-Indicative Data [58.92110996840019]
We propose to enhance role-playing language models (RPLMs) via personality-indicative data.
Specifically, we leverage questions from psychological scales and distill advanced RPAs to generate dialogues that grasp the minds of characters.
Experimental results validate that RPLMs trained with our dataset exhibit advanced role-playing capabilities for both general and personality-related evaluations.
arXiv Detail & Related papers (2024-06-27T06:24:00Z) - RoleCraft-GLM: Advancing Personalized Role-Playing in Large Language Models [6.753588449962107]
RoleCraft-GLM is an innovative framework aimed at enhancing personalized role-playing with Large Language Models (LLMs).
We contribute a unique conversational dataset that shifts from conventional celebrity-centric characters to diverse, non-celebrity personas.
Our approach includes meticulous character development, ensuring dialogues are both realistic and emotionally resonant.
arXiv Detail & Related papers (2023-12-17T17:57:50Z) - CharacterGLM: Customizing Chinese Conversational AI Characters with Large Language Models [66.4382820107453]
We present CharacterGLM, a series of models built upon ChatGLM, with model sizes ranging from 6B to 66B parameters.
Our CharacterGLM is designed for generating Character-based Dialogues (CharacterDial), which aims to equip a conversational AI system with character customization for satisfying people's inherent social desires and emotional needs.
arXiv Detail & Related papers (2023-11-28T14:49:23Z) - Character-LLM: A Trainable Agent for Role-Playing [67.35139167985008]
Large language models (LLMs) can be used to serve as agents to simulate human behaviors.
We introduce Character-LLM, which teaches LLMs to act as specific people such as Beethoven, Queen Cleopatra, and Julius Caesar.
arXiv Detail & Related papers (2023-10-16T07:58:56Z) - Tachikuma: Understanding Complex Interactions with Multi-Character and Novel Objects by Large Language Models [67.20964015591262]
We introduce a benchmark named Tachikuma, comprising a Multiple character and novel Object based interaction Estimation task and a supporting dataset.
The dataset captures log data from real-time communications during gameplay, providing diverse, grounded, and complex interactions for further explorations.
We present a simple prompting baseline and evaluate its performance, demonstrating its effectiveness in enhancing interaction understanding.
arXiv Detail & Related papers (2023-07-24T07:40:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.