Large Language Models are Superpositions of All Characters: Attaining
Arbitrary Role-play via Self-Alignment
- URL: http://arxiv.org/abs/2401.12474v1
- Date: Tue, 23 Jan 2024 03:56:22 GMT
- Title: Large Language Models are Superpositions of All Characters: Attaining
Arbitrary Role-play via Self-Alignment
- Authors: Keming Lu, Bowen Yu, Chang Zhou, Jingren Zhou
- Abstract summary: We introduce Ditto, a self-alignment method for role-play.
This method creates a role-play training set comprising 4,000 characters, surpassing the scale of currently available datasets by tenfold.
We present the first comprehensive cross-supervision alignment experiment in the role-play domain.
- Score: 62.898963074989766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Considerable efforts have been invested in augmenting the role-playing
proficiency of open-source large language models (LLMs) by emulating
proprietary counterparts. Nevertheless, we posit that LLMs inherently harbor
role-play capabilities, owing to the extensive knowledge of characters and
potential dialogues ingrained in their vast training corpora. Thus, in this
study, we introduce Ditto, a self-alignment method for role-play. Ditto
capitalizes on character knowledge, encouraging an instruction-following LLM to
simulate role-play dialogues as a variant of reading comprehension. This method
creates a role-play training set comprising 4,000 characters, surpassing currently
available datasets tenfold in the number of roles.
Subsequently, we fine-tune the LLM using this self-generated dataset to augment
its role-playing capabilities. When evaluated on our meticulously constructed and
reproducible role-play benchmark and the role-play subset of MT-Bench, Ditto, across
various parameter scales, consistently maintains role identity and provides
accurate role-specific knowledge in multi-turn role-play conversations. Notably,
it outperforms all open-source role-play baselines, showcasing performance
comparable to advanced proprietary chatbots.
Furthermore, we present the first comprehensive cross-supervision alignment
experiment in the role-play domain, revealing that the knowledge exhibited in
role-play is bounded by the intrinsic capabilities of the LLM, whereas role-play
styles can be readily acquired with the guidance of smaller models. We open-source
related resources at https://github.com/OFA-Sys/Ditto.
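To make the self-alignment loop described above more concrete, the sketch below outlines one plausible implementation of a Ditto-style pipeline: collect character profiles from a knowledge base, have the same instruction-following LLM simulate multi-turn role-play dialogues by treating each profile as reading-comprehension context, and write out the self-generated set for ordinary supervised fine-tuning. The function names (`chat`, `load_character_profiles`), prompt wording, and data layout are illustrative assumptions, not the authors' released code; see the repository linked above for the actual implementation.

```python
# Hedged sketch of a Ditto-style self-alignment data pipeline.
# All names, prompts, and file formats are illustrative assumptions;
# the authors' implementation lives at https://github.com/OFA-Sys/Ditto.
import json


def chat(prompt: str) -> str:
    """Placeholder for a call to the instruction-following LLM being aligned."""
    raise NotImplementedError("Wire this to your own LLM endpoint.")


def load_character_profiles(path: str) -> list[dict]:
    """Assumed format: one JSON object per line with 'name' and 'profile' keys."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]


def simulate_dialogue(character: dict, num_turns: int = 4) -> list[dict]:
    """Self-simulate a multi-turn role-play dialogue grounded in the profile."""
    turns = []
    for _ in range(num_turns):
        # 1) The LLM plays a curious user asking about the character.
        query = chat(
            f"Read this profile of {character['name']}:\n{character['profile']}\n"
            "Write one question a user might ask this character."
        )
        # 2) The same LLM answers in character, using the profile as
        #    reading-comprehension context (the core Ditto idea).
        reply = chat(
            f"You are {character['name']}. Stay in character and answer only with "
            f"knowledge supported by this profile:\n{character['profile']}\n"
            f"User: {query}"
        )
        turns.append({"user": query, "assistant": reply})
    return turns


def build_training_set(profile_path: str, out_path: str) -> None:
    """Write one JSONL sample per character; the file then feeds standard SFT."""
    with open(out_path, "w", encoding="utf-8") as out:
        for character in load_character_profiles(profile_path):
            sample = {
                "character": character["name"],
                "dialogue": simulate_dialogue(character),
            }
            out.write(json.dumps(sample, ensure_ascii=False) + "\n")
```

Keeping query generation and in-character answering inside the same model is what makes this "self-alignment": no stronger proprietary teacher is assumed, only the character knowledge already present in the model's training corpus.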
Related papers
- Thinking Before Speaking: A Role-playing Model with Mindset [0.6428333375712125]
Large Language Models (LLMs) are skilled at simulating human behaviors.
However, these models tend to perform poorly when confronted with knowledge that the assumed role does not possess.
In this paper, we propose a Thinking Before Speaking (TBS) model to address this issue.
arXiv Detail & Related papers (2024-09-14T02:41:48Z)
- RNR: Teaching Large Language Models to Follow Roles and Rules [153.6596303205894]
We propose RNR, an automated data generation pipeline that generates diverse roles and rules from existing IFT instructions.
This data can then be used to train models that follow complex system prompts.
Our framework significantly improves role and rule following capability in large language models.
arXiv Detail & Related papers (2024-09-10T06:07:32Z)
- Capturing Minds, Not Just Words: Enhancing Role-Playing Language Models with Personality-Indicative Data [58.92110996840019]
We propose to enhance role-playing language models (RPLMs) via personality-indicative data.
Specifically, we leverage questions from psychological scales and distill advanced RPAs to generate dialogues that grasp the minds of characters.
Experimental results validate that RPLMs trained with our dataset exhibit advanced role-playing capabilities for both general and personality-related evaluations.
arXiv Detail & Related papers (2024-06-27T06:24:00Z)
- Prompt Framework for Role-playing: Generation and Evaluation [3.2845546753303867]
Large language models (LLMs) have demonstrated remarkable abilities in generating natural language, understanding user instructions, and mimicking human language use.
We introduce a framework that uses prompts to leverage the state-of-the-art (SOTA) LLMs to construct role-playing dialogue datasets and evaluate the role-playing performance.
arXiv Detail & Related papers (2024-06-02T06:09:56Z)
- Character is Destiny: Can Large Language Models Simulate Persona-Driven Decisions in Role-Playing? [59.0123596591807]
We benchmark the ability of Large Language Models in persona-driven decision-making.
We investigate whether LLMs can predict characters' decisions provided with the preceding stories in high-quality novels.
The results demonstrate that state-of-the-art LLMs exhibit promising capabilities in this task, yet there is substantial room for improvement.
arXiv Detail & Related papers (2024-04-18T12:40:59Z)
- On the Decision-Making Abilities in Role-Playing using Large Language Models [6.550638804145713]
Large language models (LLMs) are increasingly utilized for role-playing tasks.
This paper focuses on evaluating the decision-making abilities of LLMs post role-playing.
arXiv Detail & Related papers (2024-02-29T02:22:23Z)
- Enhancing Role-playing Systems through Aggressive Queries: Evaluation and Improvement [17.5855800570993]
Large Language Models (LLMs) have propelled dialogue generation into new realms, particularly in the field of role-playing systems (RPSs).
Existing LLM-based RPSs still struggle to align with roles when handling intricate and trapped queries in boundary scenarios.
We design the Modular ORchestrated Trap-setting Interaction SystEm (MORTISE) to benchmark and improve the role-playing LLMs' performance.
arXiv Detail & Related papers (2024-02-16T12:12:05Z)
- LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models [56.25156596019168]
This paper introduces the LMRL-Gym benchmark for evaluating multi-turn RL for large language models (LLMs).
Our benchmark consists of 8 different language tasks, which require multiple rounds of language interaction and cover a range of tasks in open-ended dialogue and text games.
arXiv Detail & Related papers (2023-11-30T03:59:31Z)
- RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models [107.00832724504752]
We introduce RoleLLM, a framework to benchmark, elicit, and enhance role-playing abilities in Large Language Models (LLMs).
Using Context-Instruct and RoleGPT, we create RoleBench, the first systematic and fine-grained character-level benchmark dataset for role-playing with 168,093 samples.
arXiv Detail & Related papers (2023-10-01T17:52:59Z)