Thinking in Character: Advancing Role-Playing Agents with Role-Aware Reasoning
- URL: http://arxiv.org/abs/2506.01748v1
- Date: Mon, 02 Jun 2025 14:55:04 GMT
- Title: Thinking in Character: Advancing Role-Playing Agents with Role-Aware Reasoning
- Authors: Yihong Tang, Kehai Chen, Muyun Yang, Zhengyu Niu, Jing Li, Tiejun Zhao, Min Zhang
- Abstract summary: This paper introduces a novel Role-Aware Reasoning (RAR) method, which consists of two important stages: Role Identity Activation (RIA) and Reasoning Style Optimization (RSO). RIA explicitly guides the model with character profiles during reasoning to counteract attention diversion, and then RSO aligns reasoning style with the character and scene via LRM distillation to mitigate style drift.
- Score: 46.47940531288568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advancement of Large Language Models (LLMs) has spurred significant interest in Role-Playing Agents (RPAs) for applications such as emotional companionship and virtual interaction. However, recent RPAs are often built on explicit dialogue data, lacking deep, human-like internal thought processes, resulting in superficial knowledge and style expression. While Large Reasoning Models (LRMs) can be employed to simulate character thought, their direct application is hindered by attention diversion (i.e., RPAs forget their role) and style drift (i.e., overly formal and rigid reasoning rather than character-consistent reasoning). To address these challenges, this paper introduces a novel Role-Aware Reasoning (RAR) method, which consists of two important stages: Role Identity Activation (RIA) and Reasoning Style Optimization (RSO). RIA explicitly guides the model with character profiles during reasoning to counteract attention diversion, and then RSO aligns reasoning style with the character and scene via LRM distillation to mitigate style drift. Extensive experiments demonstrate that the proposed RAR significantly enhances the performance of RPAs by effectively addressing attention diversion and style drift.
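The abstract describes RAR only at a high level, so the sketch below is a minimal, hypothetical Python illustration of the two stages. The `CharacterProfile` fields, function names, and prompt wording are assumptions made for illustration, not the authors' implementation; in particular, the paper performs RSO via LRM distillation at training time, which the prompt-based stand-in here only approximates.

```python
# Hypothetical sketch of the two RAR stages (RIA and RSO) as described in the
# abstract; names and prompt wording are illustrative assumptions, not the
# authors' code.

from dataclasses import dataclass


@dataclass
class CharacterProfile:
    name: str
    persona: str          # e.g. personality traits, backstory
    speech_style: str     # e.g. "terse, sardonic, archaic vocabulary"


def role_identity_activation(profile: CharacterProfile, scene: str, query: str) -> str:
    """RIA: keep the character profile explicitly in view during reasoning,
    so attention stays on the role rather than drifting to a generic assistant."""
    return (
        f"You are {profile.name}. Persona: {profile.persona}\n"
        f"Scene: {scene}\n"
        f"Before answering, think step by step as {profile.name}, "
        f"re-reading the persona above at each step.\n"
        f"User: {query}"
    )


def reasoning_style_optimization_prompt(profile: CharacterProfile, raw_thought: str) -> str:
    """RSO (inference-time stand-in): ask a reasoning model to rewrite a formal
    chain of thought into the character's voice. The paper instead distills an
    LRM into the RPA; this prompt only illustrates the style-alignment target."""
    return (
        f"Rewrite the following reasoning so it reads like {profile.name} "
        f"thinking to themselves ({profile.speech_style}), keeping the logic intact:\n"
        f"{raw_thought}"
    )


if __name__ == "__main__":
    holmes = CharacterProfile(
        name="Sherlock Holmes",
        persona="a Victorian consulting detective, observant and deductive",
        speech_style="clipped, precise, faintly condescending",
    )
    print(role_identity_activation(holmes, "221B Baker Street, evening", "Who took the letter?"))
```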
Related papers
- SpeechRole: A Large-Scale Dataset and Benchmark for Evaluating Speech Role-Playing Agents [52.29009595100625]
Role-playing agents have emerged as a promising paradigm for achieving personalized interaction and emotional resonance. Existing research primarily focuses on the textual modality, neglecting the critical dimension of speech in realistic interactive scenarios. We construct SpeechRole-Data, a large-scale, high-quality dataset that comprises 98 diverse roles and 112k speech-based single-turn and multi-turn conversations.
arXiv Detail & Related papers (2025-08-04T03:18:36Z) - CogDual: Enhancing Dual Cognition of LLMs via Reinforcement Learning with Implicit Rule-Based Rewards [53.36917093757101]
Role-Playing Language Agents (RPLAs) have emerged as a significant application direction for Large Language Models (LLMs). We introduce CogDual, a novel RPLA adopting a cognize-then-respond reasoning paradigm. By jointly modeling external situational awareness and internal self-awareness, CogDual generates responses with improved character consistency and contextual alignment.
arXiv Detail & Related papers (2025-07-23T02:26:33Z) - ChARM: Character-based Act-adaptive Reward Modeling for Advanced Role-Playing Language Agents [60.325553329946]
Role-Playing Language Agents (RPLAs) aim to simulate characters for realistic and engaging human-computer interactions. We propose ChARM, a Character-based Act-adaptive Reward Model. We introduce RoleplayPref, the first large-scale preference dataset specifically for RPLAs.
arXiv Detail & Related papers (2025-05-29T18:15:18Z) - RoleRAG: Enhancing LLM Role-Playing via Graph Guided Retrieval [6.636092764694501]
RoleRAG is a retrieval-based framework that integrates efficient entity disambiguation for knowledge indexing with a boundary-aware retriever for extracting contextually appropriate information from a structured knowledge graph. Experiments on role-playing benchmarks show that RoleRAG's calibrated retrieval helps both general-purpose and role-specific LLMs better align with character knowledge and reduce hallucinated responses.
arXiv Detail & Related papers (2025-05-24T06:11:17Z) - Guess What I am Thinking: A Benchmark for Inner Thought Reasoning of Role-Playing Language Agents [48.52216655094884]
The internal thinking processes of role-playing language agents (RPLAs) remain unexplored. We introduce ROLETHINK, a novel benchmark constructed from literature for evaluating character thought generation. We propose MIRROR, a chain-of-thought approach that generates character thoughts by retrieving memories, predicting character reactions, and synthesizing motivations.
arXiv Detail & Related papers (2025-03-11T08:57:07Z) - A Multi-Task Role-Playing Agent Capable of Imitating Character Linguistic Styles [28.237927070779925]
Current Role-Playing Agents (RPAs) predominantly focus on mimicking a character's fundamental attributes while neglecting the replication of linguistic style.
We develop StyleRPA, a Multi-Task Role-Playing Agent (MRPA) that significantly outperforms recent open-source LLM and RPA baselines on 7 tasks including Dialogue, Dictionary, Composition, Story Generation, Product Description, Music Commentary, and Open Question Answering.
arXiv Detail & Related papers (2024-11-04T02:26:27Z) - ERABAL: Enhancing Role-Playing Agents through Boundary-Aware Learning [17.5855800570993]
Role-playing is an emerging application in the field of Human-Computer Interaction (HCI).
Despite significant progress, role-playing agents (RPLAs) still struggle with maintaining role-consistency across conversations.
We present ERABAL, a framework aimed at enhancing RPLAs' role-playing capabilities through boundary-aware learning.
arXiv Detail & Related papers (2024-09-23T05:12:13Z) - Capturing Minds, Not Just Words: Enhancing Role-Playing Language Models with Personality-Indicative Data [58.92110996840019]
We propose to enhance role-playing language models (RPLMs) via personality-indicative data.
Specifically, we leverage questions from psychological scales and distill advanced RPAs to generate dialogues that grasp the minds of characters.
Experimental results validate that RPLMs trained with our dataset exhibit advanced role-playing capabilities for both general and personality-related evaluations.
arXiv Detail & Related papers (2024-06-27T06:24:00Z)