Thespian: Multi-Character Text Role-Playing Game Agents
- URL: http://arxiv.org/abs/2308.01872v1
- Date: Thu, 3 Aug 2023 16:53:53 GMT
- Title: Thespian: Multi-Character Text Role-Playing Game Agents
- Authors: Christopher Cui, Xiangyu Peng, Mark Riedl
- Abstract summary: We consider the distinction between characters and actors, where an actor agent has the ability to play multiple characters.
We present a framework we call a thespian agent that can learn to emulate multiple characters, along with a soft prompt that directs which character to play.
We show that our agent outperforms the state-of-the-art agent framework in multi-character learning and few-shot learning.
- Score: 8.19666118455293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text-adventure games and text role-playing games are grand challenges for
reinforcement learning game playing agents. Text role-playing games are
open-ended environments where an agent must faithfully play a particular
character. We consider the distinction between characters and actors, where an
actor agent has the ability to play multiple characters. We present a framework
we call a thespian agent that can learn to emulate multiple characters along
with a soft prompt that can be used to direct it as to which character to play
at any time. We further describe an attention mechanism that allows the agent
to learn new characters that are based on previously learned characters in a
few-shot fashion. We show that our agent outperforms the state-of-the-art agent
framework in multi-character learning and few-shot learning.
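The abstract describes two mechanisms: a learned soft prompt that directs a shared actor policy to play a given character, and an attention mechanism that composes a new character's prompt from previously learned ones. The sketch below illustrates both ideas in minimal form; the shapes, the mean-pooled attention keys, and the character names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM, PROMPT_LEN = 8, 4

# One learned soft prompt per character (hypothetical shapes).
character_prompts = {
    "knight": rng.normal(size=(PROMPT_LEN, EMBED_DIM)),
    "thief":  rng.normal(size=(PROMPT_LEN, EMBED_DIM)),
}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def condition(observation_emb, character):
    """Prepend the character's soft prompt to the observation embedding,
    steering the shared actor policy toward that character."""
    return np.concatenate([character_prompts[character], observation_emb], axis=0)

def few_shot_prompt(query):
    """Build a new character's prompt as an attention-weighted blend of
    previously learned prompts (a sketch of the paper's attention idea)."""
    keys = np.stack([p.mean(axis=0) for p in character_prompts.values()])
    weights = softmax(keys @ query)          # attention over known characters
    prompts = np.stack(list(character_prompts.values()))
    return np.einsum("c,cld->ld", weights, prompts)

obs = rng.normal(size=(6, EMBED_DIM))
assert condition(obs, "knight").shape == (PROMPT_LEN + 6, EMBED_DIM)
assert few_shot_prompt(rng.normal(size=EMBED_DIM)).shape == (PROMPT_LEN, EMBED_DIM)
```

Because only the prepended prompt changes between characters, one actor network can play many characters, and a new character needs only enough data to fit its attention query.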
Related papers
- IBSEN: Director-Actor Agent Collaboration for Controllable and Interactive Drama Script Generation [10.64793069233322]
We introduce IBSEN, a director-actor coordinate agent framework that generates drama scripts and makes the plot played by agents more controllable.
The director agent writes plot outlines that the user desires to see, instructs the actor agents to role-play their characters, and reschedules the plot when human players participate in the scenario to ensure the plot is progressing towards the objective.
arXiv Detail & Related papers (2024-07-01T08:49:57Z)
- Capturing Minds, Not Just Words: Enhancing Role-Playing Language Models with Personality-Indicative Data [58.92110996840019]
We propose to enhance role-playing language models (RPLMs) via personality-indicative data.
Specifically, we leverage questions from psychological scales and distill advanced RPAs to generate dialogues that grasp the minds of characters.
Experimental results validate that RPLMs trained with our dataset exhibit advanced role-playing capabilities for both general and personality-related evaluations.
arXiv Detail & Related papers (2024-06-27T06:24:00Z)
- SimsChat: A Customisable Persona-Driven Role-Playing Agent [29.166067413153353]
Large Language Models (LLMs) possess the capability to understand human instructions and generate high-quality text.
We introduce the Customisable Conversation Agent Framework, which employs LLMs to simulate real-world characters.
We present SimsChat, a freely customisable role-playing agent.
arXiv Detail & Related papers (2024-06-25T22:44:17Z)
- Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment [62.898963074989766]
We introduce Ditto, a self-alignment method for role-play.
This method creates a role-play training set comprising 4,000 characters, surpassing the scale of currently available datasets by tenfold.
We present the first comprehensive cross-supervision alignment experiment in the role-play domain.
arXiv Detail & Related papers (2024-01-23T03:56:22Z)
- LARP: Language-Agent Role Play for Open-World Games [19.80040627487576]
Language Agent for Role-Playing (LARP) is a cognitive architecture that encompasses memory processing and a decision-making assistant.
The framework refines interactions between users and agents that are predefined with unique backgrounds and personalities.
It highlights the diverse uses of language models in a range of areas such as entertainment, education, and various simulation scenarios.
arXiv Detail & Related papers (2023-12-24T10:08:59Z)
- Deciphering Digital Detectives: Understanding LLM Behaviors and Capabilities in Multi-Agent Mystery Games [26.07074182316433]
We introduce the first dataset specifically for Jubensha, including character scripts and game rules.
Our work also presents a unique multi-agent interaction framework using LLMs, allowing AI agents to autonomously engage in this game.
To evaluate the gaming performance of these AI agents, we developed novel methods measuring their mastery of case information and reasoning skills.
arXiv Detail & Related papers (2023-12-01T17:33:57Z)
- Character-LLM: A Trainable Agent for Role-Playing [67.35139167985008]
Large language models (LLMs) can be used to serve as agents to simulate human behaviors.
We introduce Character-LLM, which teaches LLMs to act as specific people such as Beethoven, Queen Cleopatra, and Julius Caesar.
arXiv Detail & Related papers (2023-10-16T07:58:56Z)
- NarrativePlay: Interactive Narrative Understanding [27.440721435864194]
We introduce NarrativePlay, a novel system that allows users to role-play a fictional character and interact with other characters in narratives in an immersive environment.
We leverage Large Language Models (LLMs) to generate human-like responses, guided by personality traits extracted from narratives.
NarrativePlay has been evaluated on two types of narratives, detective and adventure stories, where users can either explore the world or improve their favorability with the narrative characters through conversations.
arXiv Detail & Related papers (2023-10-02T13:24:00Z)
- Deep Reinforcement Learning with Stacked Hierarchical Attention for Text-based Games [64.11746320061965]
We study reinforcement learning for text-based games, which are interactive simulations in the context of natural language.
We aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure.
We extensively evaluate our method on a number of man-made benchmark games, and the experimental results demonstrate that our method performs better than existing text-based agents.
arXiv Detail & Related papers (2020-10-22T12:40:22Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
- Learning Dynamic Belief Graphs to Generalize on Text-Based Games [55.59741414135887]
Playing text-based games requires skills in processing natural language and sequential decision making.
In this work, we investigate how an agent can plan and generalize in text-based games using graph-structured representations learned end-to-end from raw text.
arXiv Detail & Related papers (2020-02-21T04:38:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.