HAMLET: Hyperadaptive Agent-based Modeling for Live Embodied Theatrics
- URL: http://arxiv.org/abs/2507.15518v1
- Date: Mon, 21 Jul 2025 11:36:39 GMT
- Title: HAMLET: Hyperadaptive Agent-based Modeling for Live Embodied Theatrics
- Authors: Sizhou Chen, Shufan Jiang, Chi Zhang, Xiao-Lei Zhang, Xuelong Li
- Abstract summary: HAMLET is a multi-agent framework focused on drama creation and online performance. During the online performance, each actor is given an autonomous mind. HAMLET can create expressive and coherent theatrical experiences.
- Score: 46.0768581496651
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating an immersive and interactive theatrical experience is a long-term goal in the field of interactive narrative. The emergence of large language models (LLMs) provides a new path to achieve this goal. However, existing LLM-based drama generation methods often result in AI agents that lack initiative and cannot interact with the physical environment. Furthermore, these methods typically require detailed user input to drive the drama. These limitations reduce the interactivity and immersion of online real-time performance. To address the above challenges, we propose HAMLET, a multi-agent framework focused on drama creation and online performance. Given a simple topic, the framework generates a narrative blueprint, guiding the subsequent improvisational performance. During the online performance, each actor is given an autonomous mind. This means that actors can make independent decisions based on their own background, goals, and emotional state. In addition to conversations with other actors, their decisions can also change the state of scene props through actions such as opening a letter or picking up a weapon. The change is then broadcast to other related actors, updating what they know and care about, which in turn influences their next actions. To evaluate the quality of drama performance, we designed an evaluation method that assesses three primary aspects: character performance, narrative quality, and interaction experience. The experimental evaluation shows that HAMLET can create expressive and coherent theatrical experiences. Our code, dataset and models are available at https://github.com/HAMLET-2025/HAMLET.
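The prop-state broadcast is the core interaction mechanism the abstract describes: an actor's action mutates a prop, and the change is pushed only to the actors who care about that prop, shaping their next decision. Below is a minimal sketch of that loop. The `Actor`, `Prop`, and `Stage` names and fields are illustrative assumptions, not the released HAMLET code.

```python
# Minimal sketch (assumption, not the released HAMLET implementation) of the
# prop-state broadcast loop: an action changes a prop, and the event is sent
# only to actors who track that prop, updating what they know.
from dataclasses import dataclass, field

@dataclass
class Prop:
    name: str
    state: str  # e.g. a letter goes from "sealed" to "opened"

@dataclass
class Actor:
    name: str
    background: str
    goal: str
    emotion: str
    interests: set = field(default_factory=set)    # prop names this actor cares about
    knowledge: list = field(default_factory=list)  # facts the actor currently knows

    def observe(self, event: str) -> None:
        # In a full system this event would be folded into the actor's LLM prompt
        # before its next autonomous decision.
        self.knowledge.append(event)

class Stage:
    def __init__(self, actors: list, props: list) -> None:
        self.actors = actors
        self.props = {p.name: p for p in props}

    def act(self, actor: Actor, prop_name: str, new_state: str) -> None:
        prop = self.props[prop_name]
        prop.state = new_state
        event = f"{actor.name} changed {prop.name} to '{new_state}'"
        # Broadcast only to related actors; their updated knowledge
        # influences their next action.
        for other in self.actors:
            if other is not actor and prop_name in other.interests:
                other.observe(event)

# Usage: Ophelia opens the letter; Hamlet, who tracks the letter, learns of it.
letter = Prop("letter", "sealed")
ophelia = Actor("Ophelia", "noble daughter", "learn the truth", "anxious", interests={"letter"})
hamlet = Actor("Hamlet", "prince of Denmark", "avenge his father", "brooding", interests={"letter"})
stage = Stage([ophelia, hamlet], [letter])
stage.act(ophelia, "letter", "opened")
print(hamlet.knowledge)  # ["Ophelia changed letter to 'opened'"]
```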
Related papers
- OmniCharacter: Towards Immersive Role-Playing Agents with Seamless Speech-Language Personality Interaction [123.89581506075461]
We propose OmniCharacter, a first seamless speech-language personality interaction model to achieve immersive RPAs with low latency. Specifically, OmniCharacter enables agents to consistently exhibit role-specific personality traits and vocal traits throughout the interaction. Our method yields better responses in terms of both content and style compared to existing RPAs and mainstream speech-language models, with a response latency as low as 289ms.
arXiv Detail & Related papers (2025-05-26T17:55:06Z)
- MoCha: Towards Movie-Grade Talking Character Synthesis [62.007000023747445]
We introduce Talking Characters, a more realistic task to generate talking character animations directly from speech and text. Unlike talking head generation, Talking Characters aims at generating the full portrait of one or more characters beyond the facial region. We propose MoCha, the first of its kind to generate talking characters.
arXiv Detail & Related papers (2025-03-30T04:22:09Z)
- Towards Enhanced Immersion and Agency for LLM-based Interactive Drama [55.770617779283064]
This paper begins with understanding interactive drama from two aspects: Immersion, the player's feeling of being present in the story, and Agency. To enhance these two aspects, we first propose Playwriting-guided Generation, a novel method that helps LLMs craft dramatic stories with substantially improved structures and narrative quality.
arXiv Detail & Related papers (2025-02-25T06:06:16Z)
- INFP: Audio-Driven Interactive Head Generation in Dyadic Conversations [11.101103116878438]
We propose INFP, a novel audio-driven head generation framework for dyadic interaction. INFP comprises a Motion-Based Head Imitation stage and an Audio-Guided Motion Generation stage. To facilitate this line of research, we introduce DyConv, a large-scale dataset of rich dyadic conversations collected from the Internet.
arXiv Detail & Related papers (2024-12-05T10:20:34Z)
- The Drama Machine: Simulating Character Development with LLM Agents [1.999925939110439]
This paper explores the use of multiple large language model (LLM) agents to simulate complex, dynamic characters in dramatic scenarios.
We introduce a drama machine framework that coordinates interactions between LLM agents playing different 'Ego' and 'Superego' psychological roles.
Results suggest this multi-agent approach can produce more nuanced, adaptive narratives that evolve over a sequence of dialogical turns.
arXiv Detail & Related papers (2024-08-03T09:40:26Z)
- From Role-Play to Drama-Interaction: An LLM Solution [57.233049222938675]
This paper introduces LLM-based interactive drama, which endows traditional drama with unprecedented immersion.
We define this new artistic genre by six essential elements: plot, character, thought, diction, spectacle, and interaction.
We propose Narrative Chain to offer finer control over the narrative progression during interaction with players, and Auto-Drama to synthesize drama scripts given arbitrary stories.
arXiv Detail & Related papers (2024-05-23T07:03:56Z)
- Generating Human Interaction Motions in Scenes with Text Control [66.74298145999909]
We present TeSMo, a method for text-controlled scene-aware motion generation based on denoising diffusion models.
Our approach begins with pre-training a scene-agnostic text-to-motion diffusion model.
To facilitate training, we embed annotated navigation and interaction motions within scenes.
arXiv Detail & Related papers (2024-04-16T16:04:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.