Open Role-Playing with Delta-Engines
- URL: http://arxiv.org/abs/2408.05842v5
- Date: Fri, 07 Mar 2025 04:13:19 GMT
- Title: Open Role-Playing with Delta-Engines
- Authors: Hongqiu Wu, Zekai Xu, Tianyang Xu, Shize Wei, Yan Wang, Jiale Hong, Weiqi Wu, Hai Zhao
- Abstract summary: We propose a new style of game-play to bridge self-expression and role-playing: open role-playing games (ORPGs). Our vision is that, in the real world, we are individually similar when we are born, but grow into unique individuals as a result of the very different choices we make afterward. In an ORPG, we empower players with the freedom to shape their own growth curves through natural language inputs, ultimately becoming unique characters.
- Score: 50.86533710515017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Game roles can be reflections of personas from a parallel world. In this paper, we propose a new style of game-play to bridge self-expression and role-playing: open role-playing games (ORPGs), where players are allowed to craft and embody unique characters in the game world. Our vision is that, in the real world, we are individually similar when we are born, but grow into unique individuals as a result of the very different choices we make afterward. Therefore, in an ORPG, we empower players with the freedom to shape their own growth curves through natural language inputs, ultimately becoming unique characters. To realize this technically, we propose a special engine called the delta-engine. It is not a traditional game engine used for game development; rather, it serves as an in-game module that provides new game-play experiences. A delta-engine consists of two components: a base engine and a neural proxy. The base engine programs the prototype of the character as well as the foundational settings of the game; the neural proxy is an LLM, which realizes character growth by incrementally generating new code snippets on top of the base engine. In this paper, we develop our own ORPG based on delta-engines, adapted from the popular animated series "Pokémon". We present our efforts in generating out-of-domain, interesting role data during development, as well as in assessing the performance of a delta-engine. While the empirical results in this work are specific, we aim for them to provide general insights for future games.
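Since the abstract describes the delta-engine only at a high level, here is a minimal sketch of the pattern it implies: a base engine holding the character prototype, plus an LLM "neural proxy" that emits incremental code deltas applied on top of it. The class names, the stubbed proxy, and the use of `exec` are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a delta-engine: a base engine plus an LLM "neural proxy"
# that grows a character by generating incremental code snippets (deltas).
# All names and prompts here are illustrative assumptions, not the paper's API.
from dataclasses import dataclass, field

@dataclass
class CharacterState:
    name: str
    level: int = 1
    skills: dict = field(default_factory=dict)

class BaseEngine:
    """Holds the character prototype and the game's foundational settings."""
    def __init__(self, character: CharacterState):
        self.character = character
        self.deltas: list[str] = []  # code snippets applied so far, in order

    def apply_delta(self, snippet: str) -> None:
        # Each delta is a code snippet executed against the character,
        # growing it incrementally on top of the base engine.
        exec(snippet, {"character": self.character})
        self.deltas.append(snippet)

def neural_proxy(player_input: str) -> str:
    """Stand-in for the LLM proxy: maps a natural-language choice to a delta."""
    # A real system would prompt an LLM here; this stub only shows the contract.
    if "fire" in player_input.lower():
        return "character.skills['Ember'] = {'type': 'fire', 'power': 40}"
    return "character.level += 1"

engine = BaseEngine(CharacterState(name="Ash"))
engine.apply_delta(neural_proxy("I want to train fire moves"))
print(engine.character)  # skills now contain 'Ember'
```

In a real system the proxy call would be an LLM request, and generated snippets would presumably be validated or sandboxed before execution rather than passed to `exec` directly.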
Related papers
- Position: Interactive Generative Video as Next-Generation Game Engine [32.7449148483466]
We propose Interactive Generative Video (IGV) as the foundation for Generative Game Engines (GGE)
IGV's unique strengths include unlimited high-quality content synthesis, physics-aware world modeling, user-controlled interactivity, long-term memory capabilities, and causal reasoning.
Our work charts a new course for game development in the AI era, envisioning a future where AI-powered generative systems fundamentally reshape how games are created and experienced.
arXiv Detail & Related papers (2025-03-21T17:59:22Z)
- Exploring the Interplay Between Video Generation and World Models in Autonomous Driving: A Survey [61.39993881402787]
World models and video generation are pivotal technologies in the domain of autonomous driving.
This paper investigates the relationship between these two technologies.
By analyzing the interplay between video generation and world models, this survey identifies critical challenges and future research directions.
arXiv Detail & Related papers (2024-11-05T08:58:35Z)
- Unbounded: A Generative Infinite Game of Character Life Simulation [68.37260000219479]
We introduce the concept of a generative infinite game, a video game that transcends the traditional boundaries of finite, hard-coded systems by using generative models.
We leverage recent advances in generative AI to create Unbounded: a game of character life simulation that is fully encapsulated in generative models.
arXiv Detail & Related papers (2024-10-24T17:59:31Z)
- WorldSimBench: Towards Video Generation Models as World Simulators [79.69709361730865]
We classify the functionalities of predictive models into a hierarchy and take the first step in evaluating World Simulators by proposing a dual evaluation framework called WorldSimBench.
WorldSimBench includes Explicit Perceptual Evaluation and Implicit Manipulative Evaluation, encompassing human preference assessments from the visual perspective and action-level evaluations in embodied tasks (a toy aggregation of the two scores is sketched below).
Our comprehensive evaluation offers key insights that can drive further innovation in video generation models, positioning World Simulators as a pivotal advancement toward embodied artificial intelligence.
arXiv Detail & Related papers (2024-10-23T17:56:11Z)
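A toy sketch of how WorldSimBench's two evaluation signals might be combined: an explicit perceptual score from human preferences and an implicit manipulative score from embodied task success. The scoring schema and the simple weighted average are assumptions for illustration, not the benchmark's actual protocol.

```python
# Hypothetical dual evaluation in the spirit of WorldSimBench; the schema
# and the averaging below are assumptions, not the benchmark's protocol.
from statistics import mean

def explicit_perceptual(preferences: list[int]) -> float:
    """Fraction of pairwise human comparisons won against a reference model."""
    return mean(preferences)  # each entry is 0 (lost) or 1 (won)

def implicit_manipulative(task_results: list[bool]) -> float:
    """Success rate of an embodied agent acting on the generated videos."""
    return mean(1.0 if ok else 0.0 for ok in task_results)

def dual_score(preferences, task_results, w_perceptual=0.5) -> float:
    return (w_perceptual * explicit_perceptual(preferences)
            + (1 - w_perceptual) * implicit_manipulative(task_results))

print(dual_score([1, 1, 0, 1], [True, False, True, True]))  # 0.75
```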
- Instruction-Driven Game Engine: A Poker Case Study [53.689520884467065]
The IDGE project aims to democratize game development by enabling a large language model to follow free-form game descriptions and generate game-play processes.
We train the IDGE in a curriculum manner that progressively increases its exposure to complex scenarios (see the staging sketch below).
Our initial progress lies in developing an IDGE for Poker, which not only supports a wide range of poker variants but also allows for highly individualized new poker games through natural language inputs.
arXiv Detail & Related papers (2024-10-17T11:16:27Z)
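The curriculum idea above can be made concrete with a small sketch: order training games by a complexity heuristic and train in stages on progressively larger, harder slices. The word-count heuristic and the staging scheme are assumptions, not the paper's pipeline.

```python
# Hedged sketch of curriculum training for an instruction-driven game engine:
# order games by an assumed complexity measure, then widen the training slice
# stage by stage. Heuristic and trainer API are illustrative only.
def complexity(game_description: str) -> int:
    # Assumption: longer rule descriptions roughly track scenario complexity.
    return len(game_description.split())

def curriculum_stages(dataset: list[str], n_stages: int = 3) -> list[list[str]]:
    ordered = sorted(dataset, key=complexity)
    stage_size = max(1, len(ordered) // n_stages)
    return [ordered[: (i + 1) * stage_size] for i in range(n_stages)]

games = ["texas holdem basic rules ...",
         "omaha hi-lo with split pots ...",
         "custom poker variant with wildcards and rotating blinds ..."]
for stage, subset in enumerate(curriculum_stages(games), start=1):
    # A real pipeline would call train(model, subset) here, fine-tuning on a
    # progressively larger and harder slice of the data at each stage.
    print(f"stage {stage}: {len(subset)} games")
```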
- GAVEL: Generating Games Via Evolution and Language Models [40.896938709468465]
We explore the generation of novel games in the Ludii game description language.
We train a model that intelligently mutates and recombines games and mechanics expressed as code (see the loop sketched below).
A sample of the generated games is available to play online through the Ludii portal.
arXiv Detail & Related papers (2024-07-12T16:08:44Z)
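A hedged sketch of the evolve-and-mutate loop GAVEL describes: games are programs, a model proposes mutations and recombinations, and a fitness function keeps the interesting ones. The `llm_mutate` stub, the crossover, and the fitness proxy are all illustrative assumptions rather than GAVEL's actual code.

```python
# Illustrative evolutionary loop over games-as-code; every component here is
# a stand-in assumption, not GAVEL's implementation.
import random

def llm_mutate(game_code: str) -> str:
    # Stand-in for a code model that rewrites part of a game's rules.
    return game_code + "\n(add piece: amazon, moves: queen+knight)"

def crossover(a: str, b: str) -> str:
    # Recombine two games' rule listings. Deliberately crude, for illustration.
    return "\n".join([a.splitlines()[0], *b.splitlines()[1:]])

def fitness(game_code: str) -> float:
    return float(len(set(game_code.split())))  # proxy: rule-vocabulary diversity

population = ["(game chess ...)\n(board 8x8)", "(game hex ...)\n(board 11x11)"]
for generation in range(5):
    parents = random.sample(population, 2)
    child = llm_mutate(crossover(*parents))
    population.append(child)
    population.sort(key=fitness, reverse=True)
    population = population[:10]  # keep only the fittest games
print(population[0])
```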
- A Text-to-Game Engine for UGC-Based Role-Playing Games [6.5715027492220734]
This paper introduces a novel framework for a text-to-game engine that leverages foundation models to transform simple textual inputs into intricate, multi-modal RPG experiences.
The engine dynamically generates game narratives, integrating text, visuals, and mechanics, while adapting characters, environments, and gameplay in real time based on player interactions.
arXiv Detail & Related papers (2024-07-11T05:33:19Z)
- Collaborative Quest Completion with LLM-driven Non-Player Characters in Minecraft [14.877848057734463]
We design a minigame within Minecraft where a player works with two GPT-4-driven NPCs to complete a quest.
On analyzing the game logs and recordings, we find that several patterns of collaborative behavior emerge from the NPCs and the human players.
We believe that this preliminary study and analysis will inform future game developers on how to better exploit these rapidly improving generative AI models for collaborative roles in games.
arXiv Detail & Related papers (2024-07-03T19:11:21Z)
- Solving Motion Planning Tasks with a Scalable Generative Model [15.858076912795621]
We present an efficient solution based on generative models that learn the dynamics of driving scenes.
Our design allows the model to operate in both fully autoregressive and partially autoregressive modes (contrasted in the sketch below).
We conclude that the proposed generative model may serve as a foundation for a variety of motion planning tasks.
arXiv Detail & Related papers (2024-07-03T03:57:05Z)
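A hypothetical sketch contrasting the two decoding regimes the motion-planning entry above mentions: fully autoregressive (every agent re-predicted each step) versus partially autoregressive (some agents replayed from logs, only the rest predicted). The one-step model stub and the scene format are assumptions for illustration.

```python
# Toy rollout showing full vs. partial autoregression; the dynamics stub and
# scene layout are assumptions, not the paper's model.
def predict_next(state: dict) -> dict:
    # Stand-in for the generative model's one-step dynamics prediction.
    return {agent: (x + 1.0, y) for agent, (x, y) in state.items()}

def rollout(state, logged, steps, controlled_agents=None):
    """controlled_agents=None -> fully autoregressive;
    otherwise only those agents are predicted, the rest follow the log."""
    trajectory = [state]
    for t in range(steps):
        predicted = predict_next(trajectory[-1])
        if controlled_agents is not None:
            predicted.update({a: logged[t][a] for a in predicted
                              if a not in controlled_agents})
        trajectory.append(predicted)
    return trajectory

scene = {"ego": (0.0, 0.0), "car_1": (5.0, 2.0)}
log = [{"ego": (0.0, 0.0), "car_1": (5.0, 2.5)},
       {"ego": (0.0, 0.0), "car_1": (5.0, 3.0)}]
full = rollout(scene, log, steps=2)                               # all predicted
partial = rollout(scene, log, steps=2, controlled_agents={"ego"})  # ego only
```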
- Instruction-Driven Game Engines on Large Language Models [59.280666591243154]
The IDGE project aims to democratize game development by enabling a large language model to follow free-form game rules.
We train the IDGE in a curriculum manner that progressively increases the model's exposure to complex scenarios.
Our initial progress lies in developing an IDGE for Poker, a universally cherished card game.
arXiv Detail & Related papers (2024-03-30T08:02:16Z)
- EgoGen: An Egocentric Synthetic Data Generator [53.32942235801499]
EgoGen is a new synthetic data generator that can produce accurate and rich ground-truth training data for egocentric perception tasks.
At the heart of EgoGen is a novel human motion synthesis model that directly leverages egocentric visual inputs of a virtual human to sense the 3D environment.
We demonstrate EgoGen's efficacy in three tasks: mapping and localization for head-mounted cameras, egocentric camera tracking, and human mesh recovery from egocentric views.
arXiv Detail & Related papers (2024-01-16T18:55:22Z)
- Which architecture should be implemented to manage data from the real world, in an Unreal Engine 5 simulator and in the context of mixed reality? [0.0]
This paper gives a detailed analysis of the issue at both the theoretical and operational levels.
The C++ system and the third-party library are reviewed in detail, and pitfalls to avoid are highlighted.
The last chapter proposes a generic architecture, useful in large-scale industrial 3D applications.
arXiv Detail & Related papers (2023-05-16T07:51:54Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt (see the interface sketch below).
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
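A sketch of the interaction surface a promptable game model implies: play via low-level action prompts, or switch to director's mode and specify goals. The class, method names, and prompt grammar are assumptions, not the paper's API.

```python
# Hypothetical interface for a promptable neural game simulator; everything
# here is an illustrative assumption about the interaction pattern only.
class PromptableGameModel:
    def __init__(self):
        self.frames: list[str] = []

    def step(self, action_prompt: str) -> str:
        # Low-level control: condition the next simulated frame on an action.
        frame = f"<frame conditioned on action: {action_prompt}>"
        self.frames.append(frame)
        return frame

    def direct(self, goal_prompt: str, horizon: int = 3) -> list[str]:
        # Director's mode: the model fills in agent actions toward the goal.
        return [self.step(f"auto action {t} toward '{goal_prompt}'")
                for t in range(horizon)]

pgm = PromptableGameModel()
pgm.step("move left")                  # low-level, action-by-action play
pgm.direct("player A wins the rally")  # goal-level, director's mode
```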
- Steps towards prompt-based creation of virtual worlds [1.2891210250935143]
We show that prompt-based methods can both accelerate in-VR level editing and become part of gameplay.
We conclude by discussing impending challenges of AI-assisted co-creation in VR.
arXiv Detail & Related papers (2022-11-10T21:13:04Z)
- GIPSO: Geometrically Informed Propagation for Online Adaptation in 3D LiDAR Segmentation [60.07812405063708]
3D point cloud semantic segmentation is fundamental for autonomous driving.
Most approaches in the literature neglect an important aspect: how to deal with domain shift when handling dynamic scenes.
This paper advances the state of the art in this research field.
arXiv Detail & Related papers (2022-07-20T09:06:07Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- Out of the Box: Embodied Navigation in the Real World [45.97756658635314]
We show how to transfer knowledge acquired in simulation into the real world.
We deploy our models on a LoCoBot equipped with a single Intel RealSense camera.
Our experiments indicate that it is possible to achieve satisfying results when deploying the obtained model in the real world.
arXiv Detail & Related papers (2021-05-12T18:00:14Z)
- Learning to Simulate Dynamic Environments with GameGAN [109.25308647431952]
In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screen frames and keyboard actions during training (the training pairs are sketched below).
arXiv Detail & Related papers (2020-05-25T14:10:17Z)
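A minimal sketch of the supervision GameGAN-style training relies on: paired (screen frame, key action) inputs with the next frame as the target the generator must imitate. The data layout is an assumption for illustration, not the paper's pipeline.

```python
# Toy construction of (frame, action) -> next-frame training pairs from a
# recorded play session; the data layout is an illustrative assumption.
def make_training_pairs(frames: list, actions: list):
    """Yield ((frame_t, action_t), frame_{t+1}) pairs from a play session."""
    for t in range(len(frames) - 1):
        yield (frames[t], actions[t]), frames[t + 1]

frames = ["f0", "f1", "f2"]          # captured screen frames
actions = ["LEFT", "LEFT", "FIRE"]   # keyboard actions at each step
for (frame, action), target in make_training_pairs(frames, actions):
    # A GAN generator would be trained to map (frame, action) -> target frame.
    print(frame, action, "->", target)
```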