Unbounded: A Generative Infinite Game of Character Life Simulation
- URL: http://arxiv.org/abs/2410.18975v2
- Date: Wed, 30 Oct 2024 16:10:33 GMT
- Title: Unbounded: A Generative Infinite Game of Character Life Simulation
- Authors: Jialu Li, Yuanzhen Li, Neal Wadhwa, Yael Pritch, David E. Jacobs, Michael Rubinstein, Mohit Bansal, Nataniel Ruiz
- Abstract summary: We introduce the concept of a generative infinite game, a video game that transcends the traditional boundaries of finite, hard-coded systems by using generative models.
We leverage recent advances in generative AI to create Unbounded: a game of character life simulation that is fully encapsulated in generative models.
- Score: 68.37260000219479
- License:
- Abstract: We introduce the concept of a generative infinite game, a video game that transcends the traditional boundaries of finite, hard-coded systems by using generative models. Inspired by James P. Carse's distinction between finite and infinite games, we leverage recent advances in generative AI to create Unbounded: a game of character life simulation that is fully encapsulated in generative models. Specifically, Unbounded draws inspiration from sandbox life simulations and allows you to interact with your autonomous virtual character in a virtual world by feeding, playing with, and guiding it - with open-ended mechanics generated by an LLM, some of which can be emergent. In order to develop Unbounded, we propose technical innovations in both the LLM and visual generation domains. Specifically, we present: (1) a specialized, distilled large language model (LLM) that dynamically generates game mechanics, narratives, and character interactions in real-time, and (2) a new dynamic regional image prompt adapter (IP-Adapter) for vision models that ensures consistent yet flexible visual generation of a character across multiple environments. We evaluate our system through both qualitative and quantitative analysis, showing significant improvements in character life simulation, user instruction following, narrative coherence, and visual consistency for both characters and environments compared to traditional related approaches.
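To make the two components concrete, here is a minimal, hypothetical sketch of such a generative game loop. All names, the state schema, and the control flow are illustrative assumptions rather than the authors' implementation: the abstract only states that a distilled LLM generates mechanics and narrative in real time and that a regional IP-Adapter keeps the character visually consistent across environments.

```python
# Minimal sketch of a generative game loop in the spirit of Unbounded.
# Everything here is an assumption for illustration; the paper's actual
# distilled LLM and regional IP-Adapter are replaced by simple stubs.

from dataclasses import dataclass, field


@dataclass
class GameState:
    """World and character state carried between turns (hypothetical schema)."""
    environment: str = "cozy cottage"
    hunger: int = 5          # 0 = starving, 10 = full
    energy: int = 5
    history: list[str] = field(default_factory=list)


def llm_game_master(state: GameState, player_input: str) -> dict:
    """Stand-in for the distilled LLM that proposes mechanics and narrative.

    A real system would query the specialized model here; this stub just
    returns a fixed structure to show the expected interface.
    """
    return {
        "narrative": f"The character reacts to '{player_input}' in the {state.environment}.",
        "state_update": {"hunger": min(10, state.hunger + 1)},
        "scene_prompt": f"character in the {state.environment}, doing: {player_input}",
    }


def render_scene(scene_prompt: str, character_reference) -> None:
    """Placeholder for a diffusion model with a regional image-prompt adapter.

    In the paper, the adapter injects the character's reference features only
    into the region where the character appears, keeping identity consistent
    while the surrounding environment changes freely.
    """
    print(f"[render] {scene_prompt} (conditioned on character reference)")


def game_turn(state: GameState, player_input: str, character_ref=None) -> GameState:
    """One turn: the LLM proposes the outcome, the image model renders it."""
    result = llm_game_master(state, player_input)
    for key, value in result["state_update"].items():
        setattr(state, key, value)
    state.history.append(result["narrative"])
    render_scene(result["scene_prompt"], character_ref)
    return state


if __name__ == "__main__":
    state = GameState()
    state = game_turn(state, "feed the character an apple")
    state = game_turn(state, "take the character to the beach")
    print("\n".join(state.history))
```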
Related papers
- StoryVerse: Towards Co-authoring Dynamic Plot with LLM-based Character Simulation via Narrative Planning [8.851718319632973]
Large Language Models (LLMs) drive the behavior of virtual characters, allowing plots to emerge from interactions between characters and their environments.
We propose a novel plot creation workflow that mediates between a writer's authorial intent and the emergent behaviors from LLM-driven character simulation.
The process creates "living stories" that dynamically adapt to various game world states, resulting in narratives co-created by the author, character simulation, and player.
arXiv Detail & Related papers (2024-05-17T23:04:51Z)
- Video2Game: Real-time, Interactive, Realistic and Browser-Compatible Environment from a Single Video [23.484070818399]
Video2Game is a novel approach that automatically converts videos of real-world scenes into realistic and interactive game environments.
We show that we can not only produce highly-realistic renderings in real-time, but also build interactive games on top.
arXiv Detail & Related papers (2024-04-15T14:32:32Z)
- Digital Life Project: Autonomous 3D Characters with Social Intelligence [86.2845109451914]
Digital Life Project is a framework utilizing language as the universal medium to build autonomous 3D characters.
Our framework comprises two primary components: SocioMind and MoMat-MoGen.
arXiv Detail & Related papers (2023-12-07T18:58:59Z)
- Towards a Holodeck-style Simulation Game [40.044978986425676]
Infinitia uses generative image and language models to reshape all aspects of the setting and NPCs based on a short description from the player.
Infinitia is implemented in the Unity engine with a server-client architecture.
It uses a multiplayer framework to allow humans to be present and interact in the simulation.
arXiv Detail & Related papers (2023-08-22T19:19:19Z)
- Beyond Reality: The Pivotal Role of Generative AI in the Metaverse [98.1561456565877]
This paper offers a comprehensive exploration of how generative AI technologies are shaping the Metaverse.
We delve into the applications of text generation models like ChatGPT and GPT-3, which are enhancing conversational interfaces with AI-generated characters.
We also examine the potential of 3D model generation technologies like Point-E and Lumirithmic in creating realistic virtual objects.
arXiv Detail & Related papers (2023-07-28T05:44:20Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Evaluating Continual Learning Algorithms by Generating 3D Virtual Environments [66.83839051693695]
Continual learning refers to the ability of humans and animals to incrementally learn over time in a given environment.
We propose to leverage recent advances in 3D virtual environments in order to approach the automatic generation of potentially life-long dynamic scenes with photo-realistic appearance.
A novel element of this paper is that scenes are described in a parametric way, thus allowing the user to fully control the visual complexity of the input stream the agent perceives.
arXiv Detail & Related papers (2021-09-16T10:37:21Z)
- Learning to Simulate Dynamic Environments with GameGAN [109.25308647431952]
In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training.
arXiv Detail & Related papers (2020-05-25T14:10:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.