Ambient Adventures: Teaching ChatGPT on Developing Complex Stories
- URL: http://arxiv.org/abs/2308.01734v1
- Date: Thu, 3 Aug 2023 12:52:49 GMT
- Title: Ambient Adventures: Teaching ChatGPT on Developing Complex Stories
- Authors: Zexin Chen, Eric Zhou, Kenneth Eaton, Xiangyu Peng, Mark Riedl
- Score: 8.07595093287034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Imaginative play is an area of creativity that could allow robots to engage
with the world around them in a much more personified way. Imaginary play can
be seen as taking real objects and locations and using them as imaginary
objects and locations in virtual scenarios. We adopted the story generation
capability of large language models (LLMs) to obtain the stories used for
imaginary play with human-written prompts. The generated stories are then
simplified and mapped into action sequences that can guide the agent in
imaginary play. To evaluate whether the agent can successfully finish the
imaginary play, we also designed a text adventure game that simulates a house
as the playground in which the agent interacts.
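The pipeline in the abstract (generate a story with an LLM, simplify it into an action sequence, then evaluate the agent in a text-adventure house) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the sample story, the `story_to_actions` heuristic, and the `HouseGame` class are all hypothetical stand-ins.

```python
# Hypothetical sketch of the paper's pipeline: an LLM-generated story is
# simplified into (verb, object) action tuples, which an agent then tries
# to execute inside a simulated house. Names are illustrative only.
import re

# A toy "simplified story": one imperative sentence per line, standing in
# for the LLM output described in the abstract.
STORY = """go to the kitchen
take the wooden spoon
wave the spoon like a sword
return to the living room"""

def story_to_actions(story: str) -> list[tuple[str, str]]:
    """Map each sentence to a (verb, object) pair by taking the first word
    as the verb and the last token as the object (a crude heuristic)."""
    actions = []
    for line in story.strip().splitlines():
        words = re.findall(r"[a-z]+", line.lower())
        if len(words) >= 2:
            actions.append((words[0], words[-1]))
    return actions

class HouseGame:
    """Minimal text-adventure stand-in for the evaluation playground."""
    def __init__(self):
        self.log = []

    def step(self, verb: str, obj: str) -> bool:
        self.log.append(f"{verb} {obj}")
        return True  # a real game would check object/location preconditions

actions = story_to_actions(STORY)
game = HouseGame()
success = all(game.step(verb, obj) for verb, obj in actions)
```

A real implementation would need grounding (checking that each imaginary object maps to a real object the agent can reach), which is where the text adventure evaluation comes in.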
Related papers
- STORY2GAME: Generating (Almost) Everything in an Interactive Fiction Game [15.427907377465685]
We introduce STORY2GAME, a novel approach to generate text-based interactive fiction games.
It starts by generating a story, populates the world, and builds the code for actions in a game engine that enables the story to play out interactively.
We evaluate the success rate of action code generation with respect to whether a player can interactively play through the entire generated story.
arXiv Detail & Related papers (2025-05-06T14:00:41Z)
- BookWorld: From Novels to Interactive Agent Societies for Creative Story Generation [60.53187087043975]
BookWorld is a system for constructing and simulating book-based multi-agent societies.
BookWorld enables diverse applications including story generation, interactive games and social simulation.
arXiv Detail & Related papers (2025-04-20T08:56:27Z)
- Towards Enhanced Immersion and Agency for LLM-based Interactive Drama [55.770617779283064]
This paper begins by examining interactive drama from two aspects: Immersion, the player's feeling of being present in the story, and Agency.
To enhance these two aspects, we first propose Playwriting-guided Generation, a novel method that helps LLMs craft dramatic stories with substantially improved structures and narrative quality.
arXiv Detail & Related papers (2025-02-25T06:06:16Z)
- Toyteller: AI-powered Visual Storytelling Through Toy-Playing with Character Symbols [8.676354389016101]
We introduce Toyteller, an AI-powered storytelling system where users generate a mix of story text and visuals by manipulating character symbols, as if playing with toys.
arXiv Detail & Related papers (2025-01-23T00:20:38Z)
- Unbounded: A Generative Infinite Game of Character Life Simulation [68.37260000219479]
We introduce the concept of a generative infinite game, a video game that transcends the traditional boundaries of finite, hard-coded systems by using generative models.
We leverage recent advances in generative AI to create Unbounded: a game of character life simulation that is fully encapsulated in generative models.
arXiv Detail & Related papers (2024-10-24T17:59:31Z)
- StoryVerse: Towards Co-authoring Dynamic Plot with LLM-based Character Simulation via Narrative Planning [8.851718319632973]
Large Language Models (LLMs) drive the behavior of virtual characters, allowing plots to emerge from interactions between characters and their environments.
We propose a novel plot creation workflow that mediates between a writer's authorial intent and the emergent behaviors from LLM-driven character simulation.
The process creates "living stories" that dynamically adapt to various game world states, resulting in narratives co-created by the author, character simulation, and player.
arXiv Detail & Related papers (2024-05-17T23:04:51Z)
- V-IRL: Grounding Virtual Intelligence in Real Life [65.87750250364411]
V-IRL is a platform that enables agents to interact with the real world in a virtual yet realistic environment.
Our platform serves as a playground for developing agents that can accomplish various practical tasks.
arXiv Detail & Related papers (2024-02-05T18:59:36Z)
- Learning Interactive Real-World Simulators [96.5991333400566]
We explore the possibility of learning a universal simulator of real-world interaction through generative modeling.
We use the simulator to train both high-level vision-language policies and low-level reinforcement learning policies.
Video captioning models can benefit from training with simulated experience, opening up even wider applications.
arXiv Detail & Related papers (2023-10-09T19:42:22Z)
- NarrativePlay: Interactive Narrative Understanding [27.440721435864194]
We introduce NarrativePlay, a novel system that allows users to role-play a fictional character and interact with other characters in narratives in an immersive environment.
We leverage Large Language Models (LLMs) to generate human-like responses, guided by personality traits extracted from narratives.
NarrativePlay has been evaluated on two types of narratives, detective and adventure stories, where users can either explore the world or improve their favorability with the narrative characters through conversations.
arXiv Detail & Related papers (2023-10-02T13:24:00Z)
- Generative Agents: Interactive Simulacra of Human Behavior [86.1026716646289]
We introduce generative agents--computational software agents that simulate believable human behavior.
We describe an architecture that extends a large language model to store a complete record of the agent's experiences.
We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims.
arXiv Detail & Related papers (2023-04-07T01:55:19Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Story Shaping: Teaching Agents Human-like Behavior with Stories [9.649246837532417]
We introduce Story Shaping, in which a reinforcement learning agent infers tacit knowledge from an exemplar story of how to accomplish a task.
An intrinsic reward is generated based on the similarity between the agent's inferred world state graph and the inferred story world graph.
We conducted experiments in text-based games requiring commonsense reasoning and shaping the behaviors of agents as virtual game characters.
arXiv Detail & Related papers (2023-01-24T16:19:09Z)
- FairyTailor: A Multimodal Generative Framework for Storytelling [33.39639788612019]
We introduce a system and a demo, FairyTailor, for human-in-the-loop visual story co-creation.
Users can create a cohesive children's fairytale by weaving generated texts and retrieved images with their input.
To our knowledge, this is the first dynamic tool for multimodal story generation that allows interactive co-formation of both texts and images.
arXiv Detail & Related papers (2021-07-13T02:45:08Z)
- Learning to Simulate Dynamic Environments with GameGAN [109.25308647431952]
In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training.
arXiv Detail & Related papers (2020-05-25T14:10:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.