Toward Co-creative Dungeon Generation via Transfer Learning
- URL: http://arxiv.org/abs/2107.12533v1
- Date: Tue, 27 Jul 2021 00:54:55 GMT
- Title: Toward Co-creative Dungeon Generation via Transfer Learning
- Authors: Zisen Zhou and Matthew Guzdial
- Abstract summary: Co-creative Procedural Content Generation via Machine Learning (PCGML) refers to systems where a PCGML agent and a human work together to produce output content.
One of the limitations of co-creative PCGML is that it requires co-creative training data for a PCGML agent to learn to interact with humans.
We propose approximating human-AI interaction data and employing transfer learning to adapt learned co-creative knowledge from one game to a different game.
- Score: 1.590611306750623
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Co-creative Procedural Content Generation via Machine Learning (PCGML) refers
to systems where a PCGML agent and a human work together to produce output
content. One of the limitations of co-creative PCGML is that it requires
co-creative training data for a PCGML agent to learn to interact with humans.
However, acquiring this data is a difficult and time-consuming process. In this
work, we propose approximating human-AI interaction data and employing transfer
learning to adapt learned co-creative knowledge from one game to a different
game. We explore this approach for co-creative Zelda dungeon room generation.
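The core idea, adapting level-design knowledge learned in one game to another, can be illustrated with a deliberately tiny sketch. The paper's actual models and training data differ; here a simple tile-bigram statistic stands in for the learned co-creative knowledge, and the tile symbols and blending weight are invented for illustration:

```python
# Hypothetical sketch of the transfer-learning idea: pretrain a simple
# tile-bigram level model on a source game, then fine-tune its counts
# on a handful of target-game rooms. Tiles and levels are invented;
# the paper's actual representations and models differ.
from collections import Counter, defaultdict

def train_bigram(levels):
    """Count horizontal tile-pair frequencies across level rows."""
    counts = defaultdict(Counter)
    for level in levels:
        for row in level:
            for a, b in zip(row, row[1:]):
                counts[a][b] += 1
    return counts

def transfer(source_counts, target_levels, alpha=0.1):
    """Fine-tune: blend down-weighted source statistics with target counts."""
    counts = defaultdict(Counter)
    for a, c in source_counts.items():
        for b, n in c.items():
            counts[a][b] += alpha * n
    for a, c in train_bigram(target_levels).items():
        for b, n in c.items():
            counts[a][b] += n
    return counts

# Toy data: '.' floor, '#' wall, 'D' door.
source = [["####", "#..#", "####"]] * 20   # many source-game rooms
target = [["#D##", "#..#", "####"]]        # a single target-game room

model = transfer(train_bigram(source), target)
```

The blended model keeps the source game's wall structure while picking up the target-specific door transition from a single example, which is the low-data situation transfer learning is meant to address.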
Related papers
- Tree-Based Reconstructive Partitioning: A Novel Low-Data Level Generation Approach [5.626364462708322]
Procedural Content Generation (PCG) and Procedural Content Generation via Machine Learning (PCGML) have both appeared in published games.
Tree-based Reconstructive Partitioning (TRP) is a novel PCGML approach aimed at generating levels from little training data.
arXiv Detail & Related papers (2023-09-18T20:39:14Z)
- Building Cooperative Embodied Agents Modularly with Large Language Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z)
- Imitating Task and Motion Planning with Visuomotor Transformers [71.41938181838124]
Task and Motion Planning (TAMP) can autonomously generate large-scale datasets of diverse demonstrations.
In this work, we show that the combination of large-scale datasets generated by TAMP supervisors and flexible Transformer models to fit them is a powerful paradigm for robot manipulation.
We present a novel imitation learning system called OPTIMUS that trains large-scale visuomotor Transformer policies by imitating a TAMP agent.
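The imitation paradigm described here can be sketched in miniature. Everything below is an invented stand-in: a scripted one-dimensional "planner" supplies (state, action) demonstrations, and a nearest-neighbour policy takes the place of the large visuomotor Transformer:

```python
# Minimal behaviour-cloning sketch of the paradigm above: a scripted
# "planner" supervisor generates demonstrations, and a simple learner
# imitates it. States, actions, and the planner are all illustrative,
# not OPTIMUS's actual components.
import random

def planner(state):
    """Stand-in supervisor: move toward the goal at 0 on a number line."""
    return -1 if state > 0 else (1 if state < 0 else 0)

def collect_demos(n, rng):
    """Generate (state, planner_action) demonstration pairs."""
    states = [rng.randint(-10, 10) for _ in range(n)]
    return [(s, planner(s)) for s in states]

def cloned_policy(demos):
    def policy(state):
        # Imitate the action taken at the nearest demonstrated state.
        _, action = min(demos, key=lambda d: abs(d[0] - state))
        return action
    return policy

rng = random.Random(0)
policy = cloned_policy(collect_demos(200, rng))
```

The learner never plans; it only reproduces the supervisor's behaviour from data, which is the essence of imitating a TAMP agent at scale.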
arXiv Detail & Related papers (2023-05-25T17:58:14Z)
- Procedural Content Generation via Knowledge Transformation (PCG-KT) [8.134009219520289]
We introduce the concept of Procedural Content Generation via Knowledge Transformation (PCG-KT).
Our work is motivated by a substantial number of recent PCG works that focus on generating novel content via repurposing derived knowledge.
arXiv Detail & Related papers (2023-05-01T03:31:22Z)
- Generating Lode Runner Levels by Learning Player Paths with LSTMs [2.199085230546853]
In this paper, we attempt to address these problems by first learning to generate human-like player paths, and then generating levels based on those paths.
We extract player path data from gameplay video, train an LSTM to generate new paths based on this data, and then generate game levels based on this path data.
We demonstrate that our approach leads to more coherent levels for the game Lode Runner in comparison to an existing PCGML approach.
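The path-then-level pipeline can be sketched without any deep-learning dependencies. The paper trains an LSTM on paths extracted from gameplay video; below, a first-order Markov chain over movement actions stands in for the sequence model, and the "level" step simply carves traversable tiles along a sampled path. All symbols and moves are illustrative:

```python
# Dependency-free sketch of the path-then-level pipeline: fit a model
# of player movement, sample a new path, then build a level around it.
# A Markov chain stands in for the paper's LSTM; the tile alphabet and
# move set are invented for illustration.
import random
from collections import Counter, defaultdict

MOVES = {"R": (0, 1), "L": (0, -1), "D": (1, 0), "U": (-1, 0)}

def fit_path_model(paths):
    """Count action -> next-action transitions from observed paths."""
    trans = defaultdict(Counter)
    for path in paths:
        for a, b in zip(path, path[1:]):
            trans[a][b] += 1
    return trans

def sample_path(trans, start, length, rng):
    """Sample a new path by walking the transition counts."""
    path = [start]
    while len(path) < length:
        options = trans[path[-1]]
        if not options:
            break
        acts, weights = zip(*options.items())
        path.append(rng.choices(acts, weights)[0])
    return path

def path_to_level(path, h=5, w=8):
    """Carve '.' (floor) along the path through a solid '#' grid."""
    grid = [["#"] * w for _ in range(h)]
    r, c = 0, 0
    grid[r][c] = "."
    for a in path:
        dr, dc = MOVES[a]
        r = min(max(r + dr, 0), h - 1)
        c = min(max(c + dc, 0), w - 1)
        grid[r][c] = "."
    return ["".join(row) for row in grid]

rng = random.Random(0)
model = fit_path_model(["RRDDRR", "RDRDRR", "RRRDDR"])
level = path_to_level(sample_path(model, "R", 10, rng))
```

Because every floor tile lies on the sampled path, the level is traversable by construction, which is the intuition behind conditioning generation on human-like paths.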
arXiv Detail & Related papers (2021-07-27T00:48:30Z)
- Teach me to play, gamer! Imitative learning in computer games via linguistic description of complex phenomena and decision tree [55.41644538483948]
We present a new machine learning model that learns by imitation from linguistic descriptions of complex phenomena.
The method can be a good alternative for designing and implementing the behaviour of intelligent agents in video game development.
arXiv Detail & Related papers (2021-01-06T21:14:10Z)
- Explainability via Responsibility [0.9645196221785693]
We present an approach to explainable artificial intelligence in which certain training instances are offered to human users.
We evaluate this approach by approximating its ability to provide human users with explanations of an AI agent's actions.
arXiv Detail & Related papers (2020-10-04T20:41:03Z)
- Meta-Reinforcement Learning for Robotic Industrial Insertion Tasks [70.56451186797436]
We study how to use meta-reinforcement learning to solve the bulk of the problem in simulation.
We demonstrate our approach by training an agent to successfully perform challenging real-world insertion tasks.
arXiv Detail & Related papers (2020-04-29T18:00:22Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
- Learning to Generate Levels From Nothing [5.2508303190856624]
We propose Generative Playing Networks, a framework that designs levels for its own agent to play.
The algorithm is built in two parts; an agent that learns to play game levels, and a generator that learns the distribution of playable levels.
We demonstrate the capability of this framework by training an agent and level generator for a 2D dungeon crawler game.
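The two-part loop, an agent that judges levels by playing them and a generator that shifts toward playable output, can be caricatured in a few lines. The real framework trains both parts as neural networks; everything below is an invented stand-in, with a trivial solvability check as the "agent" and hill-climbing over one generator parameter:

```python
# Toy sketch of the two-part loop above: a stand-in "agent" judges
# whether a candidate level is playable, and the generator updates its
# parameters toward playable levels. All components are illustrative,
# not Generative Playing Networks' actual models.
import random

def agent_can_solve(level):
    """Stand-in agent: a one-row level is playable if no wall blocks it."""
    return "#" not in level

def generate(p_floor, width, rng):
    """Generator: emit floor '.' with probability p_floor, else wall '#'."""
    return "".join("." if rng.random() < p_floor else "#" for _ in range(width))

def playability(p_floor, trials, width, rng):
    """Estimate how often the agent can solve this generator's levels."""
    return sum(agent_can_solve(generate(p_floor, width, rng))
               for _ in range(trials)) / trials

def train_generator(steps=100, width=4, seed=0):
    """Hill-climb the generator parameter using the agent's feedback."""
    rng = random.Random(seed)
    p = 0.5
    score = playability(p, 50, width, rng)
    for _ in range(steps):
        cand = min(max(p + rng.uniform(-0.1, 0.1), 0.0), 1.0)
        cand_score = playability(cand, 50, width, rng)
        if cand_score >= score:  # keep parameters the agent plays better
            p, score = cand, cand_score
    return p

best_p = train_generator()
```

The point of the sketch is the feedback direction: the generator never sees example levels, only the agent's success signal, which is how the framework learns "from nothing".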
arXiv Detail & Related papers (2020-02-12T22:07:23Z)
- On the interaction between supervision and self-play in emergent communication [82.290338507106]
We investigate the relationship between two categories of learning signals with the ultimate goal of improving sample efficiency.
We find that first training agents via supervised learning on human data followed by self-play outperforms the converse.
arXiv Detail & Related papers (2020-02-04T02:35:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.