Generating Lode Runner Levels by Learning Player Paths with LSTMs
- URL: http://arxiv.org/abs/2107.12532v1
- Date: Tue, 27 Jul 2021 00:48:30 GMT
- Title: Generating Lode Runner Levels by Learning Player Paths with LSTMs
- Authors: Kynan Sorochan, Jerry Chen, Yakun Yu, and Matthew Guzdial
- Abstract summary: In this paper, we attempt to address these problems by learning to generate human-like paths, and then generating levels based on these paths.
We extract player path data from gameplay video, train an LSTM to generate new paths based on this data, and then generate game levels based on this path data.
We demonstrate that our approach leads to more coherent levels for the game Lode Runner in comparison to an existing PCGML approach.
- Score: 2.199085230546853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning has been a popular tool in many different fields, including
procedural content generation. However, procedural content generation via
machine learning (PCGML) approaches can struggle with controllability and
coherence. In this paper, we attempt to address these problems by learning to
generate human-like paths, and then generating levels based on these paths. We
extract player path data from gameplay video, train an LSTM to generate new
paths based on this data, and then generate game levels based on this path
data. We demonstrate that our approach leads to more coherent levels for the
game Lode Runner in comparison to an existing PCGML approach.
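As a rough illustration of the pipeline the abstract describes, the sketch below treats a player path as a sequence of movement tokens, trains a small LSTM to predict the next token, and then samples new paths autoregressively. This is a minimal sketch assuming PyTorch; the PathLSTM class, the movement-token vocabulary, the hyperparameters, and the toy training data are illustrative assumptions, not the authors' code or data.

```python
# Minimal sketch (not the paper's implementation) of LSTM-based path modeling:
# represent a player path as movement tokens, train next-token prediction,
# then sample new paths. Vocabulary and hyperparameters are assumptions.
import torch
import torch.nn as nn

# Hypothetical movement vocabulary for Lode Runner-style player paths.
TOKENS = ["<start>", "left", "right", "up", "down", "dig-left", "dig-right", "<end>"]
TOK2ID = {t: i for i, t in enumerate(TOKENS)}

class PathLSTM(nn.Module):
    def __init__(self, vocab_size=len(TOKENS), embed_dim=16, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        x = self.embed(tokens)            # (batch, seq, embed_dim)
        out, state = self.lstm(x, state)  # (batch, seq, hidden_dim)
        return self.head(out), state      # logits over the next token

def sample_path(model, max_len=50):
    """Autoregressively sample one path, one movement token at a time."""
    model.eval()
    tokens = [TOK2ID["<start>"]]
    state = None
    with torch.no_grad():
        for _ in range(max_len):
            inp = torch.tensor([[tokens[-1]]])
            logits, state = model(inp, state)
            probs = torch.softmax(logits[0, -1], dim=-1)
            nxt = torch.multinomial(probs, 1).item()
            tokens.append(nxt)
            if nxt == TOK2ID["<end>"]:
                break
    path = [TOKENS[i] for i in tokens[1:]]  # drop <start>
    if path and path[-1] == "<end>":
        path = path[:-1]
    return path

if __name__ == "__main__":
    # Toy training loop on a single made-up path, just to show the mechanics;
    # the real system would train on paths extracted from gameplay video.
    model = PathLSTM()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    demo = ["<start>", "right", "right", "up", "right", "dig-right", "down", "<end>"]
    ids = torch.tensor([[TOK2ID[t] for t in demo]])
    for _ in range(200):
        logits, _ = model(ids[:, :-1])
        loss = loss_fn(logits.reshape(-1, len(TOKENS)), ids[:, 1:].reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(sample_path(model))
```

In the full approach described above, the sampled token sequence would then be placed onto a level grid and the surrounding tiles generated so that the path remains traversable.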
Related papers
- Online Context Learning for Socially-compliant Navigation [49.609656402450746] (2024-06-17)
  This letter introduces an online context learning method that aims to empower robots to adapt to new social environments online.
  Experiments using a community-wide simulator show that our method outperforms state-of-the-art methods.
- Can LLMs Generate Human-Like Wayfinding Instructions? Towards Platform-Agnostic Embodied Instruction Synthesis [51.04181562775778] (2024-03-18)
  We present a novel approach to automatically synthesize "wayfinding instructions" for an embodied robot agent.
  Our algorithm uses in-context learning to condition an LLM to generate instructions using just a few references.
  We implement our approach on multiple simulation platforms, including Matterport3D, AI Habitat, and ThreeDWorld.
- Backward Lens: Projecting Language Model Gradients into the Vocabulary Space [94.85922991881242] (2024-02-20)
  We show that a gradient matrix can be cast as a low-rank linear combination of its forward and backward passes' inputs.
  We then develop methods to project these gradients into vocabulary items and explore the mechanics of how new information is stored in the LMs' neurons.
- Accelerate Multi-Agent Reinforcement Learning in Zero-Sum Games with Subgame Curriculum Learning [65.36326734799587] (2023-10-07)
  We present a novel subgame curriculum learning framework for zero-sum games.
  It adopts an adaptive initial state distribution by resetting agents to some previously visited states.
  We derive a subgame selection metric that approximates the squared distance to NE values.
- Learning Vision-and-Language Navigation from YouTube Videos [89.1919348607439] (2023-07-22)
  Vision-and-language navigation (VLN) requires an embodied agent to navigate in realistic 3D environments using natural language instructions.
  YouTube hosts massive numbers of house tour videos, providing abundant real navigation experience and layout information.
  We create a large-scale dataset of reasonable path-instruction pairs from house tour videos and pre-train the agent on it.
- SPRING: Studying the Paper and Reasoning to Play Games [102.5587155284795] (2023-05-24)
  We propose a novel approach, SPRING, to read the game's original academic paper and use the knowledge learned to reason about and play the game through a large language model (LLM).
  In experiments, we study the quality of in-context "reasoning" induced by different forms of prompts in the Crafter open-world environment.
  Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential for completing sophisticated high-level trajectories.
- Persona-driven Dominant/Submissive Map (PDSM) Generation for Tutorials [5.791285538179053] (2022-04-11)
  We present a method for automated persona-driven video game tutorial level generation.
  We use procedural personas to calculate the behavioral characteristics of the levels being evolved.
  In this work, we show that the generated maps can strongly encourage or discourage different persona-like behaviors.
- Toward Co-creative Dungeon Generation via Transfer Learning [1.590611306750623] (2021-07-27)
  Co-creative Procedural Content Generation via Machine Learning (PCGML) refers to systems where a PCGML agent and a human work together to produce output content.
  One limitation of co-creative PCGML is that it requires co-creative training data for a PCGML agent to learn to interact with humans.
  We propose approximating human-AI interaction data and employing transfer learning to adapt learned co-creative knowledge from one game to a different game.
- Ensemble Learning For Mega Man Level Generation [2.6402344419230697] (2021-07-27)
  We investigate the use of ensembles of Markov chains for procedurally generating Mega Man levels.
  We evaluate it on measures of playability and stylistic similarity in comparison to an existing non-ensemble Markov chain approach.
- Exploring Level Blending across Platformers via Paths and Affordances [5.019592823495709] (2020-08-22)
  We introduce a new PCGML approach for producing novel game content spanning multiple domains.
  We use a new affordance and path vocabulary to encode data from six different platformer games and train variational autoencoders on this data.
- Learning to Generate Levels From Nothing [5.2508303190856624] (2020-02-12)
  We propose Generative Playing Networks, which design levels for themselves to play.
  The algorithm is built in two parts: an agent that learns to play game levels, and a generator that learns the distribution of playable levels.
  We demonstrate the capability of this framework by training an agent and level generator for a 2D dungeon crawler game.
This list is automatically generated from the titles and abstracts of the papers listed on this site.