Start Small: Training Game Level Generators from Nothing by Learning at Multiple Sizes
- URL: http://arxiv.org/abs/2209.15052v1
- Date: Thu, 29 Sep 2022 18:52:54 GMT
- Title: Start Small: Training Game Level Generators from Nothing by Learning at Multiple Sizes
- Authors: Yahia Zakaria, Magda Fayek, Mayada Hadhoud
- Abstract summary: A procedural level generator is a tool that generates levels from noise.
One approach to building generators is machine learning, but given the rarity of training data, multiple methods have been proposed to train generators from nothing.
This paper proposes a novel approach to train generators from nothing by learning at multiple level sizes starting from a small size up to the desired sizes.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A procedural level generator is a tool that generates levels from noise. One
approach to building generators is machine learning, but given the rarity of
training data, multiple methods have been proposed to train generators from
nothing. However, level generation tasks tend to have sparse feedback, which is
commonly mitigated using game-specific supplemental rewards. This paper
proposes a novel approach to train generators from nothing by learning at
multiple level sizes starting from a small size up to the desired sizes. This
approach exploits the observation that feedback is denser at smaller sizes,
avoiding the need for supplemental rewards. It also has the benefit of producing
generators that can output levels at various sizes. We apply this approach to train
controllable generators using generative flow networks. We also modify
diversity sampling to be compatible with generative flow networks and to expand
the expressive range. The results show that our methods can generate
high-quality diverse levels for Sokoban, Zelda and Danger Dave for a variety of
sizes, after only 3h 29min up to 6h 11min (depending on the game) of training
on a single commodity machine. The results also show that our generators can
output levels at sizes that were unseen during training.
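To make the core idea concrete, below is a minimal sketch (in PyTorch) of training a noise-conditioned level generator with a size curriculum: feedback-dense small sizes first, then larger ones. It is an illustration under stated assumptions, not the paper's implementation; the Generator architecture, the tile vocabulary, the placeholder reward, the size schedule, and the REINFORCE-style surrogate (standing in for the paper's generative flow network objective and modified diversity sampling) are all hypothetical.

```python
# Minimal sketch of a size-curriculum training loop, NOT the authors' code.
# Assumptions: toy Generator, tile count, placeholder reward, size schedule,
# and a REINFORCE-style surrogate instead of the paper's GFlowNet objective.
import torch
import torch.nn as nn
from torch.distributions import Categorical
from torch.optim import Adam

TILE_TYPES = 5  # assumed tile vocabulary (e.g., wall, floor, crate, goal, player)
NOISE_DIM = 32


class Generator(nn.Module):
    """Toy noise-to-level generator conditioned on the requested level size."""

    def __init__(self, hidden=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(NOISE_DIM + 2, hidden),  # +2 for (height, width)
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
        )
        self.head = nn.Linear(hidden, TILE_TYPES)

    def forward(self, noise, size):
        h, w = size
        cond = torch.tensor([h, w], dtype=torch.float32).expand(noise.shape[0], 2)
        feat = self.body(torch.cat([noise, cond], dim=-1))
        # Broadcast one feature vector over all tiles; a real generator would
        # produce position-dependent logits.
        logits = self.head(feat).unsqueeze(1).expand(-1, h * w, -1)
        return logits.reshape(-1, h, w, TILE_TYPES)


def sample_levels(gen, batch, size):
    """Sample tile maps of the given size and return their log-probabilities."""
    noise = torch.randn(batch, NOISE_DIM)
    dist = Categorical(logits=gen(noise, size))      # one categorical per tile
    tiles = dist.sample()                            # (batch, H, W) tile ids
    log_prob = dist.log_prob(tiles).sum(dim=(1, 2))  # (batch,)
    return tiles, log_prob


def reward(levels):
    # Placeholder reward: fraction of non-wall tiles. A real setup would check
    # playability, which is exactly the feedback that is sparse at large sizes
    # and denser at small ones.
    return (levels != 0).float().mean(dim=(1, 2))


def train_with_size_curriculum(sizes=((3, 3), (5, 5), (7, 7)), steps_per_size=200):
    gen = Generator()
    opt = Adam(gen.parameters(), lr=1e-3)
    for size in sizes:  # start small, then grow toward the desired size
        for _ in range(steps_per_size):
            tiles, log_prob = sample_levels(gen, batch=64, size=size)
            r = reward(tiles)
            # REINFORCE-style surrogate with a mean baseline; the paper instead
            # trains a GFlowNet so that sampling probability tracks the reward.
            loss = -((r - r.mean()) * log_prob).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return gen


if __name__ == "__main__":
    generator = train_with_size_curriculum()
    # Because the generator is size-conditioned, it can be queried at a size
    # that never appeared in the curriculum, mirroring the abstract's claim.
    levels, _ = sample_levels(generator, batch=4, size=(9, 9))
    print(levels.shape)  # torch.Size([4, 9, 9])
```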
Related papers
- Just One Byte (per gradient): A Note on Low-Bandwidth Decentralized Language Model Finetuning Using Shared Randomness [86.61582747039053]
Language model training in distributed settings is limited by the communication cost of exchanging gradient information.
We extend recent work using shared randomness to perform distributed fine-tuning with low bandwidth.
arXiv Detail & Related papers (2023-06-16T17:59:51Z) - Momentum Adversarial Distillation: Handling Large Distribution Shifts in
Data-Free Knowledge Distillation [65.28708064066764]
We propose a simple yet effective method called Momentum Adversarial Distillation (MAD).
MAD maintains an exponential moving average (EMA) copy of the generator and uses synthetic samples from both the generator and the EMA generator to train the student.
Our experiments on six benchmark datasets including big datasets like ImageNet and Places365 demonstrate the superior performance of MAD over competing methods.
arXiv Detail & Related papers (2022-09-21T13:53:56Z) - Joint Generator-Ranker Learning for Natural Language Generation [99.16268050116717]
JGR is a novel joint training algorithm that integrates the generator and the ranker in a single framework.
By iteratively updating the generator and the ranker, JGR can effectively harmonize their learning and enhance their quality jointly.
arXiv Detail & Related papers (2022-06-28T12:58:30Z) - Learning to Generate Levels by Imitating Evolution [7.110423254122942]
We introduce a new type of iterative level generator using machine learning.
We train a model to imitate the evolutionary process and use the model to generate levels.
This trained model is able to modify noisy levels sequentially to create better levels without the need for a fitness function.
arXiv Detail & Related papers (2022-06-11T10:44:57Z) - Illuminating Diverse Neural Cellular Automata for Level Generation [5.294599496581041]
We present a method of generating a collection of neural cellular automata (NCA) to design video game levels.
Our approach can train diverse level generators, whose output levels vary based on aesthetic or functional criteria.
We apply our new method to generate level generators for several 2D tile-based games: a maze game, Sokoban, and Zelda.
arXiv Detail & Related papers (2021-09-12T11:17:31Z) - Learning Controllable Content Generators [5.5805433423452895]
We train generators capable of producing controllably diverse output by making them "goal-aware".
We show that the resulting level generators are capable of exploring the space of possible levels in a targeted, controllable manner.
arXiv Detail & Related papers (2021-05-06T22:15:51Z) - Slimmable Generative Adversarial Networks [54.61774365777226]
Generative adversarial networks (GANs) have achieved remarkable progress in recent years, but the continuously growing scale of models makes them challenging to deploy widely in practical applications.
In this paper, we introduce slimmable GANs, which can flexibly switch the width of the generator to accommodate various quality-efficiency trade-offs at runtime.
arXiv Detail & Related papers (2020-12-10T13:35:22Z) - Few-Shot Learning with Intra-Class Knowledge Transfer [100.87659529592223]
We consider the few-shot classification task with an unbalanced dataset.
Recent works have proposed to solve this task by augmenting the training data of the few-shot classes using generative models.
We propose to leverage the intra-class knowledge from the neighbor many-shot classes with the intuition that neighbor classes share similar statistical information.
arXiv Detail & Related papers (2020-08-22T18:15:38Z) - Data-Free Knowledge Amalgamation via Group-Stack Dual-GAN [80.17705319689139]
We propose a data-free knowledge amalgamation strategy to craft a well-behaved multi-task student network from multiple single/multi-task teachers.
Without any training data, the proposed method achieves surprisingly competitive results, even compared with some fully supervised methods.
arXiv Detail & Related papers (2020-03-20T03:20:52Z) - Learning to Generate Levels From Nothing [5.2508303190856624]
We propose Generative Playing Networks, a framework that designs levels for itself to play.
The algorithm is built in two parts: an agent that learns to play game levels, and a generator that learns the distribution of playable levels.
We demonstrate the capability of this framework by training an agent and level generator for a 2D dungeon crawler game.
arXiv Detail & Related papers (2020-02-12T22:07:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.