Path of Destruction: Learning an Iterative Level Generator Using a Small Dataset
- URL: http://arxiv.org/abs/2202.10184v1
- Date: Mon, 21 Feb 2022 12:51:38 GMT
- Title: Path of Destruction: Learning an Iterative Level Generator Using a Small Dataset
- Authors: Matthew Siper, Ahmed Khalifa, Julian Togelius
- Abstract summary: We propose a new procedural content generation method which learns iterative level generators from a dataset of existing levels.
The Path of Destruction method views level generation as repair; levels are created by iteratively repairing from a random starting state.
We demonstrate this method by applying it to generate unique and playable tile-based levels for several 2D games.
- Score: 7.110423254122942
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new procedural content generation method which learns iterative
level generators from a dataset of existing levels. The Path of Destruction
method, as we call it, views level generation as repair; levels are created by
iteratively repairing from a random starting state. The first step is to
generate an artificial dataset from the original set of levels by introducing
many different sequences of mutations to existing levels. In the generated
dataset, features are observations of destroyed levels and targets are the
specific actions that repair the mutated tile in the middle of the
observations. Using this dataset, a convolutional network is trained to map
observations to the appropriate repair actions. The trained network is then
used to iteratively produce levels from random starting states.
We demonstrate this method by applying it to generate unique and playable
tile-based levels for several 2D games (Zelda, Danger Dave, and Sokoban) and
vary key hyperparameters.
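
The pipeline described above has three stages: destroy existing levels with random mutation sequences while recording how to undo each mutation, train a convolutional network on the resulting (observation, repair action) pairs, and iteratively repair a random map with the trained network. The Python sketch below illustrates the first and third stages under stated assumptions; the function names, observation window size, tile encoding, number of mutations, and stopping criterion are illustrative guesses rather than details taken from the paper, and the exact ordering of recorded repair steps in the paper may differ.

```python
import random
import numpy as np

TILE_TYPES = 4   # assumed tile vocabulary size (e.g. empty, wall, key, door)
OBS_SIZE = 5     # assumed square observation window centred on the mutated tile


def crop_window(grid, y, x, size):
    """Extract a size x size observation centred on (y, x), zero-padding edges."""
    pad = size // 2
    padded = np.pad(grid, pad, mode="constant", constant_values=0)
    return padded[y:y + size, x:x + size]


def build_pod_dataset(levels, mutations_per_level=200):
    """Destroy existing levels step by step, recording how to repair each step.

    Features are observations of the destroyed level; the target is the tile
    value that repairs the mutated cell at the centre of the observation."""
    observations, targets = [], []
    for level in levels:
        broken = level.copy()
        for _ in range(mutations_per_level):
            h, w = broken.shape
            y, x = random.randrange(h), random.randrange(w)
            repair_tile = level[y, x]                    # correct value to restore
            broken[y, x] = random.randrange(TILE_TYPES)  # mutate (destroy)
            observations.append(crop_window(broken, y, x, OBS_SIZE))
            targets.append(repair_tile)
    return np.stack(observations), np.array(targets)


def generate_level(repair_model, shape, steps=2000):
    """Iteratively repair a random starting state with the trained network.

    repair_model is assumed to be a trained convolutional classifier that maps
    an OBS_SIZE x OBS_SIZE observation to a tile id (e.g. the argmax of its
    softmax output)."""
    level = np.random.randint(0, TILE_TYPES, size=shape)
    for _ in range(steps):
        y, x = random.randrange(shape[0]), random.randrange(shape[1])
        obs = crop_window(level, y, x, OBS_SIZE)
        level[y, x] = repair_model(obs)
    return level
```

The convolutional classifier itself is elided here; any model that maps a local observation to a tile id could be dropped in as repair_model, and hyperparameters such as OBS_SIZE and the number of mutations per level are the kind of knobs the abstract says the authors vary.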
Related papers
- Generating Realistic Tabular Data with Large Language Models [49.03536886067729]
Large language models (LLMs) have been used for diverse tasks, but do not capture the correct correlation between the features and the target variable.
We propose an LLM-based method with three important improvements to correctly capture the ground-truth feature-class correlation in the real data.
Our experiments show that our method significantly outperforms 10 SOTA baselines on 20 datasets in downstream tasks.
arXiv Detail & Related papers (2024-10-29T04:14:32Z)
- Personalized Federated Learning via Active Sampling [50.456464838807115]
This paper proposes a novel method for sequentially identifying similar (or relevant) data generators.
Our method evaluates the relevance of a data generator by measuring the effect of a gradient step computed on its local dataset.
We extend this method to non-parametric models by a suitable generalization of the gradient step to update a hypothesis using the local dataset provided by a data generator.
arXiv Detail & Related papers (2024-09-03T17:12:21Z)
- Generative Dataset Distillation: Balancing Global Structure and Local Details [49.20086587208214]
We propose a new dataset distillation method that considers balancing global structure and local details.
Our method involves using a conditional generative adversarial network to generate the distilled dataset.
arXiv Detail & Related papers (2024-04-26T23:46:10Z)
- Background Activation Suppression for Weakly Supervised Object Localization and Semantic Segmentation [84.62067728093358]
Weakly supervised object localization and semantic segmentation aim to localize objects using only image-level labels.
A new paradigm has emerged that generates a foreground prediction map to achieve pixel-level localization.
This paper presents two astonishing experimental observations on the object localization learning process.
arXiv Detail & Related papers (2023-09-22T15:44:10Z)
- Controllable Path of Destruction [5.791285538179053]
Path of Destruction (PoD) is a self-supervised method for learning iterative generators.
We extend the PoD method to allow designer control over aspects of the generated artifacts.
We test the controllable PoD method in a 2D dungeon setting, as well as in the domain of small 3D Lego cars.
arXiv Detail & Related papers (2023-05-29T18:29:29Z)
- Density Map Distillation for Incremental Object Counting [37.982124268097]
A na"ive approach to incremental object counting would suffer from catastrophic forgetting, where it would suffer from a dramatic performance drop on previous tasks.
We propose a new exemplar-free functional regularization method, called Density Map Distillation (DMD)
During training, we introduce a new counter head for each task and introduce a distillation loss to prevent forgetting of previous tasks.
arXiv Detail & Related papers (2023-04-11T14:46:21Z)
- Mitigating Generation Shifts for Generalized Zero-Shot Learning [52.98182124310114]
Generalized Zero-Shot Learning (GZSL) is the task of leveraging semantic information (e.g., attributes) to recognize both seen and unseen samples, where unseen classes are not observable during training.
We propose a novel Generation Shifts Mitigating Flow framework for learning unseen data synthesis efficiently and effectively.
Experimental results demonstrate that GSMFlow achieves state-of-the-art recognition performance in both conventional and generalized zero-shot settings.
arXiv Detail & Related papers (2021-07-07T11:43:59Z)
- Level Generation for Angry Birds with Sequential VAE and Latent Variable Evolution [25.262831218008202]
We develop a deep-generative-model-based level generation method for the game domain of Angry Birds.
Experiments show that the proposed level generator drastically improves the stability and diversity of generated levels.
arXiv Detail & Related papers (2021-04-13T11:23:39Z)
- Pretrained equivariant features improve unsupervised landmark discovery [69.02115180674885]
We formulate a two-step unsupervised approach that overcomes this challenge by first learning powerful pixel-based features.
Our method produces state-of-the-art results in several challenging landmark detection datasets.
arXiv Detail & Related papers (2021-04-07T05:42:11Z)
- Learning to Generate Levels From Nothing [5.2508303190856624]
We propose Generative Playing Networks, a framework which designs levels for itself to play.
The algorithm is built in two parts: an agent that learns to play game levels, and a generator that learns the distribution of playable levels.
We demonstrate the capability of this framework by training an agent and level generator for a 2D dungeon crawler game.
arXiv Detail & Related papers (2020-02-12T22:07:23Z)