Controllable Path of Destruction
- URL: http://arxiv.org/abs/2305.18553v2
- Date: Wed, 31 May 2023 18:47:41 GMT
- Title: Controllable Path of Destruction
- Authors: Matthew Siper, Sam Earle, Zehua Jiang, Ahmed Khalifa, Julian Togelius
- Abstract summary: Path of Destruction (PoD) is a self-supervised method for learning iterative generators.
We extend the PoD method to allow designer control over aspects of the generated artifacts.
We test the controllable PoD method in a 2D dungeon setting, as well as in the domain of small 3D Lego cars.
- Score: 5.791285538179053
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Path of Destruction (PoD) is a self-supervised method for learning iterative
generators. The core idea is to produce a training set by destroying a set of
artifacts, and for each destructive step create a training instance based on
the corresponding repair action. A generator trained on this dataset can then
generate new artifacts by repairing from arbitrary states. The PoD method is
very data-efficient in terms of original training examples and well-suited to
functional artifacts composed of categorical data, such as game levels and
discrete 3D structures. In this paper, we extend the Path of Destruction method
to allow designer control over aspects of the generated artifacts.
Controllability is introduced by adding conditional inputs to the state-action
pairs that make up the repair trajectories. We test the controllable PoD method
in a 2D dungeon setting, as well as in the domain of small 3D Lego cars.
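The training-set construction described in the abstract (destroy an artifact step by step, record a repair instance per destructive step, attach a conditional input) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tile names, the grid representation, and the choice of condition (here, the number of "wall" tiles in the finished artifact) are all illustrative assumptions.

```python
import random

def build_pod_dataset(artifact, num_steps, rng=random):
    """Destroy an artifact step by step; each destructive step yields a
    (state, repair_action, condition) training instance.  The repair
    action undoes that step's destruction, and the condition (here, the
    'wall' count of the finished artifact) is the designer-controllable
    input added in controllable PoD.  Tile names and the condition
    metric are illustrative, not the paper's exact choices."""
    tiles = ["empty", "wall", "door"]
    condition = sum(row.count("wall") for row in artifact)  # conditional input
    state = [row[:] for row in artifact]  # working copy to destroy
    dataset = []
    h, w = len(state), len(state[0])
    for _ in range(num_steps):
        y, x = rng.randrange(h), rng.randrange(w)
        original = state[y][x]
        corrupted = rng.choice([t for t in tiles if t != original])
        state[y][x] = corrupted  # destructive step
        # Record the corrupted state together with the action that repairs it.
        snapshot = [row[:] for row in state]
        dataset.append((snapshot, (y, x, original), condition))
    return dataset
```

A generator trained on such tuples learns to map a (state, condition) pair to a repair action, so at generation time the designer can vary the condition to steer the output.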
Related papers
- LISO: Lidar-only Self-Supervised 3D Object Detection [25.420879730860936]
We introduce a novel self-supervised method to train SOTA lidar object detection networks.
It works on unlabeled sequences of lidar point clouds only.
It utilizes a SOTA self-supervised lidar scene flow network under the hood to generate, track, and iteratively refine pseudo ground truth.
arXiv Detail & Related papers (2024-03-11T18:02:52Z)
- Unsupervised Roofline Extraction from True Orthophotos for LoD2 Building Model Reconstruction [0.0]
This paper presents a method for extracting rooflines from true orthophotos using line detection for the reconstruction of building models at the LoD2 level.
The method is superior to existing plane detection-based methods and state-of-the-art deep learning methods in terms of the accuracy and completeness of the reconstructed building.
arXiv Detail & Related papers (2023-10-02T10:23:08Z)
- Improving Online Lane Graph Extraction by Object-Lane Clustering [106.71926896061686]
We propose an architecture and loss formulation to improve the accuracy of local lane graph estimates.
The proposed method learns to assign the objects to centerlines by considering the centerlines as cluster centers.
We show that our method can achieve significant performance improvements by using the outputs of existing 3D object detection methods.
arXiv Detail & Related papers (2023-07-20T15:21:28Z)
- Farm3D: Learning Articulated 3D Animals by Distilling 2D Diffusion [67.71624118802411]
We present Farm3D, a method for learning category-specific 3D reconstructors for articulated objects.
We propose a framework that uses an image generator, such as Stable Diffusion, to generate synthetic training data.
Our network can be used for analysis, including monocular reconstruction, or for synthesis, generating articulated assets for real-time applications such as video games.
arXiv Detail & Related papers (2023-04-20T17:59:34Z)
- Learnable Earth Parser: Discovering 3D Prototypes in Aerial Scans [20.030706182672144]
We propose an unsupervised method for parsing large 3D scans of real-world scenes with easily-interpretable shapes.
Our approach is based on a probabilistic reconstruction model that decomposes an input 3D point cloud into a small set of learned 3D shapes.
We demonstrate the usefulness of our model on a novel dataset of seven large aerial LiDAR scans from diverse real-world scenarios.
arXiv Detail & Related papers (2023-04-19T14:49:31Z)
- Weakly Supervised Monocular 3D Object Detection using Multi-View Projection and Direction Consistency [78.76508318592552]
Monocular 3D object detection has become a mainstream approach in automatic driving for its easy application.
Most current methods still rely on 3D point cloud data for labeling the ground truths used in the training phase.
We propose a new weakly supervised monocular 3D object detection method, which can train the model with only 2D labels marked on images.
arXiv Detail & Related papers (2023-03-15T15:14:00Z)
- Unsupervised Learning of Efficient Geometry-Aware Neural Articulated Representations [89.1388369229542]
We propose an unsupervised method for 3D geometry-aware representation learning of articulated objects.
We obviate the need for supervision by learning the representations with GAN training.
Experiments demonstrate the efficiency of our method and show that GAN-based training enables learning of controllable 3D representations without supervision.
arXiv Detail & Related papers (2022-04-19T12:10:18Z)
- Path of Destruction: Learning an Iterative Level Generator Using a Small Dataset [7.110423254122942]
We propose a new procedural content generation method which learns iterative level generators from a dataset of existing levels.
The Path of Destruction method views level generation as repair; levels are created by iteratively repairing from a random starting state.
We demonstrate this method by applying it to generate unique and playable tile-based levels for several 2D games.
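The generation-as-repair loop this entry describes can be sketched as follows. This is a hedged illustration: `repair_policy` stands in for the trained generator, and the grid representation, tile set, and fixed step count are assumptions for the sketch, not details from the paper.

```python
import random

def generate(repair_policy, height, width, tiles, steps, rng=random):
    """Generate a level by iteratively repairing a random starting state,
    in the spirit of the Path of Destruction method.  `repair_policy`
    stands in for the trained generator: it maps the current state to a
    (y, x, tile) repair action."""
    # Random starting state: every cell gets an arbitrary tile.
    state = [[rng.choice(tiles) for _ in range(width)] for _ in range(height)]
    for _ in range(steps):
        y, x, tile = repair_policy(state)  # generator proposes a repair
        state[y][x] = tile                 # apply it
    return state
```

Because repairs start from arbitrary states, the same trained policy yields a different level for each random initialization.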
arXiv Detail & Related papers (2022-02-21T12:51:38Z)
- 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D Object Detection [111.32054128362427]
In safety-critical settings, robustness on out-of-distribution and long-tail samples is fundamental to circumvent dangerous issues.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We propose and share open source CrashD: a synthetic dataset of realistic damaged and rare cars.
arXiv Detail & Related papers (2021-12-09T08:50:54Z)
- ST3D: Self-training for Unsupervised Domain Adaptation on 3D Object Detection [78.71826145162092]
We present a new domain adaptive self-training pipeline, named ST3D, for unsupervised domain adaptation on 3D object detection from point clouds.
Our ST3D achieves state-of-the-art performance on all evaluated datasets and even surpasses fully supervised results on KITTI 3D object detection benchmark.
arXiv Detail & Related papers (2021-03-09T10:51:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.