Multi-Domain Level Generation and Blending with Sketches via Example-Driven BSP and Variational Autoencoders
- URL: http://arxiv.org/abs/2006.09807v1
- Date: Wed, 17 Jun 2020 12:21:22 GMT
- Title: Multi-Domain Level Generation and Blending with Sketches via Example-Driven BSP and Variational Autoencoders
- Authors: Sam Snodgrass and Anurag Sarkar
- Abstract summary: We present a PCGML approach for level generation that is able to recombine, adapt, and reuse structural patterns.
We show that our approach is able to blend domains together while retaining structural components.
- Score: 3.5234963231260177
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Procedural content generation via machine learning (PCGML) has demonstrated
its usefulness as a content and game creation approach, and has been shown to
be able to support human creativity. An important facet of creativity is
combinational creativity or the recombination, adaptation, and reuse of ideas
and concepts between and across domains. In this paper, we present a PCGML
approach for level generation that is able to recombine, adapt, and reuse
structural patterns from several domains to approximate unseen domains. We
extend prior work involving example-driven Binary Space Partitioning for
recombining and reusing patterns in multiple domains, and incorporate
Variational Autoencoders (VAEs) for generating unseen structures. We evaluate
our approach by blending across 7 domains and subsets of those domains. We
show that our approach is able to blend domains together while retaining
structural components. Additionally, by using different groups of training
domains, our approach is able to generate both 1) levels that reproduce and
capture features of a target domain, and 2) levels that have vastly different
properties from the input domain.
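To make the approach concrete, here is a minimal sketch, not the authors' released code: a small VAE over fixed-size tile-grid level segments, with a helper that blends two segments from different domains by interpolating in latent space. All names (`LevelVAE`, `blend`, `vae_loss`) and the architecture are illustrative assumptions, and the example-driven BSP step that selects and recombines segments is omitted.

```python
# Minimal illustrative sketch (assumed PyTorch, not the authors' code):
# a VAE over one-hot tile-grid level segments of shape
# (n_tile_types, height, width), plus latent-space blending of two
# segments drawn from different game domains.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LevelVAE(nn.Module):  # hypothetical name
    def __init__(self, n_tile_types, height, width, z_dim=32):
        super().__init__()
        flat = n_tile_types * height * width
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(flat, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, flat),
            nn.Unflatten(1, (n_tile_types, height, width)),
        )

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def decode(self, z):
        return self.dec(z)  # logits over tile types at each grid cell

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decode(z), mu, logvar

def vae_loss(logits, x, mu, logvar):
    # Cross-entropy reconstruction over tile types plus KL regularizer.
    recon = F.cross_entropy(logits, x.argmax(dim=1))
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

def blend(vae, seg_a, seg_b, alpha=0.5):
    """Interpolate two encoded segments (possibly from different
    domains) in latent space and decode back to a tile grid."""
    mu_a, _ = vae.encode(seg_a)
    mu_b, _ = vae.encode(seg_b)
    logits = vae.decode((1 - alpha) * mu_a + alpha * mu_b)
    return logits.argmax(dim=1)  # (batch, height, width) tile indices
```

Under this reading, training on segments pooled from several domains and sweeping `alpha` yields the kind of blended-yet-structured output the abstract describes.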
Related papers
- UniHDA: A Unified and Versatile Framework for Multi-Modal Hybrid Domain Adaptation [22.003900281544766]
We propose UniHDA, a framework for generative hybrid domain adaptation with multi-modal references from multiple domains.
Our framework is generator-agnostic and works with multiple generators, e.g., StyleGAN, EG3D, and Diffusion Models.
arXiv Detail & Related papers (2024-01-23T09:49:24Z)
- Domain Invariant Masked Autoencoders for Self-supervised Learning from Multi-domains [73.54897096088149]
We propose a Domain-invariant Masked AutoEncoder (DiMAE) for self-supervised learning from multi-domains.
The core idea is to augment the input image with style noise from different domains and then reconstruct the image from the embedding of the augmented image (a toy sketch of this idea appears after this list).
Experiments on PACS and DomainNet illustrate that DiMAE achieves considerable gains compared with recent state-of-the-art methods.
arXiv Detail & Related papers (2022-05-10T09:49:40Z)
- Boosting Binary Masks for Multi-Domain Learning through Affine Transformations [49.25451497933657]
The goal of multi-domain learning is to produce a single model performing a task in all the domains together.
Recent works have shown that this problem can be addressed by masking the internal weights of a given conv-net through learned binary variables.
We provide a general formulation of binary mask based models for multi-domain learning by affine transformations of the original network parameters.
arXiv Detail & Related papers (2021-03-25T14:54:37Z)
- Cluster, Split, Fuse, and Update: Meta-Learning for Open Compound Domain Adaptive Semantic Segmentation [102.42638795864178]
We propose a principled meta-learning based approach to OCDA for semantic segmentation.
We cluster the target domain into multiple sub-target domains by image style, extracted in an unsupervised manner.
A meta-learner is thereafter deployed to learn to fuse sub-target domain-specific predictions, conditioned upon the style code.
We learn to update the model online via the model-agnostic meta-learning (MAML) algorithm, further improving generalization.
arXiv Detail & Related papers (2020-12-15T13:21:54Z)
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or the estimated categories.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts adaptation performance in semantic segmentation, outperforming the state of the art in various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z)
- Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation [56.694330303488435]
We propose a Learning to Combine for Multi-Source Domain Adaptation (LtC-MSDA) framework.
In a nutshell, a knowledge graph is constructed on the prototypes of the various domains to realize information propagation among semantically adjacent representations.
Our approach outperforms existing methods by a remarkable margin.
arXiv Detail & Related papers (2020-07-17T07:52:44Z)
- MADAN: Multi-source Adversarial Domain Aggregation Network for Domain Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)
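As flagged in the DiMAE entry above, here is a toy sketch of the idea summarized there: perturb an input with "style noise" borrowed from another domain, then train an autoencoder to reconstruct the clean image. The AdaIN-style statistic swap is one plausible reading of "style noise", every name here (`mix_style`, `TinyDiMAE`) is hypothetical, and the MAE-style patch masking of the real method is omitted for brevity.

```python
# Toy sketch (hypothetical names, not the DiMAE authors' code): augment an
# image with per-channel style statistics from another domain, then
# reconstruct the original content from the augmented embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mix_style(x, ref):
    """AdaIN-style augmentation: re-normalize x with the per-channel
    mean/std of a reference image from a different domain."""
    mu_x = x.mean((2, 3), keepdim=True)
    sd_x = x.std((2, 3), keepdim=True) + 1e-6
    mu_r = ref.mean((2, 3), keepdim=True)
    sd_r = ref.std((2, 3), keepdim=True)
    return (x - mu_x) / sd_x * sd_r + mu_r

class TinyDiMAE(nn.Module):
    # The actual DiMAE also masks patches, MAE-style; omitted here.
    def __init__(self, ch=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(ch, 32, 3, 2, 1), nn.ReLU(),
                                 nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose2d(32, ch, 4, 2, 1))

    def forward(self, x, ref):
        aug = mix_style(x, ref)          # style noise from another domain
        return self.dec(self.enc(aug))   # reconstruct the clean content

# Training objective: reconstruct the un-augmented input.
model = TinyDiMAE()
x, ref = torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32)
loss = F.mse_loss(model(x, ref), x)
```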
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.