Exploring Level Blending across Platformers via Paths and Affordances
- URL: http://arxiv.org/abs/2009.06356v1
- Date: Sat, 22 Aug 2020 16:43:25 GMT
- Title: Exploring Level Blending across Platformers via Paths and Affordances
- Authors: Anurag Sarkar, Adam Summerville, Sam Snodgrass, Gerard Bentley, Joseph
Osborn
- Abstract summary: We introduce a new PCGML approach for producing novel game content spanning multiple domains.
We use a new affordance and path vocabulary to encode data from six different platformer games and train variational autoencoders on this data.
- Score: 5.019592823495709
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Techniques for procedural content generation via machine learning (PCGML)
have been shown to be useful for generating novel game content. While used
primarily for producing new content in the style of the game domain used for
training, recent works have increasingly started to explore methods for
discovering and generating content in novel domains via techniques such as
level blending and domain transfer. In this paper, we build on these works and
introduce a new PCGML approach for producing novel game content spanning
multiple domains. We use a new affordance and path vocabulary to encode data
from six different platformer games and train variational autoencoders on this
data, enabling us to capture the latent level space spanning all the domains
and generate new content with varying proportions of the different domains.
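The core idea in the abstract, encoding levels from several games into one shared latent space with a variational autoencoder and then mixing domains in chosen proportions, can be sketched in a few lines. The following numpy code is an illustrative sketch, not the authors' implementation: the grid size, vocabulary size, latent size, and the randomly initialized stand-in weights are all hypothetical assumptions, and a real system would train these weights on segments from the six games.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a level segment is a 16x16 grid over an
# 8-symbol affordance/path vocabulary, flattened to one vector.
INPUT_DIM = 16 * 16 * 8
LATENT_DIM = 32

# Stand-in weights; a real VAE would learn these from training data.
W_enc = rng.normal(0, 0.01, (INPUT_DIM, 2 * LATENT_DIM))
W_dec = rng.normal(0, 0.01, (LATENT_DIM, INPUT_DIM))

def encode(x):
    """Map a flattened level segment to latent (mu, logvar)."""
    h = x @ W_enc
    return h[:LATENT_DIM], h[LATENT_DIM:]

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent vector back to per-tile logits over the vocabulary."""
    return z @ W_dec

def blend(z_a, z_b, alpha):
    """Interpolate in latent space: alpha=0 gives domain A, alpha=1 gives B."""
    return (1 - alpha) * z_a + alpha * z_b

# Blend a segment from one game with a segment from another, 70/30.
seg_a = rng.integers(0, 2, INPUT_DIM).astype(float)
seg_b = rng.integers(0, 2, INPUT_DIM).astype(float)
z_a = reparameterize(*encode(seg_a))
z_b = reparameterize(*encode(seg_b))
blended = decode(blend(z_a, z_b, 0.3))
```

Because both domains share one encoder and one latent space, "varying proportions of the different domains" reduces to the single interpolation weight `alpha`; the same blend also works over more than two latent vectors with a convex combination.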
Related papers
- Procedural Content Generation in Games: A Survey with Insights on Emerging LLM Integration [1.03590082373586]
Procedural Content Generation (PCG) is defined as the automatic creation of game content using algorithms.
It can increase player engagement and ease the work of game designers.
Recent advances in deep learning approaches in PCG have enabled researchers and practitioners to create more sophisticated content.
It is the arrival of Large Language Models (LLMs) that truly disrupted the trajectory of PCG advancement.
arXiv Detail & Related papers (2024-10-21T05:10:13Z)
- Procedural Content Generation via Knowledge Transformation (PCG-KT) [8.134009219520289]
We introduce the concept of Procedural Content Generation via Knowledge Transformation (PCG-KT).
Our work is motivated by a substantial number of recent PCG works that focus on generating novel content via repurposing derived knowledge.
arXiv Detail & Related papers (2023-05-01T03:31:22Z)
- Domain Invariant Masked Autoencoders for Self-supervised Learning from Multi-domains [73.54897096088149]
We propose a Domain-invariant Masked AutoEncoder (DiMAE) for self-supervised learning from multi-domains.
The core idea is to augment the input image with style noise from different domains and then reconstruct the image from the embedding of the augmented image.
Experiments on PACS and DomainNet illustrate that DiMAE achieves considerable gains compared with recent state-of-the-art methods.
arXiv Detail & Related papers (2022-05-10T09:49:40Z)
- Variational Attention: Propagating Domain-Specific Knowledge for Multi-Domain Learning in Crowd Counting [75.80116276369694]
In crowd counting, the labour-intensive labelling process makes collecting a new large-scale dataset largely intractable.
We resort to multi-domain joint learning and propose a simple but effective Domain-specific Knowledge Propagating Network (DKPNet).
It is mainly achieved by proposing the novel Variational Attention(VA) technique for explicitly modeling the attention distributions for different domains.
arXiv Detail & Related papers (2021-08-18T08:06:37Z)
- Open Domain Generalization with Domain-Augmented Meta-Learning [83.59952915761141]
We study the novel and practical problem of Open Domain Generalization (OpenDG).
We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations.
Experiment results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
arXiv Detail & Related papers (2021-04-08T09:12:24Z)
- Multi-Source Domain Adaptation with Collaborative Learning for Semantic Segmentation [32.95273803359897]
Multi-source unsupervised domain adaptation (MSDA) aims at adapting models trained on multiple labeled source domains to an unlabeled target domain.
We propose a novel multi-source domain adaptation framework based on collaborative learning for semantic segmentation.
arXiv Detail & Related papers (2021-03-08T12:51:42Z)
- mDALU: Multi-Source Domain Adaptation and Label Unification with Partial Datasets [102.62639692656458]
This paper treats this task as a multi-source domain adaptation and label unification problem.
Our method consists of a partially-supervised adaptation stage and a fully-supervised adaptation stage.
We verify the method on three different tasks, image classification, 2D semantic image segmentation, and joint 2D-3D semantic segmentation.
arXiv Detail & Related papers (2020-12-15T15:58:03Z)
- Multi-Domain Level Generation and Blending with Sketches via Example-Driven BSP and Variational Autoencoders [3.5234963231260177]
We present a PCGML approach for level generation that is able to recombine, adapt, and reuse structural patterns.
We show that our approach is able to blend domains together while retaining structural components.
arXiv Detail & Related papers (2020-06-17T12:21:22Z)
- Capturing Local and Global Patterns in Procedural Content Generation via Machine Learning [9.697217570243845]
Recent procedural content generation via machine learning (PCGML) methods learn to produce new content similar to existing content.
It is an open question how well these approaches can capture large-scale visual patterns such as symmetry.
In this paper, we propose match-three games as a domain for testing the ability of PCGML algorithms to generate suitable patterns.
arXiv Detail & Related papers (2020-05-26T08:58:37Z)
- TriGAN: Image-to-Image Translation for Multi-Source Domain Adaptation [82.52514546441247]
We propose the first approach for Multi-Source Domain Adaptation (MSDA) based on Generative Adversarial Networks.
Our method is inspired by the observation that the appearance of a given image depends on three factors: the domain, the style and the content.
We test our approach using common MSDA benchmarks, showing that it outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-04-19T05:07:22Z)
- Deep Domain-Adversarial Image Generation for Domain Generalisation [115.21519842245752]
Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution.
To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains.
We propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG).
arXiv Detail & Related papers (2020-03-12T23:17:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.