Tree-Based Reconstructive Partitioning: A Novel Low-Data Level
Generation Approach
- URL: http://arxiv.org/abs/2309.13071v1
- Date: Mon, 18 Sep 2023 20:39:14 GMT
- Title: Tree-Based Reconstructive Partitioning: A Novel Low-Data Level
Generation Approach
- Authors: Emily Halina and Matthew Guzdial
- Abstract summary: Procedural Content Generation (PCG) and PCG via Machine Learning (PCGML) have appeared in published games, but PCGML typically requires significant training data, which is scarce early in development. Tree-based Reconstructive Partitioning (TRP) is a novel PCGML approach aimed at addressing this problem.
- Score: 5.626364462708322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Procedural Content Generation (PCG) is the algorithmic generation of content,
often applied to games. PCG and PCG via Machine Learning (PCGML) have appeared
in published games. However, it can prove difficult to apply these approaches
in the early stages of an in-development game. PCG requires expertise in
representing designer notions of quality in rules or functions, and PCGML
typically requires significant training data, which may not be available early
in development. In this paper, we introduce Tree-based Reconstructive
Partitioning (TRP), a novel PCGML approach aimed at addressing this problem. Our
results, across two domains, demonstrate that TRP produces levels that are more
playable and coherent, and that the approach is more generalizable with less
training data. We consider TRP to be a promising new approach that can afford
the introduction of PCGML into the early stages of game development without
requiring human expertise or significant training data.
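The abstract names TRP but does not describe its mechanics, so any code here is necessarily speculative. The sketch below is not the authors' algorithm; every name in it is hypothetical. It only illustrates the general flavor of tree-based partitioning for low-data level generation: an example level is recursively split into a tree of chunks, and new levels are reconstructed by resampling those chunks.

```python
import random

# Illustrative only: recursively partition a level (a list of tile
# columns) into leaf chunks, index them by width, then rebuild new
# levels by resampling chunks. All names here are hypothetical.

def partition(level, min_w=2):
    """Recursively split a level into leaf chunks of width <= min_w."""
    if len(level) <= min_w:
        return [level]
    mid = len(level) // 2
    return partition(level[:mid], min_w) + partition(level[mid:], min_w)

def train(levels, min_w=2):
    """Index leaf chunks by width so generation swaps like for like."""
    bank = {}
    for lvl in levels:
        for chunk in partition(lvl, min_w):
            bank.setdefault(len(chunk), []).append(chunk)
    return bank

def generate(bank, width, rng=random):
    """Rebuild a level of the requested width from stored chunks."""
    out = []
    while len(out) < width:
        w = rng.choice(list(bank))
        out.extend(rng.choice(bank[w])[:width - len(out)])
    return out
```

Because every output column comes from observed local patterns, even a single example level yields new recombinations, which hints at why a structural, non-neural approach can operate with little training data.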
Related papers
- Continual Learning for Remote Physiological Measurement: Minimize Forgetting and Simplify Inference [4.913049603343811]
Existing remote physiological (rPPG) measurement methods often overlook the incremental learning scenario.
Most existing class-incremental learning approaches are unsuitable for rPPG measurement.
We present a novel method named ADDP to tackle continual learning for rPPG measurement.
arXiv Detail & Related papers (2024-07-19T01:49:09Z)
- G-PCGRL: Procedural Graph Data Generation via Reinforcement Learning [0.28273304533873334]
In games, graph-based data structures are omnipresent and represent game economies, skill trees or complex, branching quest lines.
We propose a novel and controllable method for the procedural generation of graph data using reinforcement learning.
Our method is capable of generating graph-based content quickly and reliably to support and inspire designers in the game creation process.
arXiv Detail & Related papers (2024-07-15T07:11:00Z)
- Procedural Content Generation via Generative Artificial Intelligence [1.437446768735628]
Generative artificial intelligence (AI) saw a significant increase in interest in the mid-2010s.
generative AI is effective for PCG, but building high-performance AI requires vast amounts of training data.
For PCG research to advance further, issues related to limited training data must be overcome.
arXiv Detail & Related papers (2024-07-12T06:03:38Z)
- Multi-Epoch learning with Data Augmentation for Deep Click-Through Rate Prediction [53.88231294380083]
We introduce a novel Multi-Epoch learning with Data Augmentation (MEDA) framework, suitable for both non-continual and continual learning scenarios.
MEDA minimizes overfitting by reducing the dependency of the embedding layer on subsequent training data.
Our findings confirm that pre-trained layers can adapt to new embedding spaces, enhancing performance without overfitting.
arXiv Detail & Related papers (2024-06-27T04:00:15Z)
- How Can LLM Guide RL? A Value-Based Approach [68.55316627400683]
Reinforcement learning (RL) has become the de facto standard for sequential decision-making problems, improving policies from reward feedback.
Recent developments in large language models (LLMs) have showcased impressive capabilities in language understanding and generation, yet they fall short in exploration and self-improvement capabilities.
We develop an algorithm named LINVIT that incorporates LLM guidance as a regularization factor in value-based RL, leading to significant reductions in the amount of data needed for learning.
arXiv Detail & Related papers (2024-02-25T20:07:13Z)
- PCGPT: Procedural Content Generation via Transformers [1.515687944002438]
The paper presents the PCGPT framework, an innovative approach to procedural content generation (PCG) using offline reinforcement learning and transformer networks.
PCGPT utilizes an autoregressive model based on transformers to generate game levels iteratively, addressing the challenges of traditional PCG methods such as repetitive, predictable, or inconsistent content.
arXiv Detail & Related papers (2023-10-03T19:58:02Z)
- Procedural Content Generation via Knowledge Transformation (PCG-KT) [8.134009219520289]
We introduce the concept of Procedural Content Generation via Knowledge Transformation (PCG-KT).
Our work is motivated by a substantial number of recent PCG works that focus on generating novel content via repurposing derived knowledge.
arXiv Detail & Related papers (2023-05-01T03:31:22Z)
- On-Device Domain Generalization [93.79736882489982]
Domain generalization is critical to on-device machine learning applications.
We find that knowledge distillation is a strong candidate for solving the problem.
We propose a simple idea called out-of-distribution knowledge distillation (OKD), which aims to teach the student how the teacher handles (synthetic) out-of-distribution data.
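The summary above can be made concrete with a small sketch of what "teaching the student how the teacher handles OOD data" might look like as a loss. The weighting scheme, temperature, and function names below are assumptions, not the paper's exact formulation:

```python
import math

# Sketch of an OOD knowledge-distillation objective (names and
# weighting assumed): the student matches the teacher's soft
# predictions on in-distribution data and, additionally, on
# synthetic out-of-distribution inputs.

def softmax(logits, tau=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / tau) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, tau=2.0):
    """Cross-entropy from teacher soft labels to student predictions."""
    p = softmax(teacher_logits, tau)
    q = softmax(student_logits, tau)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

def okd_loss(s_id, t_id, s_ood, t_ood, lam=0.5, tau=2.0):
    """In-distribution KD plus a weighted OOD-matching term."""
    return kd_loss(s_id, t_id, tau) + lam * kd_loss(s_ood, t_ood, tau)
```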
arXiv Detail & Related papers (2022-09-15T17:59:31Z)
- Jump-Start Reinforcement Learning [68.82380421479675]
We present a meta algorithm that can use offline data, demonstrations, or a pre-existing policy to initialize an RL policy.
In particular, we propose Jump-Start Reinforcement Learning (JSRL), an algorithm that employs two policies to solve tasks.
We show via experiments that JSRL is able to significantly outperform existing imitation and reinforcement learning algorithms.
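As a sketch of the two-policy idea (the toy environment and the curriculum schedule below are hypothetical, not JSRL's experimental setup): a guide policy controls the first `h` steps of each episode, the learning policy takes over afterwards, and `h` is shrunk whenever the learner's average return clears a threshold.

```python
import random

class Corridor:
    """Toy chain environment (hypothetical): step right to reach a goal."""
    max_steps = 10
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):
        self.pos += action
        done = self.pos >= 5
        return self.pos, (1.0 if done else 0.0), done

def rollout(env, guide, learner, h):
    """One episode: the guide acts for steps < h, the learner afterwards."""
    obs, total = env.reset(), 0.0
    for t in range(env.max_steps):
        action = guide(obs) if t < h else learner(obs)
        obs, reward, done = env.step(action)
        total += reward
        if done:
            break
    return total

def jump_start(env, guide, learner, horizons, threshold, episodes=20):
    """Shrink the guide horizon whenever the learner clears the bar."""
    for h in sorted(horizons, reverse=True):
        mean = sum(rollout(env, guide, learner, h)
                   for _ in range(episodes)) / episodes
        if mean < threshold:
            return h  # stay at this horizon until the learner improves
    return 0  # learner can handle full episodes alone

guide = lambda obs: 1                         # expert: always move right
learner = lambda obs: random.choice([-1, 1])  # untrained: random walk
```

In a real instantiation the learner would also be updated from the mixed rollouts; here only the hand-off mechanism is shown.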
arXiv Detail & Related papers (2022-04-05T17:25:22Z)
- Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data [125.7135706352493]
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
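The strategy's control loop can be sketched roughly as follows (the overfitting heuristic, target, and step size are assumptions): with probability `p`, the "real" batch shown to the discriminator is actually a generated batch labelled as real, and `p` adapts to a discriminator-overfitting signal.

```python
import random

# Sketch of an APA-style control loop (constants assumed): deceive the
# discriminator with fakes-labelled-real at an adaptive probability p,
# driven by the mean sign of D's logits on genuine images.

def overfit_indicator(real_logits):
    """Close to +1 when D is confidently correct on reals (overfitting)."""
    return sum(1.0 if z > 0 else -1.0 for z in real_logits) / len(real_logits)

def update_p(p, real_logits, target=0.6, step=0.01):
    """Raise p when the indicator exceeds its target, else lower it."""
    direction = 1.0 if overfit_indicator(real_logits) > target else -1.0
    return min(1.0, max(0.0, p + direction * step))

def pseudo_augment(real_batch, fake_batch, p, rng=random):
    """With probability p, pass fakes to D in place of the real batch."""
    return fake_batch if rng.random() < p else real_batch
```

In a full training loop, `update_p` would run every few iterations and `pseudo_augment` once per discriminator step; the point here is only the deception-with-adaptive-probability mechanism.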
arXiv Detail & Related papers (2021-11-12T18:13:45Z)
- Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
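A minimal sketch of a prototypical contrastive term (symbols and temperature assumed; this is an InfoNCE-style illustration, not PCL's exact ProtoNCE loss): an embedding is scored against all cluster prototypes and the loss is the negative log-softmax toward its assigned prototype.

```python
import math

# Illustrative prototypical contrastive term: pull an embedding toward
# its assigned prototype, push it from the others, via a softmax over
# temperature-scaled dot-product similarities.

def proto_nce(z, prototypes, assignment, tau=0.1):
    """-log softmax similarity of embedding z to its assigned prototype."""
    sims = [sum(a * b for a, b in zip(z, c)) / tau for c in prototypes]
    log_denom = math.log(sum(math.exp(s) for s in sims))
    return -(sims[assignment] - log_denom)
```

Summed over a dataset whose assignments come from clustering the embeddings, a term of this shape encodes the semantic cluster structure into the learned space, which is the intuition the entry above describes.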
arXiv Detail & Related papers (2020-05-11T09:53:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.