Procedural Content Generation via Knowledge Transformation (PCG-KT)
- URL: http://arxiv.org/abs/2305.00644v1
- Date: Mon, 1 May 2023 03:31:22 GMT
- Title: Procedural Content Generation via Knowledge Transformation (PCG-KT)
- Authors: Anurag Sarkar, Matthew Guzdial, Sam Snodgrass, Adam Summerville, Tiago
Machado and Gillian Smith
- Abstract summary: We introduce the concept of Procedural Content Generation via Knowledge Transformation (PCG-KT).
Our work is motivated by a substantial number of recent PCG works that focus on generating novel content via repurposing derived knowledge.
- Score: 8.134009219520289
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce the concept of Procedural Content Generation via Knowledge
Transformation (PCG-KT), a new lens and framework for characterizing PCG
methods and approaches in which content generation is enabled by the process of
knowledge transformation -- transforming knowledge derived from one domain in
order to apply it in another. Our work is motivated by a substantial number of
recent PCG works that focus on generating novel content via repurposing derived
knowledge. Such works have involved, for example, performing transfer learning
on models trained on one game's content to adapt to another game's content, as
well as recombining different generative distributions to blend the content of
two or more games. Such approaches arose in part due to limitations in PCG via
Machine Learning (PCGML) such as producing generative models for games lacking
training data and generating content for entirely new games. In this paper, we
categorize such approaches under this new lens of PCG-KT by offering a
definition and framework for describing such methods and surveying existing
works using this framework. Finally, we conclude by highlighting open problems
and directions for future research in this area.
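One recurring PCG-KT operation, recombining generative distributions learned from different games, can be sketched in a few lines of Python. The tile characters, "games", and blend weight below are purely illustrative and not drawn from any cited system:

```python
import random
from collections import Counter

def tile_distribution(level_rows):
    """Estimate a tile-frequency distribution from example level rows."""
    counts = Counter(tile for row in level_rows for tile in row)
    total = sum(counts.values())
    return {tile: n / total for tile, n in counts.items()}

def blend(dist_a, dist_b, weight=0.5):
    """Recombine two learned distributions into one weighted mixture."""
    tiles = set(dist_a) | set(dist_b)
    return {t: weight * dist_a.get(t, 0.0) + (1 - weight) * dist_b.get(t, 0.0)
            for t in tiles}

def sample_row(dist, length, rng=random.Random(0)):
    """Sample a level row from a (possibly blended) tile distribution."""
    tiles = list(dist)
    weights = [dist[t] for t in tiles]
    return "".join(rng.choices(tiles, weights=weights, k=length))

# Illustrative "training data" from two hypothetical games.
game_a = ["----", "XX--", "X--X"]  # X = ground, - = empty
game_b = ["??--", "-??-", "----"]  # ? = question block
blended = blend(tile_distribution(game_a), tile_distribution(game_b), 0.5)
row = sample_row(blended, 10)  # a row mixing both games' tile vocabularies
```

The blended distribution covers the union of both games' tile sets, so sampled content can contain tiles that never co-occur in either source game.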
Related papers
- Procedural Content Generation in Games: A Survey with Insights on Emerging LLM Integration [1.03590082373586]
Procedural Content Generation (PCG) is defined as the automatic creation of game content using algorithms.
It can increase player engagement and ease the work of game designers.
Recent advances in deep learning approaches in PCG have enabled researchers and practitioners to create more sophisticated content.
The arrival of Large Language Models (LLMs), however, has truly disrupted the trajectory of PCG advancement.
arXiv Detail & Related papers (2024-10-21T05:10:13Z)
- MergeNet: Knowledge Migration across Heterogeneous Models, Tasks, and Modalities [72.68829963458408]
We present MergeNet, which learns to bridge the gap of parameter spaces of heterogeneous models.
The core mechanism of MergeNet lies in the parameter adapter, which operates by querying the source model's low-rank parameters.
MergeNet is learned alongside both models, allowing our framework to dynamically transfer and adapt knowledge relevant to the current stage.
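The low-rank parameter idea that such an adapter queries can be illustrated without any ML framework: a full weight matrix is stored as the product of two thin factors. This is a toy sketch of the representation only; MergeNet's actual adapter is learned, not hand-written:

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Rank-1 factors U (4x1) and V (1x3) stand in for a source model's
# low-rank parameters; their product reconstructs a full 4x3 matrix
# while storing only 4 + 3 numbers instead of 12.
U = [[1.0], [2.0], [0.5], [-1.0]]
V = [[3.0, 0.0, 1.0]]
W = matmul(U, V)
```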
arXiv Detail & Related papers (2024-04-20T08:34:39Z)
- A Unified and General Framework for Continual Learning [58.72671755989431]
Continual Learning (CL) focuses on learning from dynamic and changing data distributions while retaining previously acquired knowledge.
Various methods have been developed to address the challenge of catastrophic forgetting, including regularization-based, Bayesian-based, and memory-replay-based techniques.
This research aims to bridge the gap between these methods by introducing a comprehensive, overarching framework that encompasses and reconciles the existing methodologies.
arXiv Detail & Related papers (2024-03-20T02:21:44Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-in-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
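The first step, rendering a compact structural triplet as context-rich text, can be mimicked with a plain template. The paper instructs an LLM for this; the relation names and templates below are invented purely for illustration:

```python
def contextualize(head, relation, tail):
    """Render a structural (head, relation, tail) triplet as a sentence.
    A hand-written stand-in for the LLM prompting step."""
    templates = {
        "capital_of": "{h} is the capital city of {t}.",
        "located_in": "{h} is located in {t}.",
    }
    template = templates.get(relation, "{h} has relation '" + relation + "' to {t}.")
    return template.format(h=head, t=tail)

text = contextualize("Paris", "capital_of", "France")
```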
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- PCGPT: Procedural Content Generation via Transformers [1.515687944002438]
The paper presents the PCGPT framework, an innovative approach to procedural content generation (PCG) using offline reinforcement learning and transformer networks.
PCGPT utilizes an autoregressive model based on transformers to generate game levels iteratively, addressing the challenges of traditional PCG methods such as repetitive, predictable, or inconsistent content.
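The iterative, autoregressive loop is the key idea. A minimal stand-in replaces the transformer with tile-bigram counts; everything below is a toy sketch, not PCGPT's model:

```python
import random
from collections import defaultdict

def learn_bigrams(rows):
    """Count tile-to-tile transitions within each training row."""
    counts = defaultdict(lambda: defaultdict(int))
    for row in rows:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, start, length, rng=random.Random(1)):
    """Generate a row one tile at a time, each step conditioning on the
    previous tile -- the same autoregressive loop PCGPT runs with a
    transformer conditioning on the whole history."""
    out = [start]
    while len(out) < length:
        nxt = counts[out[-1]]
        if not nxt:
            break  # no observed successor for this tile
        tiles, weights = zip(*nxt.items())
        out.append(rng.choices(tiles, weights=weights, k=1)[0])
    return "".join(out)

rows = ["XX--XX--", "X--XX--X"]  # illustrative training rows
level_row = generate(learn_bigrams(rows), "X", 8)
```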
arXiv Detail & Related papers (2023-10-03T19:58:02Z)
- Tree-Based Reconstructive Partitioning: A Novel Low-Data Level Generation Approach [5.626364462708322]
Procedural Content Generation (PCG) and PCG via Machine Learning (PCGML) have appeared in published games, but PCGML typically requires substantial existing content to train on.
Tree-based Reconstructive Partitioning (TRP) is a novel PCGML approach aimed at addressing this low-data problem.
arXiv Detail & Related papers (2023-09-18T20:39:14Z)
- TegTok: Augmenting Text Generation via Task-specific and Open-world Knowledge [83.55215993730326]
We propose augmenting TExt Generation via Task-specific and Open-world Knowledge (TegTok) in a unified framework.
Our model selects knowledge entries from two types of knowledge sources through dense retrieval and then injects them into the input encoding and output decoding stages respectively.
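The dense-retrieval stage can be sketched with cosine similarity over toy vectors; the hand-written 3-dimensional vectors below stand in for real learned embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def retrieve(query_vec, knowledge, top_k=1):
    """Rank knowledge entries by dense-vector similarity to the query,
    as in the retrieval stage described above (toy vectors only)."""
    ranked = sorted(knowledge, key=lambda e: cosine(query_vec, e["vec"]),
                    reverse=True)
    return ranked[:top_k]

knowledge = [
    {"text": "Mario is a platformer.", "vec": [1.0, 0.0, 0.2]},
    {"text": "Chess is a board game.", "vec": [0.0, 1.0, 0.1]},
]
best = retrieve([0.9, 0.1, 0.2], knowledge)[0]["text"]
```

The selected entries would then be injected into encoding and decoding, which requires a trained generation model and is not shown here.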
arXiv Detail & Related papers (2022-03-16T10:37:59Z)
- Exploring Level Blending across Platformers via Paths and Affordances [5.019592823495709]
We introduce a new PCGML approach for producing novel game content spanning multiple domains.
We use a new affordance and path vocabulary to encode data from six different platformer games and train variational autoencoders on this data.
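Encoding different games' tile sets into one shared affordance vocabulary might look like the sketch below. The tile characters and affordance names are illustrative, not the paper's actual vocabulary, and the downstream variational autoencoder is omitted:

```python
# Game-specific tile characters mapped into a shared affordance vocabulary,
# so levels from different platformers become comparable training data.
MARIO_TILES = {"X": {"solid"}, "?": {"solid", "reward"}, "-": set(), "E": {"hazard"}}
METROID_TILES = {"#": {"solid"}, "^": {"hazard"}, ".": set(), "+": {"reward"}}

def encode(level_rows, tile_map):
    """Replace each game-specific tile with its affordance set."""
    return [[tile_map[t] for t in row] for row in level_rows]

mario = encode(["X?-E"], MARIO_TILES)
metroid = encode(["#^.+"], METROID_TILES)
# Both games now share one representation: e.g. their solid tiles
# encode identically, so a single model can train on both.
```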
arXiv Detail & Related papers (2020-08-22T16:43:25Z)
- Capturing Local and Global Patterns in Procedural Content Generation via Machine Learning [9.697217570243845]
Recent procedural content generation via machine learning (PCGML) methods can learn to produce new content similar to existing content.
It is an open question how well these approaches can capture large-scale visual patterns such as symmetry.
In this paper, we propose match-three games as a domain for testing how well PCGML algorithms can generate suitable patterns.
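A check for one such large-scale pattern, left-right mirror symmetry of a board, is straightforward to state (board contents below are illustrative):

```python
def is_mirror_symmetric(grid):
    """Check left-right mirror symmetry: every row must equal its reverse.
    One example of a large-scale visual pattern a generated board can exhibit."""
    return all(row == row[::-1] for row in grid)

symmetric = ["ABA", "CDC", "ABA"]   # mirrors around the center column
asymmetric = ["ABC", "CDC", "ABA"]  # top row breaks the symmetry
```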
arXiv Detail & Related papers (2020-05-26T08:58:37Z)
- FLAT: Few-Shot Learning via Autoencoding Transformation Regularizers [67.46036826589467]
We present a novel regularization mechanism by learning the change of feature representations induced by a distribution of transformations without using the labels of data examples.
It could minimize the risk of overfitting to base categories by inspecting the transformation-augmented variations at the encoded feature level.
Experimental results show performance superior to current state-of-the-art methods in the literature.
arXiv Detail & Related papers (2019-12-29T15:26:28Z)
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer [64.22926988297685]
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format.
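The text-to-text casting is simple to illustrate: every task instance becomes a prefixed input string and a target string, so translation and classification share one interface. The helper below is a sketch of the format, not the paper's code:

```python
def to_text_to_text(task, text, target):
    """Cast a task instance into the text-to-text format: a task prefix is
    prepended to the input, and the label becomes the output string."""
    return {"input": f"{task}: {text}", "target": target}

example = to_text_to_text("translate English to German", "Hello", "Hallo")
# Classification uses the same interface, with the class name as target text:
sentiment = to_text_to_text("sentiment", "great movie!", "positive")
```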
arXiv Detail & Related papers (2019-10-23T17:37:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.