PCGPT: Procedural Content Generation via Transformers
- URL: http://arxiv.org/abs/2310.02405v1
- Date: Tue, 3 Oct 2023 19:58:02 GMT
- Title: PCGPT: Procedural Content Generation via Transformers
- Authors: Sajad Mohaghegh, Mohammad Amin Ramezan Dehnavi, Golnoosh
Abdollahinejad, Matin Hashemi
- Abstract summary: The paper presents the PCGPT framework, an innovative approach to procedural content generation (PCG) using offline reinforcement learning and transformer networks.
PCGPT utilizes an autoregressive model based on transformers to generate game levels iteratively, addressing the challenges of traditional PCG methods such as repetitive, predictable, or inconsistent content.
- Score: 1.515687944002438
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The paper presents the PCGPT framework, an innovative approach to procedural
content generation (PCG) using offline reinforcement learning and transformer
networks. PCGPT utilizes an autoregressive model based on transformers to
generate game levels iteratively, addressing the challenges of traditional PCG
methods such as repetitive, predictable, or inconsistent content. The framework
models trajectories of actions, states, and rewards, leveraging the
transformer's self-attention mechanism to capture temporal dependencies and
causal relationships. The approach is evaluated on the Sokoban puzzle game,
where the model predicts which items are needed and where they should be
placed. Experimental results on Sokoban demonstrate that PCGPT
generates more complex and diverse game content. Interestingly, it achieves
these results in significantly fewer steps compared to existing methods,
showcasing its potential for enhancing game design and online content
generation. Our model represents a new PCG paradigm that outperforms previous
methods.
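To make the trajectory-modeling idea concrete, the following is a minimal sketch, assuming a PyTorch setup, of how a causal transformer can iteratively emit (item, location) tokens for a Sokoban level in the spirit of PCGPT. The grid size, item vocabulary, model dimensions, and the PCGTransformer/generate names are illustrative assumptions rather than the authors' implementation, and the return conditioning of the full offline-RL formulation is omitted for brevity (see the Decision Transformer sketch after the related-papers list below).

```python
import torch
import torch.nn as nn

GRID = 10                     # assumed 10x10 Sokoban grid
ITEMS = 5                     # assumed item set: wall, floor, box, target, player
VOCAB = ITEMS * GRID * GRID   # one token encodes an (item, location) pair

class PCGTransformer(nn.Module):
    """Causal transformer over (item, location) tokens (illustrative)."""

    def __init__(self, d_model=128, n_head=4, n_layer=4, max_len=256):
        super().__init__()
        self.tok = nn.Embedding(VOCAB + 1, d_model)   # +1 for a BOS token
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_head, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layer)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, tokens):                        # tokens: (B, T) int64
        T = tokens.shape[1]
        x = self.tok(tokens) + self.pos(torch.arange(T, device=tokens.device))
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(tokens.device)
        x = self.blocks(x, mask=mask)                 # causal self-attention
        return self.head(x)                           # next-token logits

@torch.no_grad()
def generate(model, steps=50, bos=VOCAB):
    """Builds a level iteratively: each step samples one (item, location)."""
    seq = torch.full((1, 1), bos, dtype=torch.long)
    for _ in range(steps):
        logits = model(seq)[:, -1]                    # distribution over next placement
        nxt = torch.distributions.Categorical(logits=logits).sample()
        seq = torch.cat([seq, nxt[:, None]], dim=1)
    return seq[:, 1:]                                 # drop BOS

toks = generate(PCGTransformer())                     # untrained: random but well-formed
item, loc = divmod(toks[0, 0].item(), GRID * GRID)    # decode the first placement
```

In the full framework each placement step would also be conditioned on a return-to-go signal learned from offline trajectories, which is what lets the model be steered toward more complex or diverse levels.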
Related papers
- Converting Transformers to Polynomial Form for Secure Inference Over
Homomorphic Encryption [45.00129952368691]
Homomorphic Encryption (HE) has emerged as one of the most promising approaches for privacy-preserving deep learning.
We introduce the first polynomial transformer, providing the first demonstration of secure inference over HE with transformers.
Our models yield results comparable to traditional methods, bridging the performance gap with transformers of similar scale and underscoring the viability of HE for state-of-the-art applications.
arXiv Detail & Related papers (2023-11-15T00:23:58Z)
- Emergent Agentic Transformer from Chain of Hindsight Experience [96.56164427726203]
We show, for the first time, that a simple transformer-based model performs competitively with both temporal-difference and imitation-learning-based approaches.
arXiv Detail & Related papers (2023-05-26T00:43:02Z)
- Procedural Content Generation via Knowledge Transformation (PCG-KT) [8.134009219520289]
We introduce the concept of Procedural Content Generation via Knowledge Transformation (PCG-KT).
Our work is motivated by a substantial number of recent PCG works that focus on generating novel content via repurposing derived knowledge.
arXiv Detail & Related papers (2023-05-01T03:31:22Z)
- A Survey on Transformers in Reinforcement Learning [66.23773284875843]
The Transformer has been considered the dominant neural architecture in NLP and CV, mostly under supervised settings.
Recently, a similar surge in the use of Transformers has appeared in the domain of reinforcement learning (RL), where they face unique design choices and challenges brought by the nature of RL.
This paper systematically reviews motivations and progress on using Transformers in RL, provides a taxonomy of existing works, discusses each sub-field, and summarizes future prospects.
arXiv Detail & Related papers (2023-01-08T14:04:26Z)
- Transformers learn in-context by gradient descent [58.24152335931036]
Training Transformers on auto-regressive objectives is closely related to gradient-based meta-learning formulations.
We show how trained Transformers become mesa-optimizers, i.e., they learn models by gradient descent in their forward pass.
arXiv Detail & Related papers (2022-12-15T09:21:21Z)
- Decision Transformer: Reinforcement Learning via Sequence Modeling [102.86873656751489]
We present a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem.
Decision Transformer is an architecture that casts the problem of RL as conditional sequence modeling.
Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks (a minimal sketch of its tokenization appears after this list).
arXiv Detail & Related papers (2021-06-02T17:53:39Z)
- Combining Transformer Generators with Convolutional Discriminators [9.83490307808789]
The recently proposed TransGAN is the first GAN that uses only transformer-based architectures.
TransGAN requires data augmentation, an auxiliary super-resolution task during training, and a masking prior to guide the self-attention mechanism.
We evaluate our approach by conducting a benchmark of well-known CNN discriminators, ablate the size of the transformer-based generator, and show that combining both architectural elements into a hybrid model leads to better results.
arXiv Detail & Related papers (2021-05-21T07:56:59Z)
- Visformer: The Vision-friendly Transformer [105.52122194322592]
We propose a new architecture named Visformer, which is abbreviated from 'Vision-friendly Transformer'.
With the same computational complexity, Visformer outperforms both the Transformer-based and convolution-based models in terms of ImageNet classification accuracy.
arXiv Detail & Related papers (2021-04-26T13:13:03Z)
- Visual Saliency Transformer [127.33678448761599]
We develop a novel unified model based on a pure transformer, Visual Saliency Transformer (VST), for both RGB and RGB-D salient object detection (SOD).
It takes image patches as inputs and leverages the transformer to propagate global contexts among image patches.
Experimental results show that our model outperforms existing state-of-the-art results on both RGB and RGB-D SOD benchmark datasets.
arXiv Detail & Related papers (2021-04-25T08:24:06Z)
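As noted in the Decision Transformer entry above, here is a minimal sketch, again assuming PyTorch and with illustrative names and shapes, of the tokenization that conditional-sequence-modeling approaches such as PCGPT build on: trajectories are flattened into interleaved (return-to-go, state, action) tokens, and a causally masked transformer predicts each action from the tokens that precede it.

```python
import torch
import torch.nn as nn

class DecisionTransformerSketch(nn.Module):
    """Interleaves (return-to-go, state, action) tokens and predicts actions."""

    def __init__(self, state_dim, act_dim, d_model=128, n_head=4, n_layer=3):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)        # scalar return-to-go
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_act = nn.Linear(act_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_head, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layer)
        self.predict_act = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T, _ = states.shape
        toks = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states), self.embed_act(actions)],
            dim=2,
        ).reshape(B, 3 * T, -1)                       # (R1, s1, a1, R2, s2, a2, ...)
        mask = nn.Transformer.generate_square_subsequent_mask(3 * T).to(toks.device)
        h = self.blocks(toks, mask=mask)
        return self.predict_act(h[:, 1::3])           # read each action off its state token

# Usage: conditioning on a high target return is what steers generation.
model = DecisionTransformerSketch(state_dim=8, act_dim=2)
rtg = torch.full((1, 4, 1), 90.0)                     # desired return-to-go
actions = model(rtg, torch.randn(1, 4, 8), torch.zeros(1, 4, 2))  # (1, 4, 2)
```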