Generating Gameplay-Relevant Art Assets with Transfer Learning
- URL: http://arxiv.org/abs/2010.01681v1
- Date: Sun, 4 Oct 2020 20:58:40 GMT
- Title: Generating Gameplay-Relevant Art Assets with Transfer Learning
- Authors: Adrian Gonzalez, Matthew Guzdial and Felix Ramos
- Abstract summary: We propose a Convolutional Variational Autoencoder (CVAE) system to modify and generate new game visuals based on gameplay relevance.
Our experimental results indicate that adopting a transfer learning approach can help to improve visual quality and stability over unseen data.
- Score: 0.8164433158925593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In game development, designing compelling visual assets that convey
gameplay-relevant features requires time and experience. Recent image
generation methods that create high-quality content could reduce development
costs, but these approaches do not consider game mechanics. We propose a
Convolutional Variational Autoencoder (CVAE) system to modify and generate new
game visuals based on their gameplay relevance. We test this approach with
Pokémon sprites and Pokémon type information, since types are one of the
game's core mechanics and they directly impact the game's visuals. Our
experimental results indicate that adopting a transfer learning approach can
help to improve visual quality and stability over unseen data.
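The paper does not include an implementation; as a rough illustration of the idea, below is a minimal PyTorch sketch of a convolutional VAE whose encoder and decoder are conditioned on a one-hot type vector. The 64x64 sprite resolution, 18-type count, layer sizes, and latent dimension are illustrative assumptions, not details taken from the paper.
```python
# Minimal sketch of a type-conditioned convolutional VAE (illustrative only;
# all architecture details are assumptions, not taken from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_TYPES = 18    # assumed number of Pokemon types
LATENT_DIM = 64   # assumed latent size
IMG_SIZE = 64     # assumed sprite resolution (64x64 RGB)

class ConditionalCVAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder sees the image plus one conditioning plane per type.
        self.enc = nn.Sequential(
            nn.Conv2d(3 + NUM_TYPES, 32, 4, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),             # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
        )
        enc_out = 64 * 16 * 16
        self.fc_mu = nn.Linear(enc_out, LATENT_DIM)
        self.fc_logvar = nn.Linear(enc_out, LATENT_DIM)
        # Decoder gets the latent code concatenated with the type vector.
        self.fc_dec = nn.Linear(LATENT_DIM + NUM_TYPES, enc_out)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),    # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),     # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x, type_onehot):
        # Broadcast the type vector into per-pixel conditioning planes.
        planes = type_onehot[:, :, None, None].expand(-1, -1, IMG_SIZE, IMG_SIZE)
        h = self.enc(torch.cat([x, planes], dim=1))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)    # reparameterize
        h = self.fc_dec(torch.cat([z, type_onehot], dim=1)).view(-1, 64, 16, 16)
        return self.dec(h), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the unit Gaussian prior.
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld
```
In the transfer learning setting the abstract describes, one could pre-train such an encoder on a larger sprite corpus and then fine-tune on the Pokémon data; the reported result is that this improves visual quality and stability on unseen data.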
Related papers
- Level Up Your Tutorials: VLMs for Game Tutorials Quality Assessment [4.398130586098371]
Evaluating the effectiveness of tutorials usually requires multiple iterations with testers who have no prior knowledge of the game.
Recent Vision-Language Models (VLMs) have demonstrated significant capabilities in understanding and interpreting visual content.
We propose an automated game-testing solution to evaluate the quality of game tutorials.
arXiv Detail & Related papers (2024-08-15T19:46:21Z)
- Instruction-Driven Game Engines on Large Language Models [59.280666591243154]
The IDGE project aims to democratize game development by enabling a large language model to follow free-form game rules.
We train the IDGE in a curriculum manner that progressively increases the model's exposure to complex scenarios.
Our initial progress lies in developing an IDGE for Poker, a universally cherished card game.
arXiv Detail & Related papers (2024-03-30T08:02:16Z)
- Towards General Game Representations: Decomposing Games Pixels into Content and Style [2.570570340104555]
Learning pixel representations of games can benefit artificial intelligence across several downstream tasks.
This paper explores how generalizable pre-trained computer vision encoders can be for such tasks.
We employ a pre-trained Vision Transformer encoder and a decomposition technique based on game genres to obtain separate content and style embeddings.
arXiv Detail & Related papers (2023-07-20T17:53:04Z)
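As a loose illustration of the decomposition idea only: the sketch below projects a frozen pre-trained ViT embedding into separate content and style heads, with the style head supervised by genre labels. The module names, head dimensions, and genre-classification loss are assumptions; the paper's actual genre-based decomposition technique may work quite differently.
```python
# Hedged sketch: split a frozen pre-trained ViT embedding into content and
# style embeddings (head design and genre supervision are assumptions).
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

class ContentStyleHeads(nn.Module):
    def __init__(self, embed_dim=768, content_dim=128, style_dim=128, num_genres=5):
        super().__init__()
        self.backbone = vit_b_16(weights=ViT_B_16_Weights.DEFAULT)
        self.backbone.heads = nn.Identity()   # expose the pooled token embedding
        for p in self.backbone.parameters():  # keep the pre-trained encoder frozen
            p.requires_grad = False
        self.content_head = nn.Linear(embed_dim, content_dim)
        self.style_head = nn.Linear(embed_dim, style_dim)
        # Assumption: the style embedding is trained to predict the game's genre.
        self.genre_clf = nn.Linear(style_dim, num_genres)

    def forward(self, frames):                # frames: (B, 3, 224, 224)
        h = self.backbone(frames)
        content, style = self.content_head(h), self.style_head(h)
        return content, style, self.genre_clf(style)
```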
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Game State Learning via Game Scene Augmentation [2.570570340104555]
We introduce a new game scene augmentation technique -- named GameCLR -- that takes advantage of the game-engine to define and synthesize specific, highly-controlled renderings of different game states.
Our results suggest that GameCLR can infer the game's state information from game footage more accurately compared to the baseline.
arXiv Detail & Related papers (2022-07-04T09:40:14Z)
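The name suggests a SimCLR-style contrastive setup, so as a hedged sketch, an InfoNCE objective over paired engine renderings of the same game state might look like the following (the temperature value and batch conventions are assumptions, not details from the paper).
```python
# Hedged sketch of a SimCLR-style objective over paired engine renderings
# of the same game state (the actual GameCLR formulation may differ).
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.1):
    """z_a, z_b: (B, D) embeddings of two renderings of the same B states."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(z_a.size(0), device=z_a.device)
    # Matching renderings (the diagonal) are positives; all others negatives.
    return F.cross_entropy(logits, labels)
```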
- Multi-Game Decision Transformers [49.257185338595434]
We show that a single transformer-based model can play a suite of up to 46 Atari games simultaneously at close-to-human performance.
We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning.
We find that our Multi-Game Decision Transformer models offer the best scalability and performance.
arXiv Detail & Related papers (2022-05-30T16:55:38Z)
- CCPT: Automatic Gameplay Testing and Validation with Curiosity-Conditioned Proximal Trajectories [65.35714948506032]
The Curiosity-Conditioned Proximal Trajectories (CCPT) method combines curiosity and imitation learning to train agents to explore.
We show how CCPT can explore complex environments, discover gameplay issues and design oversights in the process, and recognize and highlight them directly to game designers.
arXiv Detail & Related papers (2022-02-21T09:08:33Z)
- Level generation and style enhancement -- deep learning for game development overview [0.0]
We present seven approaches to create level maps, each using statistical methods, machine learning, or deep learning.
We aim to present new possibilities for game developers and level artists.
arXiv Detail & Related papers (2021-07-15T15:24:43Z)
- Unsupervised Visual Representation Learning by Tracking Patches in Video [88.56860674483752]
We propose to use tracking as a proxy task for a computer vision system to learn the visual representations.
Modelled on the Catch game played by children, we design a Catch-the-Patch (CtP) game for a 3D-CNN model to learn visual representations.
arXiv Detail & Related papers (2021-05-06T09:46:42Z)
- Entity Embedding as Game Representation [0.9645196221785693]
We present an autoencoder for deriving what we call "entity embeddings".
In this paper we introduce the learned representation, along with some evidence towards its quality and future utility.
arXiv Detail & Related papers (2020-10-04T21:16:45Z)
- Disentangling Controllable Object through Video Prediction Improves Visual Reinforcement Learning [82.25034245150582]
In many vision-based reinforcement learning problems, the agent controls a movable object in its visual field.
We propose an end-to-end learning framework to disentangle the controllable object from the observation signal.
The disentangled representation is shown to be useful for RL as additional observation channels to the agent.
arXiv Detail & Related papers (2020-02-21T05:43:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.