Game State Learning via Game Scene Augmentation
- URL: http://arxiv.org/abs/2207.01289v1
- Date: Mon, 4 Jul 2022 09:40:14 GMT
- Title: Game State Learning via Game Scene Augmentation
- Authors: Chintan Trivedi, Konstantinos Makantasis, Antonios Liapis, Georgios N.
Yannakakis
- Abstract summary: We introduce a new game scene augmentation technique -- named GameCLR -- that takes advantage of the game-engine to define and synthesize specific, highly-controlled renderings of different game states.
Our results suggest that GameCLR can infer the game's state information from game footage more accurately compared to the baseline.
- Score: 2.570570340104555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Having access to accurate game state information is of utmost importance for
any game artificial intelligence task including game-playing, testing, player
modeling, and procedural content generation. Self-Supervised Learning (SSL)
techniques have been shown to be capable of inferring accurate game state
information from the high-dimensional pixel input of a game's rendering,
compressing it into latent representations. Contrastive Learning is one such popular
paradigm of SSL where the visual understanding of the game's images comes from
contrasting dissimilar and similar game states defined by simple image
augmentation methods. In this study, we introduce a new game scene augmentation
technique -- named GameCLR -- that takes advantage of the game-engine to define
and synthesize specific, highly-controlled renderings of different game states,
thereby boosting contrastive learning performance. We test our GameCLR
contrastive learning technique on images of the CARLA driving simulator
environment and compare it against the popular SimCLR baseline SSL method. Our
results suggest that GameCLR can infer the game's state information from game
footage more accurately compared to the baseline. The introduced approach
allows us to conduct game artificial intelligence research by directly
utilizing screen pixels as input.
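As a rough illustration of the contrastive objective underlying SimCLR-style methods, the sketch below implements the standard NT-Xent loss in numpy. This is a minimal, illustrative version, not the paper's implementation; per the abstract, GameCLR's contribution lies in how the positive pairs are produced (engine-synthesized renderings of the same game state rather than generic image augmentations), while the loss itself is shared with the SimCLR baseline.

```python
import numpy as np

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """NT-Xent (normalized-temperature cross-entropy) loss over a batch of
    positive pairs: z_a[i] and z_b[i] are embeddings of two views of the
    same game state; all other pairings in the batch act as negatives."""
    z = np.concatenate([z_a, z_b], axis=0)            # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # cosine similarities
    n = len(z_a)
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # index of the positive partner for each row: i <-> i + n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Intuitively, the loss is low when the two views of the same state embed close together relative to all other states in the batch, which is why tighter control over what counts as "the same state" (as GameCLR's engine-based augmentation provides) can sharpen the learned representation.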
Related papers
- Instruction-Driven Game Engine: A Poker Case Study [53.689520884467065]
The IDGE project aims to democratize game development by enabling a large language model to follow free-form game descriptions and generate game-play processes.
We train the IDGE in a curriculum manner that progressively increases its exposure to complex scenarios.
Our initial progress lies in developing an IDGE for Poker, which not only supports a wide range of poker variants but also allows for highly individualized new poker games through natural language inputs.
arXiv Detail & Related papers (2024-10-17T11:16:27Z)
- Accelerate Multi-Agent Reinforcement Learning in Zero-Sum Games with Subgame Curriculum Learning [65.36326734799587]
We present a novel subgame curriculum learning framework for zero-sum games.
It adopts an adaptive initial state distribution by resetting agents to some previously visited states.
We derive a subgame selection metric that approximates the squared distance to NE values.
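The reset scheme this summary describes could be sketched as weighted sampling over previously visited states. The helper below is hypothetical: the paper's actual selection metric (described only as approximating the squared distance to NE values) and its sampling details are not given in the summary, so the scores are treated here as an opaque input.

```python
import numpy as np

def sample_reset_states(visited_states, metric_scores, k=4, rng=None):
    """Sample k previously visited states to reset agents to, weighting each
    state by a subgame-selection score (hypothetical; e.g. an estimate of the
    squared distance of its value to the Nash-equilibrium value), so that
    less-solved subgames are revisited more often."""
    rng = rng or np.random.default_rng()
    scores = np.asarray(metric_scores, dtype=float)
    probs = scores / scores.sum()                  # normalize to a distribution
    idx = rng.choice(len(visited_states), size=k, replace=True, p=probs)
    return [visited_states[i] for i in idx]
```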
arXiv Detail & Related papers (2023-10-07T13:09:37Z)
- Towards General Game Representations: Decomposing Games Pixels into Content and Style [2.570570340104555]
Learning pixel representations of games can benefit artificial intelligence across several downstream tasks.
This paper explores how generalizable pre-trained computer vision encoders can be for such tasks.
We employ a pre-trained Vision Transformer encoder and a decomposition technique based on game genres to obtain separate content and style embeddings.
arXiv Detail & Related papers (2023-07-20T17:53:04Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Learning Task-Independent Game State Representations from Unlabeled Images [2.570570340104555]
Self-supervised learning (SSL) techniques have been widely used to learn compact and informative representations from complex data.
This paper investigates whether SSL methods can be leveraged for the task of learning accurate state representations of games.
arXiv Detail & Related papers (2022-06-13T21:37:58Z)
- Unified Contrastive Learning in Image-Text-Label Space [130.31947133453406]
Unified Contrastive Learning (UniCL) is an effective way of learning semantically rich yet discriminative representations.
UniCL on its own is a good learner on pure image-label data, rivaling supervised learning methods across three image classification datasets.
arXiv Detail & Related papers (2022-04-07T17:34:51Z)
- Contrastive Learning of Generalized Game Representations [2.323282558557423]
Representing games through their pixels offers a promising approach for building general-purpose and versatile game models.
While games are not merely images, neural network models trained on game pixels often capture differences in the visual style of images rather than in the content of the game.
In this paper we build on recent advances in contrastive learning and showcase its benefits for representation learning in games.
arXiv Detail & Related papers (2021-06-18T11:17:54Z)
- Generating Gameplay-Relevant Art Assets with Transfer Learning [0.8164433158925593]
We propose a Convolutional Variational Autoencoder (CVAE) system to modify and generate new game visuals based on gameplay relevance.
Our experimental results indicate that adopting a transfer learning approach can help to improve visual quality and stability over unseen data.
arXiv Detail & Related papers (2020-10-04T20:58:40Z)
- Acceleration of Actor-Critic Deep Reinforcement Learning for Visual Grasping in Clutter by State Representation Learning Based on Disentanglement of a Raw Input Image [4.970364068620608]
Actor-critic deep reinforcement learning (RL) methods typically perform very poorly when grasping diverse objects.
We employ state representation learning (SRL), where we encode essential information first for subsequent use in RL.
We found that preprocessing based on the disentanglement of a raw input image is the key to effectively capturing a compact representation.
arXiv Detail & Related papers (2020-02-27T03:58:51Z)
- Exploration Based Language Learning for Text-Based Games [72.30525050367216]
This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games.
Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text.
These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents.
arXiv Detail & Related papers (2020-01-24T03:03:51Z)
- Model-Based Reinforcement Learning for Atari [89.3039240303797]
We show how video prediction models can enable agents to solve Atari games with fewer interactions than model-free methods.
Our experiments evaluate SimPLe on a range of Atari games in a low-data regime of 100k interactions between the agent and the environment.
arXiv Detail & Related papers (2019-03-01T15:40:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.