Entity Embedding as Game Representation
- URL: http://arxiv.org/abs/2010.01685v1
- Date: Sun, 4 Oct 2020 21:16:45 GMT
- Title: Entity Embedding as Game Representation
- Authors: Nazanin Yousefzadeh Khameneh and Matthew Guzdial
- Abstract summary: We present an autoencoder for deriving what we call "entity embeddings", a consistent way to represent different dynamic entities across multiple games in the same representation. In this paper we introduce the learned representation, along with some evidence towards its quality and future utility.
- Score: 0.9645196221785693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Procedural content generation via machine learning (PCGML) has shown success
at producing new video game content with machine learning. However, the
majority of the work has focused on the production of static game content,
including game levels and visual elements. There has been much less work on
dynamic game content, such as game mechanics. One reason for this is the lack
of a consistent representation for dynamic game content, which is key for a
number of statistical machine learning approaches. We present an autoencoder
for deriving what we call "entity embeddings", a consistent way to represent
different dynamic entities across multiple games in the same representation. In
this paper we introduce the learned representation, along with some evidence
towards its quality and future utility.
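The abstract describes the approach only at a high level, so the following is a minimal sketch, not the authors' actual architecture: a small PyTorch autoencoder that compresses per-entity feature vectors, described in one format shared across games, into a low-dimensional "entity embedding" and reconstructs them. The feature layout, dimensions, layer sizes, and training setup here are all assumptions for illustration.

```python
# Minimal sketch (not the paper's model): an autoencoder that maps entity
# feature vectors from different games into one shared latent space.
import torch
import torch.nn as nn

FEATURE_DIM = 32   # assumed size of a flattened entity description
EMBED_DIM = 8      # assumed size of the learned entity embedding

class EntityAutoencoder(nn.Module):
    def __init__(self, feature_dim: int = FEATURE_DIM, embed_dim: int = EMBED_DIM):
        super().__init__()
        # Encoder compresses an entity's features into the embedding.
        self.encoder = nn.Sequential(
            nn.Linear(feature_dim, 16), nn.ReLU(),
            nn.Linear(16, embed_dim),
        )
        # Decoder reconstructs the original features from the embedding.
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, 16), nn.ReLU(),
            nn.Linear(16, feature_dim),
        )

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)          # the entity embedding
        return z, self.decoder(z)    # embedding and reconstruction

# Toy training loop on stand-in data: entities from multiple games share one
# feature format, so their embeddings live in the same space.
model = EntityAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
entities = torch.rand(64, FEATURE_DIM)  # placeholder for real entity data
for _ in range(100):
    optimizer.zero_grad()
    _, reconstruction = model(entities)
    loss = nn.functional.mse_loss(reconstruction, entities)
    loss.backward()
    optimizer.step()
```

Once trained, the encoder output `z` is the candidate shared representation: because entities from different games are described in a single feature format, their embeddings can be compared directly across games.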
Related papers
- Unbounded: A Generative Infinite Game of Character Life Simulation [68.37260000219479]
We introduce the concept of a generative infinite game, a video game that transcends the traditional boundaries of finite, hard-coded systems by using generative models.
We leverage recent advances in generative AI to create Unbounded: a game of character life simulation that is fully encapsulated in generative models.
arXiv Detail & Related papers (2024-10-24T17:59:31Z)
- A Text-to-Game Engine for UGC-Based Role-Playing Games [6.5715027492220734]
This paper introduces a new framework for a text-to-game engine that utilizes foundation models to convert simple textual inputs into complex, interactive RPG experiences.
The engine dynamically renders the game story in a multi-modal format and adjusts the game character, environment, and mechanics in real-time in response to player actions.
arXiv Detail & Related papers (2024-07-11T05:33:19Z)
- Towards General Game Representations: Decomposing Games Pixels into Content and Style [2.570570340104555]
Learning pixel representations of games can benefit artificial intelligence across several downstream tasks.
This paper explores how generalizable pre-trained computer vision encoders can be for such tasks.
We employ a pre-trained Vision Transformer encoder and a decomposition technique based on game genres to obtain separate content and style embeddings.
arXiv Detail & Related papers (2023-07-20T17:53:04Z)
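As a hedged illustration of the frame-encoding step in the entry above: a minimal sketch that obtains frame embeddings from a pre-trained Vision Transformer. Everything specific here, torchvision's ViT-B/16 weights, removing the classification head, and the 768-dimensional output, is an assumption for illustration; the paper's genre-based content/style decomposition is not reproduced.

```python
# Minimal sketch (assumptions throughout): embed a game frame with a
# pre-trained ViT from torchvision; the decomposition step is omitted.
import torch
from torchvision.models import vit_b_16, ViT_B_16_Weights

weights = ViT_B_16_Weights.DEFAULT
encoder = vit_b_16(weights=weights)
encoder.heads = torch.nn.Identity()  # drop the classifier; forward now returns the class-token embedding
encoder.eval()

preprocess = weights.transforms()    # resize/normalize exactly as the encoder expects
frame = torch.rand(3, 224, 224)      # stand-in for an RGB game frame
with torch.no_grad():
    embedding = encoder(preprocess(frame).unsqueeze(0))  # shape: (1, 768)
```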
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Masked World Models for Visual Control [90.13638482124567]
We introduce a visual model-based RL framework that decouples visual representation learning and dynamics learning.
We demonstrate that our approach achieves state-of-the-art performance on a variety of visual robotic tasks.
arXiv Detail & Related papers (2022-06-28T18:42:27Z)
- Learning Task-Independent Game State Representations from Unlabeled Images [2.570570340104555]
Self-supervised learning (SSL) techniques have been widely used to learn compact and informative representations from complex data.
This paper investigates whether SSL methods can be leveraged for the task of learning accurate state representations of games.
arXiv Detail & Related papers (2022-06-13T21:37:58Z)
- Multi-Game Decision Transformers [49.257185338595434]
We show that a single transformer-based model can play a suite of up to 46 Atari games simultaneously at close-to-human performance.
We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning.
We find that our Multi-Game Decision Transformer models offer the best scalability and performance.
arXiv Detail & Related papers (2022-05-30T16:55:38Z)
- Teach me to play, gamer! Imitative learning in computer games via linguistic description of complex phenomena and decision tree [55.41644538483948]
We present a new machine learning model by imitation based on the linguistic description of complex phenomena.
The method can be a good alternative to design and implement the behaviour of intelligent agents in video game development.
arXiv Detail & Related papers (2021-01-06T21:14:10Z)
- Generating Gameplay-Relevant Art Assets with Transfer Learning [0.8164433158925593]
We propose a Convolutional Variational Autoencoder (CVAE) system to modify and generate new game visuals based on gameplay relevance.
Our experimental results indicate that adopting a transfer learning approach can help to improve visual quality and stability over unseen data.
arXiv Detail & Related papers (2020-10-04T20:58:40Z)
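For the CVAE entry above, here is a minimal sketch of the convolutional variational-autoencoder mechanism it builds on, an encoder producing a mean and log-variance, the reparameterization trick, and a reconstruction-plus-KL loss. The sprite size, channel counts, and latent dimension are assumptions, and the paper's transfer-learning setup is not shown.

```python
# Minimal sketch (assumed sizes): a toy convolutional VAE for 32x32 RGB sprites.
import torch
import torch.nn as nn

class ConvVAE(nn.Module):
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(32 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(32 * 8 * 8, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),    # 8 -> 16
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 16 -> 32
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z while keeping gradients flowing.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

model = ConvVAE()
x = torch.rand(4, 3, 32, 32)                  # stand-in batch of sprites
recon, mu, logvar = model(x)
# Standard VAE objective: reconstruction error plus KL divergence to N(0, I).
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = nn.functional.mse_loss(recon, x) + kl
```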
- Learning to Simulate Dynamic Environments with GameGAN [109.25308647431952]
In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training.
arXiv Detail & Related papers (2020-05-25T14:10:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.