Neural Game Engine: Accurate learning of generalizable forward models from pixels
- URL: http://arxiv.org/abs/2003.10520v2
- Date: Tue, 31 Mar 2020 20:50:35 GMT
- Title: Neural Game Engine: Accurate learning of generalizable forward models from pixels
- Authors: Chris Bamford, Simon Lucas
- Abstract summary: This paper introduces the Neural Game Engine as a way to learn models directly from pixels.
Results on 10 deterministic General Video Game AI games demonstrate competitive performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Access to a fast and easily copied forward model of a game is
essential for model-based reinforcement learning and for algorithms such as
Monte Carlo tree search, and is also beneficial as a source of unlimited
experience data for model-free algorithms. Learning forward models is an
interesting and important challenge for problems where a model is not
available. Building upon previous work on the Neural GPU, this paper
introduces the Neural Game Engine as a way to learn models directly from
pixels. The learned models generalise to game levels of different sizes from
the ones they were trained on without loss of accuracy. Results on 10
deterministic General Video Game AI games demonstrate competitive performance,
with many of the game models being learned perfectly in terms of both pixel
predictions and reward predictions. The pre-trained models are exposed through
the OpenAI Gym interface and are publicly available for future research at
https://github.com/Bam4d/Neural-Game-Engine
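Since the abstract states that the pre-trained models are exposed through the OpenAI Gym interface and are meant to serve as fast, easily copied forward models for planners such as MCTS, a minimal usage sketch may be helpful. This is not taken from the repository: the environment id below is hypothetical, registering the environments may require importing the repo's own package, and the deep-copy step assumes the environment supports it; consult the linked repository for the actual names and API.

```python
# Minimal sketch (assumptions noted): "NGE-sokoban-v0" is a hypothetical
# environment id, and the repo's package may need to be installed/imported
# to register the learned GVGAI game models with Gym.
import copy
import gym

env = gym.make("NGE-sokoban-v0")  # hypothetical id for one learned game model
obs = env.reset()                 # pixel observation produced by the model

def rollout_return(root_env, first_action, horizon=10):
    """Clone the learned forward model and estimate an action's return by
    following a random policy for a few steps (assumes deepcopy works)."""
    sim = copy.deepcopy(root_env)
    obs, reward, done, info = sim.step(first_action)
    total = reward
    for _ in range(horizon):
        if done:
            break
        obs, reward, done, info = sim.step(sim.action_space.sample())
        total += reward
    return total

# Greedy one-step planning over a discrete action space (GVGAI actions are
# discrete); a full MCTS would expand these rollouts into a search tree.
best_action = max(range(env.action_space.n),
                  key=lambda a: rollout_return(env, a))
obs, reward, done, info = env.step(best_action)
```

The point of the cloning step is the abstract's main use case: rollouts run against the cheap learned model rather than the original game engine.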
Related papers
- Predicting Long-horizon Futures by Conditioning on Geometry and Time [49.86180975196375]
We explore the task of generating future sensor observations conditioned on the past.
We leverage the large-scale pretraining of image diffusion models which can handle multi-modality.
We create a benchmark for video prediction on a diverse set of videos spanning indoor and outdoor scenes.
arXiv Detail & Related papers (2024-04-17T16:56:31Z)
- Instruction-Driven Game Engines on Large Language Models [59.280666591243154]
The IDGE project aims to democratize game development by enabling a large language model to follow free-form game rules.
We train the IDGE in a curriculum manner that progressively increases the model's exposure to complex scenarios.
Our initial progress lies in developing an IDGE for Poker, a universally cherished card game.
arXiv Detail & Related papers (2024-03-30T08:02:16Z)
- Initializing Models with Larger Ones [76.41561758293055]
We introduce weight selection, a method for initializing smaller models by selecting a subset of weights from a pretrained larger model.
Our experiments demonstrate that weight selection can significantly enhance the performance of small models and reduce their training time.
arXiv Detail & Related papers (2023-11-30T18:58:26Z)
- Wrapper Boxes: Faithful Attribution of Model Predictions to Training Data [40.7542543934205]
We propose a "wrapper box'' pipeline: training a neural model as usual and then using its learned feature representation in classic, interpretable models to perform prediction.
Across seven language models of varying sizes, we first show that the predictive performance of wrapper classic models is largely comparable to the original neural models.
Our pipeline thus preserves the predictive performance of neural language models while faithfully attributing classic model decisions to training data.
arXiv Detail & Related papers (2023-11-15T01:50:53Z)
- On the Steganographic Capacity of Selected Learning Models [1.0640226829362012]
We consider the question of the steganographic capacity of learning models.
For a wide range of models, we determine the number of low-order bits that can be overwritten.
Of the models tested, the steganographic capacity ranges from 7.04 KB for our LR experiments, to 44.74 MB for InceptionV3.
arXiv Detail & Related papers (2023-08-29T10:41:34Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization? [50.84738303888189]
We present a large-scale evaluation of modeling choices and their impact on zero-shot generalization.
We train models with over 5 billion parameters for more than 170 billion tokens.
We find that pretrained causal decoder models can be efficiently adapted into non-causal decoder models.
arXiv Detail & Related papers (2022-04-12T14:19:49Z)
- Towards Action Model Learning for Player Modeling [1.9659095632676098]
Player modeling attempts to create a computational model which accurately approximates a player's behavior in a game.
Most player modeling techniques rely on domain knowledge and are not transferable across games.
We present our findings with using action model learning (AML) to learn a player model in a domain-agnostic manner.
arXiv Detail & Related papers (2021-03-09T19:32:30Z)
- Model-Based Reinforcement Learning for Atari [89.3039240303797]
We show how video prediction models can enable agents to solve Atari games with fewer interactions than model-free methods.
Our experiments evaluate SimPLe on a range of Atari games in a low-data regime of 100k interactions between the agent and the environment.
arXiv Detail & Related papers (2019-03-01T15:40:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.