Using Fractal Neural Networks to Play SimCity 1 and Conway's Game of Life at Variable Scales
- URL: http://arxiv.org/abs/2002.03896v1
- Date: Wed, 29 Jan 2020 19:10:31 GMT
- Title: Using Fractal Neural Networks to Play SimCity 1 and Conway's Game of Life at Variable Scales
- Authors: Sam Earle
- Abstract summary: Gym-city is a Reinforcement Learning environment that uses SimCity 1's game engine to simulate an urban environment.
We focus on population, and analyze our agents' ability to generalize to larger map-sizes than those seen during training.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce gym-city, a Reinforcement Learning environment that uses SimCity
1's game engine to simulate an urban environment, wherein agents might seek to
optimize one or a combination of any number of city-wide metrics, on gameboards
of various sizes. We focus on population, and analyze our agents' ability to
generalize to larger map-sizes than those seen during training. The environment
is interactive, allowing a human player to build alongside agents during
training and inference, potentially influencing the course of their learning,
or manually probing and evaluating their performance. To test our agents'
ability to capture distance-agnostic relationships between elements of the
gameboard, we design a minigame within the environment which is, by design,
unsolvable at large enough scales given strictly local strategies. Given the
game engine's extensive use of Cellular Automata, we also train our agents to
"play" Conway's Game of Life -- again optimizing for population -- and examine
their behaviour at multiple scales. To make our models compatible with
variable-scale gameplay, we use Neural Networks with recursive weights and
structure -- fractals to be truncated at different depths, dependent upon the
size of the gameboard.
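
Since the game engine's heavy use of Cellular Automata makes Conway's Game of Life a natural companion task, it helps to see the dynamics and reward the agents face. Below is a minimal sketch, assuming a toroidal (wrap-around) board and using live-cell count as the population metric; the function names are illustrative, not gym-city's actual API.

```python
import numpy as np

def gol_step(board: np.ndarray) -> np.ndarray:
    """One Conway's Game of Life update on a toroidal board of 0s and 1s."""
    # Count each cell's eight neighbours via wrapped shifts of the board.
    neighbours = sum(
        np.roll(np.roll(board, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Standard rules: birth on exactly 3 neighbours; survival on 2 or 3.
    return ((neighbours == 3) | ((board == 1) & (neighbours == 2))).astype(board.dtype)

def population_reward(board: np.ndarray) -> int:
    # The metric optimized in the Game of Life experiments: the live-cell count.
    return int(board.sum())
```

An agent "playing" this environment would toggle cells between updates, trying to seed patterns whose live-cell count persists or grows as the automaton runs.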
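The variable-scale models themselves can be sketched as a weight-tied fractal expansion: one convolutional block is reused at every level of recursion, so the parameter count stays fixed while the unrolled depth, and hence the receptive field, grows with the board. The sketch below is our illustration of that idea, assuming a FractalNet-style expansion rule and a log2 depth schedule; it is not the paper's exact architecture.

```python
import math
import torch
import torch.nn as nn

class FractalBlock(nn.Module):
    """Weight-tied fractal expansion.

    f_1(x) = conv(x)
    f_d(x) = mean( conv(x), f_{d-1}(f_{d-1}(x)) )

    The same `conv` is reused at every level, so the module has a fixed
    parameter count but can be truncated (unrolled) to any depth.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor, depth: int) -> torch.Tensor:
        if depth <= 1:
            return self.conv(x)
        shallow = self.conv(x)
        deep = self.forward(self.forward(x, depth - 1), depth - 1)
        return (shallow + deep) / 2

def depth_for_board(size: int) -> int:
    # Deeper recursion for larger boards, so the receptive field can span
    # the whole map (an illustrative rule of ours, not the paper's).
    return max(1, math.ceil(math.log2(size)))

# Example: the same parameters serve a 16x16 and a 64x64 board.
net = FractalBlock(channels=8)
for size in (16, 64):
    obs = torch.randn(1, 8, size, size)
    out = net(obs, depth=depth_for_board(size))
    print(size, out.shape)
```

Because every level shares the same weights, a model trained at one truncation depth can be unrolled deeper at inference time, which is what lets the same parameters serve boards larger than any seen during training.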
Related papers
- Helpful DoggyBot: Open-World Object Fetching using Legged Robots and Vision-Language Models
We present a system for quadrupedal mobile manipulation in indoor environments.
It uses a front-mounted gripper for object manipulation and a low-level controller, trained in simulation with egocentric depth, for agile skills.
We evaluate our system in two unseen environments without any real-world data collection or training.
arXiv Detail & Related papers (2024-09-30T20:58:38Z)
- Autoverse: An Evolvable Game Language for Learning Robust Embodied Agents
We introduce Autoverse, an evolvable, domain-specific language for single-player 2D grid-based games.
We demonstrate its use as a scalable training ground for Open-Ended Learning (OEL) algorithms.
arXiv Detail & Related papers (2024-07-05T02:18:02Z)
- Scaling Laws for Imitation Learning in Single-Agent Games
We investigate whether carefully scaling up model and data size can bring similar improvements in the imitation learning setting for single-agent games.
We first demonstrate our findings on a variety of Atari games, and thereafter focus on the extremely challenging game of NetHack.
We find that IL loss and mean return scale smoothly with the compute budget and are strongly correlated, resulting in power laws for training compute-optimal IL agents.
arXiv Detail & Related papers (2023-07-18T16:43:03Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Multi-Game Decision Transformers
We show that a single transformer-based model can play a suite of up to 46 Atari games simultaneously at close-to-human performance.
We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning.
We find that our Multi-Game Decision Transformer models offer the best scalability and performance.
arXiv Detail & Related papers (2022-05-30T16:55:38Z)
- Deep Policy Networks for NPC Behaviors that Adapt to Changing Design Parameters in Roguelike Games
Turn-based strategy games such as Roguelikes present unique challenges to Deep Reinforcement Learning (DRL).
We propose two network architectures to better handle complex categorical state spaces and to mitigate the need for retraining forced by design decisions.
arXiv Detail & Related papers (2020-12-07T08:47:25Z)
- Learning to Simulate Dynamic Environments with GameGAN
In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training.
arXiv Detail & Related papers (2020-05-25T14:10:17Z)
- Learning to Move with Affordance Maps
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent.
Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry.
We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.
arXiv Detail & Related papers (2020-01-08T04:05:11Z)