Learning to Simulate Dynamic Environments with GameGAN
- URL: http://arxiv.org/abs/2005.12126v1
- Date: Mon, 25 May 2020 14:10:17 GMT
- Title: Learning to Simulate Dynamic Environments with GameGAN
- Authors: Seung Wook Kim, Yuhao Zhou, Jonah Philion, Antonio Torralba, Sanja
Fidler
- Abstract summary: In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training.
- Score: 109.25308647431952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulation is a crucial component of any robotic system. In order to simulate
correctly, we need to write complex rules of the environment: how dynamic
agents behave, and how the actions of each of the agents affect the behavior of
others. In this paper, we aim to learn a simulator by simply watching an agent
interact with an environment. We focus on graphics games as a proxy of the real
environment. We introduce GameGAN, a generative model that learns to visually
imitate a desired game by ingesting screenplay and keyboard actions during
training. Given a key pressed by the agent, GameGAN "renders" the next screen
using a carefully designed generative adversarial network. Our approach offers
key advantages over existing work: we design a memory module that builds an
internal map of the environment, allowing for the agent to return to previously
visited locations with high visual consistency. In addition, GameGAN is able to
disentangle static and dynamic components within an image making the behavior
of the model more interpretable, and relevant for downstream tasks that require
explicit reasoning over dynamic elements. This enables many interesting
applications such as swapping different components of the game to build new
games that do not exist.
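The abstract describes an action-conditioned architecture: given the key pressed by the agent, a dynamics engine updates a hidden state, an external memory module is read and written for long-term spatial consistency, and a rendering engine decodes the state into the next screen, optionally split into static and dynamic components. The sketch below is a minimal, hypothetical illustration of that interface in PyTorch, not the authors' implementation; all module names, dimensions, and the single-slot memory are illustrative assumptions (the paper uses a 2D spatial memory and an adversarial training objective, both omitted here).
```python
# Hypothetical sketch of GameGAN's generation loop (not the authors' code).
import torch
import torch.nn as nn

class DynamicsEngine(nn.Module):
    """Updates the hidden state from the pressed key and the last frame's features."""
    def __init__(self, action_dim=10, img_feat_dim=256, hidden_dim=512):
        super().__init__()
        self.encode = nn.Linear(action_dim + img_feat_dim, hidden_dim)
        self.rnn = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, h, action, img_feat):
        x = torch.relu(self.encode(torch.cat([action, img_feat], dim=-1)))
        return self.rnn(x, h)  # next hidden state

class MemoryModule(nn.Module):
    """Toy external memory: a gated write into a single slot, read back as-is
    (the paper's memory is a 2D map indexed by the agent's movement)."""
    def __init__(self, hidden_dim=512, mem_dim=512):
        super().__init__()
        self.write = nn.Linear(hidden_dim, mem_dim)
        self.gate = nn.Linear(hidden_dim, 1)

    def forward(self, memory, h):
        g = torch.sigmoid(self.gate(h))
        memory = g * torch.tanh(self.write(h)) + (1 - g) * memory
        return memory, memory  # (updated memory, read vector)

class RenderingEngine(nn.Module):
    """Decodes hidden state + memory read into an image; the paper optionally
    renders static and dynamic components separately before compositing."""
    def __init__(self, hidden_dim=512, mem_dim=512):
        super().__init__()
        self.fc = nn.Linear(hidden_dim + mem_dim, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, h, read):
        x = self.fc(torch.cat([h, read], dim=-1)).view(-1, 64, 8, 8)
        return self.deconv(x)  # (B, 3, 32, 32) next frame

# Example rollout: one-hot key presses drive frame-by-frame generation.
dyn, mem, ren = DynamicsEngine(), MemoryModule(), RenderingEngine()
B = 1
h, memory = torch.zeros(B, 512), torch.zeros(B, 512)
img_feat = torch.zeros(B, 256)  # features of the previous frame (stub)
for t in range(5):
    action = torch.zeros(B, 10)
    action[:, t % 10] = 1.0
    h = dyn(h, action, img_feat)
    memory, read = mem(memory, h)
    frame = ren(h, read)  # "rendered" next screen, scored by a discriminator in training
```
In the paper this loop is trained adversarially against real gameplay sequences; the sketch only shows the forward interface of the three components named in the abstract.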
Related papers
- Closed Loop Interactive Embodied Reasoning for Robot Manipulation [17.732550906162192]
Embodied reasoning systems integrate robotic hardware and cognitive processes to perform complex tasks.
We introduce a new simulation environment that combines the MuJoCo physics engine with high-quality Blender rendering.
We propose a new benchmark composed of 10 classes of multi-step reasoning scenarios that require simultaneous visual and physical measurements.
arXiv Detail & Related papers (2024-04-23T16:33:28Z)
- Video2Game: Real-time, Interactive, Realistic and Browser-Compatible Environment from a Single Video [23.484070818399]
Video2Game is a novel approach that automatically converts videos of real-world scenes into realistic and interactive game environments.
We show that we can not only produce highly realistic renderings in real time, but also build interactive games on top of them.
arXiv Detail & Related papers (2024-04-15T14:32:32Z)
- Scaling Instructable Agents Across Many Simulated Worlds [70.97268311053328]
Our goal is to develop an agent that can accomplish anything a human can do in any simulated 3D environment.
Our approach focuses on language-driven generality while imposing minimal assumptions.
Our agents interact with environments in real-time using a generic, human-like interface.
arXiv Detail & Related papers (2024-03-13T17:50:32Z)
- QuestEnvSim: Environment-Aware Simulated Motion Tracking from Sparse Sensors [69.75711933065378]
We show that headset and controller poses alone can be used to generate realistic full-body poses, even in highly constrained environments.
We discuss three features crucial to the method's performance: the environment representation, the contact reward, and scene randomization.
arXiv Detail & Related papers (2023-06-09T04:40:38Z)
- Evaluating Continual Learning Algorithms by Generating 3D Virtual Environments [66.83839051693695]
Continual learning refers to the ability of humans and animals to incrementally learn over time in a given environment.
We propose to leverage recent advances in 3D virtual environments to automatically generate potentially life-long dynamic scenes with photo-realistic appearance.
A novel element of this paper is that scenes are described in a parametric way, thus allowing the user to fully control the visual complexity of the input stream the agent perceives.
arXiv Detail & Related papers (2021-09-16T10:37:21Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- iGibson, a Simulation Environment for Interactive Tasks in Large Realistic Scenes [54.04456391489063]
iGibson is a novel simulation environment to develop robotic solutions for interactive tasks in large-scale realistic scenes.
Our environment contains fifteen fully interactive home-sized scenes populated with rigid and articulated objects.
We show that iGibson's features enable the generalization of navigation agents, and that the human-iGibson interface and integrated motion planners facilitate efficient imitation learning of simple human-demonstrated behaviors.
arXiv Detail & Related papers (2020-12-05T02:14:17Z)
- Using Fractal Neural Networks to Play SimCity 1 and Conway's Game of Life at Variable Scales [0.0]
Gym-city is a Reinforcement Learning environment that uses SimCity 1's game engine to simulate an urban environment.
We focus on population, and analyze our agents' ability to generalize to larger map-sizes than those seen during training.
arXiv Detail & Related papers (2020-01-29T19:10:31Z)