Megaverse: Simulating Embodied Agents at One Million Experiences per
Second
- URL: http://arxiv.org/abs/2107.08170v2
- Date: Wed, 21 Jul 2021 03:17:43 GMT
- Title: Megaverse: Simulating Embodied Agents at One Million Experiences per
Second
- Authors: Aleksei Petrenko, Erik Wijmans, Brennan Shacklett, Vladlen Koltun
- Abstract summary: We present Megaverse, a new 3D simulation platform for reinforcement learning and embodied AI research.
Megaverse is up to 70x faster than DeepMind Lab in fully-shaded 3D scenes with interactive objects.
We use Megaverse to build a new benchmark that consists of several single-agent and multi-agent tasks.
- Score: 75.1191260838366
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Megaverse, a new 3D simulation platform for reinforcement learning
and embodied AI research. The efficient design of our engine enables
physics-based simulation with high-dimensional egocentric observations at more
than 1,000,000 actions per second on a single 8-GPU node. Megaverse is up to
70x faster than DeepMind Lab in fully-shaded 3D scenes with interactive
objects. We achieve this high simulation performance by leveraging batched
simulation, thereby taking full advantage of the massive parallelism of modern
GPUs. We use Megaverse to build a new benchmark that consists of several
single-agent and multi-agent tasks covering a variety of cognitive challenges.
We evaluate model-free RL on this benchmark to provide baselines and facilitate
future research. The source code is available at https://www.megaverse.info
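To make the batched-simulation idea concrete, below is a minimal sketch of what training against a batched simulator looks like from the training loop's perspective. The VecEnv class is a hypothetical stand-in, not Megaverse's actual API; the point is that a single step() call advances thousands of environments at once, amortizing per-step overhead and keeping the GPU saturated.

```python
# Minimal sketch of the batched-simulation pattern (hypothetical API, not
# Megaverse's). One step() call advances every environment in the batch.
import numpy as np

class VecEnv:
    """Toy stand-in for a batched simulator: N independent 1-D random walks."""

    def __init__(self, num_envs):
        self.num_envs = num_envs
        self.states = np.zeros(num_envs)

    def reset(self):
        self.states = np.zeros(self.num_envs)
        return self.states.copy()

    def step(self, actions):
        # Advance all environments in one call; a real batched simulator
        # would do the physics and rendering work in parallel on the GPU.
        self.states += actions
        rewards = -np.abs(self.states)          # reward staying near origin
        dones = np.abs(self.states) > 10.0
        self.states[dones] = 0.0                # auto-reset finished episodes
        return self.states.copy(), rewards, dones

envs = VecEnv(num_envs=2048)
obs = envs.reset()
for _ in range(100):
    actions = np.random.choice([-1.0, 1.0], size=envs.num_envs)
    obs, rewards, dones = envs.step(actions)    # 2048 transitions per call
```

The design point is that per-step dispatch overhead is paid once per batch rather than once per environment, which is what lets GPU-resident simulators reach throughputs on the order of a million experiences per second.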
Related papers
- ManiSkill3: GPU Parallelized Robotics Simulation and Rendering for Generalizable Embodied AI [27.00155119759743]
ManiSkill3 is the fastest GPU-parallelized robotics simulator with contact-rich physics that supports both state-based and visual observations, targeting generalizable manipulation.
ManiSkill3 GPU-parallelizes many parts of the pipeline, including simulation and rendering, heterogeneous simulation, and point-cloud/voxel visual input.
arXiv Detail & Related papers (2024-10-01T06:10:39Z)
- BEHAVIOR-1K: A Human-Centered, Embodied AI Benchmark with 1,000 Everyday Activities and Realistic Simulation [63.42591251500825]
We present BEHAVIOR-1K, a comprehensive simulation benchmark for human-centered robotics.
Its first component is the definition of 1,000 everyday activities, grounded in 50 scenes with more than 9,000 objects annotated with rich physical and semantic properties.
Its second component is OMNIGIBSON, a novel simulation environment that supports these activities via realistic physics simulation and rendering of rigid bodies, deformable bodies, and liquids.
arXiv Detail & Related papers (2024-03-14T09:48:36Z)
- SPOC: Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World [46.02807945490169]
We show that imitating shortest-path planners in simulation produces agents that can proficiently navigate, explore, and manipulate objects in both simulation and the real world using only RGB sensors (no depth maps or GPS coordinates).
This surprising result is enabled by our end-to-end, transformer-based SPOC architecture and powerful visual encoders paired with extensive image augmentation (a toy sketch of the planner-imitation idea follows this entry).
arXiv Detail & Related papers (2023-12-05T18:59:45Z)
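As noted above, here is a toy illustration of the planner-imitation recipe: a BFS shortest-path expert labels each state along a route with the optimal action, yielding (observation, action) pairs that a policy network would then be trained on with ordinary supervised learning. This is a schematic sketch on a grid, not SPOC's actual pipeline, architecture, or environments.

```python
# Toy shortest-path expert for imitation learning (not SPOC's pipeline):
# BFS finds the optimal action sequence, which supplies supervision labels.
from collections import deque

ACTIONS = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0)}  # right/left/down/up

def shortest_path_actions(grid, start, goal):
    """BFS from start to goal on a 0/1 occupancy grid; returns expert actions."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for action, (dr, dc) in ACTIONS.items():
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [action]))
    return []  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],   # 1 = obstacle
        [0, 0, 0]]
demo = shortest_path_actions(grid, (0, 0), (2, 0))
# Replaying `demo` yields (observation, expert_action) pairs to imitate.
```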
- Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research [76.93956925360638]
Waymax is a new data-driven simulator for autonomous driving in multi-agent scenes.
It runs entirely on hardware accelerators such as TPUs/GPUs and supports in-graph simulation for training (see the in-graph rollout sketch after this entry).
We benchmark a suite of popular imitation and reinforcement learning algorithms with ablation studies on different design decisions.
arXiv Detail & Related papers (2023-10-12T20:49:15Z)
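The "in-graph simulation" mentioned above can be sketched in the JAX style: when the environment step is a pure function, an entire rollout can be compiled with jax.jit and executed on a TPU/GPU with no host round-trips per step. The dynamics below are a made-up toy, not Waymax's actual API.

```python
# Sketch of in-graph simulation (toy dynamics, not Waymax's API): the whole
# rollout, environment steps included, compiles into one accelerator graph.
import jax
import jax.numpy as jnp

def env_step(state, action):
    # Hypothetical dynamics: integrate position with the commanded velocity.
    next_state = state + 0.1 * action
    reward = -jnp.sum(next_state ** 2, axis=-1)  # reward staying near origin
    return next_state, reward

@jax.jit
def rollout(initial_state, actions):
    # lax.scan unrolls the step function inside the compiled graph.
    final_state, rewards = jax.lax.scan(env_step, initial_state, actions)
    return final_state, rewards.sum()

states = jnp.zeros((256, 2))             # 256 parallel agents, 2-D state
actions = jnp.ones((100, 256, 2))        # 100 steps of batched actions
final_state, total_reward = rollout(states, actions)
```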
- Sonicverse: A Multisensory Simulation Platform for Embodied Household Agents that See and Hear [65.33183123368804]
Sonicverse is a multisensory simulation platform with integrated audio-visual simulation, enabling embodied AI tasks that require audio-visual perception.
An agent trained in Sonicverse can successfully perform audio-visual navigation in real-world environments.
arXiv Detail & Related papers (2023-06-01T17:24:01Z)
- Habitat 2.0: Training Home Assistants to Rearrange their Habitat [122.54624752876276]
We introduce Habitat 2.0 (H2.0), a simulation platform for training virtual robots in interactive 3D environments.
We make contributions to all levels of the embodied AI stack - data, simulation, and benchmark tasks.
arXiv Detail & Related papers (2021-06-28T05:42:15Z)
- Out of the Box: Embodied Navigation in the Real World [45.97756658635314]
We show how to transfer knowledge acquired in simulation into the real world.
We deploy our models on a LoCoBot equipped with a single Intel RealSense camera.
Our experiments indicate that the resulting model achieves satisfactory performance when deployed in the real world.
arXiv Detail & Related papers (2021-05-12T18:00:14Z)
- Large Batch Simulation for Deep Reinforcement Learning [101.01408262583378]
We accelerate deep reinforcement learning-based training in visually complex 3D environments by two orders of magnitude over prior work.
We realize end-to-end training speeds of over 19,000 frames of experience per second on a single GPU and up to 72,000 frames per second on a single eight-GPU machine.
By combining batch simulation and performance optimizations, we demonstrate that PointGoal navigation agents can be trained in complex 3D environments on a single GPU in 1.5 days to 97% of the accuracy of agents trained on a prior state-of-the-art system.
arXiv Detail & Related papers (2021-03-12T00:22:50Z)
- Multi-GPU SNN Simulation with Perfect Static Load Balancing [0.8360870648463651]
We present an SNN simulator which scales to millions of neurons, billions of synapses, and 8 GPUs.
This is made possible by (1) a novel, cache-aware spike transmission algorithm, (2) a model-parallel multi-GPU distribution scheme, and (3) a static yet very effective load balancing strategy (a toy sketch of static load balancing follows this entry).
arXiv Detail & Related papers (2021-02-09T07:07:34Z)
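As a rough illustration of the static load-balancing idea in the entry above (not the paper's actual algorithm), neurons can be assigned to GPUs up front with a greedy longest-processing-time heuristic so that each GPU carries a roughly equal synapse count:

```python
# Toy static load balancing (illustrative heuristic, not the paper's method):
# greedily place the heaviest neurons on the currently least-loaded GPU.
import heapq
import random

def balance(synapse_counts, num_gpus):
    heap = [(0, gpu) for gpu in range(num_gpus)]   # (total synapses, gpu id)
    assignment = {}
    for neuron in sorted(synapse_counts, key=synapse_counts.get, reverse=True):
        load, gpu = heapq.heappop(heap)            # least-loaded GPU so far
        assignment[neuron] = gpu
        heapq.heappush(heap, (load + synapse_counts[neuron], gpu))
    return assignment

counts = {n: random.randint(100, 10_000) for n in range(1_000)}
assignment = balance(counts, num_gpus=8)
```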