RoboTHOR: An Open Simulation-to-Real Embodied AI Platform
- URL: http://arxiv.org/abs/2004.06799v1
- Date: Tue, 14 Apr 2020 20:52:49 GMT
- Title: RoboTHOR: An Open Simulation-to-Real Embodied AI Platform
- Authors: Matt Deitke, Winson Han, Alvaro Herrasti, Aniruddha Kembhavi, Eric
Kolve, Roozbeh Mottaghi, Jordi Salvador, Dustin Schwenk, Eli VanderBilt,
Matthew Wallingford, Luca Weihs, Mark Yatskar, Ali Farhadi
- Abstract summary: We introduce RoboTHOR to democratize research in interactive and embodied visual AI.
We show a significant gap between the performance of simulation-trained models when they are tested in simulation and when they are tested in carefully constructed physical analogs.
- Score: 56.50243383294621
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual recognition ecosystems (e.g. ImageNet, Pascal, COCO) have undeniably
played a prevailing role in the evolution of modern computer vision. We argue
that interactive and embodied visual AI has reached a stage of development
similar to visual recognition prior to the advent of these ecosystems.
Recently, various synthetic environments have been introduced to facilitate
research in embodied AI. Notwithstanding this progress, the crucial question of
how well models trained in simulation generalize to reality has remained
largely unanswered. The creation of a comparable ecosystem for
simulation-to-real embodied AI presents many challenges: (1) the inherently
interactive nature of the problem, (2) the need for tight alignments between
real and simulated worlds, (3) the difficulty of replicating physical
conditions for repeatable experiments, and (4) the associated cost. In this
paper, we introduce RoboTHOR to democratize research in interactive and
embodied visual AI. RoboTHOR offers a framework of simulated environments
paired with physical counterparts to systematically explore and overcome the
challenges of simulation-to-real transfer, and a platform where researchers
across the globe can remotely test their embodied models in the physical world.
As a first benchmark, our experiments show a significant gap between the
performance of models trained in simulation when they are tested in simulation
and when they are tested in their carefully constructed physical analogs. We hope that
RoboTHOR will spur the next stage of evolution in embodied computer vision.
RoboTHOR can be accessed at the following link:
https://ai2thor.allenai.org/robothor
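
For readers who want a feel for the simulated side of the platform, the sketch below steps an agent through a RoboTHOR scene using the publicly available ai2thor Python package that the platform is built on (per the link above). The scene name and parameter values are illustrative assumptions drawn from the package's public documentation, not prescriptions from this abstract.

    # Minimal sketch: driving a LoCoBot-style agent through a RoboTHOR scene.
    # Assumes `pip install ai2thor`; scene and parameter values are illustrative.
    from ai2thor.controller import Controller

    controller = Controller(
        agentMode="locobot",         # RoboTHOR pairs simulation with a LoCoBot robot
        scene="FloorPlan_Train1_1",  # one of the RoboTHOR training apartments
        gridSize=0.25,               # metres moved per MoveAhead action
        rotateStepDegrees=30,
        width=640,
        height=480,
    )

    # Take a few navigation actions and inspect the egocentric observation.
    for action in ["MoveAhead", "RotateRight", "MoveAhead"]:
        event = controller.step(action=action)
        print(action, "succeeded:", event.metadata["lastActionSuccess"])

    rgb = event.frame  # (height, width, 3) array with the agent's current RGB view
    controller.stop()

Models developed this way in simulation can then be evaluated remotely on the physical counterpart scenes, as described on the RoboTHOR site linked above.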
Related papers
- EAGERx: Graph-Based Framework for Sim2real Robot Learning [9.145895178276822]
Sim2real, that is, the transfer of learned control policies from simulation to the real world, is an area of growing interest in robotics.
We introduce EAGERx, a framework with a unified software pipeline for both real and simulated robot learning.
arXiv Detail & Related papers (2024-07-05T08:01:19Z)
- On the Emergence of Symmetrical Reality [51.21203247240322]
We introduce the symmetrical reality framework, which offers a unified representation encompassing various forms of physical-virtual amalgamations.
We propose an instance of an AI-driven active assistance service that illustrates the potential applications of symmetrical reality.
arXiv Detail & Related papers (2024-01-26T16:09:39Z)
- DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z)
- Learning Human-to-Robot Handovers from Point Clouds [63.18127198174958]
We propose the first framework to learn control policies for vision-based human-to-robot handovers.
We show significant performance gains over baselines on a simulation benchmark, sim-to-sim transfer and sim-to-real transfer.
arXiv Detail & Related papers (2023-03-30T17:58:36Z)
- ProcTHOR: Large-Scale Embodied AI Using Procedural Generation [55.485985317538194]
ProcTHOR is a framework for procedural generation of Embodied AI environments.
We demonstrate state-of-the-art results across 6 embodied AI benchmarks for navigation, rearrangement, and arm manipulation.
arXiv Detail & Related papers (2022-06-14T17:09:35Z)
- Inferring Articulated Rigid Body Dynamics from RGBD Video [18.154013621342266]
We introduce a pipeline that combines inverse rendering with differentiable simulation to create digital twins of real-world articulated mechanisms.
Our approach accurately reconstructs the kinematic tree of an articulated mechanism being manipulated by a robot.
arXiv Detail & Related papers (2022-03-20T08:19:02Z)
- An in-depth experimental study of sensor usage and visual reasoning of robots navigating in real environments [20.105395754497202]
We study the performance and reasoning capacities of real physical agents, trained in simulation and deployed to two different physical environments.
We show that, for the PointGoal task, an agent pre-trained on a wide variety of tasks and fine-tuned on a simulated version of the target environment can reach competitive performance without modelling any sim2real transfer.
arXiv Detail & Related papers (2021-11-29T16:27:29Z)
- Adaptive Synthetic Characters for Military Training [0.9802137009065037]
Behaviors of synthetic characters in current military simulations are limited since they are generally generated by rule-based and reactive computational models.
This paper introduces a framework that aims to create autonomous synthetic characters that can perform coherent sequences of believable behavior.
arXiv Detail & Related papers (2021-01-06T18:45:48Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.