Orbit: A Unified Simulation Framework for Interactive Robot Learning Environments
- URL: http://arxiv.org/abs/2301.04195v2
- Date: Fri, 16 Feb 2024 13:45:11 GMT
- Title: Orbit: A Unified Simulation Framework for Interactive Robot Learning Environments
- Authors: Mayank Mittal, Calvin Yu, Qinxi Yu, Jingzhou Liu, Nikita Rudin, David
Hoeller, Jia Lin Yuan, Ritvik Singh, Yunrong Guo, Hammad Mazhar, Ajay
Mandlekar, Buck Babich, Gavriel State, Marco Hutter, Animesh Garg
- Abstract summary: We present Orbit, a unified and modular framework for robot learning powered by NVIDIA Isaac Sim.
It offers a modular design to create robotic environments with photo-realistic scenes and high-fidelity rigid and deformable body simulation.
We aim to support various research areas, including representation learning, reinforcement learning, imitation learning, and task and motion planning.
- Score: 38.23943905182543
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present Orbit, a unified and modular framework for robot learning powered
by NVIDIA Isaac Sim. It offers a modular design to easily and efficiently
create robotic environments with photo-realistic scenes and high-fidelity rigid
and deformable body simulation. With Orbit, we provide a suite of benchmark
tasks of varying difficulty -- from single-stage cabinet opening and cloth
folding to multi-stage tasks such as room reorganization. To support working
with diverse observations and action spaces, we include fixed-arm and mobile
manipulators with different physically-based sensors and motion generators.
Orbit allows training reinforcement learning policies and collecting large
demonstration datasets from hand-crafted or expert solutions in a matter of
minutes by leveraging GPU-based parallelization. In summary, we offer an
open-sourced framework that readily comes with 16 robotic platforms, 4 sensor
modalities, 10 motion generators, more than 20 benchmark tasks, and wrappers to
4 learning libraries. With this framework, we aim to support various research
areas, including representation learning, reinforcement learning, imitation
learning, and task and motion planning. We hope it helps establish
interdisciplinary collaborations in these communities, and its modularity makes
it easily extensible for more tasks and applications in the future.
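The gym-style workflow the abstract implies -- vectorized environments stepped in batches on the GPU -- can be sketched as below. This is a minimal illustration rather than Orbit's verbatim API: the module paths, the `parse_env_cfg` signature, the task name `Isaac-Reach-Franka-v0`, and the `env.num_envs`/`env.device` attributes are assumptions based on the Orbit release and should be checked against the official documentation.

```python
# Minimal sketch of a random-agent rollout on an Orbit task (assumed API).
from omni.isaac.kit import SimulationApp

# The Omniverse Kit app must be running before any other omni.isaac import.
simulation_app = SimulationApp({"headless": True})

import gym
import torch
import omni.isaac.orbit_envs  # noqa: F401  (side effect: registers Orbit tasks with Gym)
from omni.isaac.orbit_envs.utils import parse_env_cfg

task_name = "Isaac-Reach-Franka-v0"  # assumed name of one bundled benchmark task
env_cfg = parse_env_cfg(task_name, use_gpu=True, num_envs=1024)
env = gym.make(task_name, cfg=env_cfg, headless=True)

obs = env.reset()
for _ in range(1000):
    # One batched step() advances all 1024 environment instances on the GPU.
    actions = 2.0 * torch.rand(
        (env.num_envs, env.action_space.shape[0]), device=env.device
    ) - 1.0
    obs, rewards, dones, infos = env.step(actions)

env.close()
simulation_app.close()
```

Because all instances live in one batched physics simulation, a single step() call advances thousands of environments at once, which is what makes the minute-scale policy training and demonstration collection claimed above plausible.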
Related papers
- M3Bench: Benchmarking Whole-body Motion Generation for Mobile Manipulation in 3D Scenes [66.44171200767839]
We propose M3Bench, a new benchmark of whole-body motion generation for mobile manipulation tasks.
M3Bench requires an embodied agent to understand its configuration, environmental constraints and task objectives.
M3Bench features 30k object rearrangement tasks across 119 diverse scenes, providing expert demonstrations generated by our newly developed M3BenchMaker.
arXiv Detail & Related papers (2024-10-09T08:38:21Z) - 1 Modular Parallel Manipulator for Long-Term Soft Robotic Data Collection [16.103025868841268]
We propose a modular parallel robotic manipulation platform suitable for large-scale data collection.
The platform's modules consist of a pair of off-the-shelf electrical motors that actuate a customizable finger.
We validate the platform's ability to be used for policy gradient reinforcement learning directly on hardware in a benchmark 2D manipulation task.
arXiv Detail & Related papers (2024-09-05T15:18:44Z) - HYPERmotion: Learning Hybrid Behavior Planning for Autonomous Loco-manipulation [7.01404330241523]
HYPERmotion is a framework that learns, selects and plans behaviors based on tasks in different scenarios.
We combine reinforcement learning with whole-body optimization to generate motion for 38 actuated joints.
Experiments in simulation and real-world show that learned motions can efficiently adapt to new tasks.
arXiv Detail & Related papers (2024-06-20T18:21:24Z) - FurnitureBench: Reproducible Real-World Benchmark for Long-Horizon
Complex Manipulation [16.690318684271894]
Reinforcement learning (RL), imitation learning (IL), and task and motion planning (TAMP) have demonstrated impressive performance across various robotic manipulation tasks.
We propose to focus on real-world furniture assembly, a complex, long-horizon robot manipulation task.
We present FurnitureBench, a reproducible real-world furniture assembly benchmark.
arXiv Detail & Related papers (2023-05-22T08:29:00Z) - V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated
Objects [51.79035249464852]
We present a framework for learning multi-arm manipulation of articulated objects.
Our framework includes a variational generative model that learns a contact point distribution over object rigid parts for each robot arm.
arXiv Detail & Related papers (2021-11-07T02:31:09Z) - iGibson, a Simulation Environment for Interactive Tasks in Large
Realistic Scenes [54.04456391489063]
iGibson is a novel simulation environment to develop robotic solutions for interactive tasks in large-scale realistic scenes.
Our environment contains fifteen fully interactive home-sized scenes populated with rigid and articulated objects.
iGibson's features enable the generalization of navigation agents, and the human-iGibson interface and integrated motion planners facilitate efficient imitation learning of simple human-demonstrated behaviors.
arXiv Detail & Related papers (2020-12-05T02:14:17Z) - ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for
Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z) - SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition, and demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.