Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots
- URL: http://arxiv.org/abs/2310.13724v1
- Date: Thu, 19 Oct 2023 17:29:17 GMT
- Title: Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots
- Authors: Xavier Puig, Eric Undersander, Andrew Szot, Mikael Dallaire Cote,
Tsung-Yen Yang, Ruslan Partsey, Ruta Desai, Alexander William Clegg, Michal
Hlavac, So Yeon Min, Vladimír Vondruš, Theophile Gervet, Vincent-Pierre
Berges, John M. Turner, Oleksandr Maksymets, Zsolt Kira, Mrinal Kalakrishnan,
Jitendra Malik, Devendra Singh Chaplot, Unnat Jain, Dhruv Batra, Akshara Rai,
Roozbeh Mottaghi
- Abstract summary: Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
- Score: 119.55240471433302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Habitat 3.0: a simulation platform for studying collaborative
human-robot tasks in home environments. Habitat 3.0 offers contributions across
three dimensions: (1) Accurate humanoid simulation: addressing challenges in
modeling complex deformable bodies and diversity in appearance and motion, all
while ensuring high simulation speed. (2) Human-in-the-loop infrastructure:
enabling real human interaction with simulated robots via mouse/keyboard or a
VR interface, facilitating evaluation of robot policies with human input. (3)
Collaborative tasks: studying two collaborative tasks, Social Navigation and
Social Rearrangement. Social Navigation investigates a robot's ability to
locate and follow humanoid avatars in unseen environments, whereas Social
Rearrangement addresses collaboration between a humanoid and robot while
rearranging a scene. These contributions allow us to study end-to-end learned
and heuristic baselines for human-robot collaboration in-depth, as well as
evaluate them with humans in the loop. Our experiments demonstrate that learned
robot policies lead to efficient task completion when collaborating with unseen
humanoid agents and human partners that might exhibit behaviors that the robot
has not seen before. Additionally, we observe emergent behaviors during
collaborative task execution, such as the robot yielding space when obstructing
a humanoid agent, thereby allowing the effective completion of the task by the
humanoid agent. Furthermore, our experiments using the human-in-the-loop tool
demonstrate that our automated evaluation with humanoids can provide an
indication of the relative ordering of different policies when evaluated with
real human collaborators. Habitat 3.0 unlocks interesting new features in
simulators for Embodied AI, and we hope it paves the way for a new frontier of
embodied human-AI interaction capabilities.
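For a concrete sense of how a platform like this is typically driven, the sketch below shows a generic episode loop built on the habitat-lab Env interface (habitat.get_config, Env.reset, Env.step, Env.get_metrics). It is a minimal illustration under stated assumptions: the config path is a placeholder, not the exact Habitat 3.0 social-task config, and random action sampling stands in for a learned or heuristic robot policy.

```python
# Minimal sketch of an episode loop on top of habitat-lab.
# Assumptions: the config path below is a placeholder, and random
# actions stand in for a trained robot policy.
import habitat

config = habitat.get_config(
    "benchmark/multi_agent/social_nav.yaml"  # hypothetical config name
)

with habitat.Env(config=config) as env:
    observations = env.reset()  # begin a new episode
    while not env.episode_over:
        action = env.action_space.sample()  # stand-in for a policy
        observations = env.step(action)
    print(env.get_metrics())  # per-episode task metrics
```

In the human-in-the-loop setting described in the abstract, the humanoid avatar's actions would instead come from a real person via the mouse/keyboard or VR interface, while the robot's actions come from the policy under evaluation.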
Related papers
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- Robot Interaction Behavior Generation based on Social Motion Forecasting for Human-Robot Interaction [9.806227900768926]
We propose to model social motion forecasting in a shared human-robot representation space.
Our model, ECHO, operates in this shared space to predict the future motions of the agents encountered in social scenarios.
We evaluate our model in multi-person and human-robot motion forecasting tasks and obtain state-of-the-art performance by a large margin.
arXiv Detail & Related papers (2024-02-07T11:37:14Z)
- SynH2R: Synthesizing Hand-Object Motions for Learning Human-to-Robot Handovers [37.49601724575655]
Vision-based human-to-robot handover is an important and challenging task in human-robot interaction.
We introduce a framework that can generate plausible human grasping motions suitable for training the robot.
This allows us to generate synthetic training and testing data with 100x more objects than previous work.
arXiv Detail & Related papers (2023-11-09T18:57:02Z)
- Learning Human-to-Robot Handovers from Point Clouds [63.18127198174958]
We propose the first framework to learn control policies for vision-based human-to-robot handovers.
We show significant performance gains over baselines on a simulation benchmark, in sim-to-sim transfer, and in sim-to-real transfer.
arXiv Detail & Related papers (2023-03-30T17:58:36Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- CoGrasp: 6-DoF Grasp Generation for Human-Robot Collaboration [0.0]
We propose a novel, deep neural network-based method called CoGrasp that generates human-aware robot grasps.
In real robot experiments, our method achieves a success rate of about 88% in producing stable grasps.
Our approach enables a safe, natural, and socially aware human-robot object co-grasping experience.
arXiv Detail & Related papers (2022-10-06T19:23:25Z)
- Open-VICO: An Open-Source Gazebo Toolkit for Multi-Camera-based Skeleton Tracking in Human-Robot Collaboration [0.0]
This work presents Open-VICO, an open-source toolkit to integrate virtual human models in Gazebo.
In particular, Open-VICO makes it possible to combine realistic human kinematic models, multi-camera vision setups, and human-tracking techniques in the same simulation environment.
arXiv Detail & Related papers (2022-03-28T13:21:32Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.