Open-VICO: An Open-Source Gazebo Toolkit for Multi-Camera-based Skeleton
Tracking in Human-Robot Collaboration
- URL: http://arxiv.org/abs/2203.14733v1
- Date: Mon, 28 Mar 2022 13:21:32 GMT
- Title: Open-VICO: An Open-Source Gazebo Toolkit for Multi-Camera-based Skeleton
Tracking in Human-Robot Collaboration
- Authors: Luca Fortini (1), Mattia Leonori (1), Juan M. Gandarias (1), Arash
Ajoudani (1) ((1) Human-Robot Interfaces and Physical Interaction, Istituto
Italiano di Tecnologia)
- Abstract summary: This work presents Open-VICO, an open-source toolkit to integrate virtual human models in Gazebo.
In particular, Open-VICO makes it possible to combine realistic human kinematic models, multi-camera vision setups, and human-tracking techniques in the same simulation environment.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulation tools are essential for robotics research, especially for those
domains in which safety is crucial, such as Human-Robot Collaboration (HRC).
However, it is challenging to simulate human behaviors, and existing robotics
simulators do not integrate functional human models. This work presents
Open-VICO~\footnote{\url{https://gitlab.iit.it/hrii-public/open-vico}}, an
open-source toolkit for integrating virtual human models in Gazebo, with a focus on
vision-based human tracking. In particular, Open-VICO makes it possible to combine
realistic human kinematic models, multi-camera vision setups, and human-tracking
techniques in the same simulation environment, along with the numerous robot and
sensor models available in Gazebo. The ability to incorporate human skeleton motion
pre-recorded with Motion Capture systems broadens the scope of human behavioral
analysis within Human-Robot Interaction (HRI) settings. To describe the
functionalities and stress the potential of the toolkit, four specific examples,
chosen among relevant challenges in the literature, are developed with our
simulation utilities: i) 3D multi-RGB-D camera calibration in simulation, ii)
creation of a synthetic human skeleton tracking dataset based on OpenPose, iii) a
multi-camera scenario for human skeleton tracking in simulation, and iv) a
human-robot interaction example. The key aim of this work is to create a
straightforward pipeline that we hope will motivate research on new vision-based
algorithms and methodologies for lightweight human tracking and flexible
human-robot applications.
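The multi-camera skeleton-tracking scenario described above ultimately reduces to fusing 2D keypoints detected in several calibrated simulated cameras (e.g., OpenPose output) into 3D joint positions. The snippet below is a minimal sketch of that fusion step via linear (DLT) triangulation; the function names, projection matrices, and keypoint layout are illustrative assumptions, not part of the Open-VICO API.

```python
import numpy as np

def triangulate_joint(proj_mats, pixels):
    """Linear (DLT) triangulation of one joint from N calibrated views.

    proj_mats: list of 3x4 camera projection matrices (K @ [R|t]),
               e.g. obtained from a simulated multi-camera calibration step.
    pixels:    list of (u, v) 2D keypoint detections, one per camera,
               e.g. OpenPose output for the same joint.
    Returns the 3D point in the common world frame.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        # Each view contributes two linear constraints on the 3D point X:
        # u * (P[2] @ X) = P[0] @ X   and   v * (P[2] @ X) = P[1] @ X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Homogeneous least-squares solution: right singular vector of A
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def triangulate_skeleton(proj_mats, keypoints_per_cam):
    """keypoints_per_cam: array of shape (n_cams, n_joints, 2)."""
    n_joints = keypoints_per_cam.shape[1]
    return np.array([
        triangulate_joint(proj_mats, keypoints_per_cam[:, j, :])
        for j in range(n_joints)
    ])
```

In a simulated setup of this kind, the ground-truth joint positions of the virtual human model can be compared directly against the triangulated estimates, which is what makes synthetic datasets attractive for benchmarking skeleton trackers.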
Related papers
- Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z)
- Enhanced Human-Robot Collaboration using Constrained Probabilistic Human-Motion Prediction [5.501477817904299]
We propose a novel human motion prediction framework that incorporates human joint constraints and scene constraints.
It is tested on a human arm kinematic model and implemented on a human-robot collaborative setup with a UR5 robot arm.
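As a very rough illustration of the idea of constraining predicted human motion (not the cited paper's actual method), one can at minimum clamp sampled joint-angle trajectories from a probabilistic predictor to anatomical limits; the limits below are assumed values for a simplified arm model.

```python
import numpy as np

# Illustrative joint limits (radians) for a simplified human arm model;
# these values are assumptions, not taken from the cited paper.
JOINT_LIMITS = np.array([
    [-3.1, 1.0],   # shoulder flexion/extension
    [-1.5, 3.1],   # shoulder abduction/adduction
    [-1.5, 1.5],   # shoulder internal/external rotation
    [ 0.0, 2.6],   # elbow flexion
])

def constrain_prediction(samples):
    """Clamp sampled joint-angle trajectories to anatomical limits.

    samples: array of shape (n_samples, horizon, n_joints), e.g. drawn
             from a probabilistic motion predictor (assumed given).
    """
    lo, hi = JOINT_LIMITS[:, 0], JOINT_LIMITS[:, 1]
    return np.clip(samples, lo, hi)
```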
arXiv Detail & Related papers (2023-10-05T05:12:14Z)
- Action-conditioned Deep Visual Prediction with RoAM, a new Indoor Human Motion Dataset for Autonomous Robots [1.7778609937758327]
We introduce the Robot Autonomous Motion (RoAM) video dataset.
It is collected with a custom-made TurtleBot3 Burger robot in a variety of indoor environments, recording various human motions from the robot's ego-vision.
The dataset also includes synchronized records of the LiDAR scan and all control actions taken by the robot as it navigates around static and moving human agents.
arXiv Detail & Related papers (2023-06-28T00:58:44Z)
- ROS-PyBullet Interface: A Framework for Reliable Contact Simulation and Human-Robot Interaction [17.093672006793984]
We present the ROS-PyBullet Interface, a framework that provides a bridge between the reliable contact/impact simulator PyBullet and the Robot Operating System (ROS).
Furthermore, we provide additional utilities for facilitating Human-Robot Interaction (HRI) in the simulated environment.
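To illustrate the general idea of such a bridge (stepping a contact-rich PyBullet simulation and republishing its state over ROS), here is a minimal, hypothetical sketch; it does not reproduce the ROS-PyBullet Interface's actual API, and the node and topic names are assumptions.

```python
#!/usr/bin/env python
"""Illustrative PyBullet-to-ROS bridge sketch (not the cited framework's API)."""
import pybullet as p
import pybullet_data
import rospy
from sensor_msgs.msg import JointState

def main():
    rospy.init_node("pybullet_bridge_sketch")          # assumed node name
    pub = rospy.Publisher("/joint_states", JointState, queue_size=10)

    p.connect(p.DIRECT)                                 # headless physics
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    p.setGravity(0, 0, -9.81)
    robot = p.loadURDF("r2d2.urdf")                     # stand-in robot model
    joints = list(range(p.getNumJoints(robot)))
    names = [p.getJointInfo(robot, j)[1].decode() for j in joints]

    rate = rospy.Rate(240)                              # PyBullet's default step rate
    while not rospy.is_shutdown():
        p.stepSimulation()
        states = p.getJointStates(robot, joints)
        msg = JointState()
        msg.header.stamp = rospy.Time.now()
        msg.name = names
        msg.position = [s[0] for s in states]
        msg.velocity = [s[1] for s in states]
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```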
arXiv Detail & Related papers (2022-10-13T10:31:36Z)
- BEHAVE: Dataset and Method for Tracking Human Object Interactions [105.77368488612704]
We present the first full-body human-object interaction dataset with multi-view RGBD frames and corresponding 3D SMPL and object fits, along with the annotated contacts between them.
We use this data to learn a model that can jointly track humans and objects in natural environments with an easy-to-use portable multi-camera setup.
arXiv Detail & Related papers (2022-04-14T13:21:19Z)
- Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Planning motions that take human comfort into account is not a part of the human-robot handover process.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
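As a sketch of how a receding-horizon (model-predictive) scheme can yield smooth reaching motions, the snippet below optimizes a short acceleration sequence for a point-mass end effector toward a handover target while penalizing large accelerations; the dynamics, weights, and horizon are assumptions and do not reflect the cited framework.

```python
import numpy as np
from scipy.optimize import minimize

DT, H = 0.1, 10  # control period [s] and horizon length (assumed values)

def rollout(x0, v0, accels):
    """Integrate a point-mass end effector under a sequence of accelerations."""
    xs, x, v = [], np.asarray(x0, float), np.asarray(v0, float)
    for a in accels.reshape(H, 3):
        v = v + a * DT
        x = x + v * DT
        xs.append(x.copy())
    return np.array(xs)

def mpc_step(x0, v0, target):
    """Optimize accelerations over the horizon and return only the first one."""
    target = np.asarray(target, float)

    def cost(u):
        xs = rollout(x0, v0, u)
        tracking = np.sum((xs - target) ** 2)    # reach the handover pose
        smoothness = 0.1 * np.sum(u ** 2)        # penalize harsh accelerations
        return tracking + smoothness

    res = minimize(cost, np.zeros(H * 3), method="L-BFGS-B")
    return res.x[:3]

# Example: one receding-horizon step toward a static handover target.
a_cmd = mpc_step(x0=[0.0, 0.0, 0.0], v0=[0.0, 0.0, 0.0], target=[0.5, 0.2, 0.3])
```

At each control cycle only the first optimized command is applied and the problem is re-solved with the updated target, which is the receding-horizon pattern the summary alludes to.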
arXiv Detail & Related papers (2022-03-31T23:08:20Z)
- Few-Shot Visual Grounding for Natural Human-Robot Interaction [0.0]
We propose a software architecture that segments a target object from a crowded scene, indicated verbally by a human user.
At the core of our system, we employ a multi-modal deep neural network for visual grounding.
We evaluate the performance of the proposed model on real RGB-D data collected from public scene datasets.
arXiv Detail & Related papers (2021-03-17T15:24:02Z)
- iGibson, a Simulation Environment for Interactive Tasks in Large Realistic Scenes [54.04456391489063]
iGibson is a novel simulation environment to develop robotic solutions for interactive tasks in large-scale realistic scenes.
Our environment contains fifteen fully interactive home-sized scenes populated with rigid and articulated objects.
iGibson's features enable the generalization of navigation agents, and the human-iGibson interface and integrated motion planners facilitate efficient imitation learning of simple human-demonstrated behaviors.
arXiv Detail & Related papers (2020-12-05T02:14:17Z)
- Visual Navigation Among Humans with Optimal Control as a Supervisor [72.5188978268463]
We propose an approach that combines learning-based perception with model-based optimal control to navigate among humans.
Our approach is enabled by our novel data-generation tool, HumANav.
We demonstrate that the learned navigation policies can anticipate and react to humans without explicitly predicting future human motion.
arXiv Detail & Related papers (2020-03-20T16:13:47Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition, and demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)