Human Grasp Classification for Reactive Human-to-Robot Handovers
- URL: http://arxiv.org/abs/2003.06000v1
- Date: Thu, 12 Mar 2020 19:58:03 GMT
- Title: Human Grasp Classification for Reactive Human-to-Robot Handovers
- Authors: Wei Yang, Chris Paxton, Maya Cakmak, Dieter Fox
- Abstract summary: We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
- Score: 50.91803283297065
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transfer of objects between humans and robots is a critical capability for
collaborative robots. Although there has been a recent surge of interest in
human-robot handovers, most prior research focuses on robot-to-human handovers.
Further, work on the equally critical human-to-robot handovers often assumes
humans can place the object in the robot's gripper. In this paper, we propose
an approach for human-to-robot handovers in which the robot meets the human
halfway, by classifying the human's grasp of the object and quickly planning a
trajectory to take the object from the human's hand according to
their intent. To do this, we collect a human grasp dataset which covers typical
ways of holding objects with various hand shapes and poses, and learn a deep
model on this dataset to classify the hand grasps into one of these categories.
We present a planning and execution approach that takes the object from the
human hand according to the detected grasp and hand position, and replans as
necessary when the handover is interrupted. Through a systematic evaluation, we
demonstrate that our system results in more fluent handovers versus two
baselines. We also present findings from a user study (N = 9) demonstrating the
effectiveness and usability of our approach with naive users in different
scenarios. More results and videos can be found at http://wyang.me/handovers.
Related papers
- ContactHandover: Contact-Guided Robot-to-Human Object Handover [23.093164853009547]
We propose a robot-to-human handover system that consists of two phases: a contact-guided grasping phase and an object delivery phase.
During the grasping phase, ContactHandover predicts both 6-DoF robot grasp poses and a 3D affordance map of human contact points on the object.
During the delivery phase, the robot end effector pose is computed by maximizing human contact points close to the human while minimizing the human arm joint torques and displacements.
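A minimal sketch of how such a delivery-phase objective might be scored, assuming candidate end-effector poses have already been enumerated; the weights and inputs below are illustrative, not ContactHandover's actual formulation.

```python
import numpy as np

def score_delivery_pose(contact_pts, human_pos, joint_torques, joint_disp,
                        w_c=1.0, w_t=0.1, w_d=0.1):
    """Higher is better. contact_pts: (N, 3) predicted human contact points on
    the object under a candidate end-effector pose; human_pos: (3,)."""
    # Reward contact points that end up close to the human...
    proximity = -w_c * np.linalg.norm(contact_pts - human_pos, axis=1).mean()
    # ...while penalizing arm joint torques and displacements.
    effort = -w_t * np.linalg.norm(joint_torques) - w_d * np.linalg.norm(joint_disp)
    return proximity + effort

# best_pose = max(candidates, key=lambda p: score_delivery_pose(*features(p)))
# (candidates/features are hypothetical helpers for this sketch)
```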
arXiv Detail & Related papers (2024-04-01T18:12:09Z)
- InteRACT: Transformer Models for Human Intent Prediction Conditioned on Robot Actions [7.574421886354134]
InteRACT architecture pre-trains a conditional intent prediction model on large human-human datasets and fine-tunes on a small human-robot dataset.
We evaluate on a set of real-world collaborative human-robot manipulation tasks and show that our conditional model improves over various marginal baselines.
arXiv Detail & Related papers (2023-11-21T19:15:17Z)
- ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space [9.806227900768926]
This paper introduces a novel deep-learning approach for human-to-robot motion retargeting.
Our method does not require paired human-to-robot data, which facilitates its translation to new robots.
Our model outperforms existing work on human-to-robot motion similarity in terms of both efficiency and precision.
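One plausible reading of this architecture, sketched purely under assumptions: human and robot poses are encoded into a shared latent space, so a human pose can be decoded into a robot pose without paired supervision. All dimensions and layers below are invented for illustration.

```python
import torch
import torch.nn as nn

class SharedLatentRetargeter(nn.Module):
    """Encoders map human/robot poses into one latent space; decoding a
    human pose with the robot decoder performs the retargeting."""
    def __init__(self, human_dim=63, robot_dim=21, latent=32):
        super().__init__()
        self.enc_human = nn.Sequential(nn.Linear(human_dim, 128), nn.ReLU(),
                                       nn.Linear(128, latent))
        self.enc_robot = nn.Sequential(nn.Linear(robot_dim, 128), nn.ReLU(),
                                       nn.Linear(128, latent))
        self.dec_robot = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                       nn.Linear(128, robot_dim))

    def retarget(self, human_pose: torch.Tensor) -> torch.Tensor:
        # Human pose in, robot joint configuration out.
        return self.dec_robot(self.enc_human(human_pose))

robot_joints = SharedLatentRetargeter().retarget(torch.randn(1, 63))
```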
arXiv Detail & Related papers (2023-09-11T08:55:04Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
In most prior work, planning motions that take human comfort into account is not part of the human-robot handover process.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
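A toy receding-horizon sketch of the model-predictive idea, under strong simplifying assumptions (a kinematic point model of the end effector, quadratic tracking and smoothness costs); it is not the paper's controller.

```python
import numpy as np
from scipy.optimize import minimize

DT, H = 0.1, 10  # control period [s] and horizon length (illustrative)

def mpc_step(ee_pos, hand_pos, prev_vel, w_track=1.0, w_smooth=0.5):
    """Optimize H end-effector velocities toward the (moving) human hand,
    penalizing acceleration; execute only the first command, then re-solve."""
    def cost(u_flat):
        u = u_flat.reshape(H, 3)                  # candidate velocity sequence
        pos, vel, c = ee_pos.copy(), prev_vel, 0.0
        for k in range(H):
            c += w_smooth * np.sum((u[k] - vel) ** 2)    # smoothness penalty
            vel = u[k]
            pos = pos + vel * DT                         # kinematic rollout
            c += w_track * np.sum((pos - hand_pos) ** 2) # track the hand
        return c
    u0 = np.tile(prev_vel, (H, 1)).ravel()
    sol = minimize(cost, u0, method="L-BFGS-B")
    return sol.x.reshape(H, 3)[0]

v_cmd = mpc_step(np.zeros(3), np.array([0.5, 0.2, 0.3]), np.zeros(3))
```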
arXiv Detail & Related papers (2022-03-31T23:08:20Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that convey information about the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Reactive Human-to-Robot Handovers of Arbitrary Objects [57.845894608577495]
We present a vision-based system that enables human-to-robot handovers of unknown objects.
Our approach combines closed-loop motion planning with real-time, temporally-consistent grasp generation.
We demonstrate the generalizability, usability, and robustness of our approach on a novel benchmark set of 26 diverse household objects.
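The summary does not spell out the mechanism, but one simple way to keep per-frame grasp generation temporally consistent, sketched here as an assumption rather than the paper's method, is to prefer the candidate closest to the previously selected grasp.

```python
import numpy as np

def select_consistent_grasp(candidates: np.ndarray, prev_grasp: np.ndarray) -> np.ndarray:
    """candidates: (N, 3) grasp positions generated for the current frame;
    prev_grasp: (3,) grasp selected in the previous frame."""
    # Choosing the nearest candidate suppresses frame-to-frame jitter.
    return candidates[np.argmin(np.linalg.norm(candidates - prev_grasp, axis=1))]
```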
arXiv Detail & Related papers (2020-11-17T21:52:22Z)
- Object-Independent Human-to-Robot Handovers using Real Time Robotic Vision [6.089651609511804]
We present an approach for safe and object-independent human-to-robot handovers using real time robotic vision and manipulation.
In experiments with 13 objects, the robot was able to successfully take the object from the human in 81.9% of the trials.
arXiv Detail & Related papers (2020-06-02T17:29:20Z)
- Human-robot co-manipulation of extended objects: Data-driven models and control from analysis of human-human dyads [2.7036498789349244]
We use data from human-human dyad experiments to determine motion intent, which we use for a physical human-robot co-manipulation task.
We develop a deep neural network based on motion data from human-human trials to predict human intent based on past motion.
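A hedged sketch of such an intent predictor: a small recurrent network mapping a window of past motion to a short-horizon motion prediction. The input features, sizes, and output encoding are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class IntentPredictor(nn.Module):
    def __init__(self, in_dim=6, hidden=64, out_dim=3):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, past_motion: torch.Tensor) -> torch.Tensor:
        # past_motion: (B, T, 6), e.g. positions + velocities over T past steps.
        _, (h, _) = self.rnn(past_motion)
        return self.head(h[-1])  # (B, 3) predicted short-term motion direction

pred = IntentPredictor()(torch.randn(8, 50, 6))  # 8 windows of 50 timesteps
```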
arXiv Detail & Related papers (2020-01-03T21:23:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.