Gesture Recognition for Initiating Human-to-Robot Handovers
- URL: http://arxiv.org/abs/2007.09945v2
- Date: Wed, 30 Dec 2020 07:51:16 GMT
- Title: Gesture Recognition for Initiating Human-to-Robot Handovers
- Authors: Jun Kwan, Chinkye Tan and Akansel Cosgun
- Abstract summary: It is important to recognize when a human intends to initiate handovers, so that the robot does not try to take objects from humans when a handover is not intended.
We pose handover gesture recognition as a binary classification problem over a single RGB image.
Our results show that the handover gestures are correctly identified with an accuracy of over 90%.
- Score: 2.1614262520734595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human-to-Robot handovers are useful for many Human-Robot Interaction scenarios. It is important to recognize when a human intends to initiate a handover, so that the robot does not try to take objects from humans when no handover is intended. We pose handover gesture recognition as a binary classification problem over a single RGB image. Three separate neural network modules, for detecting the object, human body keypoints, and head orientation, extract relevant features from the RGB image; the resulting feature vectors are then passed to a deep neural network that performs the binary classification. Our results show that handover gestures are correctly identified with an accuracy of over 90%. The abstraction of the features makes our approach modular and generalizable to different objects and human body types.
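The abstract describes a modular pipeline: three detector modules each produce a feature vector, the vectors are concatenated, and a small network performs the binary classification. Below is a minimal sketch of that structure in PyTorch; the detector stubs, feature dimensions, and classifier architecture are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

def detect_object(image: torch.Tensor) -> torch.Tensor:
    """Hypothetical stub for the object-detection module (box + confidence)."""
    return torch.zeros(5)

def detect_body_keypoints(image: torch.Tensor) -> torch.Tensor:
    """Hypothetical stub for the body-keypoint module (17 x/y pairs)."""
    return torch.zeros(34)

def estimate_head_orientation(image: torch.Tensor) -> torch.Tensor:
    """Hypothetical stub for the head-orientation module (yaw/pitch/roll)."""
    return torch.zeros(3)

class HandoverClassifier(nn.Module):
    """Binary classifier over the concatenated per-module feature vector."""
    def __init__(self, feat_dim: int = 42):  # 5 + 34 + 3 in this sketch
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single logit: handover intended or not
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.mlp(feats)

def handover_probability(image: torch.Tensor, model: HandoverClassifier) -> float:
    # Concatenate the abstracted features and classify; the feature
    # abstraction is what makes the approach modular across objects.
    feats = torch.cat([
        detect_object(image),
        detect_body_keypoints(image),
        estimate_head_orientation(image),
    ])
    return torch.sigmoid(model(feats.unsqueeze(0))).item()
```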
Related papers
- Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are now widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
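The entry above bases rewards on distances to a goal in a learned embedding space. A minimal sketch of that idea follows, with a placeholder encoder `phi` standing in for the network trained with the time-contrastive objective; its architecture here is purely illustrative.

```python
import torch
import torch.nn as nn

# Placeholder embedding network; in the paper this would be trained with a
# time-contrastive objective on unlabeled human videos.
phi = nn.Sequential(
    nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU(), nn.Linear(256, 32)
)

def reward(obs: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
    """Reward is the negative distance to the goal image in embedding space."""
    with torch.no_grad():
        return -torch.norm(phi(obs) - phi(goal), dim=-1)
```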
- Robot to Human Object Handover using Vision and Joint Torque Sensor Modalities [3.580924916641143]
The system performs a fully autonomous and robust object handover to a human receiver in real-time.
Our algorithm relies on two complementary sensor modalities: joint torque sensors on the arm and an eye-in-hand RGB-D camera for sensor feedback.
Despite substantial challenges in sensor-feedback synchronization and in object and human-hand detection, our system achieves robust robot-to-human handover with 98% accuracy.
arXiv Detail & Related papers (2022-10-27T00:11:34Z)
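The handover system above fuses joint-torque sensing with vision. The sketch below illustrates only the torque side: a simple heuristic for deciding that the human receiver has taken hold of the object. The threshold and torque interface are assumptions, not the paper's algorithm.

```python
import numpy as np

PULL_THRESHOLD = 0.8  # Nm; hypothetical per-joint torque deviation limit

def human_has_grasped(torque_now: np.ndarray,
                      torque_baseline: np.ndarray) -> bool:
    """Release heuristic: a sustained deviation from the baseline torques
    (recorded while the robot alone holds the object) suggests the human
    is now supporting or pulling on it."""
    return bool(np.any(np.abs(torque_now - torque_baseline) > PULL_THRESHOLD))
```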
- Gesture2Path: Imitation Learning for Gesture-aware Navigation [54.570943577423094]
We present Gesture2Path, a novel social navigation approach that combines image-based imitation learning with model-predictive control.
We deploy our method on real robots and demonstrate its effectiveness across four gesture-navigation scenarios.
arXiv Detail & Related papers (2022-09-19T23:05:36Z)
- Human keypoint detection for close proximity human-robot interaction [29.99153271571971]
We study the performance of state-of-the-art human keypoint detectors in the context of close proximity human-robot interaction.
The best performing whole-body keypoint detectors in close proximity were MMPose and AlphaPose, but both had difficulty with finger detection.
We propose a combination of MMPose or AlphaPose for the body and MediaPipe for the hands in a single framework providing the most accurate and robust detection.
arXiv Detail & Related papers (2022-07-15T20:33:29Z)
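The entry above proposes combining a whole-body detector (MMPose or AlphaPose) with MediaPipe for the hands. Below is a sketch of that fusion under the assumption of hypothetical wrapper functions; neither library's real API is reproduced here.

```python
def body_detector(image):
    """Hypothetical wrapper around a whole-body detector such as MMPose or
    AlphaPose; returns {joint_name: (x, y, confidence)}."""
    return {"left_wrist": (120.0, 200.0, 0.9)}

def hand_detector(image):
    """Hypothetical wrapper around MediaPipe Hands; returns one
    {landmark_name: (x, y, confidence)} dict per detected hand."""
    return [{"left_index_tip": (131.0, 188.0, 0.8)}]

def detect_keypoints(image):
    # Trust the body detector everywhere except the fingers, where the
    # hand-specialized detector is reported to be more reliable up close.
    keypoints = dict(body_detector(image))
    for hand in hand_detector(image):
        keypoints.update(hand)  # hand landmarks override body finger points
    return keypoints
```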
- Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Planning motions that take human comfort into account is typically not part of the human-robot handover process.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z)
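The entry above generates smooth handover motions with model-predictive control. As a toy illustration of the receding-horizon idea, the 1-D sketch below trades off progress toward the predicted hand position against a smoothness penalty; the dynamics, cost weights, and discretization are all assumptions.

```python
import numpy as np

def mpc_step(x, v, hand_pos, horizon=10, dt=0.05, w_smooth=0.1):
    """Evaluate candidate accelerations over a short rollout, pick the one
    with the lowest cost, and apply only its first step; calling this every
    control tick gives the receding-horizon (replanning) behavior."""
    best_a, best_cost = 0.0, np.inf
    for a in np.linspace(-1.0, 1.0, 21):   # candidate accelerations (m/s^2)
        cost, xs, vs = 0.0, x, v
        for _ in range(horizon):           # roll out simple point dynamics
            vs = vs + a * dt
            xs = xs + vs * dt
            cost += (xs - hand_pos) ** 2 + w_smooth * a ** 2
        if cost < best_cost:
            best_a, best_cost = a, cost
    return v + best_a * dt                 # apply only the first action
```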
- Few-Shot Visual Grounding for Natural Human-Robot Interaction [0.0]
We propose a software architecture that segments a target object, indicated verbally by a human user, from a crowded scene.
At the core of our system, we employ a multi-modal deep neural network for visual grounding.
We evaluate the performance of the proposed model on real RGB-D data collected from public scene datasets.
arXiv Detail & Related papers (2021-03-17T15:24:02Z)
- Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask R-CNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z)
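The entry above fine-tunes Mask R-CNN for robot-hand segmentation. A minimal sketch of the standard torchvision fine-tuning recipe follows, with two classes (background and hand); the paper's exact training setup is not reproduced here.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_hand_segmenter(num_classes: int = 2):
    """Load a COCO-pretrained Mask R-CNN and swap in new box and mask heads
    sized for the target classes (here: background + robot hand)."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box classification head.
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    # Replace the mask prediction head.
    in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)
    return model
```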
- Object-Independent Human-to-Robot Handovers using Real Time Robotic Vision [6.089651609511804]
We present an approach for safe and object-independent human-to-robot handovers using real time robotic vision and manipulation.
In experiments with 13 objects, the robot was able to successfully take the object from the human in 81.9% of the trials.
arXiv Detail & Related papers (2020-06-02T17:29:20Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
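The last entry plans the robot's take-over motion according to the detected human grasp. The sketch below shows one way such a mapping could look, with a hypothetical grasp taxonomy and hand-tuned approach offsets; the paper's actual grasp classes and planner are not reproduced.

```python
# Hypothetical grasp classes mapped to approach offsets (meters) relative to
# the detected hand position, so the gripper targets the exposed part of
# the object rather than the human's fingers.
APPROACH_OFFSETS = {
    "pinch_top":   (0.0, 0.0, -0.10),  # human pinches the top: come from below
    "palm_bottom": (0.0, 0.0, +0.10),  # object rests on the palm: come from above
    "handle_side": (0.0, 0.10, 0.0),   # human holds a handle: come from the free side
}

def approach_pose(hand_pos, grasp_class):
    """Offset the detected hand position by a grasp-dependent vector."""
    dx, dy, dz = APPROACH_OFFSETS.get(grasp_class, (0.0, 0.0, 0.10))
    return (hand_pos[0] + dx, hand_pos[1] + dy, hand_pos[2] + dz)
```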
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.