Object-Independent Human-to-Robot Handovers using Real Time Robotic
Vision
- URL: http://arxiv.org/abs/2006.01797v2
- Date: Mon, 21 Sep 2020 16:40:13 GMT
- Title: Object-Independent Human-to-Robot Handovers using Real Time Robotic
Vision
- Authors: Patrick Rosenberger, Akansel Cosgun, Rhys Newbury, Jun Kwan, Valerio
Ortenzi, Peter Corke and Manfred Grafinger
- Abstract summary: We present an approach for safe and object-independent human-to-robot handovers using real time robotic vision and manipulation.
In experiments with 13 objects, the robot was able to successfully take the object from the human in 81.9% of the trials.
- Score: 6.089651609511804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an approach for safe and object-independent human-to-robot
handovers using real time robotic vision and manipulation. We aim for general
applicability with a generic object detector, a fast grasp selection algorithm
and by using a single gripper-mounted RGB-D camera, hence not relying on
external sensors. The robot is controlled via visual servoing towards the
object of interest. Putting a high emphasis on safety, we use two perception
modules: human body part segmentation and hand/finger segmentation. Pixels that
are deemed to belong to the human are filtered out from candidate grasp poses,
hence ensuring that the robot safely picks the object without colliding with
the human partner. The grasp selection and perception modules run concurrently
in real-time, which allows monitoring of the progress. In experiments with 13
objects, the robot was able to successfully take the object from the human in
81.9% of the trials.
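The human-pixel safety filter is the core of the approach, so a minimal sketch may help. It assumes grasp candidates arrive as image coordinates and that the two segmentation modules are merged into a single boolean human mask; the function name, the margin, and the toy data are all illustrative, not the authors' code.

```python
import numpy as np

def filter_grasps_by_human_mask(grasp_pixels, human_mask, margin=10):
    """Discard grasp candidates that fall within a safety margin of any
    pixel segmented as human (body parts or hand/fingers)."""
    h, w = human_mask.shape
    keep = []
    for r, c in grasp_pixels:
        r0, r1 = max(0, r - margin), min(h, r + margin + 1)
        c0, c1 = max(0, c - margin), min(w, c + margin + 1)
        # Keep the grasp only if no human pixel lies inside the margin.
        keep.append(not human_mask[r0:r1, c0:c1].any())
    return grasp_pixels[np.array(keep)]

# Toy frame: the human occupies the left half of a 100x100 image.
mask = np.zeros((100, 100), dtype=bool)
mask[:, :50] = True
candidates = np.array([[20, 30], [60, 80]])   # (row, col) grasp centres
print(filter_grasps_by_human_mask(candidates, mask))  # -> [[60 80]]
```

Because the perception and grasp-selection modules run concurrently, a filter of this kind would be re-applied on every frame while the robot visually servos towards the object.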
Related papers
- ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space [9.806227900768926]
This paper introduces a novel deep-learning approach for human-to-robot motion retargeting.
Our method does not require paired human-to-robot data, which facilitates its translation to new robots.
Our model outperforms existing methods on human-to-robot similarity in both efficiency and precision.
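A minimal sketch of the shared-latent-space idea follows; every dimension, layer size, and name is invented for illustration, and the paper's actual architecture and unsupervised training losses are not reproduced here.

```python
import torch
import torch.nn as nn

class SharedLatentRetargeter(nn.Module):
    """Map a human pose into robot joint angles via a shared latent
    space, so no paired human-robot data is needed for new robots."""

    def __init__(self, human_dim=63, robot_dim=7, latent_dim=16):
        super().__init__()
        self.human_enc = nn.Sequential(nn.Linear(human_dim, 64), nn.ReLU(),
                                       nn.Linear(64, latent_dim))
        self.robot_dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                       nn.Linear(64, robot_dim))

    def forward(self, human_pose):
        z = self.human_enc(human_pose)  # shared latent code
        return self.robot_dec(z)        # robot joint angles

model = SharedLatentRetargeter()
joints = model(torch.randn(1, 63))      # e.g. 21 keypoints x 3 coords
print(joints.shape)                     # torch.Size([1, 7])
```

In a formulation like this, translating to a new robot would mainly require a new decoder against the shared latent space; whether the paper proceeds exactly this way is not stated in the summary.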
arXiv Detail & Related papers (2023-09-11T08:55:04Z)
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
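A toy sketch of the data-mixing idea, assuming robot demonstrations carry action labels while human clips do not; the batch ratio and the use of `None` as a placeholder action are hypothetical.

```python
import random

def mixed_batches(robot_demos, human_clips, batch_size=8, human_ratio=0.5):
    """Yield batches mixing narrow, action-labeled robot demonstrations
    with broad, unlabeled eye-in-hand human video observations."""
    n_human = int(batch_size * human_ratio)
    while True:
        batch = random.sample(robot_demos, batch_size - n_human)
        # Human clips have no robot actions; a policy can still use
        # them, e.g. after an inverse-dynamics model relabels them.
        batch += [(obs, None) for obs in random.sample(human_clips, n_human)]
        random.shuffle(batch)
        yield batch
```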
arXiv Detail & Related papers (2023-07-12T07:04:53Z)
- Robot to Human Object Handover using Vision and Joint Torque Sensor Modalities [3.580924916641143]
The system performs a fully autonomous and robust object handover to a human receiver in real time.
Our algorithm relies on two complementary sensor modalities for feedback: joint torque sensors on the arm and an eye-in-hand RGB-D camera.
Despite substantive challenges in synchronizing sensor feedback and in detecting the object and the human hand, our system achieves robust robot-to-human handover with 98% accuracy.
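The underlying algorithm is not given in this summary; as a toy illustration of how the two modalities might gate the release, the threshold and torque model below are invented.

```python
def release_object(joint_torques, baseline_torques, hand_detected,
                   pull_threshold=0.8):
    """Open the gripper only when the eye-in-hand camera sees the
    receiver's hand AND the arm feels a pull: the summed deviation of
    joint torques from their free-motion baseline exceeds a threshold."""
    pull = sum(abs(t - b) for t, b in zip(joint_torques, baseline_torques))
    return hand_detected and pull > pull_threshold

# A light pull with the hand in view triggers the release.
print(release_object([0.1, -0.3, 0.9], [0.0, -0.2, 0.1], True))  # True
```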
arXiv Detail & Related papers (2022-10-27T00:11:34Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate either carefulness or its absence while transporting objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Intelligent Motion Planning for a Cost-effective Object Follower Mobile Robotic System with Obstacle Avoidance [0.2062593640149623]
We propose a robotic system that uses robot vision and deep learning to compute the required linear and angular velocities.
The proposed method accurately detects the position of a uniquely coloured object under any lighting conditions.
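As a sketch of how detections could become velocity commands, a classical proportional controller stands in below for the paper's learned mapping; the gains, standoff distance, and toy mask are invented.

```python
import numpy as np

def follow_velocities(mask, depth_m, target_dist=1.0, k_lin=0.5, k_ang=0.002):
    """Turn a colour-blob mask and a depth estimate into linear and
    angular velocity commands that keep the object centred in the image
    and at a fixed standoff distance."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return 0.0, 0.0                     # object lost: stop
    err_x = xs.mean() - mask.shape[1] / 2   # horizontal pixel offset
    v = k_lin * (depth_m - target_dist)     # close to the standoff distance
    w = -k_ang * err_x                      # rotate to centre the blob
    return v, w

mask = np.zeros((120, 160), dtype=bool)
mask[50:70, 100:120] = True                 # blob right of image centre
print(follow_velocities(mask, depth_m=2.0)) # forward motion plus a turn
```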
arXiv Detail & Related papers (2021-09-06T19:19:47Z)
- From Movement Kinematics to Object Properties: Online Recognition of Human Carefulness [112.28757246103099]
We show how a robot can infer online, from vision alone, whether or not the human partner is careful when moving an object.
We demonstrated that a humanoid robot could perform this inference with high accuracy (up to 81.3%) even with a low-resolution camera.
Promptly recognizing carefulness from the partner's movements will allow robots to handle the object with the same degree of care as their human partners.
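A toy stand-in for the learned classifier: a single peak-speed threshold (value invented) over a tracked wrist trajectory, since careful transport movements tend to be slower and smoother.

```python
import numpy as np

def is_careful(wrist_positions, dt=1 / 30, peak_speed_thresh=0.6):
    """Classify a transport movement as careful if the peak wrist speed
    stays below a threshold (positions in metres, sampled at 1/dt Hz)."""
    velocities = np.diff(wrist_positions, axis=0) / dt
    peak_speed = np.linalg.norm(velocities, axis=1).max()
    return peak_speed < peak_speed_thresh   # threshold in m/s

# A slow, even 0.5 m sweep over one second is classified as careful.
traj = np.column_stack([np.linspace(0, 0.5, 30), np.zeros(30), np.zeros(30)])
print(is_careful(traj))  # True (peak speed ~0.52 m/s)
```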
arXiv Detail & Related papers (2021-09-01T16:03:13Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Reactive Human-to-Robot Handovers of Arbitrary Objects [57.845894608577495]
We present a vision-based system that enables human-to-robot handovers of unknown objects.
Our approach combines closed-loop motion planning with real-time, temporally-consistent grasp generation.
We demonstrate the generalizability, usability, and robustness of our approach on a novel benchmark set of 26 diverse household objects.
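The grasp generator itself is beyond this summary; the sketch below shows one generic way to keep grasp targets temporally consistent across frames (a nearest-neighbour blend, with invented parameters).

```python
import numpy as np

def track_grasp(prev_grasp, new_grasps, max_jump=0.05, alpha=0.3):
    """Pick the current-frame grasp nearest the previously tracked one
    and ease towards it, so the closed-loop planner's target does not
    jump between candidates from frame to frame (positions in metres)."""
    dists = np.linalg.norm(new_grasps - prev_grasp, axis=1)
    if dists.min() > max_jump:
        return prev_grasp                   # no consistent match: hold
    nearest = new_grasps[dists.argmin()]
    return (1 - alpha) * prev_grasp + alpha * nearest

prev = np.array([0.40, 0.00, 0.30])
cands = np.array([[0.41, 0.01, 0.30], [0.10, 0.50, 0.20]])
print(track_grasp(prev, cands))  # eases towards the nearby candidate
```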
arXiv Detail & Related papers (2020-11-17T21:52:22Z)
- Gesture Recognition for Initiating Human-to-Robot Handovers [2.1614262520734595]
It is important to recognize when a human intends to initiate a handover, so that the robot does not attempt to take an object when no handover is intended.
We pose the handover gesture recognition as a binary classification problem in a single RGB image.
Our results show that the handover gestures are correctly identified with an accuracy of over 90%.
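A toy version of the stated formulation, binary classification on a single RGB image, with an invented architecture rather than the paper's model.

```python
import torch
import torch.nn as nn

# A small CNN with one logit: handover gesture present or not.
gesture_classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),
)

frame = torch.randn(1, 3, 224, 224)           # one RGB camera frame
p = torch.sigmoid(gesture_classifier(frame))  # P(handover intended)
if p.item() > 0.5:
    print("handover gesture detected: start reaching for the object")
```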
arXiv Detail & Related papers (2020-07-20T08:49:34Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
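A sketch of how a detected grasp class could condition the robot's approach; the class names and direction table are hypothetical, not the paper's taxonomy.

```python
import numpy as np

# Hypothetical map from human grasp class to a unit approach direction
# in the object frame, chosen to keep the gripper away from the hand.
APPROACH_DIRS = {
    "pinch_top":  np.array([0.0, -1.0, 0.0]),  # hand on top: come in sideways
    "pinch_side": np.array([0.0, 0.0, -1.0]),  # hand on the side: from above
    "on_palm":    np.array([0.0, 0.0, -1.0]),  # object resting on the palm
}

def pre_grasp_position(object_pos, grasp_class, standoff=0.15):
    """Back the pre-grasp pose off from the object along the approach
    direction selected by the detected human grasp class."""
    d = APPROACH_DIRS.get(grasp_class, np.array([0.0, 0.0, -1.0]))
    return object_pos - standoff * d

print(pre_grasp_position(np.array([0.5, 0.0, 0.3]), "pinch_top"))
# -> [0.5 0.15 0.3]: the gripper starts beside the object, not on the hand
```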
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.