Robot to Human Object Handover using Vision and Joint Torque Sensor Modalities
- URL: http://arxiv.org/abs/2210.15085v1
- Date: Thu, 27 Oct 2022 00:11:34 GMT
- Title: Robot to Human Object Handover using Vision and Joint Torque Sensor Modalities
- Authors: Mohammadhadi Mohandes, Behnam Moradi, Kamal Gupta, Mehran Mehrandezh
- Abstract summary: The system performs a fully autonomous and robust object handover to a human receiver in real time.
Our algorithm relies on two complementary sensor modalities: joint torque sensors on the arm and an eye-in-hand RGB-D camera for sensor feedback.
Despite substantive challenges in sensor-feedback synchronization and in object and human-hand detection, our system achieves robust robot-to-human handover with 98% accuracy.
- Score: 3.580924916641143
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a robot-to-human object handover algorithm and implement it on a
7-DOF arm equipped with a 3-finger mechanical hand. The system performs a fully
autonomous and robust object handover to a human receiver in real time. Our
algorithm relies on two complementary sensor modalities: joint torque sensors
on the arm and an eye-in-hand RGB-D camera for sensor feedback. Our approach is
entirely implicit, i.e., there is no explicit communication between the robot
and the human receiver. Information obtained via the aforementioned sensor
modalities is used as input to their respective deep neural networks. While
the torque-sensor network detects the human receiver's "intention" (pull,
hold, or bump), the vision network detects whether the receiver's fingers
have wrapped around the object. The networks' outputs are then fused, and
based on the fused result a decision is made to either release the object or
not. Despite substantive challenges in sensor-feedback synchronization and in
object and human-hand detection, our system achieves robust robot-to-human
handover with 98% accuracy in our preliminary real-world experiments with
human receivers.
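The release decision described in the abstract can be made concrete with a small sketch. The fusion rule below is hypothetical (the abstract does not specify how the two network outputs are combined); the two classifier stubs, the label set beyond pull/hold/bump, and the confidence threshold are assumptions for illustration only.

```python
import numpy as np

# Intention labels taken from the abstract; the confidence threshold is assumed.
INTENTIONS = ["pull", "hold", "bump"]

def classify_intention(torque_window: np.ndarray) -> tuple[str, float]:
    """Stub for the torque-sensor network: maps a window of joint-torque
    readings to an intention label and a confidence score."""
    logits = np.random.rand(len(INTENTIONS))      # placeholder for a trained model
    probs = np.exp(logits) / np.exp(logits).sum()
    i = int(np.argmax(probs))
    return INTENTIONS[i], float(probs[i])

def fingers_wrapped(rgbd_frame: np.ndarray) -> tuple[bool, float]:
    """Stub for the vision network: detects whether the receiver's fingers
    are wrapped around the object in the eye-in-hand RGB-D frame."""
    p = float(np.random.rand())                   # placeholder for a trained model
    return p > 0.5, p

def release_decision(torque_window, rgbd_frame, conf_thresh=0.8) -> bool:
    """Hypothetical late-fusion rule: release only when the torque network sees
    a deliberate 'pull' AND the vision network confirms a wrapped grasp, both
    with sufficient confidence. An accidental 'bump' never triggers a release."""
    intention, p_int = classify_intention(torque_window)
    wrapped, p_vis = fingers_wrapped(rgbd_frame)
    return (intention == "pull" and p_int >= conf_thresh
            and wrapped and p_vis >= conf_thresh)
```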
Related papers
- Digitizing Touch with an Artificial Multimodal Fingertip [51.7029315337739]
Humans and robots both benefit from using touch to perceive and interact with the surrounding environment.
Here, we describe several conceptual and technological innovations to improve the digitization of touch.
These advances are embodied in an artificial finger-shaped sensor with advanced sensing capabilities.
arXiv Detail & Related papers (2024-11-04T18:38:50Z)
- Multimodal Anomaly Detection based on Deep Auto-Encoder for Object Slip Perception of Mobile Manipulation Robots [22.63980025871784]
The proposed framework integrates heterogeneous data streams collected from various robot sensors, including RGB and depth cameras, a microphone, and a force-torque sensor.
The integrated data is used to train a deep autoencoder to construct latent representations of the multisensory data that indicate the normal status.
Anomalies are then identified by an error score that measures the difference between the encoder's latent values for the original input and those for the reconstructed input.
arXiv Detail & Related papers (2024-03-06T09:15:53Z)
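As a concrete illustration of the latent-space error score described in the entry above, here is a minimal PyTorch sketch; the layer sizes, input dimensionality, and anomaly threshold are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultimodalAutoencoder(nn.Module):
    """Toy autoencoder over a flattened multisensory feature vector
    (e.g., concatenated RGB-D, audio, and force-torque features)."""
    def __init__(self, input_dim: int = 256, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def anomaly_score(model: MultimodalAutoencoder, x: torch.Tensor) -> torch.Tensor:
    """Latent-space error score as described above: the distance between the
    latent code of the input and the latent code of its reconstruction."""
    model.eval()
    with torch.no_grad():
        recon, z_input = model(x)
        z_recon = model.encoder(recon)
    return torch.norm(z_input - z_recon, dim=-1)

# Usage sketch: flag a slip when the score exceeds a validation-set threshold.
model = MultimodalAutoencoder()
x = torch.randn(4, 256)            # batch of multisensory feature vectors
scores = anomaly_score(model, x)
is_anomaly = scores > 1.0          # threshold is an assumed placeholder
```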
- HODN: Disentangling Human-Object Feature for HOI Detection [51.48164941412871]
We propose a Human and Object Disentangling Network (HODN) to model Human-Object Interaction (HOI) relationships explicitly.
Considering that human features contribute more to interaction, we propose a Human-Guide Linking method to make the interaction decoder focus on human-centric regions.
Our proposed method achieves competitive performance on both the V-COCO and HICO-DET datasets.
arXiv Detail & Related papers (2023-08-20T04:12:50Z)
- Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are now widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part-mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z)
- Human keypoint detection for close proximity human-robot interaction [29.99153271571971]
We study the performance of state-of-the-art human keypoint detectors in the context of close proximity human-robot interaction.
The best performing whole-body keypoint detectors in close proximity were MMPose and AlphaPose, but both had difficulty with finger detection.
We propose a combination of MMPose or AlphaPose for the body and MediaPipe for the hands in a single framework, providing the most accurate and robust detection.
arXiv Detail & Related papers (2022-07-15T20:33:29Z)
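A minimal sketch of the body-plus-hands combination proposed in the entry above, using the legacy MediaPipe solutions API for the hands; the body-keypoint function is a hypothetical stand-in for an MMPose or AlphaPose detector, and the merging scheme is an assumption for illustration.

```python
import numpy as np
import mediapipe as mp

def detect_body_keypoints(rgb: np.ndarray) -> dict:
    """Hypothetical stand-in for an MMPose/AlphaPose whole-body detector.
    Returns named body keypoints in pixel coordinates."""
    h, w = rgb.shape[:2]
    return {"nose": (w // 2, h // 4)}   # placeholder output

def detect_hand_keypoints(rgb: np.ndarray) -> list:
    """Hand keypoints via MediaPipe Hands (legacy solutions API).
    Re-creating the detector per call keeps the sketch simple; a real
    system would construct it once and reuse it."""
    h, w = rgb.shape[:2]
    with mp.solutions.hands.Hands(static_image_mode=True,
                                  max_num_hands=2) as hands:
        result = hands.process(rgb)     # expects an RGB uint8 image
    keypoints = []
    for hand in result.multi_hand_landmarks or []:
        keypoints.append([(lm.x * w, lm.y * h) for lm in hand.landmark])
    return keypoints

def detect_person(rgb: np.ndarray) -> dict:
    """Assumed merging scheme: body joints from one detector, finger joints
    from MediaPipe, returned together in a single dictionary."""
    return {"body": detect_body_keypoints(rgb),
            "hands": detect_hand_keypoints(rgb)}
```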
- Body Gesture Recognition to Control a Social Robot [5.557794184787908]
We propose a gesture-based language that allows humans to interact with robots using their bodies in a natural way.
We create a new neural-network gesture-detection model, trained on a custom dataset of humans performing a set of body gestures.
arXiv Detail & Related papers (2022-06-15T13:49:22Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that convey information about the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Gesture Recognition for Initiating Human-to-Robot Handovers [2.1614262520734595]
It is important to recognize when a human intends to initiate handovers, so that the robot does not try to take objects from humans when a handover is not intended.
We pose handover gesture recognition as a binary classification problem on a single RGB image.
Our results show that the handover gestures are correctly identified with an accuracy of over 90%.
arXiv Detail & Related papers (2020-07-20T08:49:34Z)
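To make the binary formulation in the entry above concrete, here is a minimal PyTorch sketch of a single-image handover/no-handover classifier; the backbone, input size, and decision threshold are assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class HandoverGestureClassifier(nn.Module):
    """Tiny CNN mapping a single RGB image to a handover probability.
    The architecture is an illustrative placeholder, not the paper's model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)    # single logit: handover vs. not

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x).flatten(1)
        return self.head(f)

# Usage sketch: threshold the sigmoid output at 0.5.
model = HandoverGestureClassifier()
image = torch.randn(1, 3, 224, 224)     # one RGB image; input size is assumed
prob = torch.sigmoid(model(image))
wants_handover = bool(prob.item() > 0.5)
```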
- Object-Independent Human-to-Robot Handovers using Real Time Robotic Vision [6.089651609511804]
We present an approach for safe and object-independent human-to-robot handovers using real-time robotic vision and manipulation.
In experiments with 13 objects, the robot successfully took the object from the human in 81.9% of the trials.
arXiv Detail & Related papers (2020-06-02T17:29:20Z)
- OmniTact: A Multi-Directional High Resolution Touch Sensor [109.28703530853542]
Existing tactile sensors are either flat, have small sensitive fields, or provide only low-resolution signals.
We introduce OmniTact, a multi-directional high-resolution tactile sensor.
We evaluate the capabilities of OmniTact on a challenging robotic control task.
arXiv Detail & Related papers (2020-03-16T01:31:29Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
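The grasp-conditioned taking step in the last entry can be sketched minimally as below. The grasp classes, the offset table, and the planning rule are all hypothetical placeholders for illustration; the paper's actual taxonomy and planner are not reproduced here.

```python
import numpy as np

# Hypothetical grasp classes; the dataset's actual taxonomy is not given here.
GRASP_OFFSETS = {
    "pinch": np.array([0.0, 0.0, 0.10]),   # approach from above
    "power": np.array([0.10, 0.0, 0.0]),   # approach from the side
    "flat":  np.array([0.0, 0.10, 0.05]),  # approach from the front
}

def plan_take_pose(grasp_class: str, hand_position: np.ndarray) -> np.ndarray:
    """Assumed planning rule: offset the gripper target from the detected hand
    position according to the detected grasp class, so the robot reaches for
    the exposed side of the object."""
    return hand_position + GRASP_OFFSETS[grasp_class]

# Usage sketch: detected hand at (0.5, 0.2, 0.3) m in the robot frame.
target = plan_take_pose("pinch", np.array([0.5, 0.2, 0.3]))
```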
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.