Improving safety in physical human-robot collaboration via deep metric
learning
- URL: http://arxiv.org/abs/2302.11933v2
- Date: Thu, 13 Apr 2023 10:40:27 GMT
- Title: Improving safety in physical human-robot collaboration via deep metric
learning
- Authors: Maryam Rezayati, Grammatiki Zanni, Ying Zaoshi, Davide Scaramuzza,
Hans Wernher van de Venn
- Abstract summary: Direct physical interaction with robots is becoming increasingly important in flexible production scenarios.
In order to keep the risk potential low, relatively simple measures are prescribed for operation, such as stopping the robot if there is physical contact or if a safety distance is violated.
This work uses the Deep Metric Learning (DML) approach to distinguish between non-contact robot movement, intentional contact aimed at physical human-robot interaction, and collision situations.
- Score: 36.28667896565093
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Direct physical interaction with robots is becoming increasingly important in
flexible production scenarios, but robots without protective fences also pose a
greater risk to the operator. In order to keep the risk potential low,
relatively simple measures are prescribed for operation, such as stopping the
robot if there is physical contact or if a safety distance is violated.
Although human injuries can be largely avoided in this way, all such solutions
have in common that real cooperation between humans and robots is hardly
possible, and the advantages of working with such systems therefore cannot be
realized to their full potential. In human-robot collaboration scenarios, more
sophisticated solutions are required that make it possible to adapt the robot's
behavior to the operator and/or the current situation. Most importantly, during
free robot movement, physical contact must be allowed for meaningful
interaction and not recognized as a collision. However, here lies a key
challenge for future systems: detecting human contact by using robot
proprioception and machine learning algorithms. This work uses the Deep Metric
Learning (DML) approach to distinguish between non-contact robot movement,
intentional contact aimed at physical human-robot interaction, and collision
situations. The achieved results are promising and show that DML achieves
98.6% accuracy, which is 4% higher than the existing standard (i.e., a deep
learning network trained without DML). It also indicates a promising
generalization capability for easy portability to other robots (target robots)
by detecting contact (distinguishing between contactless and intentional or
accidental contact) without having to retrain the model with target robot data.
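As a minimal sketch of the idea described above, the snippet below embeds short windows of proprioceptive signals (e.g., external joint torques) with a triplet-loss-trained encoder and assigns one of the three contact classes by nearest class centroid in embedding space. The window size, network architecture, PyTorch usage, and nearest-centroid classifier are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the deep-metric-learning (DML) idea in the abstract:
# learn an embedding of proprioceptive windows with a triplet loss, then
# classify embeddings into three contact classes (no contact, intentional
# contact, collision) by nearest class centroid. All sizes and the
# architecture are assumptions for illustration.
import torch
import torch.nn as nn

NUM_JOINTS = 7   # assumed manipulator joint count
WINDOW = 100     # assumed number of time steps per sample
EMBED_DIM = 32   # assumed embedding size

class ProprioEncoder(nn.Module):
    """Maps a (batch, joints, time) torque window to a unit-norm embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(NUM_JOINTS, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(64, EMBED_DIM),
        )

    def forward(self, x):
        z = self.net(x)
        return nn.functional.normalize(z, dim=1)

encoder = ProprioEncoder()
triplet_loss = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def train_step(anchor, positive, negative):
    """One DML update: pull same-class windows together, push other classes apart."""
    optimizer.zero_grad()
    loss = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def class_centroids(windows, labels):
    """Mean embedding per class (0: no contact, 1: intentional, 2: collision)."""
    z = encoder(windows)
    return torch.stack([z[labels == c].mean(dim=0) for c in range(3)])

@torch.no_grad()
def classify(window, centroids):
    """Assign the class of the nearest centroid in embedding space."""
    z = encoder(window.unsqueeze(0))
    return torch.cdist(z, centroids).argmin(dim=1).item()
```

Because classification happens purely by distance in the learned embedding space, the same encoder could in principle be reused on a target robot by recomputing the class centroids from a small set of target-robot examples, without retraining the network; this mirrors the portability claim in the abstract, though the exact transfer procedure used by the authors may differ.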
Related papers
- InteRACT: Transformer Models for Human Intent Prediction Conditioned on Robot Actions [7.574421886354134]
InteRACT architecture pre-trains a conditional intent prediction model on large human-human datasets and fine-tunes on a small human-robot dataset.
We evaluate on a set of real-world collaborative human-robot manipulation tasks and show that our conditional model improves over various marginal baselines.
arXiv Detail & Related papers (2023-11-21T19:15:17Z)
- ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space [9.806227900768926]
This paper introduces a novel deep-learning approach for human-to-robot motion.
Our method does not require paired human-to-robot data, which facilitates its translation to new robots.
Our model outperforms existing works regarding human-to-robot similarity in terms of efficiency and precision.
arXiv Detail & Related papers (2023-09-11T08:55:04Z)
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Generalizable Human-Robot Collaborative Assembly Using Imitation Learning and Force Control [17.270360447188196]
We present a system for human-robot collaborative assembly using learning from demonstration and pose estimation.
The proposed system is demonstrated using a physical 6 DoF manipulator in a collaborative human-robot assembly scenario.
arXiv Detail & Related papers (2022-12-02T20:35:55Z)
- CoGrasp: 6-DoF Grasp Generation for Human-Robot Collaboration [0.0]
We propose a novel, deep neural network-based method called CoGrasp that generates human-aware robot grasps.
In real robot experiments, our method achieves about 88% success rate in producing stable grasps.
Our approach enables a safe, natural, and socially aware human-robot object co-grasping experience.
arXiv Detail & Related papers (2022-10-06T19:23:25Z)
- Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Planning motions that take human comfort into account is typically not part of the human-robot handover process.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z)
- Spatial Computing and Intuitive Interaction: Bringing Mixed Reality and Robotics Together [68.44697646919515]
This paper presents several human-robot systems that utilize spatial computing to enable novel robot use cases.
The combination of spatial computing and egocentric sensing on mixed reality devices enables them to capture and understand human actions and translate these to actions with spatial meaning.
arXiv Detail & Related papers (2022-02-03T10:04:26Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)