Vision-Based Safety System for Barrierless Human-Robot Collaboration
- URL: http://arxiv.org/abs/2208.02010v1
- Date: Wed, 3 Aug 2022 12:31:03 GMT
- Title: Vision-Based Safety System for Barrierless Human-Robot Collaboration
- Authors: Lina María Amaya-Mejía, Nicolás Duque-Suárez, Daniel Jaramillo-Ramírez, Carol Martinez
- Abstract summary: This paper proposes a safety system that implements the Speed and Separation Monitoring (SSM) mode of operation.
A deep learning-based computer vision system detects, tracks, and estimates the 3D position of operators close to the robot.
Three different operation modes in which the human and robot interact are presented.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human safety has always been the main priority when working near an
industrial robot. With the rise of Human-Robot Collaborative environments,
physical barriers that prevent collisions have been disappearing, increasing the
risk of accidents and the need for solutions that ensure a safe Human-Robot
Collaboration. This paper proposes a safety system that implements the Speed and
Separation Monitoring (SSM) mode of operation. For this, safety zones are
defined in the robot's workspace following current standards for industrial
collaborative robots. A deep learning-based computer vision system detects,
tracks, and estimates the 3D position of operators close to the robot. The
robot control system receives the operator's 3D position and generates 3D
representations of them in a simulation environment. Depending on the zone
where the closest operator is detected, the robot stops or adjusts its
operating speed. Three different operation modes in which the human and robot
interact are presented. Results show that the vision-based system can correctly
detect and classify in which safety zone an operator is located and that the
different proposed operation modes ensure that the robot's reaction and stop
time are within the required time limits to guarantee safety.
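The zone-based behavior described in the abstract, where the robot stops or scales its speed depending on which safety zone contains the closest operator, can be sketched as follows. The zone radii and speed factors below are illustrative assumptions, not values from the paper, which derives its zones from the applicable collaborative-robot standards:

```python
# Illustrative sketch of Speed and Separation Monitoring (SSM) zone logic.
# The zone radii and speed factors are hypothetical examples.
import math

# (outer_radius_m, speed_factor): inside this distance, scale speed by factor
SAFETY_ZONES = [
    (0.5, 0.0),   # stop zone: halt the robot
    (1.5, 0.3),   # warning zone: reduced speed
    (3.0, 1.0),   # monitored zone: full speed
]

def distance_to_robot(operator_pos, robot_pos):
    """Euclidean distance between the operator's and robot's 3D positions."""
    return math.dist(operator_pos, robot_pos)

def speed_factor(operator_positions, robot_pos):
    """Return the speed scaling implied by the closest detected operator."""
    if not operator_positions:
        return 1.0  # no operator detected: full speed
    closest = min(distance_to_robot(p, robot_pos) for p in operator_positions)
    for radius, factor in SAFETY_ZONES:
        if closest <= radius:
            return factor
    return 1.0  # beyond the outermost zone

# Example: one operator 1.2 m away falls in the reduced-speed zone
print(speed_factor([(1.2, 0.0, 0.0)], (0.0, 0.0, 0.0)))  # -> 0.3
```

In the paper's pipeline, the operator positions would come from the deep learning-based vision system rather than being passed in directly.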
Related papers
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating a diverse variety of environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- ABNet: Attention BarrierNet for Safe and Scalable Robot Learning [58.4951884593569]
Barrier-based methods are among the dominant approaches for safe robot learning.
We propose Attention BarrierNet (ABNet), which scales to build larger foundational safe models incrementally.
We demonstrate the strength of ABNet in 2D robot obstacle avoidance, safe robot manipulation, and vision-based end-to-end autonomous driving.
arXiv Detail & Related papers (2024-06-18T19:37:44Z)
- Motion Prediction with Gaussian Processes for Safe Human-Robot Interaction in Virtual Environments [1.677718351174347]
Collaborative robots must be safe to operate alongside humans to minimize the risk of accidental collisions.
This research aims to improve the efficiency of a collaborative robot while improving the safety of the human user.
arXiv Detail & Related papers (2024-05-15T05:51:41Z)
- Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z)
- Improving safety in physical human-robot collaboration via deep metric learning [36.28667896565093]
Direct physical interaction with robots is becoming increasingly important in flexible production scenarios.
In order to keep the risk potential low, relatively simple measures are prescribed for operation, such as stopping the robot if there is physical contact or if a safety distance is violated.
This work uses the Deep Metric Learning (DML) approach to distinguish between non-contact robot movement, intentional contact aimed at physical human-robot interaction, and collision situations.
arXiv Detail & Related papers (2023-02-23T11:26:51Z)
- Generalizable Human-Robot Collaborative Assembly Using Imitation Learning and Force Control [17.270360447188196]
We present a system for human-robot collaborative assembly using learning from demonstration and pose estimation.
The proposed system is demonstrated using a physical 6 DoF manipulator in a collaborative human-robot assembly scenario.
arXiv Detail & Related papers (2022-12-02T20:35:55Z)
- Regularized Deep Signed Distance Fields for Reactive Motion Generation [30.792481441975585]
Distance-based constraints are fundamental for enabling robots to plan their actions and act safely.
We propose Regularized Deep Signed Distance Fields (ReDSDF), a single neural implicit function that can compute smooth distance fields at any scale.
We demonstrate the effectiveness of our approach in representative simulated tasks for whole-body control (WBC) and safe Human-Robot Interaction (HRI) in shared workspaces.
arXiv Detail & Related papers (2022-03-09T14:21:32Z)
- SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of future state uncertainty considered in the SMPC finite-time-horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
arXiv Detail & Related papers (2021-08-03T02:56:21Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Perceiving Humans: from Monocular 3D Localization to Social Distancing [93.03056743850141]
We present a new cost-effective vision-based method that perceives humans' locations in 3D and their body orientation from a single image.
We show that it is possible to rethink the concept of "social distancing" as a form of social interaction in contrast to a simple location-based rule.
arXiv Detail & Related papers (2020-09-01T10:12:30Z)
- Wearable camera-based human absolute localization in large warehouses [0.0]
This paper introduces a wearable human localization system for large warehouses.
A monocular, downward-looking camera detects ground nodes, identifies them, and computes the absolute position of the human.
A virtual safety area around the human operator is set up and any AGV in this area is immediately stopped.
arXiv Detail & Related papers (2020-07-20T12:57:37Z)
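The "virtual safety area" in the last entry above, where any AGV within a fixed area around the localized human is immediately stopped, can be sketched as a simple radius check. The radius value and the AGV lookup interface are illustrative assumptions, not details from that paper:

```python
# Hypothetical sketch of a virtual safety area around a localized human:
# any AGV within the radius is flagged for an immediate stop.
# SAFETY_RADIUS_M and the dict-based AGV interface are assumptions.
import math

SAFETY_RADIUS_M = 2.0  # assumed size of the virtual safety area

def agvs_to_stop(human_xy, agv_positions):
    """Return the ids of AGVs inside the safety area around the human."""
    return [
        agv_id
        for agv_id, pos in agv_positions.items()
        if math.dist(human_xy, pos) <= SAFETY_RADIUS_M
    ]

# Example: only the nearby AGV is flagged
print(agvs_to_stop((0.0, 0.0), {"agv1": (1.0, 1.0), "agv2": (5.0, 5.0)}))
# -> ['agv1']
```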
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.