Wearable camera-based human absolute localization in large warehouses
- URL: http://arxiv.org/abs/2007.10066v1
- Date: Mon, 20 Jul 2020 12:57:37 GMT
- Title: Wearable camera-based human absolute localization in large warehouses
- Authors: Gaël Écorchard, Karel Košnar and Libor Přeučil
- Abstract summary: This paper introduces a wearable human localization system for large warehouses.
A monocular down-looking camera detects ground nodes, identifies them and computes the absolute position of the human.
A virtual safety area around the human operator is set up and any AGV in this area is immediately stopped.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a robotised warehouse, as in any place where robots move autonomously, a
major issue is the localization or detection of human operators during their
intervention in the work area of the robots. This paper introduces a wearable
human localization system for large warehouses, which utilizes the
preinstalled infrastructure used for the localization of automated guided
vehicles (AGVs). A monocular down-looking camera detects ground nodes,
identifies them, and computes the absolute position of the human to allow safe
cooperation and coexistence of humans and AGVs in the same workspace. A virtual safety area
around the human operator is set up and any AGV in this area is immediately
stopped. In order to avoid triggering an emergency stop because of the short
distance between robots and human operators, the trajectories of the robots
have to be modified so that they do not interfere with the human. The purpose
of this paper is to demonstrate an absolute visual localization method that
works in the challenging environment of an automated warehouse, with low light
intensity and a massively changing environment, using solely a monocular
camera placed on the human body.
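As a rough illustration of the method described in the abstract, the sketch below combines its two mechanisms: absolute localization from an identified ground node, and a virtual safety area that stops nearby AGVs. This is a minimal sketch, not the authors' implementation; the node map, detection structure, safety radius, and all names are assumptions.

```python
# Minimal sketch (assumed names and values, not the paper's implementation)
# of: (1) absolute localization from an identified ground node, and
# (2) a virtual safety area that stops any AGV inside it.

from dataclasses import dataclass
import math

# Known warehouse map: node id -> absolute (x, y) position in meters.
# In the paper, this infrastructure already exists for AGV localization.
NODE_MAP = {101: (0.0, 0.0), 102: (1.5, 0.0), 103: (0.0, 1.5)}

SAFETY_RADIUS_M = 3.0  # assumed radius of the virtual safety area

@dataclass
class NodeDetection:
    node_id: int      # identity decoded from the ground marker
    offset_x: float   # camera-to-node offset in the ground plane, meters
    offset_y: float

def absolute_position(det: NodeDetection) -> tuple[float, float]:
    """Absolute human position = known node position minus the measured
    camera-to-node offset (the camera is worn by the human, looking down)."""
    nx, ny = NODE_MAP[det.node_id]
    return (nx - det.offset_x, ny - det.offset_y)

def agvs_to_stop(human_xy, agv_positions):
    """Return ids of AGVs inside the virtual safety area around the human."""
    hx, hy = human_xy
    return [agv_id for agv_id, (ax, ay) in agv_positions.items()
            if math.hypot(ax - hx, ay - hy) < SAFETY_RADIUS_M]

# Example: the camera sees node 102 offset 0.4 m along x from the human.
human = absolute_position(NodeDetection(node_id=102, offset_x=0.4, offset_y=0.0))
print(agvs_to_stop(human, {"agv-7": (1.5, 1.0), "agv-9": (9.0, 9.0)}))
# -> ['agv-7']: only the AGV inside the 3 m safety area is stopped.
```

The key property is that the localization is absolute: each ground node has a known identity and position, so a single detection yields a drift-free position estimate, unlike incremental visual odometry.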
Related papers
- Near Real-Time Position Tracking for Robot-Guided Evacuation [0.0]
This paper introduces a near real-time human position tracking solution tailored for evacuation robots.
We show that the system can achieve an accuracy of 0.55 meters when compared to ground truth.
The potential of our approach extends beyond mere tracking, paving the way for evacuee motion prediction.
arXiv Detail & Related papers (2023-09-26T16:34:18Z)
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z)
- Improving safety in physical human-robot collaboration via deep metric learning [36.28667896565093]
Direct physical interaction with robots is becoming increasingly important in flexible production scenarios.
In order to keep the risk potential low, relatively simple measures are prescribed for operation, such as stopping the robot if there is physical contact or if a safety distance is violated.
This work uses the Deep Metric Learning (DML) approach to distinguish between non-contact robot movement, intentional contact aimed at physical human-robot interaction, and collision situations.
arXiv Detail & Related papers (2023-02-23T11:26:51Z)
- Vision-Based Safety System for Barrierless Human-Robot Collaboration [0.0]
This paper proposes a safety system that implements a Speed and Separation Monitoring (SSM) type of operation; a minimal sketch of this kind of distance-based speed scaling appears after this list.
A deep learning-based computer vision system detects, tracks, and estimates the 3D position of operators close to the robot.
Three different operation modes in which the human and robot interact are presented.
arXiv Detail & Related papers (2022-08-03T12:31:03Z)
- Interaction Replica: Tracking Human-Object Interaction and Scene Changes From Human Motion [48.982957332374866]
Modeling changes caused by humans is essential for building digital twins.
Our method combines visual localization of humans in the scene with contact-based reasoning about human-scene interactions from IMU data.
Our code, data and model are available on our project page at http://virtualhumans.mpi-inf.mpg.de/ireplica/.
arXiv Detail & Related papers (2022-05-05T17:58:06Z)
- Regularized Deep Signed Distance Fields for Reactive Motion Generation [30.792481441975585]
Distance-based constraints are fundamental for enabling robots to plan their actions and act safely.
We propose Regularized Deep Signed Distance Fields (ReDSDF), a single neural implicit function that can compute smooth distance fields at any scale.
We demonstrate the effectiveness of our approach in representative simulated tasks for whole-body control (WBC) and safe Human-Robot Interaction (HRI) in shared workspaces.
arXiv Detail & Related papers (2022-03-09T14:21:32Z)
- Spatial Computing and Intuitive Interaction: Bringing Mixed Reality and Robotics Together [68.44697646919515]
This paper presents several human-robot systems that utilize spatial computing to enable novel robot use cases.
The combination of spatial computing and egocentric sensing on mixed reality devices enables them to capture and understand human actions and translate these to actions with spatial meaning.
arXiv Detail & Related papers (2022-02-03T10:04:26Z)
- SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of the future state uncertainty considered in the SMPC finite-time horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
arXiv Detail & Related papers (2021-08-03T02:56:21Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Tethered Aerial Visual Assistance [5.237054164442403]
An autonomous tethered Unmanned Aerial Vehicle (UAV) is developed into a visual assistant in a marsupial co-robots team.
Using a fundamental viewpoint quality theory, a formal risk reasoning framework, and a newly developed tethered motion suite, our visual assistant is able to autonomously navigate to good-quality viewpoints.
The developed marsupial co-robots team could improve tele-operation efficiency in nuclear operations, bomb squad, disaster robots, and other domains.
arXiv Detail & Related papers (2020-01-15T06:41:04Z)
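The Speed and Separation Monitoring (SSM) concept referenced in the barrierless human-robot collaboration entry above can be illustrated with a small, hypothetical sketch: the robot's allowed speed scales with the estimated human-robot separation and drops to zero inside a protective distance. The thresholds and names below are illustrative assumptions, not values from that paper.

```python
# Hypothetical sketch of SSM-style speed scaling: full stop inside a
# protective distance, full speed beyond a monitored distance, and a
# linear ramp in between. Thresholds are assumed, not from the paper.

PROTECTIVE_DISTANCE_M = 0.5   # assumed: full stop at or below this separation
MONITORED_DISTANCE_M = 2.0    # assumed: full speed at or above this separation

def allowed_speed_fraction(separation_m: float) -> float:
    """Scale the robot's allowed speed with the human-robot separation."""
    if separation_m <= PROTECTIVE_DISTANCE_M:
        return 0.0  # inside the protective zone: stop the robot
    if separation_m >= MONITORED_DISTANCE_M:
        return 1.0  # human far enough away: no speed reduction
    return (separation_m - PROTECTIVE_DISTANCE_M) / (
        MONITORED_DISTANCE_M - PROTECTIVE_DISTANCE_M)

print(allowed_speed_fraction(1.25))  # -> 0.5: half speed at mid-range
```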
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.