FollowMe: a Robust Person Following Framework Based on Re-Identification and Gestures
- URL: http://arxiv.org/abs/2311.12992v1
- Date: Tue, 21 Nov 2023 20:59:27 GMT
- Title: FollowMe: a Robust Person Following Framework Based on Re-Identification and Gestures
- Authors: Federico Rollo, Andrea Zunino, Gennaro Raiola, Fabio Amadio, Arash Ajoudani and Nikolaos Tsagarakis
- Abstract summary: Human-robot interaction (HRI) has become a crucial enabler in houses and industries for facilitating operational flexibility.
We developed a unified perception and navigation framework, which enables the robot to identify and follow a target person.
The Re-ID module can autonomously learn the features of a target person and use the acquired knowledge to visually re-identify the target.
- Score: 12.850149165791551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human-robot interaction (HRI) has become a crucial enabler in houses and
industries for facilitating operational flexibility. When it comes to mobile
collaborative robots, this flexibility can be further increased due to the
autonomous mobility and navigation capacity of the robotic agents, expanding
their workspace and consequently, the personalizable assistance they can
provide to the human operators. This however requires that the robot is capable
of detecting and identifying the human counterpart in all stages of the
collaborative task, and in particular while following a human in crowded
workplaces. To respond to this need, we developed a unified perception and
navigation framework, which enables the robot to identify and follow a target
person using a combination of visual Re-Identification (Re-ID), hand gestures
detection, and collision-free navigation. The Re-ID module can autonomously
learn the features of a target person and use the acquired knowledge to
visually re-identify the target. The navigation stack is used to follow the
target avoiding obstacles and other individuals in the environment. Experiments
are conducted with a few subjects in a laboratory setting into which unknown
dynamic obstacles are introduced.
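The abstract describes a loop in which the robot enrolls the target's visual features, re-identifies the target among detected people, and follows only when gestures have armed the behavior. The paper itself does not publish this interface; the sketch below is a hypothetical, minimal illustration of such a loop, with all class names, thresholds, and the cosine-similarity matcher chosen for the example, not taken from the paper.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class FollowMe:
    """Illustrative person-following loop: Re-ID + gesture arming."""

    def __init__(self, reid_threshold=0.8):
        self.target_features = None      # learned appearance of the target
        self.following = False           # armed/disarmed by hand gestures
        self.reid_threshold = reid_threshold

    def enroll(self, features):
        """Learn the target person's appearance features."""
        self.target_features = features

    def on_gesture(self, gesture):
        """A 'follow' gesture starts the behavior; 'stop' halts it."""
        if gesture == "follow":
            self.following = True
        elif gesture == "stop":
            self.following = False

    def step(self, detections):
        """Re-identify the target among detections and return a goal.

        `detections` is a list of (features, position) pairs; returns the
        position of the best match above the Re-ID threshold, or None.
        The returned goal would be handed to a collision-free navigation
        stack, which plans around obstacles and other people.
        """
        if not self.following or self.target_features is None:
            return None
        best_pos, best_sim = None, self.reid_threshold
        for features, position in detections:
            sim = cosine_similarity(self.target_features, features)
            if sim > best_sim:
                best_sim, best_pos = sim, position
        return best_pos
```

For example, after `enroll([1.0, 0.0, 0.5])` and a "follow" gesture, `step` returns the position of the detection whose features best match the enrolled ones, and returns None again after a "stop" gesture.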
Related papers
- CoNav: A Benchmark for Human-Centered Collaborative Navigation [66.6268966718022]
We propose a collaborative navigation (CoNav) benchmark.
Our CoNav tackles the critical challenge of constructing a 3D navigation environment with realistic and diverse human activities.
We propose an intention-aware agent for reasoning both long-term and short-term human intention.
arXiv Detail & Related papers (2024-06-04T15:44:25Z)
- CARPE-ID: Continuously Adaptable Re-identification for Personalized Robot Assistance [16.948256303861022]
In today's Human-Robot Interaction (HRI) scenarios, a prevailing tendency exists to assume that the robot shall cooperate with the closest individual.
We propose a person re-identification module based on continual visual adaptation techniques.
We test the framework both on recorded videos in a laboratory environment and in an HRI scenario with a mobile robot.
arXiv Detail & Related papers (2023-10-30T10:24:21Z)
- Improving safety in physical human-robot collaboration via deep metric learning [36.28667896565093]
Direct physical interaction with robots is becoming increasingly important in flexible production scenarios.
In order to keep the risk potential low, relatively simple measures are prescribed for operation, such as stopping the robot if there is physical contact or if a safety distance is violated.
This work uses the Deep Metric Learning (DML) approach to distinguish between non-contact robot movement, intentional contact aimed at physical human-robot interaction, and collision situations.
arXiv Detail & Related papers (2023-02-23T11:26:51Z)
- Generalizable Human-Robot Collaborative Assembly Using Imitation Learning and Force Control [17.270360447188196]
We present a system for human-robot collaborative assembly using learning from demonstration and pose estimation.
The proposed system is demonstrated using a physical 6 DoF manipulator in a collaborative human-robot assembly scenario.
arXiv Detail & Related papers (2022-12-02T20:35:55Z)
- Learning to Autonomously Reach Objects with NICO and Grow-When-Required Networks [12.106301681662655]
A developmental robotics approach is used to learn visuomotor coordination on the NICO platform for the task of object reaching.
Multiple Grow-When-Required (GWR) networks are used to learn increasingly more complex motoric behaviors.
We show that the humanoid robot NICO is able to reach objects with a 76% success rate.
arXiv Detail & Related papers (2022-10-14T14:23:57Z)
- Gesture2Path: Imitation Learning for Gesture-aware Navigation [54.570943577423094]
We present Gesture2Path, a novel social navigation approach that combines image-based imitation learning with model-predictive control.
We deploy our method on real robots and showcase the effectiveness of our approach for the four gestures-navigation scenarios.
arXiv Detail & Related papers (2022-09-19T23:05:36Z)
- Spatial Computing and Intuitive Interaction: Bringing Mixed Reality and Robotics Together [68.44697646919515]
This paper presents several human-robot systems that utilize spatial computing to enable novel robot use cases.
The combination of spatial computing and egocentric sensing on mixed reality devices enables them to capture and understand human actions and translate these to actions with spatial meaning.
arXiv Detail & Related papers (2022-02-03T10:04:26Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Guided Navigation from Multiple Viewpoints using Qualitative Spatial Reasoning [0.0]
This work aims to develop algorithms capable of guiding a sensory deprived robot to a goal location.
The main task considered in this work is, given a group of autonomous agents, the development and evaluation of algorithms capable of producing a set of high-level commands.
arXiv Detail & Related papers (2020-11-03T00:34:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.