Gesture2Path: Imitation Learning for Gesture-aware Navigation
- URL: http://arxiv.org/abs/2209.09375v1
- Date: Mon, 19 Sep 2022 23:05:36 GMT
- Title: Gesture2Path: Imitation Learning for Gesture-aware Navigation
- Authors: Catie Cuan, Edward Lee, Emre Fisher, Anthony Francis, Leila Takayama,
Tingnan Zhang, Alexander Toshev, and Sören Pirk
- Abstract summary: We present Gesture2Path, a novel social navigation approach that combines image-based imitation learning with model-predictive control.
We deploy our method on real robots and showcase the effectiveness of our approach in four gesture-navigation scenarios.
- Score: 54.570943577423094
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: As robots increasingly enter human-centered environments, they must not only
be able to navigate safely around humans, but also adhere to complex social
norms. Humans often rely on non-verbal communication through gestures and
facial expressions when navigating around other people, especially in densely
occupied spaces. Consequently, robots also need to be able to interpret
gestures as part of solving social navigation tasks. To this end, we present
Gesture2Path, a novel social navigation approach that combines image-based
imitation learning with model-predictive control. Gestures are interpreted
based on a neural network that operates on streams of images, while we use a
state-of-the-art model predictive control algorithm to solve point-to-point
navigation tasks. We deploy our method on real robots and showcase the
effectiveness of our approach in four gesture-navigation scenarios: left,
right, follow me, and make a circle. Our experiments indicate that our
method is able to successfully interpret complex human gestures and to use them
as a signal to generate socially compliant trajectories for navigation tasks.
We validated our method based on in-situ ratings of participants interacting
with the robots.
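For intuition, here is a minimal sketch of the two-part idea the abstract describes: a neural network that classifies a gesture from a stream of images, and a model-predictive controller that solves a point-to-point navigation task while using the recognized gesture as a signal. Everything below (the four-way label set, the CNN+GRU layout, and the goal-offset heuristic inside a sampling-based MPC step) is an illustrative assumption, not the authors' implementation.

    # Illustrative sketch only: gesture classifier over an image stream feeding
    # a sampling-based MPC step whose goal is shifted by the recognized gesture.
    import numpy as np
    import torch
    import torch.nn as nn

    GESTURES = ["left", "right", "follow_me", "make_circle"]  # assumed label set

    class GestureNet(nn.Module):
        """Per-frame CNN features pooled by a GRU over the image stream."""
        def __init__(self, num_classes=len(GESTURES)):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.rnn = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
            self.head = nn.Linear(64, num_classes)

        def forward(self, frames):                 # frames: (B, T, 3, H, W)
            b, t = frames.shape[:2]
            feats = self.backbone(frames.flatten(0, 1)).view(b, t, -1)
            _, h = self.rnn(feats)
            return self.head(h[-1])                # logits: (B, num_classes)

    def mpc_step(pos, goal, gesture, horizon=10, samples=256, dt=0.2):
        """Sample velocity sequences and return the first action of the cheapest one.
        The gesture biases the goal laterally, a simplification of using the
        gesture as a signal when generating the trajectory."""
        offsets = {"left": np.array([0.0, 1.0]), "right": np.array([0.0, -1.0]),
                   "follow_me": np.zeros(2), "make_circle": np.zeros(2)}
        target = goal + offsets[gesture]
        vels = np.random.uniform(-1.0, 1.0, size=(samples, horizon, 2))   # (vx, vy)
        rollouts = pos + np.cumsum(vels * dt, axis=1)                     # (S, H, 2)
        cost = (np.linalg.norm(rollouts[:, -1] - target, axis=-1)         # distance to goal
                + 0.1 * np.square(vels).sum(axis=(1, 2)))                 # control effort
        return vels[np.argmin(cost), 0]

    if __name__ == "__main__":
        net = GestureNet().eval()
        frames = torch.zeros(1, 8, 3, 96, 96)      # dummy 8-frame RGB stream
        with torch.no_grad():
            gesture = GESTURES[net(frames).argmax(dim=-1).item()]
        cmd = mpc_step(pos=np.zeros(2), goal=np.array([3.0, 0.0]), gesture=gesture)
        print("recognized gesture:", gesture, "| first velocity command:", cmd)

In a deployed system the controller would run at a fixed rate on the robot and the gesture signal would shape the full trajectory cost rather than a single goal offset; the sketch only shows how the two components can be wired together.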
Related papers
- CANVAS: Commonsense-Aware Navigation System for Intuitive Human-Robot Interaction [19.997935470257794]
We present CANVAS, a framework that combines visual and linguistic instructions for commonsense-aware navigation.
Its success is driven by imitation learning, enabling the robot to learn from human navigation behavior.
Our experiments show that CANVAS outperforms the strong rule-based system ROS NavStack across all environments.
arXiv Detail & Related papers (2024-10-02T06:34:45Z) - Learning Strategies For Successful Crowd Navigation [0.0]
We focus on crowd navigation, using a neural network to learn specific strategies in-situ with a robot.
A CNN takes a top-down image of the scene as input and outputs the next action for the robot to take in terms of speed and angle.
arXiv Detail & Related papers (2024-04-09T18:25:21Z) - SACSoN: Scalable Autonomous Control for Social Navigation [62.59274275261392]
We develop methods for training policies for socially unobtrusive navigation.
By minimizing this counterfactual perturbation, i.e., the change the robot's presence induces in nearby humans' behavior, we can induce robots to behave in ways that do not alter the natural behavior of humans in the shared space.
We collect a large dataset where an indoor mobile robot interacts with human bystanders.
arXiv Detail & Related papers (2023-06-02T19:07:52Z) - Learning Social Navigation from Demonstrations with Conditional Neural
Processes [2.627046865670577]
This paper presents a data-driven navigation architecture that uses Conditional Neural Processes to learn global and local controllers of the mobile robot from observations.
Our results demonstrate that the proposed framework can successfully carry out navigation tasks while complying with the social norms present in the demonstration data.
arXiv Detail & Related papers (2022-10-07T14:37:56Z) - Socially Compliant Navigation Dataset (SCAND): A Large-Scale Dataset of
Demonstrations for Social Navigation [92.66286342108934]
Social navigation is the capability of an autonomous agent, such as a robot, to navigate in a 'socially compliant' manner in the presence of other intelligent agents such as humans.
Our dataset contains 8.7 hours, 138 trajectories, and 25 miles of socially compliant, human-teleoperated driving demonstrations.
arXiv Detail & Related papers (2022-03-28T19:09:11Z) - Communicative Learning with Natural Gestures for Embodied Navigation
Agents with Human-in-the-Scene [34.1812210095966]
We develop a VR-based 3D simulation environment, named Ges-THOR, based on AI2-THOR platform.
In this virtual environment, a human player is placed in the same virtual scene and shepherds the artificial agent using only gestures.
We argue that learning the semantics of natural gestures is mutually beneficial to learning the navigation task: learn to communicate and communicate to learn.
arXiv Detail & Related papers (2021-08-05T20:56:47Z) - ReLMM: Practical RL for Learning Mobile Manipulation Skills Using Only
Onboard Sensors [64.2809875343854]
We study how robots can autonomously learn skills that require a combination of navigation and grasping.
Our system, ReLMM, can learn continuously on a real-world platform without any environment instrumentation.
After a grasp curriculum training phase, ReLMM can learn navigation and grasping together fully automatically, in around 40 hours of real-world training.
arXiv Detail & Related papers (2021-07-28T17:59:41Z) - Visual Navigation Among Humans with Optimal Control as a Supervisor [72.5188978268463]
We propose an approach that combines learning-based perception with model-based optimal control to navigate among humans.
Our approach is enabled by our novel data-generation tool, HumANav.
We demonstrate that the learned navigation policies can anticipate and react to humans without explicitly predicting future human motion.
arXiv Detail & Related papers (2020-03-20T16:13:47Z) - ProxEmo: Gait-based Emotion Learning and Multi-view Proxemic Fusion for
Socially-Aware Robot Navigation [65.11858854040543]
We present ProxEmo, a novel end-to-end emotion prediction algorithm for robot navigation among pedestrians.
Our approach predicts the perceived emotions of a pedestrian from walking gaits, which are then used for emotion-guided navigation.
arXiv Detail & Related papers (2020-03-02T17:47:49Z)