ProxEmo: Gait-based Emotion Learning and Multi-view Proxemic Fusion for
Socially-Aware Robot Navigation
- URL: http://arxiv.org/abs/2003.01062v2
- Date: Tue, 28 Jul 2020 15:38:09 GMT
- Authors: Venkatraman Narayanan, Bala Murali Manoghar, Vishnu Sashank Dorbala,
Dinesh Manocha, Aniket Bera
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present ProxEmo, a novel end-to-end emotion prediction algorithm for
socially aware robot navigation among pedestrians. Our approach predicts the
perceived emotions of a pedestrian from walking gaits, which are then used for
emotion-guided navigation taking into account social and proxemic constraints.
To classify emotions, we propose a multi-view skeleton graph convolution-based
model that works on a commodity camera mounted onto a moving robot. Our emotion
recognition is integrated into a mapless navigation scheme and makes no
assumptions about the environment of pedestrian motion. It achieves a mean
average precision of 82.47% for emotion prediction on the Emotion-Gait benchmark
dataset. We outperform current state-of-the-art algorithms for emotion recognition
from 3D gaits. We highlight its benefits in terms of navigation in indoor
scenes using a Clearpath Jackal robot.
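As a rough illustration of the classification side, here is a minimal PyTorch sketch of a skeleton graph-convolution emotion classifier over 3D gait sequences. The 16-joint skeleton, layer widths, and the four Emotion-Gait classes (happy, sad, angry, neutral) are assumptions for illustration; ProxEmo's actual model additionally groups gaits into a multi-view representation, which is omitted here.

```python
# Minimal sketch, NOT the paper's architecture: a plain skeleton-GCN classifier.
import torch
import torch.nn as nn

class SkeletonGCNLayer(nn.Module):
    """One spatial graph convolution over a fixed skeleton adjacency A."""
    def __init__(self, in_ch, out_ch, A):
        super().__init__()
        self.register_buffer("A", A)                 # (V, V) normalized adjacency
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):                            # x: (N, C, T, V)
        x = torch.einsum("nctv,vw->nctw", x, self.A)  # aggregate joint neighbors
        return torch.relu(self.conv(x))

class GaitEmotionNet(nn.Module):
    """Gait sequence (3D joints over time) -> emotion logits."""
    def __init__(self, A, num_classes=4):
        super().__init__()
        self.gcn = nn.Sequential(
            SkeletonGCNLayer(3, 32, A),              # input channels: x, y, z per joint
            SkeletonGCNLayer(32, 64, A),
        )
        self.head = nn.Linear(64, num_classes)       # happy / sad / angry / neutral

    def forward(self, x):                            # x: (N, 3, T, V)
        x = self.gcn(x)
        x = x.mean(dim=[2, 3])                       # global pool over time and joints
        return self.head(x)

V = 16                                               # assumed 16-joint skeleton
model = GaitEmotionNet(torch.eye(V))                 # stand-in adjacency; use the real graph
logits = model(torch.randn(8, 3, 48, V))             # 8 gaits, 48 frames each
```

In the full pipeline, the predicted emotion would set a proxemic comfort distance around each pedestrian that the navigation planner treats as a constraint; that mapping is not shown here.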
Related papers
- HappyRouting: Learning Emotion-Aware Route Trajectories for Scalable In-The-Wild Navigation [24.896210787867368]
We present HappyRouting, a novel navigation-based empathic car interface guiding drivers through real-world traffic while evoking positive emotions.
Our contribution is a machine learning-based generated emotion map layer, predicting emotions along routes based on static and dynamic contextual data.
We discuss how emotion-based routing can be integrated into navigation apps, promoting emotional well-being in everyday mobility.
arXiv Detail & Related papers (2024-01-28T16:44:17Z)
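A hedged sketch of the routing idea behind HappyRouting: treat predicted emotion as an extra per-edge cost and run a standard shortest-path search. The graph, the per-edge `stress` scores, and the `alpha` trade-off below are illustrative stand-ins; the paper learns its emotion map from static and dynamic context rather than using fixed values.

```python
# Illustrative sketch: emotion-aware routing as a weighted shortest path.
import networkx as nx

def emotion_aware_route(G, src, dst, alpha=0.5):
    """Trade travel time against predicted negative affect along each edge."""
    for _, _, data in G.edges(data=True):
        # data["time"]: travel time; data["stress"]: predicted negative emotion in [0, 1]
        data["cost"] = data["time"] * ((1 - alpha) + alpha * data["stress"])
    return nx.shortest_path(G, src, dst, weight="cost")

G = nx.DiGraph()
G.add_edge("home", "highway", time=10, stress=0.8)
G.add_edge("highway", "office", time=5, stress=0.7)
G.add_edge("home", "park_road", time=14, stress=0.1)
G.add_edge("park_road", "office", time=6, stress=0.2)
print(emotion_aware_route(G, "home", "office", alpha=0.7))  # picks the calmer route
```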
- MoEmo Vision Transformer: Integrating Cross-Attention and Movement Vectors in 3D Pose Estimation for HRI Emotion Detection [4.757210144179483]
We introduce MoEmo (Motion to Emotion), a cross-attention vision transformer (ViT) for human emotion detection within robotics systems.
We implement a cross-attention fusion model to combine movement vectors and environment contexts into a joint representation to derive emotion estimation.
We train the MoEmo system to jointly analyze motion and context, yielding emotion detection that outperforms the current state-of-the-art.
arXiv Detail & Related papers (2023-10-15T06:52:15Z)
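A minimal sketch of the cross-attention fusion described for MoEmo, assuming pre-computed movement-vector and scene-context token sequences; the dimensions, mean pooling, and four-class head are illustrative, not the paper's exact architecture.

```python
# Illustrative sketch: movement-vector queries attend over scene-context tokens.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=128, heads=4, num_classes=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, motion_tokens, context_tokens):
        # Motion queries; environment context supplies keys and values.
        fused, _ = self.attn(motion_tokens, context_tokens, context_tokens)
        return self.head(fused.mean(dim=1))          # pooled emotion logits

model = CrossAttentionFusion()
motion = torch.randn(2, 30, 128)                     # (batch, time steps, feature dim)
context = torch.randn(2, 49, 128)                    # (batch, scene patches, feature dim)
print(model(motion, context).shape)                  # torch.Size([2, 4])
```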
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
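A hedged sketch of an RPT-style pre-training objective: mask a subset of sensorimotor tokens and train a Transformer to reconstruct them. The token layout, model sizes, and the 50% mask ratio are assumptions for illustration.

```python
# Illustrative sketch: masked prediction over sensorimotor token sequences.
import torch
import torch.nn as nn

class SensorimotorPretrainer(nn.Module):
    """Transformer over interleaved camera / proprioception / action tokens."""
    def __init__(self, dim=256, depth=4, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.mask_token = nn.Parameter(torch.zeros(dim))
        self.predict = nn.Linear(dim, dim)

    def forward(self, tokens, mask):
        # tokens: (N, L, D) sensorimotor token sequence
        # mask:   (N, L) bool, True where a token is hidden from the model
        x = torch.where(mask.unsqueeze(-1), self.mask_token, tokens)
        return self.predict(self.encoder(x))

model = SensorimotorPretrainer()
tokens = torch.randn(8, 64, 256)
mask = torch.rand(8, 64) < 0.5                       # illustrative 50% mask ratio
recon = model(tokens, mask)
loss = (recon - tokens)[mask].pow(2).mean()          # reconstruct only masked tokens
```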
- e-Inu: Simulating A Quadruped Robot With Emotional Sentience [4.15623340386296]
This paper discusses the understanding and virtual simulation of a quadruped robot capable of detecting and understanding human emotions.
We use a combination of reinforcement learning and software engineering concepts to simulate a quadruped robot that can understand emotions.
The video emotion detection system produced results that are almost on par with the state of the art, with an accuracy of 99.66%.
arXiv Detail & Related papers (2023-01-03T06:28:45Z)
- Gesture2Path: Imitation Learning for Gesture-aware Navigation [54.570943577423094]
We present Gesture2Path, a novel social navigation approach that combines image-based imitation learning with model-predictive control.
We deploy our method on real robots and showcase the effectiveness of our approach in four gesture-navigation scenarios.
arXiv Detail & Related papers (2022-09-19T23:05:36Z)
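A rough sketch of how an imitation-learned gesture path could be combined with model-predictive control, in the spirit of Gesture2Path: roll out candidate controls under a simple unicycle model and score them against both the goal and the path suggested by the learned policy. The dynamics, costs, and weights are illustrative stand-ins, not the paper's method.

```python
# Illustrative sketch: MPC-style rollout selection guided by a gesture path.
import numpy as np

def mpc_select(state, candidate_controls, goal, gesture_path, horizon=10, dt=0.1):
    """Score constant-control unicycle rollouts; return the best (v, w)."""
    best, best_cost = None, np.inf
    for v, w in candidate_controls:                  # linear / angular velocity
        x, y, th = state
        cost = 0.0
        for t in range(horizon):
            th += w * dt
            x += v * np.cos(th) * dt
            y += v * np.sin(th) * dt
            cost += np.hypot(x - goal[0], y - goal[1])      # progress to goal
            px, py = gesture_path[min(t, len(gesture_path) - 1)]
            cost += 2.0 * np.hypot(x - px, y - py)          # honor the gesture path
        if cost < best_cost:
            best, best_cost = (v, w), cost
    return best

# e.g. a "go right" gesture: waypoints curving to the robot's right
path = [(0.1 * t, -0.05 * t) for t in range(10)]
controls = [(0.5, w) for w in (-0.6, -0.3, 0.0, 0.3, 0.6)]
print(mpc_select((0.0, 0.0, 0.0), controls, goal=(1.0, -0.5), gesture_path=path))
```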
- Socially Compliant Navigation Dataset (SCAND): A Large-Scale Dataset of Demonstrations for Social Navigation [92.66286342108934]
Social navigation is the capability of an autonomous agent, such as a robot, to navigate in a 'socially compliant' manner in the presence of other intelligent agents such as humans.
Our dataset contains 8.7 hours, 138 trajectories, and 25 miles of socially compliant, human-teleoperated driving demonstrations.
arXiv Detail & Related papers (2022-03-28T19:09:11Z)
- Emotion Recognition From Gait Analyses: Current Research and Future Directions [48.93172413752614]
Gait conveys information about the walker's emotion.
The mapping between various emotions and gait patterns provides a new source for automated emotion recognition.
Gait is remotely observable, more difficult to imitate, and requires less cooperation from the subject.
arXiv Detail & Related papers (2020-03-13T08:22:33Z)
- Take an Emotion Walk: Perceiving Emotions from Gaits Using Hierarchical Attention Pooling and Affective Mapping [55.72376663488104]
We present an autoencoder-based approach to classify perceived human emotions from walking styles obtained from videos or motion-captured data.
Given the motion of each joint in the pose at each time step, extracted from 3D pose sequences, we hierarchically pool these joint motions in the encoder.
We train the decoder to reconstruct the motions per joint per time step in a top-down manner from the latent embeddings.
arXiv Detail & Related papers (2019-11-20T05:04:16Z)
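A minimal sketch of the encode-pool / decode idea in 'Take an Emotion Walk': per-joint motion features are pooled hierarchically (joints into limbs, limbs into a body-level code), and a decoder reconstructs per-joint motion at each time step. The limb grouping, layer sizes, and mean-pooling choice are assumptions for illustration.

```python
# Illustrative sketch: hierarchical-pooling autoencoder over gait sequences.
import torch
import torch.nn as nn

class GaitAutoencoder(nn.Module):
    def __init__(self, joints=16, dim=32, latent=64):
        super().__init__()
        self.embed = nn.Linear(3, dim)               # per-joint 3D motion -> feature
        self.limb_pool = nn.Linear(dim, dim)         # pool the joints of one limb
        self.body_pool = nn.Linear(dim, latent)      # pool limbs into a body-level code
        self.decode = nn.Linear(latent, joints * 3)
        self.joints = joints

    def forward(self, motion, limb_groups):
        # motion: (N, T, V, 3) per-joint displacements; limb_groups: lists of joint indices
        feats = torch.relu(self.embed(motion))                        # (N, T, V, D)
        limbs = [torch.relu(self.limb_pool(feats[:, :, g].mean(2)))   # (N, T, D) per limb
                 for g in limb_groups]
        z = torch.relu(self.body_pool(torch.stack(limbs, 2).mean(2)))  # (N, T, latent)
        recon = self.decode(z).view(motion.shape[0], motion.shape[1], self.joints, 3)
        return recon, z

model = GaitAutoencoder()
motion = torch.randn(4, 48, 16, 3)
groups = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]  # assumed limbs
recon, z = model(motion, groups)
loss = nn.functional.mse_loss(recon, motion)         # reconstruct per joint, per time step
```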
This list is automatically generated from the titles and abstracts of the papers on this site.