Emotion Recognition From Gait Analyses: Current Research and Future
Directions
- URL: http://arxiv.org/abs/2003.11461v4
- Date: Sat, 16 Jul 2022 02:18:32 GMT
- Title: Emotion Recognition From Gait Analyses: Current Research and Future
Directions
- Authors: Shihao Xu, Jing Fang, Xiping Hu, Edith Ngai, Wei Wang, Yi Guo, Victor
C.M. Leung
- Abstract summary: Gait conveys information about the walker's emotion.
The mapping between various emotions and gait patterns provides a new source for automated emotion recognition.
Gait is remotely observable, more difficult to imitate, and requires less cooperation from the subject.
- Score: 48.93172413752614
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human gait is a daily motion that not only represents mobility but
can also be used to identify the walker, by either human observers or computers.
Recent studies reveal that gait even conveys information about the walker's
emotion. Individuals in different emotion states may show different gait
patterns. The mapping between various emotions and gait patterns provides a new
source for automated emotion recognition. Compared to traditional emotion
detection biometrics, such as facial expression, speech and physiological
parameters, gait is remotely observable, more difficult to imitate, and
requires less cooperation from the subject. These advantages make gait a
promising source for emotion detection. This article reviews current research
on gait-based emotion detection, particularly on how gait parameters can be
affected by different emotion states and how the emotion states can be
recognized through distinct gait patterns. We focus on the detailed methods and
techniques applied in the whole process of emotion recognition: data
collection, preprocessing, and classification. Finally, we discuss possible
future developments of efficient and effective gait-based emotion recognition
using state-of-the-art techniques in intelligent computation and big data.
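The pipeline the survey describes (data collection, preprocessing into gait parameters, then classification) can be sketched in miniature. The features, the synthetic "walks", and the toy premise that happy walkers move faster with steadier strides are all illustrative assumptions for this sketch, not the method of any paper surveyed here.

```python
# Hypothetical sketch of a gait-based emotion recognition pipeline:
# collect trajectories -> preprocess into gait features -> classify.
import math
import random

random.seed(0)

def synthetic_walk(speed, jitter, n=50):
    """Stand-in for data collection: a noisy straight-line walk,
    returned as a sequence of (x, y) hip positions."""
    x, traj = 0.0, []
    for _ in range(n):
        x += speed + random.gauss(0, jitter)
        traj.append((x, 0.0))
    return traj

def extract_features(trajectory):
    """Preprocessing: derive simple gait parameters (mean step length
    and step variability) from consecutive positions."""
    steps = [math.dist(a, b) for a, b in zip(trajectory, trajectory[1:])]
    mean_step = sum(steps) / len(steps)
    variability = (sum((s - mean_step) ** 2 for s in steps) / len(steps)) ** 0.5
    return (mean_step, variability)

# Toy training data under the assumed premise above.
train = [(extract_features(synthetic_walk(1.2, 0.05)), "happy") for _ in range(20)]
train += [(extract_features(synthetic_walk(0.6, 0.20)), "sad") for _ in range(20)]

def centroid(label):
    feats = [f for f, lbl in train if lbl == label]
    return tuple(sum(c) / len(feats) for c in zip(*feats))

centroids = {label: centroid(label) for label in ("happy", "sad")}

def classify(trajectory):
    """Classification: nearest-centroid rule over the gait features."""
    f = extract_features(trajectory)
    return min(centroids, key=lambda lbl: math.dist(f, centroids[lbl]))
```

In the surveyed literature the feature set is far richer (joint angles, cadence, posture) and the classifiers range from SVMs to deep networks; this sketch only fixes the shape of the three stages.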
Related papers
- Multi-Cue Adaptive Emotion Recognition Network [4.570705738465714]
We propose a new deep learning approach for emotion recognition based on adaptive multi-cues.
We compare the proposed approach with the state-of-the-art approaches on the CAER-S dataset.
arXiv Detail & Related papers (2021-11-03T15:08:55Z)
- Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first work to introduce a stimuli selection process into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
arXiv Detail & Related papers (2021-09-04T08:14:52Z)
- Emotion Recognition from Multiple Modalities: Fundamentals and Methodologies [106.62835060095532]
We discuss several key aspects of multi-modal emotion recognition (MER).
We begin with a brief introduction on widely used emotion representation models and affective modalities.
We then summarize existing emotion annotation strategies and corresponding computational tasks.
Finally, we outline several real-world applications and discuss some future directions.
arXiv Detail & Related papers (2021-08-18T21:55:20Z)
- Emotion Recognition for Healthcare Surveillance Systems Using Neural Networks: A Survey [8.31246680772592]
We present recent research in the field of using neural networks to recognize emotions.
We focus on emotion recognition from speech, facial expressions, and audio-visual input.
These three emotion recognition techniques can be used as a surveillance system in healthcare centers to monitor patients.
arXiv Detail & Related papers (2021-07-13T11:17:00Z)
- A Circular-Structured Representation for Visual Emotion Distribution Learning [82.89776298753661]
We propose a well-grounded circular-structured representation to utilize the prior knowledge for visual emotion distribution learning.
To be specific, we first construct an Emotion Circle to unify any emotional state within it.
On the proposed Emotion Circle, each emotion distribution is represented with an emotion vector, which is defined with three attributes.
arXiv Detail & Related papers (2021-06-23T14:53:27Z)
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of face muscles movements.
We determine whether there are time-related differences in expressions among emotional groups by using a functional F-test.
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
- Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle the aforementioned issues.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)
- Temporal aggregation of audio-visual modalities for emotion recognition [0.5352699766206808]
We propose a multimodal fusion technique for emotion recognition based on combining audio-visual modalities from a temporal window with different temporal offsets for each modality.
Our proposed method outperforms other methods from the literature as well as the human accuracy rating.
arXiv Detail & Related papers (2020-07-08T18:44:15Z)
- Impact of multiple modalities on emotion recognition: investigation into 3d facial landmarks, action units, and physiological data [4.617405932149653]
We analyze 3D facial data, action units, and physiological data in terms of their impact on emotion recognition.
Our analysis indicates that both 3D facial landmarks and physiological data are encouraging for expression/emotion recognition.
On the other hand, while action units can positively impact emotion recognition when fused with other modalities, the results suggest it is difficult to detect emotion using them in a unimodal fashion.
arXiv Detail & Related papers (2020-05-17T18:59:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.