Deformable Convolutional LSTM for Human Body Emotion Recognition
- URL: http://arxiv.org/abs/2010.14607v1
- Date: Tue, 27 Oct 2020 21:01:09 GMT
- Title: Deformable Convolutional LSTM for Human Body Emotion Recognition
- Authors: Peyman Tahghighi, Abbas Koochari, Masoume Jalali
- Abstract summary: We incorporate deformable behavior into the core of convolutional long short-term memory (ConvLSTM) to improve robustness to image deformations such as scaling and rotation.
Experiments on the GEMEP dataset achieved a state-of-the-art accuracy of 98.8% on whole-body emotion recognition.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: People express their emotions in a myriad of ways. Among the most important
are whole-body expressions, which have many applications in fields such as
human-computer interaction (HCI). One of the key challenges in human emotion
recognition is that people express the same feeling in various ways using their
face and body. Recently, many methods have tried to overcome these challenges
using Deep Neural Networks (DNNs). However, most of these methods were based on
images or facial expressions only and did not account for deformations in the
image, such as scaling and rotation, that can adversely affect recognition
accuracy. In this work, motivated by recent research on deformable convolutions,
we incorporate deformable behavior into the core of convolutional long
short-term memory (ConvLSTM) to improve robustness to such deformations and,
consequently, improve accuracy on emotion recognition from videos of arbitrary
length. We conducted experiments on the GEMEP dataset and achieved a
state-of-the-art accuracy of 98.8% on whole-body emotion recognition on the
validation set.
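A minimal sketch of the idea is given below, assuming the common formulation in which the gate convolutions of a ConvLSTM cell are replaced by deformable convolutions whose sampling offsets are predicted from the current input and hidden state. The class name, gate layout, and shapes are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: ConvLSTM gates computed with a deformable convolution
# (torchvision's DeformConv2d). Offsets are predicted by a plain conv from
# the concatenated input and hidden state. Names and gate layout are
# assumptions, not the paper's exact design.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableConvLSTMCell(nn.Module):
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        pad = k // 2
        # One deformable conv emits all four gates (input, forget, output, cell).
        self.gates = DeformConv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=pad)
        # 2 offset channels (dx, dy) per kernel tap.
        self.offsets = nn.Conv2d(in_ch + hid_ch, 2 * k * k, k, padding=pad)

    def forward(self, x, state):
        h, c = state
        z = torch.cat([x, h], dim=1)
        i, f, o, g = torch.chunk(self.gates(z, self.offsets(z)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


# Unrolling the cell frame by frame handles videos of arbitrary length.
cell = DeformableConvLSTMCell(in_ch=3, hid_ch=32)
h = c = torch.zeros(1, 32, 64, 64)
for frame in torch.randn(16, 1, 3, 64, 64):  # 16 RGB frames, 64x64
    h, c = cell(frame, (h, c))
```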
Related papers
- Extreme Image Transformations Affect Humans and Machines Differently [0.0]
Some recent artificial neural networks (ANNs) claim to model aspects of primate neural and human performance data.
We introduce a set of novel image transforms inspired by neurophysiological findings and evaluate humans and ANNs on an object recognition task.
We show that machines perform better than humans on certain transforms but struggle to match human performance on others that humans find easy.
arXiv Detail & Related papers (2022-11-30T18:12:53Z)
- EMOCA: Emotion Driven Monocular Face Capture and Animation [59.15004328155593]
We introduce a novel deep perceptual emotion consistency loss during training, which helps ensure that the reconstructed 3D expression matches the expression depicted in the input image.
On the task of in-the-wild emotion recognition, our purely geometric approach is on par with the best image-based methods, highlighting the value of 3D geometry in analyzing human behavior.
arXiv Detail & Related papers (2022-04-24T15:58:35Z)
- Affect-DML: Context-Aware One-Shot Recognition of Human Affect using Deep Metric Learning [29.262204241732565]
Existing methods assume that all emotions-of-interest are given a priori as annotated training examples.
We conceptualize one-shot recognition of emotions in context -- a new problem aimed at recognizing human affect states at a finer level of granularity from a single support sample.
All variants of our model clearly outperform the random baseline, while leveraging the semantic scene context consistently improves the learnt representations.
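As a rough illustration of the one-shot setup, the sketch below assigns a query the label of its nearest support embedding under cosine similarity; `embed` stands in for any trained metric-learning encoder and is an assumption, not the Affect-DML architecture itself.

```python
# Hedged sketch of one-shot affect recognition via metric learning:
# nearest-support matching in an embedding space. `embed` is a placeholder
# for a trained encoder, not the paper's model.
import torch
import torch.nn.functional as F

def one_shot_predict(embed, query, supports, labels):
    """query: (1, ...); supports: (C, ...), one sample per emotion class."""
    q = F.normalize(embed(query), dim=-1)     # (1, d)
    s = F.normalize(embed(supports), dim=-1)  # (C, d)
    sims = (s @ q.T).squeeze(-1)              # cosine similarity to each class
    return labels[sims.argmax().item()]
```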
arXiv Detail & Related papers (2021-11-30T10:35:20Z)
- Multi-Cue Adaptive Emotion Recognition Network [4.570705738465714]
We propose a new deep learning approach for emotion recognition based on adaptive multi-cues.
We compare the proposed approach with state-of-the-art approaches on the CAER-S dataset.
arXiv Detail & Related papers (2021-11-03T15:08:55Z)
- Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first work to introduce a stimuli selection process into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
arXiv Detail & Related papers (2021-09-04T08:14:52Z)
- SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap.
We also perform a systematic empirical analysis on synthetic face images to provide insights on how to effectively utilize synthetic data for face recognition.
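Both IM and DM are variants of the generic mixup interpolation sketched below; exactly which tensors SynFace interpolates (identity embeddings versus image or feature domains) is not detailed here, so treat this as the underlying operation only.

```python
# Hedged sketch of the generic mixup operation underlying identity mixup (IM)
# and domain mixup (DM): a convex combination of two samples or embeddings.
import torch

def mixup(a: torch.Tensor, b: torch.Tensor, lam: float) -> torch.Tensor:
    """Interpolate two tensors of the same shape with coefficient lam in [0, 1]."""
    return lam * a + (1.0 - lam) * b
```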
arXiv Detail & Related papers (2021-08-18T03:41:54Z)
- Preserving Privacy in Human-Motion Affect Recognition [4.753703852165805]
This work evaluates the effectiveness of existing methods at recognising emotions using both 3D temporal joint signals and manually extracted features.
We propose a cross-subject transfer learning technique for training a multi-encoder autoencoder deep neural network to learn disentangled latent representations of human motion features.
arXiv Detail & Related papers (2021-05-09T15:26:21Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory units, as well as inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
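The inflation step mentioned above can be sketched as follows, under the usual I3D-style assumption that a pretrained 2D kernel is repeated along a new temporal axis and rescaled to preserve activation magnitude; the function name is illustrative.

```python
# Hedged sketch of 2D-to-3D weight inflation (I3D-style): repeat a pretrained
# 2D kernel along the time axis and divide by the temporal extent so initial
# 3D responses match the 2D ones on a static input.
import torch

def inflate_conv_weight(w2d: torch.Tensor, t: int) -> torch.Tensor:
    """w2d: (out_ch, in_ch, kH, kW) -> inflated (out_ch, in_ch, t, kH, kW)."""
    return w2d.unsqueeze(2).repeat(1, 1, t, 1, 1) / t
```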
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- Impact of multiple modalities on emotion recognition: investigation into 3d facial landmarks, action units, and physiological data [4.617405932149653]
We analyze the impact of 3D facial data, action units, and physiological data on emotion recognition.
Our analysis indicates that both 3D facial landmarks and physiological data are encouraging for expression/emotion recognition.
On the other hand, while action units can positively impact emotion recognition when fused with other modalities, the results suggest it is difficult to detect emotion using them in a unimodal fashion.
arXiv Detail & Related papers (2020-03-13T08:22:33Z)
- Emotion Recognition From Gait Analyses: Current Research and Future Directions [48.93172413752614]
Gait conveys information about the walker's emotion.
The mapping between various emotions and gait patterns provides a new source for automated emotion recognition.
Gait is remotely observable, more difficult to imitate, and requires less cooperation from the subject.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.