Continuous Emotion Recognition via Deep Convolutional Autoencoder and
Support Vector Regressor
- URL: http://arxiv.org/abs/2001.11976v1
- Date: Fri, 31 Jan 2020 17:47:16 GMT
- Title: Continuous Emotion Recognition via Deep Convolutional Autoencoder and
Support Vector Regressor
- Authors: Sevegni Odilon Clement Allognon, Alessandro L. Koerich, Alceu de S.
Britto Jr
- Abstract summary: It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
- Score: 70.2226417364135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic facial expression recognition is an important research area in
emotion recognition and computer vision. Applications can be found in several
domains such as medical treatment, driver fatigue surveillance, sociable
robotics, and several other human-computer interaction systems. Therefore, it
is crucial that the machine be able to recognize the emotional state of
the user with high accuracy. In recent years, deep neural networks have been
used with great success in recognizing emotions. In this paper, we present a
new model for continuous emotion recognition based on facial expression
recognition by using an unsupervised learning approach based on transfer
learning and autoencoders. The proposed approach also includes preprocessing
and post-processing techniques which contribute favorably to improving the
performance of predicting the concordance correlation coefficient for arousal
and valence dimensions. Experimental results for predicting spontaneous and
natural emotions on the RECOLA 2016 dataset have shown that the proposed
approach based on visual information can achieve CCCs of 0.516 and 0.264 for
valence and arousal, respectively.
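The reported results are measured with the concordance correlation coefficient (CCC), which jointly penalizes low correlation and mean/scale mismatch between predictions and annotations. A minimal sketch of the standard definition (Lin's CCC), assuming per-frame predictions and gold annotations as NumPy arrays; the function name is illustrative, not from the paper:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient.

    Equals 1 only when predictions match the annotations exactly;
    penalized by both low Pearson correlation and mean/variance shifts.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2.0 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)
```

A perfectly matching prediction sequence yields a CCC of 1.0; a systematic offset or rescaling lowers the score even when the Pearson correlation stays at 1.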
Related papers
- CAST-Phys: Contactless Affective States Through Physiological signals Database [74.28082880875368]
The lack of affective multi-modal datasets remains a major bottleneck in developing accurate emotion recognition systems. We present the Contactless Affective States Through Physiological Signals Database (CAST-Phys), a novel high-quality dataset capable of remote physiological emotion recognition. Our analysis highlights the crucial role of physiological signals in realistic scenarios where facial expressions alone may not provide sufficient emotional information.
arXiv Detail & Related papers (2025-07-08T15:20:24Z)
- Emotion Detection on User Front-Facing App Interfaces for Enhanced Schedule Optimization: A Machine Learning Approach [0.0]
We present and evaluate two complementary approaches to emotion detection: a biometric-based method utilizing heart rate (HR) data extracted from electrocardiogram (ECG) signals to predict the emotional dimensions of Valence, Arousal, and Dominance; and a behavioral method analyzing computer activity through multiple machine learning models to classify emotions based on fine-grained user interactions such as mouse movements, clicks, and keystroke patterns. Our comparative analysis, from real-world datasets, reveals that while both approaches demonstrate effectiveness, the computer activity-based method delivers superior consistency and accuracy, particularly for mouse-related interactions, which achieved approximately
arXiv Detail & Related papers (2025-06-24T03:21:46Z)
- Hybrid Quantum Deep Learning Model for Emotion Detection using raw EEG Signal Analysis [0.0]
This work presents a hybrid quantum deep learning technique for emotion recognition.
Conventional EEG-based emotion recognition techniques are limited by noise and high-dimensional data complexity.
The model will be extended for real-time applications and multi-class categorization in future work.
arXiv Detail & Related papers (2024-11-19T17:44:04Z)
- Emotion Detection through Body Gesture and Face [0.0]
The project addresses the challenge of emotion recognition by focusing on non-facial cues, specifically hand movements and body gestures.
Traditional emotion recognition systems mainly rely on facial expression analysis and often ignore the rich emotional information conveyed through body language.
The project aims to contribute to the field of affective computing by enhancing the ability of machines to interpret and respond to human emotions in a more comprehensive and nuanced way.
arXiv Detail & Related papers (2024-07-13T15:15:50Z)
- Alleviating Catastrophic Forgetting in Facial Expression Recognition with Emotion-Centered Models [49.3179290313959]
The proposed method, emotion-centered generative replay (ECgr), tackles this challenge by integrating synthetic images from generative adversarial networks.
ECgr incorporates a quality assurance algorithm to ensure the fidelity of generated images.
The experimental results on four diverse facial expression datasets demonstrate that incorporating images generated by our pseudo-rehearsal method enhances training on the targeted dataset and the source dataset.
arXiv Detail & Related papers (2024-04-18T15:28:34Z)
- Emotion Recognition from the perspective of Activity Recognition [0.0]
Appraising human emotional states, behaviors, and reactions displayed in real-world settings can be accomplished using latent continuous dimensions.
For emotion recognition systems to be deployed and integrated into real-world mobile and computing devices, we need to consider data collected in the wild.
We propose a novel three-stream end-to-end deep learning regression pipeline with an attention mechanism.
arXiv Detail & Related papers (2024-03-24T18:53:57Z)
- Leveraging Previous Facial Action Units Knowledge for Emotion Recognition on Faces [2.4158349218144393]
We propose the usage of Facial Action Units (AUs) recognition techniques to recognize emotions.
This recognition will be based on the Facial Action Coding System (FACS) and computed by a machine learning system.
arXiv Detail & Related papers (2023-11-20T18:14:53Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
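Late fusion, as described in this entry, combines modality-specific models at the decision level rather than at the feature level. A minimal sketch of weighted decision-level fusion over class probabilities; the function name and weighting scheme are illustrative assumptions, not details from the paper:

```python
import numpy as np

def late_fusion(probs_speech, probs_text, weight_speech=0.5):
    """Fuse per-class probabilities from two modality-specific models
    by weighted averaging, then pick the highest-scoring class."""
    probs_speech = np.asarray(probs_speech, dtype=float)
    probs_text = np.asarray(probs_text, dtype=float)
    fused = weight_speech * probs_speech + (1.0 - weight_speech) * probs_text
    return int(np.argmax(fused))
```

The fusion weight is typically tuned on a validation set; unequal weights let the stronger modality dominate without discarding the weaker one.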
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of face muscles movements.
We determine if there are time-related differences on expressions among emotional groups by using a functional F-test.
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory units, and inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
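Weight inflation, mentioned in this entry, bootstraps a 3D-CNN from a pre-trained 2D-CNN by repeating each 2D kernel along the temporal axis and rescaling so activations on a static video match the 2D model's output. A minimal sketch under those assumptions (the function name is illustrative):

```python
import numpy as np

def inflate_2d_kernel(w2d, t):
    """Inflate a 2D conv kernel of shape (kh, kw, c_in, c_out) into a
    3D kernel of shape (t, kh, kw, c_in, c_out) by tiling along time
    and dividing by t, so a static (repeated-frame) input produces the
    same activations as the original 2D convolution."""
    w2d = np.asarray(w2d, dtype=float)
    return np.repeat(w2d[np.newaxis, ...], t, axis=0) / t
```

Summing the inflated kernel over the temporal axis recovers the original 2D kernel, which is what preserves the pre-trained behavior on static inputs.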
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- An adversarial learning framework for preserving users' anonymity in face-based emotion recognition [6.9581841997309475]
This paper proposes an adversarial learning framework which relies on a convolutional neural network (CNN) architecture trained through an iterative procedure.
Results indicate that the proposed approach can learn a convolutional transformation for preserving emotion recognition accuracy and degrading face identity recognition.
arXiv Detail & Related papers (2020-01-16T22:45:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.