Facial Emotion Characterization and Detection using Fourier Transform
and Machine Learning
- URL: http://arxiv.org/abs/2112.02729v1
- Date: Mon, 6 Dec 2021 01:41:15 GMT
- Title: Facial Emotion Characterization and Detection using Fourier Transform
and Machine Learning
- Authors: Aishwarya Gouru, Shan Suthaharan
- Abstract summary: We present a Fourier-based machine learning technique that characterizes and detects facial emotions.
We test the hypothesis using the performance scores of the random forest (RF) and the artificial neural network (ANN)
Our finding is that the computational emotional frequencies discovered by the proposed approach provide meaningful emotional features.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a Fourier-based machine learning technique that characterizes and
detects facial emotions. The main challenging task in the development of
machine learning (ML) models for classifying facial emotions is the detection
of accurate emotional features from a set of training samples, and the
generation of feature vectors for constructing a meaningful feature space and
building ML models. In this paper, we hypothesize that the emotional features
are hidden in the frequency domain; hence, they can be captured by leveraging
the frequency domain and masking techniques. We also make use of the conjecture
that facial emotions are convolved with the normal facial features and the
other emotional features; however, they carry linearly separable spatial
frequencies (which we call computational emotional frequencies). Hence, we propose a
technique by leveraging fast Fourier transform (FFT) and rectangular
narrow-band frequency kernels, and the widely used Yale-Faces image dataset. We
test the hypothesis using the performance scores of the random forest (RF) and
the artificial neural network (ANN) classifiers as the measures to validate the
effectiveness of the captured emotional frequencies. Our finding is that the
computational emotional frequencies discovered by the proposed approach
provide meaningful emotional features that help RF and ANN achieve high
precision scores above 93%, on average.
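The core idea described above is to isolate narrow spatial-frequency bands with FFT-based rectangular masks and feed the surviving frequency content to an RF or ANN classifier. A minimal NumPy sketch of that masking step, assuming illustrative band half-widths `lo`/`hi` and a random test image (not the paper's actual kernel sizes or data):

```python
import numpy as np

def rect_band_features(img, lo, hi):
    """Zero out all spatial frequencies except a rectangular band of
    half-widths [lo, hi) around the spectrum centre, then return the
    magnitudes inside that band as a flat feature vector."""
    F = np.fft.fftshift(np.fft.fft2(img))           # centred 2D spectrum
    h, w = img.shape
    cy, cx = h // 2, w // 2
    mask = np.zeros((h, w), dtype=bool)
    mask[cy - hi:cy + hi, cx - hi:cx + hi] = True   # outer rectangle kept
    mask[cy - lo:cy + lo, cx - lo:cx + lo] = False  # inner rectangle removed
    return np.abs(F[mask])                          # band magnitudes as features

rng = np.random.default_rng(0)
img = rng.random((64, 64))                          # stand-in for a face image
feats = rect_band_features(img, lo=4, hi=8)
print(feats.shape)                                  # (192,)
```

Feature vectors built this way for each training image could then be passed to any off-the-shelf RF or ANN classifier, which is how the paper measures whether a given band carries emotional information.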
Related papers
- Leveraging Previous Facial Action Units Knowledge for Emotion Recognition on Faces [2.4158349218144393]
We propose the usage of Facial Action Units (AUs) recognition techniques to recognize emotions.
This recognition will be based on the Facial Action Coding System (FACS) and computed by a machine learning system.
arXiv Detail & Related papers (2023-11-20T18:14:53Z)
- Unveiling Emotions from EEG: A GRU-Based Approach [2.580765958706854]
The Gated Recurrent Unit (GRU) algorithm is tested to see whether it can predict emotional states from EEG signals.
Our publicly accessible dataset consists of resting neutral data as well as EEG recordings from people who were exposed to stimuli evoking happy, neutral, and negative emotions.
arXiv Detail & Related papers (2023-07-20T11:04:46Z)
- Emotion Analysis on EEG Signal Using Machine Learning and Neural Network [0.0]
The main purpose of this study is to improve methods for emotion recognition using brain signals.
Various approaches to human-machine interaction technologies have been ongoing for a long time, and in recent years, researchers have had great success in automatically understanding emotion using brain signals.
arXiv Detail & Related papers (2023-07-09T09:50:34Z)
- Multi-Domain Norm-referenced Encoding Enables Data Efficient Transfer Learning of Facial Expression Recognition [62.997667081978825]
We propose a biologically-inspired mechanism for transfer learning in facial expression recognition.
Our proposed architecture provides an explanation for how the human brain might innately recognize facial expressions on varying head shapes.
Our model achieves a classification accuracy of 92.15% on the FERG dataset with extreme data efficiency.
arXiv Detail & Related papers (2023-04-05T09:06:30Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
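Late fusion of the kind this entry describes is conceptually simple: each modality's model emits per-class probabilities, and the framework combines them after the fact. A minimal illustration, where the weight `w` and the probability vectors are invented values rather than the paper's actual fusion scheme:

```python
import numpy as np

def late_fusion(p_speech, p_text, w=0.5):
    """Weighted average of per-class probabilities from two modalities;
    returns the index of the winning emotion class."""
    fused = w * np.asarray(p_speech) + (1.0 - w) * np.asarray(p_text)
    return int(np.argmax(fused))

# Hypothetical per-class probabilities (e.g. angry / happy / sad):
p_speech = [0.6, 0.3, 0.1]   # speech model favours class 0
p_text   = [0.1, 0.6, 0.3]   # text model favours class 1
print(late_fusion(p_speech, p_text))  # 1  (fused: 0.35, 0.45, 0.20)
```

The appeal of late fusion is that each modality's model can be transfer-learned and fine-tuned independently before their outputs are combined.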
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- Improved Speech Emotion Recognition using Transfer Learning and Spectrogram Augmentation [56.264157127549446]
Speech emotion recognition (SER) is a challenging task that plays a crucial role in natural human-computer interaction.
One of the main challenges in SER is data scarcity.
We propose a transfer learning strategy combined with spectrogram augmentation.
arXiv Detail & Related papers (2021-08-05T10:39:39Z)
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of face muscles movements.
We determine if there are time-related differences on expressions among emotional groups by using a functional F-test.
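The functional F-test generalizes the classical one-way ANOVA F statistic from scalar samples to curves; applied pointwise over time, it flags where emotional groups differ. A sketch of the underlying scalar statistic, using the textbook formula with invented sample values (this is not the paper's functional version):

```python
import numpy as np

def one_way_F(groups):
    """Classical one-way ANOVA F statistic: between-group variability
    over within-group variability for k groups of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.mean(np.concatenate(groups))
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three hypothetical groups of muscle-movement measurements:
groups = [np.array([1.0, 2.0, 3.0]),
          np.array([2.0, 3.0, 4.0]),
          np.array([10.0, 11.0, 12.0])]
print(one_way_F(groups))  # ~73.0 -- the third group clearly stands apart
```

A large F value indicates that the group means differ far more than chance within-group variation would explain, which is what the time-related comparison between emotional groups relies on.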
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
- Human Expression Recognition using Facial Shape Based Fourier Descriptors Fusion [15.063379178217717]
This paper aims to produce a new facial expression recognition method based on the changes in the facial muscles.
The geometric features are used to specify the facial regions i.e., mouth, eyes, and nose.
A multi-class support vector machine is applied for classification of the seven human expressions.
arXiv Detail & Related papers (2020-12-28T05:01:44Z)
- An End-to-End Visual-Audio Attention Network for Emotion Recognition in User-Generated Videos [64.91614454412257]
We propose to recognize video emotions in an end-to-end manner based on convolutional neural networks (CNNs)
Specifically, we develop a deep Visual-Audio Attention Network (VAANet), a novel architecture that integrates spatial, channel-wise, and temporal attentions into a visual 3D CNN and temporal attentions into an audio 2D CNN.
arXiv Detail & Related papers (2020-02-12T15:33:59Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.