A Novel Enhanced Convolution Neural Network with Extreme Learning
Machine: Facial Emotional Recognition in Psychology Practices
- URL: http://arxiv.org/abs/2208.02953v1
- Date: Fri, 5 Aug 2022 02:21:34 GMT
- Title: A Novel Enhanced Convolution Neural Network with Extreme Learning
Machine: Facial Emotional Recognition in Psychology Practices
- Authors: Nitesh Banskota, Abeer Alsadoon, P.W.C. Prasad, Ahmed Dawoud, Tarik A.
Rashid, Omar Hisham Alsadoon
- Abstract summary: This research aims to improve facial emotion recognition accuracy during the training session and reduce processing time.
The proposed CNNEELM model is trained with JAFFE, CK+, and FER2013 expression datasets.
The simulation results show significant improvements in accuracy and processing time, making the model suitable for the video analysis process.
- Score: 31.159346405039667
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Facial emotional recognition is one of the essential tools
psychologists use to diagnose patients. Face recognition and facial emotion
recognition are areas where machine learning excels. Facial Emotion
Recognition in an unconstrained environment is an open challenge for digital
image processing due to different environments, such as lighting conditions,
pose variation, yaw motion, and occlusions. Deep learning approaches have shown
significant improvements in image recognition. However, accuracy and time still
need improvements. This research aims to improve facial emotion recognition
accuracy during the training session and reduce processing time using a
modified Convolution Neural Network Enhanced with Extreme Learning Machine
(CNNEELM). The CNNEELM system improves the accuracy of image registration
during the training session. Furthermore, the system recognizes six facial
emotions: happy, sad, disgust, fear, surprise, and neutral, with the
proposed CNNEELM model. The study shows that the overall facial emotion
recognition accuracy is improved by 2% over state-of-the-art solutions with a
modified Stochastic Gradient Descent (SGD) technique. With the Extreme Learning
Machine (ELM) classifier, the processing time is brought down to 65ms from
113ms, which can smoothly classify each frame from a video clip at 20fps. With
the pre-trained InceptionV3 model, the proposed CNNEELM model is trained with
JAFFE, CK+, and FER2013 expression datasets. The simulation results show
significant improvements in accuracy and processing time, making the model
suitable for the video analysis process. In addition, the study addresses the
long processing time required to process facial images.
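The speed-up the abstract attributes to the ELM classifier comes from its closed-form training: hidden-layer weights are fixed at random and only the output weights are solved by regularized least squares, so no iterative backpropagation is needed at the classifier stage. The following is a minimal sketch of that idea, standing in for the paper's classifier; the hidden-layer size, sigmoid activation, ridge term, and toy feature data are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(features, labels, n_hidden=256, ridge=1e-3):
    """Train an ELM: random fixed hidden layer, output weights solved in closed form."""
    n_classes = labels.max() + 1
    W = rng.standard_normal((features.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                        # random biases (never trained)
    H = 1.0 / (1.0 + np.exp(-(features @ W + b)))            # hidden activations (sigmoid)
    T = np.eye(n_classes)[labels]                            # one-hot targets
    # Ridge-regularized least squares for the output weights -- no iterative training
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(features, model):
    W, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(features @ W + b)))
    return (H @ beta).argmax(axis=1)

# Toy usage: stand-in "CNN features" for two well-separated classes
X = np.vstack([rng.normal(0, 1, (50, 64)), rng.normal(3, 1, (50, 64))])
y = np.array([0] * 50 + [1] * 50)
model = elm_fit(X, y)
print((elm_predict(X, model) == y).mean())  # training accuracy; should be near 1.0 on this toy data
```

In the paper's pipeline the `features` would come from the CNN (e.g. InceptionV3) feature extractor rather than synthetic data; inference then costs only two matrix multiplications, which is why per-frame classification time can drop substantially.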
Related papers
- Leaving Some Facial Features Behind [0.0]
This study examines how specific facial features influence emotion classification, using facial perturbations on the FER2013 dataset.
Models trained on data with some important facial features removed experienced up to an 85% accuracy drop compared to the baseline for emotions like happy and surprise.
arXiv Detail & Related papers (2024-10-29T02:28:53Z)
- Alleviating Catastrophic Forgetting in Facial Expression Recognition with Emotion-Centered Models [49.3179290313959]
The proposed method, emotion-centered generative replay (ECgr), tackles this challenge by integrating synthetic images from generative adversarial networks.
ECgr incorporates a quality assurance algorithm to ensure the fidelity of generated images.
The experimental results on four diverse facial expression datasets demonstrate that incorporating images generated by our pseudo-rehearsal method enhances training on the targeted dataset and the source dataset.
arXiv Detail & Related papers (2024-04-18T15:28:34Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptati On (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six different datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Hybrid Facial Expression Recognition (FER2013) Model for Real-Time Emotion Classification and Prediction [0.0]
This paper proposes a hybrid model for Facial Expression recognition, which comprises a Deep Convolutional Neural Network (DCNN) and Haar Cascade deep learning architectures.
The objective is to classify real-time and digital facial images into one of the seven facial emotion categories considered.
The experimental results show a significantly improved classification performance compared to state-of-the-art experiments and research.
arXiv Detail & Related papers (2022-06-19T23:43:41Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory (LSTM) units, and inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- The FaceChannel: A Fast & Furious Deep Neural Network for Facial Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train.
We formalize the FaceChannel, a light-weight neural network that has much fewer parameters than common deep neural networks.
We demonstrate how our model achieves a comparable, if not better, performance to the current state-of-the-art in FER.
arXiv Detail & Related papers (2020-09-15T09:25:37Z)
- Real-time Facial Expression Recognition "In The Wild" by Disentangling 3D Expression from Identity [6.974241731162878]
This paper proposes a novel method for human emotion recognition from a single RGB image.
We construct a large-scale dataset of facial videos, rich in facial dynamics, identities, expressions, appearance and 3D pose variations.
Our proposed framework runs at 50 frames per second and is capable of robustly estimating parameters of 3D expression variation.
arXiv Detail & Related papers (2020-05-12T01:32:55Z)
- TimeConvNets: A Deep Time Windowed Convolution Neural Network Design for Real-time Video Facial Expression Recognition [93.0013343535411]
This study explores a novel deep time windowed convolutional neural network design (TimeConvNets) for the purpose of real-time video facial expression recognition.
We show that TimeConvNets can better capture the transient nuances of facial expressions and boost classification accuracy while maintaining a low inference time.
arXiv Detail & Related papers (2020-03-03T20:58:52Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the user's emotional state with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.