Hybrid Facial Expression Recognition (FER2013) Model for Real-Time
Emotion Classification and Prediction
- URL: http://arxiv.org/abs/2206.09509v1
- Date: Sun, 19 Jun 2022 23:43:41 GMT
- Title: Hybrid Facial Expression Recognition (FER2013) Model for Real-Time
Emotion Classification and Prediction
- Authors: Ozioma Collins Oguine (1), Kaleab Alamayehu Kinfu (2), Kanyifeechukwu
Jane Oguine (1), Hashim Ibrahim Bisallah (1), Daniel Ofuani (1) ((1)
Department of Computer Science, University of Abuja, Nigeria, (2) Department
of Computer Science, Johns Hopkins University, Baltimore, USA)
- Abstract summary: This paper proposes a hybrid model for Facial Expression recognition, which comprises a Deep Convolutional Neural Network (DCNN) and Haar Cascade deep learning architectures.
The objective is to classify real-time and digital facial images into one of the seven facial emotion categories considered.
The experimental results show a significantly improved classification performance compared to state-of-the-art experiments and research.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Facial Expression Recognition is a vital research topic in most fields
ranging from artificial intelligence and gaming to Human-Computer Interaction
(HCI) and Psychology. This paper proposes a hybrid model for Facial Expression
recognition, which comprises a Deep Convolutional Neural Network (DCNN) and
Haar Cascade deep learning architectures. The objective is to classify
real-time and digital facial images into one of the seven facial emotion
categories considered. The DCNN employed in this research has additional
convolutional layers, ReLU activation functions, and multiple kernels to
enhance filtering depth and facial feature extraction. In addition, a Haar
Cascade model was used to detect facial features in real-time images and video
frames. Grayscale images from the Kaggle repository (FER-2013) were used, and
Graphics Processing Unit (GPU) computation was exploited to expedite the
training and validation process. Pre-processing and data augmentation
techniques are applied to improve training efficiency and classification
performance. The experimental results show a significantly improved
classification performance compared to state-of-the-art (SoTA) experiments and
research. Compared with other conventional models, the proposed architecture
also achieves superior classification performance, with an improvement of up
to 6% (up to 70% accuracy) and a shorter execution time of 2098.8 s.
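The pre-processing and augmentation steps described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: it assumes 48x48 grayscale inputs (the standard FER-2013 resolution), pixel-value normalization, and simple horizontal-flip augmentation.

```python
import numpy as np

def preprocess(images):
    """Scale 8-bit grayscale pixels to [0, 1] and add a channel axis for a CNN."""
    x = images.astype(np.float32) / 255.0
    return x[..., np.newaxis]          # shape: (N, 48, 48, 1)

def augment_flip(images, labels):
    """Double the training set with horizontally mirrored copies."""
    flipped = images[:, :, ::-1]       # mirror along the width axis
    return (np.concatenate([images, flipped], axis=0),
            np.concatenate([labels, labels], axis=0))

# Tiny demo batch: 4 random 48x48 grayscale "faces", labels in 0..6
# (the seven FER-2013 emotion categories)
rng = np.random.default_rng(0)
batch = rng.integers(0, 256, size=(4, 48, 48), dtype=np.uint8)
labels = np.array([0, 3, 5, 6])

x = preprocess(batch)
x_aug, y_aug = augment_flip(x, labels)
print(x.shape, x_aug.shape)            # (4, 48, 48, 1) (8, 48, 48, 1)
```

In a full pipeline, the Haar Cascade detector would first crop face regions from each frame before these tensors are fed to the DCNN for classification.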
Related papers
- Exploring a Multimodal Fusion-based Deep Learning Network for Detecting Facial Palsy [3.2381492754749632]
We present a multimodal fusion-based deep learning model that utilizes unstructured data and structured data to detect facial palsy.
Our model slightly improved the precision score to 77.05 at the expense of a decrease in the recall score.
arXiv Detail & Related papers (2024-05-26T09:16:34Z)
- Alleviating Catastrophic Forgetting in Facial Expression Recognition with Emotion-Centered Models [49.3179290313959]
The proposed method, emotion-centered generative replay (ECgr), tackles this challenge by integrating synthetic images from generative adversarial networks.
ECgr incorporates a quality assurance algorithm to ensure the fidelity of generated images.
The experimental results on four diverse facial expression datasets demonstrate that incorporating images generated by our pseudo-rehearsal method enhances training on the targeted dataset and the source dataset.
arXiv Detail & Related papers (2024-04-18T15:28:34Z)
- Enhancing Facial Classification and Recognition using 3D Facial Models and Deep Learning [0.30693357740321775]
We integrate 3D facial models with deep learning methods to improve classification accuracy.
Our approach achieves notable results: 100% individual classification, 95.4% gender classification, and 83.5% expression classification accuracy.
arXiv Detail & Related papers (2023-12-08T18:09:29Z)
- Multi-Domain Norm-referenced Encoding Enables Data Efficient Transfer Learning of Facial Expression Recognition [62.997667081978825]
We propose a biologically-inspired mechanism for transfer learning in facial expression recognition.
Our proposed architecture provides an explanation for how the human brain might innately recognize facial expressions on varying head shapes.
Our model achieves a classification accuracy of 92.15% on the FERG dataset with extreme data efficiency.
arXiv Detail & Related papers (2023-04-05T09:06:30Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO presents an improvement in facial expression recognition performance over six different datasets with very unique affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- A Novel Enhanced Convolution Neural Network with Extreme Learning Machine: Facial Emotional Recognition in Psychology Practices [31.159346405039667]
This research aims to improve facial emotion recognition accuracy during the training session and reduce processing time.
The proposed CNNEELM model is trained with JAFFE, CK+, and FER2013 expression datasets.
The simulation results show significant improvements in accuracy and processing time, making the model suitable for the video analysis process.
arXiv Detail & Related papers (2022-08-05T02:21:34Z)
- Facial Emotion Recognition: State of the Art Performance on FER2013 [0.0]
We achieve the highest single-network classification accuracy on the FER2013 dataset.
Our model achieves state-of-the-art single-network accuracy of 73.28% on FER2013 without using extra training data.
arXiv Detail & Related papers (2021-05-08T04:20:53Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short term-memory units, and inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- The FaceChannel: A Fast & Furious Deep Neural Network for Facial Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train.
We formalize the FaceChannel, a light-weight neural network that has much fewer parameters than common deep neural networks.
We demonstrate how our model achieves a comparable, if not better, performance to the current state-of-the-art in FER.
arXiv Detail & Related papers (2020-09-15T09:25:37Z)
- Joint Deep Learning of Facial Expression Synthesis and Recognition [97.19528464266824]
We propose a novel joint deep learning of facial expression synthesis and recognition method for effective FER.
The proposed method involves a two-stage learning procedure. Firstly, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions.
In order to alleviate the problem of data bias between the real images and the synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm.
arXiv Detail & Related papers (2020-02-06T10:56:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.