Deep Convolutional Neural Network Based Facial Expression Recognition in the Wild
- URL: http://arxiv.org/abs/2010.01301v1
- Date: Sat, 3 Oct 2020 08:17:00 GMT
- Title: Deep Convolutional Neural Network Based Facial Expression Recognition in the Wild
- Authors: Hafiq Anas, Bacha Rehman, Wee Hong Ong
- Abstract summary: We used our proposed deep convolutional neural network (CNN) model to perform automatic facial expression recognition (AFER) on the given dataset. Our proposed model achieved an accuracy of 50.77% and an F1 score of 29.16% on the validation set.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper describes the proposed methodology, the data used, and the results of our participation in Challenge Track 2 (Expr Challenge Track) of the Affective Behavior Analysis in-the-wild (ABAW) Competition 2020. In this competition, we used our proposed deep convolutional neural network (CNN) model to perform automatic facial expression recognition (AFER) on the given dataset. Our proposed model achieved an accuracy of 50.77% and an F1 score of 29.16% on the validation set.
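The architecture itself is not reproduced in this listing; as a minimal PyTorch sketch of a deep CNN expression classifier of this kind (the layer sizes, 48x48 grayscale input, and 7-class head are illustrative assumptions, not the authors' exact design):

```python
# Minimal sketch of a deep CNN expression classifier (PyTorch).
# Layer sizes, the 48x48 grayscale input, and the 7-class head are
# illustrative assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn

class ExpressionCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),  # 48x48 -> 24x24
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),  # 24x24 -> 12x12
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.MaxPool2d(2),  # 12x12 -> 6x6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

logits = ExpressionCNN()(torch.randn(8, 1, 48, 48))  # -> (8, 7) class scores
```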
Related papers
- Exploring Facial Expression Recognition through Semi-Supervised Pretraining and Temporal Modeling [8.809586885539002]
This paper presents our approach for the 6th Affective Behavior Analysis in-the-Wild (ABAW) competition, where our method achieved strong results on the official validation set.
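The abstract does not spell out the semi-supervised recipe; a common pretraining ingredient that matches the description is confidence-thresholded pseudo-labeling, sketched here as a generic illustration (the threshold and loss are assumptions, not necessarily the authors' method):

```python
# Generic pseudo-labeling step, a common semi-supervised pretraining recipe.
# The confidence threshold and the masked cross-entropy are assumptions.
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, unlabeled_batch, threshold=0.95):
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_batch), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf >= threshold            # keep only confident predictions
    logits = model(unlabeled_batch)
    loss = F.cross_entropy(logits, pseudo, reduction="none")
    return (loss * mask.float()).mean()     # zero loss on low-confidence samples
```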
arXiv Detail & Related papers (2024-03-18T16:36:54Z)
- Learning Diversified Feature Representations for Facial Expression Recognition in the Wild [97.14064057840089]
We propose a mechanism to diversify the features extracted by CNN layers of state-of-the-art facial expression recognition architectures.
Experimental results on three well-known facial expression recognition in-the-wild datasets, AffectNet, FER+, and RAF-DB, show the effectiveness of our method.
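The exact diversification mechanism is in the paper; one plausible rendering of the idea is a regularizer that penalizes pairwise similarity between parallel feature branches (an illustrative assumption, not the paper's formulation):

```python
# One plausible diversity regularizer: penalize pairwise cosine similarity
# between K parallel feature branches. This illustrates the general idea,
# not the exact mechanism proposed in the paper.
import torch
import torch.nn.functional as F

def diversity_penalty(branches):             # list of (B, D) feature tensors
    feats = [F.normalize(f, dim=1) for f in branches]
    penalty = 0.0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            penalty = penalty + (feats[i] * feats[j]).sum(dim=1).abs().mean()
    return penalty                            # add to the classification loss

reg = diversity_penalty([torch.randn(4, 128) for _ in range(3)])
```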
arXiv Detail & Related papers (2022-10-17T19:25:28Z)
- Spatial and Temporal Networks for Facial Expression Recognition in the Wild Videos [14.760435737320744]
The paper describes our proposed methodology for the seven basic expression classification track of Affective Behavior Analysis in-the-wild (ABAW) Competition 2021.
Our ensemble model achieved an F1 score of 0.4133, an accuracy of 0.6216, and a final metric of 0.4821 on the validation set.
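As a quick consistency check, these figures match the ABAW expression-track score, which combines F1 and accuracy with weights 0.67 and 0.33:

```python
# The reported "final metric" is consistent with the ABAW expression-track
# score 0.67 * F1 + 0.33 * accuracy:
f1, acc = 0.4133, 0.6216
print(0.67 * f1 + 0.33 * acc)  # ~0.4820, matching the reported 0.4821
```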
arXiv Detail & Related papers (2021-07-12T01:41:23Z)
- Two-Stream Consensus Network: Submission to HACS Challenge 2021 Weakly-Supervised Learning Track [78.64815984927425]
The goal of weakly-supervised temporal action localization is to temporally locate and classify actions of interest in untrimmed videos.
We adopt the two-stream consensus network (TSCN) as the main framework in this challenge.
Our solution ranked 2nd in this challenge, and we hope our method can serve as a baseline for future academic research.
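A minimal sketch of the consensus idea, assuming frame-level attention sequences from RGB and optical-flow streams (shapes and the binarized pseudo-label target are assumptions; the actual TSCN refines its pseudo ground truth iteratively):

```python
# Sketch of the two-stream "consensus" idea: frame-level attention from the
# RGB and optical-flow streams is fused into a pseudo ground truth that then
# supervises both streams. Shapes and the BCE target are assumptions.
import torch
import torch.nn.functional as F

def consensus_loss(att_rgb, att_flow):        # (B, T) attention in [0, 1]
    pseudo = ((att_rgb + att_flow) / 2).detach()   # fused consensus target
    pseudo = (pseudo > 0.5).float()                # binarized pseudo labels
    return (F.binary_cross_entropy(att_rgb, pseudo)
            + F.binary_cross_entropy(att_flow, pseudo))

loss = consensus_loss(torch.rand(2, 50), torch.rand(2, 50))
```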
arXiv Detail & Related papers (2021-06-21T03:36:36Z)
- Technical Report for Valence-Arousal Estimation on Affwild2 Dataset [0.0]
We tackle the valence-arousal estimation challenge from the ABAW FG-2020 Competition.
We use the MIMAMO Net model [deng2020mimamo] to obtain information about micro-motion and macro-motion.
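MIMAMO Net's specific design is not detailed in this summary; as a generic sketch of video-based valence-arousal regression, a recurrent head over per-frame features with outputs in [-1, 1] looks like this (purely illustrative, not the MIMAMO architecture):

```python
# Generic video valence-arousal regressor: a GRU over per-frame CNN features
# with a tanh head mapping to [-1, 1]. Illustrates the task setup only;
# this is not the MIMAMO Net architecture.
import torch
import torch.nn as nn

class VARegressor(nn.Module):
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)      # (valence, arousal)

    def forward(self, feats):                 # feats: (B, T, feat_dim)
        out, _ = self.gru(feats)
        return torch.tanh(self.head(out))     # (B, T, 2) in [-1, 1]

va = VARegressor()(torch.randn(2, 16, 512))
```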
arXiv Detail & Related papers (2021-05-04T14:00:07Z)
- Facial expression and attributes recognition based on multi-task learning of lightweight neural networks [9.162936410696409]
We examine the multi-task training of lightweight convolutional neural networks for face identification and classification of facial attributes.
It is shown that it is still necessary to fine-tune these networks in order to predict facial expressions.
Several models are presented based on MobileNet, EfficientNet and RexNet architectures.
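A minimal sketch of such a multi-task setup, assuming a shared MobileNetV2 trunk with separate expression and attribute heads (the head sizes are assumptions, and generic ImageNet weights stand in for face-specific pretraining):

```python
# Multi-task heads on a lightweight backbone: a shared MobileNetV2 trunk
# with separate expression and attribute heads. Head sizes are assumed;
# torchvision's ImageNet weights stand in for face-specific pretraining.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class MultiTaskFaceNet(nn.Module):
    def __init__(self, num_expressions=7, num_attributes=40):
        super().__init__()
        self.backbone = mobilenet_v2(weights="IMAGENET1K_V1").features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.expr_head = nn.Linear(1280, num_expressions)
        self.attr_head = nn.Linear(1280, num_attributes)

    def forward(self, x):
        z = self.pool(self.backbone(x)).flatten(1)  # (B, 1280) shared features
        return self.expr_head(z), self.attr_head(z)

expr_logits, attr_logits = MultiTaskFaceNet()(torch.randn(2, 3, 224, 224))
```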
arXiv Detail & Related papers (2021-03-31T14:21:04Z)
- Expression Recognition Analysis in the Wild [9.878384185493623]
We report the details and experimental results of a facial expression recognition method built on state-of-the-art components.
We fine-tuned a SeNet deep learning architecture pre-trained on the well-known VGGFace2 dataset.
This paper is also submitted, as required by the Affective Behavior Analysis in-the-wild (ABAW) competition, so that this approach can be evaluated on the test set.
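A minimal sketch of that fine-tuning recipe, with timm's ImageNet-pretrained 'seresnet50' standing in for the VGGFace2-pretrained SeNet (an assumption for illustration):

```python
# Fine-tuning sketch: load a pre-trained SE network, swap in a 7-class
# expression head, and train only the new head at first. timm's ImageNet
# 'seresnet50' stands in here for the VGGFace2-pretrained SeNet.
import timm
import torch

model = timm.create_model("seresnet50", pretrained=True, num_classes=7)
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")   # freeze all but the new head

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
```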
arXiv Detail & Related papers (2021-01-22T17:28:31Z)
- The FaceChannelS: Strike of the Sequences for the AffWild 2 Challenge [80.07590100872548]
In this paper, we present one more chapter of benchmarking different versions of the FaceChannel neural network.
We show how our little model can predict affective information from facial expressions on the novel AffWild2 dataset.
arXiv Detail & Related papers (2020-10-04T12:00:48Z)
- The FaceChannel: A Fast & Furious Deep Neural Network for Facial Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train.
We formalize the FaceChannel, a light-weight neural network that has much fewer parameters than common deep neural networks.
We demonstrate how our model achieves performance comparable to, if not better than, the current state of the art in FER.
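To illustrate the parameter-budget point, a small VGG-style trunk of this flavor stays at roughly 24k parameters; this is not the published FaceChannel topology, just a comparably small sketch:

```python
# Illustration of the "light-weight" point: a small VGG-style trunk stays far
# below typical deep-FER parameter budgets. Not the published FaceChannel
# topology, just a comparably small sketch.
import torch
import torch.nn as nn

tiny = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(64, 7),
)
print(sum(p.numel() for p in tiny.parameters()))  # ~24k parameters
```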
arXiv Detail & Related papers (2020-09-15T09:25:37Z)
- The FaceChannel: A Light-weight Deep Neural Network for Facial Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic FER are based on very deep neural networks that are difficult to train.
We formalize the FaceChannel, a light-weight neural network that has much fewer parameters than common deep neural networks.
We demonstrate how the FaceChannel achieves performance comparable to, if not better than, the current state of the art in FER.
arXiv Detail & Related papers (2020-04-17T12:03:14Z)
- Suppressing Uncertainties for Large-Scale Facial Expression Recognition [81.51495681011404]
This paper proposes a simple yet efficient Self-Cure Network (SCN) that suppresses uncertainties effectively and prevents deep networks from over-fitting uncertain facial images.
Results on public benchmarks demonstrate that our SCN outperforms current state-of-the-art methods with 88.14% on RAF-DB, 60.23% on AffectNet, and 89.35% on FERPlus.
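One core ingredient of SCN is a rank-regularization loss over sample importance weights; a minimal sketch follows, with the split ratio and margin as assumed values:

```python
# Sketch of SCN's rank-regularization idea: per-sample importance weights are
# sorted, split into high/low groups, and the high-group mean is pushed to
# exceed the low-group mean by a margin. Ratio and margin values are assumed.
import torch
import torch.nn.functional as F

def rank_regularization(weights, high_ratio=0.7, margin=0.15):
    w, _ = torch.sort(weights, descending=True)   # (B,) attention weights
    k = int(high_ratio * w.numel())
    return F.relu(margin - (w[:k].mean() - w[k:].mean()))

loss_rr = rank_regularization(torch.rand(32))
```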
arXiv Detail & Related papers (2020-02-24T17:24:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.