The FaceChannel: A Fast & Furious Deep Neural Network for Facial
Expression Recognition
- URL: http://arxiv.org/abs/2009.07635v1
- Date: Tue, 15 Sep 2020 09:25:37 GMT
- Title: The FaceChannel: A Fast & Furious Deep Neural Network for Facial
Expression Recognition
- Authors: Pablo Barros, Nikhil Churamani and Alessandra Sciutti
- Abstract summary: Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train.
We formalize the FaceChannel, a light-weight neural network with far fewer parameters than common deep neural networks.
We demonstrate how our model achieves performance comparable to, if not better than, the current state-of-the-art in FER.
- Score: 71.24825724518847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current state-of-the-art models for automatic Facial Expression Recognition
(FER) are based on very deep neural networks that are effective but rather
expensive to train. Given the dynamic conditions of FER, this characteristic
hinders such models from being used for general affect recognition. In this
paper, we address this problem by formalizing the FaceChannel, a light-weight
neural network with far fewer parameters than common deep neural networks.
We introduce an inhibitory layer that helps to shape the learning of facial
features in the last layer of the network, improving performance while
reducing the number of trainable parameters. To evaluate our model, we perform
a series of experiments on different benchmark datasets and demonstrate how the
FaceChannel achieves performance comparable to, if not better than, the current
state-of-the-art in FER. Our experiments include a cross-dataset analysis to
estimate how our model behaves under different affective recognition conditions.
We conclude our paper with an analysis of how the FaceChannel learns and adapts
the learned facial features to the different datasets.
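The abstract only names the architectural ingredients; as a rough illustration of the core idea (a small convolutional backbone whose last convolutional stage is modulated by a learned inhibitory response), here is a minimal sketch assuming PyTorch. The layer sizes, the shunting-style division, and the 8-class head are illustrative assumptions, not the authors' exact FaceChannel configuration.

```python
# Minimal sketch of a light-weight FER network whose last convolutional stage
# uses a shunting-style inhibitory layer, in the spirit of the FaceChannel.
# Layer sizes and the output dimensionality are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ShuntingInhibitionConv(nn.Module):
    """Excitatory conv response divided by a learned inhibitory response."""

    def __init__(self, in_channels, out_channels, decay=1.0):
        super().__init__()
        self.excitatory = nn.Conv2d(in_channels, out_channels, 3, padding=1)
        self.inhibitory = nn.Conv2d(in_channels, out_channels, 3, padding=1)
        # Passive decay term keeps the denominator away from zero.
        self.decay = nn.Parameter(torch.full((out_channels, 1, 1), decay))

    def forward(self, x):
        excite = F.relu(self.excitatory(x))
        inhibit = F.relu(self.inhibitory(x))
        return excite / (inhibit + self.decay.abs() + 1e-6)


class LightweightFaceChannel(nn.Module):
    """Small VGG-style backbone ending in a shunting-inhibition stage."""

    def __init__(self, num_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            ShuntingInhibitionConv(64, 64), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(128, num_classes),
        )

    def forward(self, x):  # x: (batch, 1, 64, 64) grayscale face crops
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = LightweightFaceChannel()
    logits = model(torch.randn(2, 1, 64, 64))
    print(logits.shape)  # torch.Size([2, 8])
    print(sum(p.numel() for p in model.parameters()))  # a few hundred thousand
```

In this configuration the whole network stays in the low hundreds of thousands of parameters, which is the kind of budget the abstract contrasts with very deep FER models.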
Related papers
- Multi-Branch Deep Radial Basis Function Networks for Facial Emotion Recognition [80.35852245488043]
We propose a CNN-based architecture enhanced with multiple branches formed by radial basis function (RBF) units.
RBF units capture local patterns shared by similar instances using an intermediate representation.
We show that it is the incorporation of local information that makes the proposed model competitive (a minimal RBF-unit sketch follows this list).
arXiv Detail & Related papers (2021-09-07T21:05:56Z)
- Exploiting Emotional Dependencies with Graph Convolutional Networks for Facial Expression Recognition [31.40575057347465]
This paper proposes a novel multi-task learning framework to recognize facial expressions in-the-wild.
A shared feature representation is learned for both discrete and continuous recognition in an MTL setting.
The results of our experiments show that our method outperforms the current state-of-the-art methods on discrete FER.
arXiv Detail & Related papers (2021-06-07T10:20:05Z)
- Facial Emotion Recognition: State of the Art Performance on FER2013 [0.0]
We achieve the highest single-network classification accuracy on the FER2013 dataset: 73.28% without using extra training data.
arXiv Detail & Related papers (2021-05-08T04:20:53Z)
- Facial expression and attributes recognition based on multi-task learning of lightweight neural networks [9.162936410696409]
We examine the multi-task training of lightweight convolutional neural networks for face identification and classification of facial attributes.
It is shown that it is still necessary to fine-tune these networks in order to predict facial expressions.
Several models are presented based on MobileNet, EfficientNet and RexNet architectures.
arXiv Detail & Related papers (2021-03-31T14:21:04Z)
- Improving DeepFake Detection Using Dynamic Face Augmentation [0.8793721044482612]
Most publicly available DeepFake detection datasets have limited variations.
Deep neural networks tend to overfit to the facial features instead of learning to detect manipulation features of DeepFake content.
We introduce Face-Cutout, a data augmentation method for training Convolutional Neural Networks (CNN) to improve DeepFake detection.
arXiv Detail & Related papers (2021-02-18T20:25:45Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory (LSTM) units, as well as inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- Video-based Facial Expression Recognition using Graph Convolutional Networks [57.980827038988735]
We introduce a Graph Convolutional Network (GCN) layer into a common CNN-RNN based model for video-based facial expression recognition.
We evaluate our method on three widely used datasets, CK+, Oulu-CASIA and MMI, as well as one challenging in-the-wild dataset, AFEW 8.0.
arXiv Detail & Related papers (2020-10-26T07:31:51Z)
- The FaceChannelS: Strike of the Sequences for the AffWild 2 Challenge [80.07590100872548]
In this paper, we present one more chapter of benchmarking different versions of the FaceChannel neural network.
We show how our little model can predict affective information from facial expressions on the novel AffWild2 dataset.
arXiv Detail & Related papers (2020-10-04T12:00:48Z)
- The FaceChannel: A Light-weight Deep Neural Network for Facial Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic FER are based on very deep neural networks that are difficult to train.
We formalize the FaceChannel, a light-weight neural network with far fewer parameters than common deep neural networks.
We demonstrate how the FaceChannel achieves performance comparable to, if not better than, the current state-of-the-art in FER.
arXiv Detail & Related papers (2020-04-17T12:03:14Z)
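The multi-branch RBF entry above mentions RBF units that respond to local patterns shared by similar instances; the sketch referenced in that entry follows. It is a minimal, hedged illustration assuming PyTorch, learned Gaussian centres, and an invented 128-dimensional branch input, not the authors' exact branch design.

```python
# Minimal sketch of a Gaussian RBF unit that could sit on a CNN branch,
# in the spirit of the multi-branch RBF entry above. The Gaussian form,
# the learned centres, and the 128-d input are illustrative assumptions.
import torch
import torch.nn as nn


class RBFUnit(nn.Module):
    """Gaussian RBF activations over learned centres in feature space."""

    def __init__(self, in_features, num_centres):
        super().__init__()
        self.centres = nn.Parameter(torch.randn(num_centres, in_features))
        self.log_sigma = nn.Parameter(torch.zeros(num_centres))

    def forward(self, x):  # x: (batch, in_features)
        dist2 = torch.cdist(x, self.centres).pow(2)   # squared distance to each centre
        sigma2 = self.log_sigma.exp().pow(2)          # per-centre bandwidth
        return torch.exp(-dist2 / (2 * sigma2))       # (batch, num_centres)


if __name__ == "__main__":
    # A branch might pool an intermediate CNN feature map into a vector and
    # feed it to the RBF unit; a random 128-d vector stands in for that here.
    branch = RBFUnit(in_features=128, num_centres=16)
    similarities = branch(torch.randn(4, 128))
    print(similarities.shape)  # torch.Size([4, 16])
```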