Facial Emotion Recognition: A multi-task approach using deep learning
- URL: http://arxiv.org/abs/2110.15028v1
- Date: Thu, 28 Oct 2021 11:23:00 GMT
- Title: Facial Emotion Recognition: A multi-task approach using deep learning
- Authors: Aakash Saroop, Pathik Ghugare, Sashank Mathamsetty, Vaibhav Vasani
- Abstract summary: We propose a multi-task learning algorithm, in which a single CNN detects gender, age and race of the subject along with their emotion.
Results show that this approach is significantly better than the current state-of-the-art algorithms for this task.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial Emotion Recognition is an inherently difficult problem, due to vast
differences in the facial structures of individuals and ambiguity in the emotion
displayed by a person. Although a lot of work has recently been done in the field of
Facial Emotion Recognition, the performance of CNNs on this task has been inferior
to the results achieved by CNNs in other fields such as object detection and facial
recognition. In this paper, we propose a multi-task learning algorithm in which a
single CNN detects the gender, age, and race of the subject along with their emotion.
We validate the proposed methodology on two datasets containing real-world images.
The results show that this approach is significantly better than the current
state-of-the-art algorithms for this task.
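To make the multi-task setup concrete, the sketch below shows one way a single shared CNN can predict emotion, gender, age, and race through separate classification heads trained with a weighted sum of cross-entropy losses. The ResNet-18 backbone, the number of classes per head, and the loss weights are illustrative assumptions, not necessarily the configuration used in the paper.

```python
# Minimal multi-task FER sketch (PyTorch). Backbone choice, head sizes, and
# loss weights are assumptions for illustration, not the paper's exact setup.
import torch
import torch.nn as nn
from torchvision import models


class MultiTaskFER(nn.Module):
    """Single shared CNN with separate heads for emotion, gender, age, and race."""

    def __init__(self, n_emotions=7, n_genders=2, n_age_bins=9, n_races=5):
        super().__init__()
        backbone = models.resnet18(weights=None)   # assumed backbone
        feat_dim = backbone.fc.in_features         # 512 for ResNet-18
        backbone.fc = nn.Identity()                # expose pooled features instead of logits
        self.backbone = backbone
        self.emotion_head = nn.Linear(feat_dim, n_emotions)
        self.gender_head = nn.Linear(feat_dim, n_genders)
        self.age_head = nn.Linear(feat_dim, n_age_bins)
        self.race_head = nn.Linear(feat_dim, n_races)

    def forward(self, x):
        f = self.backbone(x)                       # shared features for all four tasks
        return {
            "emotion": self.emotion_head(f),
            "gender": self.gender_head(f),
            "age": self.age_head(f),
            "race": self.race_head(f),
        }


def multitask_loss(outputs, targets, weights=(1.0, 0.3, 0.3, 0.3)):
    """Weighted sum of per-task cross-entropy losses (the weights are assumptions)."""
    ce = nn.CrossEntropyLoss()
    keys = ("emotion", "gender", "age", "race")
    return sum(w * ce(outputs[k], targets[k]) for w, k in zip(weights, keys))


# Usage with random data, just to show the shapes involved.
model = MultiTaskFER()
images = torch.randn(4, 3, 224, 224)
targets = {"emotion": torch.randint(0, 7, (4,)),
           "gender": torch.randint(0, 2, (4,)),
           "age": torch.randint(0, 9, (4,)),
           "race": torch.randint(0, 5, (4,))}
loss = multitask_loss(model(images), targets)
loss.backward()
```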
Related papers
- Alleviating Catastrophic Forgetting in Facial Expression Recognition with Emotion-Centered Models [49.3179290313959]
The proposed method, emotion-centered generative replay (ECgr), tackles this challenge by integrating synthetic images from generative adversarial networks.
ECgr incorporates a quality assurance algorithm to ensure the fidelity of generated images.
The experimental results on four diverse facial expression datasets demonstrate that incorporating images generated by our pseudo-rehearsal method enhances training on the targeted dataset and the source dataset.
arXiv Detail & Related papers (2024-04-18T15:28:34Z) - Emotion Separation and Recognition from a Facial Expression by Generating the Poker Face with Vision Transformers [57.1091606948826]
We propose a novel FER model, named Poker Face Vision Transformer or PF-ViT, to address these challenges.
PF-ViT aims to separate and recognize the disturbance-agnostic emotion from a static facial image by generating its corresponding poker face.
PF-ViT utilizes vanilla Vision Transformers, and its components are pre-trained as Masked Autoencoders on a large facial expression dataset.
arXiv Detail & Related papers (2022-07-22T13:39:06Z) - Real-time Emotion and Gender Classification using Ensemble CNN [0.0]
This paper presents the implementation of an ensemble CNN for building a real-time system that can detect a person's emotion and gender.
Our work can predict emotion and gender in images containing a single face as well as images containing multiple faces.
arXiv Detail & Related papers (2021-11-15T13:51:35Z) - Towards a Real-Time Facial Analysis System [13.649384403827359]
We present a system-level design of a real-time facial analysis system.
With a collection of deep neural networks for object detection, classification, and regression, the system recognizes age, gender, facial expression, and facial similarity for each person that appears in the camera view.
Results on common off-the-shelf architectures show that the system's accuracy is comparable to that of state-of-the-art methods, and the recognition speed satisfies real-time requirements.
arXiv Detail & Related papers (2021-09-21T18:27:15Z) - A Multi-resolution Approach to Expression Recognition in the Wild [9.118706387430883]
We propose a multi-resolution approach to solve the Facial Expression Recognition task.
We ground our intuition on the observation that face images are often acquired at different resolutions.
To this aim, we use a ResNet-like architecture, equipped with Squeeze-and-Excitation blocks, trained on the Affect-in-the-Wild 2 dataset.
arXiv Detail & Related papers (2021-03-09T21:21:02Z) - Spontaneous Emotion Recognition from Facial Thermal Images [0.0]
We show that a large number of facial image processing tasks on thermal infrared images can be addressed with modern learning-based approaches.
We used the USTC-NVIE database to train a number of machine learning algorithms for facial landmark localization.
arXiv Detail & Related papers (2020-12-13T05:55:19Z) - Continuous Emotion Recognition with Spatiotemporal Convolutional Neural
Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks that combine 2D-CNNs with long short-term memory units, as well as inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning (a minimal sketch of this inflation step appears after this list).
arXiv Detail & Related papers (2020-11-18T13:42:05Z) - Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z) - Learning Emotional-Blinded Face Representations [77.7653702071127]
We propose two face representations that are blind to the facial expressions associated with emotional responses.
This work is motivated by new international regulations for personal data protection.
arXiv Detail & Related papers (2020-09-18T09:24:10Z) - Learning to Augment Expressions for Few-shot Fine-grained Facial
Expression Recognition [98.83578105374535]
We present a novel Fine-grained Facial Expression Database - F2ED.
It includes more than 200k images with 54 facial expressions from 119 persons.
Considering that uneven data distribution and a lack of samples are common in real-world scenarios, we evaluate several few-shot expression learning tasks.
We propose a unified task-driven framework - Compositional Generative Adversarial Network (Comp-GAN) learning to synthesize facial images.
arXiv Detail & Related papers (2020-01-17T03:26:32Z) - An adversarial learning framework for preserving users' anonymity in
face-based emotion recognition [6.9581841997309475]
This paper proposes an adversarial learning framework which relies on a convolutional neural network (CNN) architecture trained through an iterative procedure.
Results indicate that the proposed approach can learn a convolutional transformation for preserving emotion recognition accuracy and degrading face identity recognition.
arXiv Detail & Related papers (2020-01-16T22:45:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.