Facial Expression Recognition using Squeeze and Excitation-powered Swin
Transformers
- URL: http://arxiv.org/abs/2301.10906v7
- Date: Sat, 29 Apr 2023 01:02:43 GMT
- Title: Facial Expression Recognition using Squeeze and Excitation-powered Swin
Transformers
- Authors: Arpita Vats, Aman Chadha
- Abstract summary: We propose a framework that employs Swin Vision Transformers (SwinT) and a Squeeze-and-Excitation (SE) block to address vision tasks.
Our focus was to create an efficient FER model based on the SwinT architecture that can recognize facial emotions using minimal data.
We trained our model on a hybrid dataset and evaluated its performance on the AffectNet dataset, achieving an F1-score of 0.5420.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The ability to recognize and interpret facial emotions is a critical
component of human communication, as it allows individuals to understand and
respond to emotions conveyed through facial expressions and vocal tones. The
recognition of facial emotions is a complex cognitive process that involves the
integration of visual and auditory information, as well as prior knowledge and
social cues. It plays a crucial role in social interaction, affective
processing, and empathy, and is an important aspect of many real-world
applications, including human-computer interaction, virtual assistants, and
mental health diagnosis and treatment. The development of accurate and
efficient models for facial emotion recognition is therefore of great
importance and has the potential to have a significant impact on various fields
of study. The field of Facial Emotion Recognition (FER) is of great significance
in the areas of computer vision and artificial intelligence, with vast
commercial and academic potential in fields such as security, advertising, and
entertainment. We propose a FER framework that employs Swin Vision
Transformers (SwinT) and a Squeeze-and-Excitation (SE) block to address vision
tasks. The approach combines the transformer's attention mechanism with the SE
block and Sharpness-Aware Minimization (SAM) to improve the model's data
efficiency, since transformers typically require large amounts of training
data. Our focus was to create an efficient FER model based on the SwinT
architecture that can recognize facial emotions using minimal data. We trained
our model on a hybrid dataset and evaluated its performance on the AffectNet
dataset, achieving an F1-score of 0.5420, which surpassed the winner of the
Affective Behavior Analysis in the Wild (ABAW) Competition held at the European
Conference on Computer Vision (ECCV) 2022 (Kollias et al.).
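
For illustration, here is a minimal PyTorch sketch of the kind of architecture the abstract describes: SE-style channel gating applied to Swin features ahead of a classification head. The module names, the (B, L, C) token interface, the reduction ratio, and the 8-class head are assumptions made for this sketch, not details taken from the paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation gating over the channel dim of (B, L, C) tokens."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = x.mean(dim=1)            # squeeze: average over the L tokens -> (B, C)
        g = self.fc(s).unsqueeze(1)  # excitation: per-channel gates in [0, 1]
        return x * g                 # reweight every token's channels

class SwinSEClassifier(nn.Module):
    """Backbone tokens -> SE gating -> mean pool -> linear head."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int = 8):
        super().__init__()
        self.backbone = backbone     # any module mapping images to (B, L, C) tokens
        self.se = SEBlock(feat_dim)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = self.se(self.backbone(x))
        return self.head(tokens.mean(dim=1))
```

The appeal of this design is that the SE gate adds only two small linear layers, so it can recalibrate channel responses on top of a pretrained backbone at negligible parameter cost.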
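The abstract also names SAM; assuming this refers to Sharpness-Aware Minimization (Foret et al., 2021), a single training step could look like the sketch below. The function name and the rho value are illustrative, not taken from the paper.

```python
import torch

def sam_step(model, loss_fn, x, y, base_opt, rho=0.05):
    """One SAM update: perturb weights toward the locally sharpest
    direction, then descend using the gradient computed there."""
    # 1) Gradient of the loss at the current weights w.
    loss = loss_fn(model(x), y)
    loss.backward()
    params = [p for p in model.parameters() if p.grad is not None]
    with torch.no_grad():
        grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params]))
        scale = rho / (grad_norm + 1e-12)
        eps = [p.grad * scale for p in params]
        for p, e in zip(params, eps):
            p.add_(e)  # move to w + eps, a nearby high-loss point
    model.zero_grad()
    # 2) Gradient at the perturbed weights drives the actual update.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)  # restore the original weights before stepping
    base_opt.step()
    base_opt.zero_grad()
    return loss.item()
```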
Related papers
- Smile upon the Face but Sadness in the Eyes: Emotion Recognition based on Facial Expressions and Eye Behaviors
We introduce eye behaviors as important emotional cues for the creation of a new Eye-behavior-aided Multimodal Emotion Recognition (EMER) dataset.
For the first time, we provide annotations for both Emotion Recognition (ER) and Facial Expression Recognition (FER) in the EMER dataset.
We specifically design a new EMERT architecture to concurrently enhance performance in both ER and FER.
arXiv Detail & Related papers (2024-11-08T04:53:55Z)
- Emotion Detection through Body Gesture and Face
The project addresses the challenge of emotion recognition by focusing on non-facial cues, specifically hand movements and body gestures.
Traditional emotion recognition systems mainly rely on facial expression analysis and often ignore the rich emotional information conveyed through body language.
The project aims to contribute to the field of affective computing by enhancing the ability of machines to interpret and respond to human emotions in a more comprehensive and nuanced way.
arXiv Detail & Related papers (2024-07-13T15:15:50Z)
- I am Only Happy When There is Light: The Impact of Environmental Changes on Affective Facial Expressions Recognition
We study the impact of different image conditions on the recognition of arousal from human facial expressions.
Our results show how the interpretation of human affective states can differ greatly in either the positive or negative direction.
arXiv Detail & Related papers (2022-10-28T16:28:26Z)
- Data-driven emotional body language generation for social robotics
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Multi-Cue Adaptive Emotion Recognition Network
We propose a new deep learning approach for emotion recognition based on adaptive multi-cues.
We compare the proposed approach with state-of-the-art approaches on the CAER-S dataset.
arXiv Detail & Related papers (2021-11-03T15:08:55Z)
- Domain Adaptation for Facial Expression Classifier via Domain Discrimination and Gradient Reversal
The research in the field of Facial Expression Recognition (FER) has acquired increased interest over the past decade.
We propose a new architecture for the task of FER and examine the impact of domain discrimination loss regularization on the learning process.
arXiv Detail & Related papers (2021-06-02T20:58:24Z)
- I Only Have Eyes for You: The Impact of Masks On Convolutional-Based Facial Expression Recognition
We evaluate how the recently proposed FaceChannel adapts towards recognizing facial expressions from persons with masks.
We also perform feature-level visualization to demonstrate how the FaceChannel's inherent ability to learn and combine facial features changes in a constrained social interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z)
- Learning Emotional-Blinded Face Representations
We propose two face representations that are blind to the facial expressions associated with emotional responses.
This work is motivated by new international regulations for personal data protection.
arXiv Detail & Related papers (2020-09-18T09:24:10Z)
- Introducing Representations of Facial Affect in Automated Multimodal Deception Detection
Automated deception detection systems can enhance health, justice, and security in society.
This paper presents a novel analysis of the power of dimensional representations of facial affect for automated deception detection.
We used a video dataset of people communicating truthfully or deceptively in real-world, high-stakes courtroom situations.
arXiv Detail & Related papers (2020-08-31T05:12:57Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor
It is crucial that a machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.