I Only Have Eyes for You: The Impact of Masks On Convolutional-Based
Facial Expression Recognition
- URL: http://arxiv.org/abs/2104.08353v1
- Date: Fri, 16 Apr 2021 20:03:30 GMT
- Title: I Only Have Eyes for You: The Impact of Masks On Convolutional-Based
Facial Expression Recognition
- Authors: Pablo Barros, Alessandra Sciutti
- Abstract summary: We evaluate how the recently proposed FaceChannel adapts towards recognizing facial expressions from persons with masks.
We also perform specific feature-level visualization to demonstrate how the inherent capabilities of the FaceChannel to learn and combine facial features change when in a constrained social interaction scenario.
- Score: 78.07239208222599
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The current COVID-19 pandemic has shown us that we are still facing
unpredictable challenges in our society. The necessary constraints on social interactions have heavily affected how we envision and prepare the future of social robots and artificial agents in general. Adapting current affective perception
models towards constrained perception based on the hard separation between
facial perception and affective understanding would help us to provide robust
systems. In this paper, we perform an in-depth analysis of how recognizing
affect from persons with masks differs from general facial expression
perception. We evaluate how the recently proposed FaceChannel adapts towards
recognizing facial expressions from persons with masks. In our analysis, we evaluate different training and fine-tuning schemes to better understand the
impact of masked facial expressions. We also perform specific feature-level
visualization to demonstrate how the inherent capabilities of the FaceChannel
to learn and combine facial features change when in a constrained social
interaction scenario.
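As a rough illustration of the kind of fine-tuning experiment described in the abstract, the sketch below simulates masked faces by occluding the lower half of each input image and then fine-tunes only the classifier head of a frozen, pretrained convolutional backbone. The ResNet backbone, the occlusion fraction, and the seven-class output are illustrative assumptions; the paper's actual model is the FaceChannel.

```python
# Minimal sketch, not the authors' FaceChannel pipeline: simulate a medical mask
# by occluding the lower part of each face image, then fine-tune only the
# classifier head of a frozen, pretrained convolutional backbone.
import torch
import torch.nn as nn
import torchvision.models as models


def occlude_lower_face(images: torch.Tensor, fraction: float = 0.5) -> torch.Tensor:
    """Zero out the bottom `fraction` of each image as a crude stand-in for a mask."""
    occluded = images.clone()
    height = images.shape[-2]
    occluded[..., int(height * (1 - fraction)):, :] = 0.0
    return occluded


# Illustrative backbone and class count; the paper uses the FaceChannel, not ResNet.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False                      # freeze convolutional features
backbone.fc = nn.Linear(backbone.fc.in_features, 7)  # assume 7 expression classes

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()


def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on synthetically masked faces."""
    logits = backbone(occlude_lower_face(images))
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    loss_value = loss.item()
    optimizer.step()
    return loss_value
```

Comparing accuracy before and after such fine-tuning, on both occluded and unoccluded test images, is one simple way to quantify the kind of impact of masks the paper investigates.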
Related papers
- ComFace: Facial Representation Learning with Synthetic Data for Comparing Faces [5.07975834105566]
We propose a facial representation learning method using synthetic images for comparing faces, called ComFace.
For effective representation learning, ComFace aims to acquire two feature representations, i.e., inter-personal facial differences and intra-personal facial changes.
Our ComFace, trained using only synthetic data, achieves transfer performance comparable to or better than general pre-training and state-of-the-art representation learning methods trained using real images.
arXiv Detail & Related papers (2024-05-25T02:44:07Z) - Towards mitigating uncann(eye)ness in face swaps via gaze-centric loss
terms [4.814908894876767]
Face swapping algorithms place no emphasis on the eyes, relying on pixel or feature matching losses that consider the entire face to guide the training process.
We propose a novel loss equation for the training of face swapping models, leveraging a pretrained gaze estimation network to directly improve the representation of the eyes (a rough sketch of such a loss term appears after this list).
Our findings have implications for face swapping in special effects, digital avatars, privacy mechanisms, and more.
arXiv Detail & Related papers (2024-02-05T16:53:54Z) - Emotion Recognition for Challenged People Facial Appearance in Social
using Neural Network [0.0]
Facial expressions are fed to a CNN to categorize the acquired image into different emotion categories.
This paper proposes an approach for face- and illumination-invariant recognition of facial expressions from images.
arXiv Detail & Related papers (2023-05-11T14:38:27Z) - Medical Face Masks and Emotion Recognition from the Body: Insights from
a Deep Learning Perspective [31.55798962786664]
The COVID-19 pandemic has forced people to extensively wear medical face masks, in order to prevent transmission.
This paper conducts insightful studies about the effect of face occlusion on emotion recognition performance.
We utilize a deep learning model based on the Temporal Segment Network framework, and aspire to fully overcome the consequences of face masks.
arXiv Detail & Related papers (2023-02-20T15:07:24Z) - Robustness Disparities in Face Detection [64.71318433419636]
We present a first-of-its-kind detailed benchmark of face detection systems, specifically examining the robustness to noise of commercial and academic models.
Across all the datasets and systems, we generally find that photos of individuals who are masculine presenting, older, of darker skin type, or photographed in dim lighting are more susceptible to errors than their counterparts in other identities.
arXiv Detail & Related papers (2022-11-29T05:22:47Z) - CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial
Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six different datasets with very distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - Emotion Separation and Recognition from a Facial Expression by Generating the Poker Face with Vision Transformers [57.1091606948826]
We propose a novel FER model, named Poker Face Vision Transformer or PF-ViT, to address these challenges.
PF-ViT aims to separate and recognize the disturbance-agnostic emotion from a static facial image via generating its corresponding poker face.
PF-ViT utilizes vanilla Vision Transformers, and its components are pre-trained as Masked Autoencoders on a large facial expression dataset.
arXiv Detail & Related papers (2022-07-22T13:39:06Z) - Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z) - Learning Emotional-Blinded Face Representations [77.7653702071127]
We propose two face representations that are blind to facial expressions associated with emotional responses.
This work is motivated by new international regulations for personal data protection.
arXiv Detail & Related papers (2020-09-18T09:24:10Z)
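For the gaze-centric loss entry above, the sketch below gives one plausible form of such a loss term: a frozen, pretrained gaze estimator compares the gaze predicted on the swapped face against the target face. The gaze estimator, the L1 distance, and the weighting are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of a gaze-centric loss term for training a face-swapping
# model: penalize the difference between the gaze predicted on the swapped face
# and on the target face, using a frozen, pretrained gaze estimation network.
import torch
import torch.nn as nn
import torch.nn.functional as F


def gaze_centric_loss(swapped: torch.Tensor,
                      target: torch.Tensor,
                      gaze_estimator: nn.Module,
                      weight: float = 1.0) -> torch.Tensor:
    """Illustrative form: weight * ||g(swapped) - g(target)||_1."""
    with torch.no_grad():
        target_gaze = gaze_estimator(target)   # reference gaze, no gradients
    swapped_gaze = gaze_estimator(swapped)     # gradients flow back to the swap model
    return weight * F.l1_loss(swapped_gaze, target_gaze)
```

In practice such a term would be added to the usual pixel- or feature-matching losses, so that the eyes receive explicit supervision rather than being treated like any other region of the face.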
This list is automatically generated from the titles and abstracts of the papers on this site.