Learning Emotional-Blinded Face Representations
- URL: http://arxiv.org/abs/2009.08704v1
- Date: Fri, 18 Sep 2020 09:24:10 GMT
- Title: Learning Emotional-Blinded Face Representations
- Authors: Alejandro Peña and Julian Fierrez and Agata Lapedriza and Aythami
Morales
- Abstract summary: We propose two face representations that are blind to facial expressions associated with emotional responses.
This work is motivated by new international regulations for personal data protection.
- Score: 77.7653702071127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose two face representations that are blind to the facial
expressions associated with emotional responses. This work is in part motivated
by new international regulations for personal data protection, which require
data controllers to protect any kind of sensitive information involved in
automatic processes. Advances in Affective Computing have improved
human-machine interfaces but, at the same time, the capacity to monitor
emotional responses poses potential risks for humans, in terms of both
fairness and privacy. We propose two different methods to learn these
expression-blinded facial features. We show that it is possible to eliminate
information related to emotion recognition tasks, while the performance of
subject verification, gender recognition, and ethnicity classification is only
slightly affected. We also present an application to training fairer
classifiers in a case study of attractiveness classification with respect to a
protected facial expression attribute. The results demonstrate that it is
possible to reduce emotional information in the face representation while
retaining competitive performance in other face-based artificial intelligence
tasks.
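The abstract does not detail the two proposed methods, but the general idea of blinding a representation to one attribute while preserving others can be illustrated with a minimal linear sketch (an illustrative toy, not the paper's actual approach): estimate the direction along which a linear emotion probe separates the classes, then project the embeddings onto the orthogonal complement of that direction. All names and values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face embeddings": axis 0 carries an emotion signal, axis 1 an
# identity-like signal, the remaining axes are noise.
n, d = 400, 8
emotion = rng.integers(0, 2, size=n)
identity = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * emotion
X[:, 1] += 2.0 * identity

def class_direction(X, y):
    """Normalized difference of class means: a simple linear probe direction."""
    w = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    return w / np.linalg.norm(w)

def blind(X, w):
    """Remove the component of each embedding along direction w."""
    return X - np.outer(X @ w, w)

def probe_accuracy(X, y):
    """In-sample accuracy of a mean-difference linear probe."""
    s = X @ class_direction(X, y)
    thr = 0.5 * (s[y == 1].mean() + s[y == 0].mean())
    return float(((s > thr) == (y == 1)).mean())

# Estimate the blinding direction on one half of the data and evaluate on the
# other half, so the projection does not trivially zero the test statistics.
A, B = slice(0, n // 2), slice(n // 2, n)
w_emotion = class_direction(X[A], emotion[A])
XB_blind = blind(X[B], w_emotion)

emotion_before = probe_accuracy(X[B], emotion[B])
emotion_after = probe_accuracy(XB_blind, emotion[B])
identity_after = probe_accuracy(XB_blind, identity[B])
print(f"emotion probe before blinding: {emotion_before:.2f}")
print(f"emotion probe after blinding:  {emotion_after:.2f}")
print(f"identity probe after blinding: {identity_after:.2f}")
```

After the projection, the emotion probe falls toward chance while the identity probe is barely affected, mirroring the trade-off the abstract reports for subject verification versus emotion recognition.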
Related papers
- Emotion Recognition for Challenged People Facial Appearance in Social
using Neural Network [0.0]
Facial expression is fed to a CNN to categorize the acquired picture into different emotion categories.
This paper proposes an approach for face- and illumination-invariant recognition of facial expressions from images.
arXiv Detail & Related papers (2023-05-11T14:38:27Z)
- Facial Expression Recognition using Squeeze and Excitation-powered Swin
Transformers [0.0]
We propose a framework that employs Swin Vision Transformers (SwinT) and squeeze and excitation block (SE) to address vision tasks.
Our focus was to create an efficient FER model based on SwinT architecture that can recognize facial emotions using minimal data.
We trained our model on a hybrid dataset and evaluated its performance on the AffectNet dataset, achieving an F1-score of 0.5420.
arXiv Detail & Related papers (2023-01-26T02:29:17Z)
- Interpretable Explainability in Facial Emotion Recognition and
Gamification for Data Collection [0.0]
Training facial emotion recognition models requires large sets of data and costly annotation processes.
We developed a gamified method of acquiring annotated facial emotion data without an explicit labeling effort by humans.
We observed significant improvements in the facial emotion perception and expression skills of the players through repeated game play.
arXiv Detail & Related papers (2022-11-09T09:53:48Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial
Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance over six different datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Emotion Separation and Recognition from a Facial Expression by
Generating the Poker Face with Vision Transformers [57.67586172996843]
We propose a novel FER model, called Poker Face Vision Transformer or PF-ViT, to separate and recognize the disturbance-agnostic emotion from a static facial image.
PF-ViT generates its corresponding poker face without the need for paired images.
arXiv Detail & Related papers (2022-07-22T13:39:06Z)
- I Only Have Eyes for You: The Impact of Masks On Convolutional-Based
Facial Expression Recognition [78.07239208222599]
We evaluate how the recently proposed FaceChannel adapts towards recognizing facial expressions from persons with masks.
We also perform specific feature-level visualization to demonstrate how the inherent capabilities of the FaceChannel to learn and combine facial features change when in a constrained social interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z)
- Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and
Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
- An adversarial learning framework for preserving users' anonymity in
face-based emotion recognition [6.9581841997309475]
This paper proposes an adversarial learning framework which relies on a convolutional neural network (CNN) architecture trained through an iterative procedure.
Results indicate that the proposed approach can learn a convolutional transformation for preserving emotion recognition accuracy and degrading face identity recognition.
arXiv Detail & Related papers (2020-01-16T22:45:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.