Learning Emotional-Blinded Face Representations
- URL: http://arxiv.org/abs/2009.08704v1
- Date: Fri, 18 Sep 2020 09:24:10 GMT
- Title: Learning Emotional-Blinded Face Representations
- Authors: Alejandro Peña, Julian Fierrez, Agata Lapedriza, Aythami Morales
- Abstract summary: We propose two face representations that are blind to the facial expressions associated with emotional responses.
This work is motivated by new international regulations for personal data protection.
- Score: 77.7653702071127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose two face representations that are blind to the facial
expressions associated with emotional responses. This work is in part motivated
by new international regulations for personal data protection, which require
data controllers to protect any sensitive information involved in automatic
processes. Advances in Affective Computing have improved human-machine
interfaces but, at the same time, the capacity to monitor emotional responses
poses risks to humans, in terms of both fairness and privacy. We propose two
different methods to learn these expression-blinded facial features. We show
that it is possible to eliminate information related to emotion recognition
tasks while only slightly affecting the performance of subject verification,
gender recognition, and ethnicity classification. We also present an
application that trains fairer classifiers in a case study of attractiveness
classification with respect to a protected facial-expression attribute. The
results demonstrate that it is possible to reduce emotional information in the
face representation while retaining competitive performance in other face-based
artificial intelligence tasks.
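As a toy illustration of the blinding idea (not the paper's actual methods, which operate on learned deep representations), the simplest linear analogue removes from each embedding the single direction that best predicts the sensitive attribute. The function name, dimensions, and random stand-in data below are all illustrative assumptions:

```python
import numpy as np

def blind_direction(X, y):
    """Project features onto the orthogonal complement of the one linear
    direction that best predicts attribute y (least-squares fit).
    A minimal linear sketch of attribute blinding, not the paper's method."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # attribute-predictive direction
    w = w / np.linalg.norm(w)                  # unit vector in feature space
    return X - np.outer(X @ w, w), w           # remove that component

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                  # stand-in face embeddings
y = rng.integers(0, 2, size=200).astype(float)  # stand-in emotion label
X_blind, w = blind_direction(X, y)
print(np.abs(X_blind @ w).max())  # ≈ 0: no energy left along w
```

By construction the blinded features carry no component along `w`, so a linear probe in that direction recovers nothing; the deep methods in the paper aim for an analogous effect on non-linear emotion predictors.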
Related papers
- Leaving Some Facial Features Behind [0.0]
This study examines how specific facial features influence emotion classification, using facial perturbations on the Fer2013 dataset.
Models trained on data with certain important facial features removed experienced accuracy drops of up to 85% relative to baseline for emotions such as happy and surprise.
arXiv Detail & Related papers (2024-10-29T02:28:53Z) - Emotion Recognition for Challenged People Facial Appearance in Social
using Neural Network [0.0]
Facial expressions are fed to a CNN to categorize the acquired image into different emotion categories.
The paper proposes an approach for face- and illumination-invariant recognition of facial expressions from images.
arXiv Detail & Related papers (2023-05-11T14:38:27Z) - Interpretable Explainability in Facial Emotion Recognition and
Gamification for Data Collection [0.0]
Training facial emotion recognition models requires large sets of data and costly annotation processes.
We developed a gamified method of acquiring annotated facial emotion data without an explicit labeling effort by humans.
We observed significant improvements in the facial emotion perception and expression skills of the players through repeated game play.
arXiv Detail & Related papers (2022-11-09T09:53:48Z) - CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial
Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six datasets, each with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - Emotion Separation and Recognition from a Facial Expression by Generating the Poker Face with Vision Transformers [57.1091606948826]
We propose a novel FER model, named Poker Face Vision Transformer or PF-ViT, to address these challenges.
PF-ViT aims to separate and recognize the disturbance-agnostic emotion from a static facial image via generating its corresponding poker face.
PF-ViT utilizes vanilla Vision Transformers, and its components are pre-trained as Masked Autoencoders on a large facial expression dataset.
arXiv Detail & Related papers (2022-07-22T13:39:06Z) - I Only Have Eyes for You: The Impact of Masks On Convolutional-Based
Facial Expression Recognition [78.07239208222599]
We evaluate how the recently proposed FaceChannel adapts towards recognizing facial expressions from persons with masks.
We also perform specific feature-level visualization to demonstrate how the inherent capabilities of the FaceChannel to learn and combine facial features change when in a constrained social interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z) - Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z) - Continuous Emotion Recognition via Deep Convolutional Autoencoder and
Support Vector Regressor [70.2226417364135]
It is crucial that a machine be able to recognize the user's emotional state with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z) - An adversarial learning framework for preserving users' anonymity in
face-based emotion recognition [6.9581841997309475]
This paper proposes an adversarial learning framework which relies on a convolutional neural network (CNN) architecture trained through an iterative procedure.
Results indicate that the proposed approach can learn a convolutional transformation for preserving emotion recognition accuracy and degrading face identity recognition.
arXiv Detail & Related papers (2020-01-16T22:45:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.