I am Only Happy When There is Light: The Impact of Environmental Changes
on Affective Facial Expressions Recognition
- URL: http://arxiv.org/abs/2210.17421v1
- Date: Fri, 28 Oct 2022 16:28:26 GMT
- Title: I am Only Happy When There is Light: The Impact of Environmental Changes
on Affective Facial Expressions Recognition
- Authors: Doreen Jirak, Alessandra Sciutti, Pablo Barros, Francesco Rea
- Abstract summary: We study the impact of different image conditions on the recognition of arousal and valence from human facial expressions.
Our results show how the interpretation of human affective states can differ greatly in either the positive or negative direction.
- Score: 65.69256728493015
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human-robot interaction (HRI) benefits greatly from advances in
machine learning, as they allow researchers to employ high-performance models
for perceptual tasks like detection and recognition. Deep learning models in
particular, whether pre-trained for feature extraction or used for
classification, are now established methods for characterizing human behaviors
in HRI scenarios and for helping social robots better understand those
behaviors. As HRI experiments are usually small-scale and constrained to
particular lab environments, two questions arise: how well do deep learning
models generalize to specific interaction scenarios, and how robust are they
to environmental changes? These questions are important to address if the HRI
field wishes to deploy social robotic companions that act consistently in real
environments, i.e., changing lighting conditions or moving people should still
produce the same recognition results. In this paper, we study the impact of
different image conditions on the recognition of arousal and valence from
human facial expressions using the FaceChannel framework \cite{Barro20}. Our
results show how the interpretation of human affective states can differ
greatly in either the positive or negative direction even when the image
properties are changed only slightly. We conclude the paper with important
points to consider when employing deep learning models to ensure sound
interpretation of HRI experiments.
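
To make the robustness question concrete, below is a minimal sketch of the kind of perturbation probe the abstract describes: re-render a face image under several brightness factors and track how the predicted arousal/valence drifts. The `model` object and its `predict` signature are hypothetical stand-ins for a face-affect model such as FaceChannel, not its actual API.

```python
# A minimal robustness probe: perturb image brightness and track how a
# (hypothetical) arousal/valence predictor reacts. The `model.predict`
# signature is an assumed stand-in, not FaceChannel's actual API.
import numpy as np
from PIL import Image, ImageEnhance

def perturbed_predictions(model, image_path, factors=(0.25, 0.5, 1.0, 1.5, 2.0)):
    """Return (factor, arousal, valence) for each brightness factor."""
    face = Image.open(image_path).convert("RGB")
    results = []
    for f in factors:
        bright = ImageEnhance.Brightness(face).enhance(f)  # f=1.0 is unchanged
        x = np.asarray(bright, dtype=np.float32) / 255.0   # HxWx3 in [0, 1]
        arousal, valence = model.predict(x[None, ...])     # assumed signature
        results.append((f, float(arousal), float(valence)))
    return results

def valence_spread(results):
    """Max drift in valence across image conditions: a robust model keeps this near zero."""
    vals = [v for _, _, v in results]
    return max(vals) - min(vals)
```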
Related papers
- Smile upon the Face but Sadness in the Eyes: Emotion Recognition based on Facial Expressions and Eye Behaviors [63.194053817609024]
We introduce eye behaviors as important emotional cues and create a new Eye-behavior-aided Multimodal Emotion Recognition (EMER) dataset.
For the first time, we provide annotations for both Emotion Recognition (ER) and Facial Expression Recognition (FER) in the EMER dataset.
We specifically design a new EMERT architecture to concurrently enhance performance in both ER and FER.
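
As a toy illustration of the multimodal idea in this entry, the sketch below fuses facial-expression features with eye-behavior features by simple concatenation before classification. It is a hedged baseline with synthetic data, not the paper's EMERT architecture; the feature extractors are assumed to exist upstream.

```python
# Toy late-fusion baseline: concatenate face and eye-behavior features,
# then classify emotion. Synthetic data; not the EMERT architecture.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d_face, d_eye, n_classes = 200, 128, 16, 7

X_face = rng.normal(size=(n, d_face))   # e.g. CNN face embeddings (assumed)
X_eye = rng.normal(size=(n, d_eye))     # e.g. fixation/blink statistics (assumed)
y = rng.integers(0, n_classes, size=n)  # emotion labels

X = np.concatenate([X_face, X_eye], axis=1)  # late fusion by concatenation
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))
```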
arXiv Detail & Related papers (2024-11-08T04:53:55Z)
- Facial Expression Recognition using Squeeze and Excitation-powered Swin Transformers [0.0]
We propose a framework that employs Swin Vision Transformers (SwinT) and squeeze-and-excitation (SE) blocks to address vision tasks.
Our focus was to create an efficient FER model based on SwinT architecture that can recognize facial emotions using minimal data.
We trained our model on a hybrid dataset and evaluated its performance on the AffectNet dataset, achieving an F1-score of 0.5420.
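
The squeeze-and-excitation block named in this entry is a small, well-defined module: global average pooling, a bottleneck MLP, and sigmoid channel gates. Below is a minimal PyTorch sketch; how it is wired into a Swin backbone is left out, and the reduction ratio of 16 follows the original SE paper rather than this one.

```python
# Minimal squeeze-and-excitation (SE) block in PyTorch.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W)
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))           # squeeze: global average pooling
        w = self.fc(s).view(b, c, 1, 1)  # excitation: per-channel weights
        return x * w                     # rescale feature maps

x = torch.randn(2, 64, 14, 14)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 14, 14])
```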
arXiv Detail & Related papers (2023-01-26T02:29:17Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six different datasets, each with distinct affective representations.
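
The general setup this entry names, freezing a pre-trained facial encoder and adapting only the final layer per dataset, can be sketched as below with a supervised contrastive loss. This is an assumption-laden stand-in, not the paper's CIAO mechanism (which uses contrastive inhibitory adaptation); the encoder, head, and data are all synthetic placeholders.

```python
# Last-layer adaptation sketch: frozen encoder, trainable head, supervised
# contrastive objective. NOT the paper's CIAO mechanism; all parts synthetic.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Linear(64 * 64, 256)    # stand-in for a pre-trained facial encoder
for p in encoder.parameters():
    p.requires_grad = False          # keep the encoder frozen

head = nn.Linear(256, 128)           # the only part adapted per dataset
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

def sup_con_loss(z: torch.Tensor, labels: torch.Tensor, tau: float = 0.1):
    """Supervised contrastive loss: same-label embeddings attract."""
    z = F.normalize(z, dim=1)
    sim = (z @ z.t()) / tau
    eye = torch.eye(len(z), dtype=torch.bool)
    sim = sim.masked_fill(eye, float("-inf"))              # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = labels[:, None].eq(labels[None, :]) & ~eye
    n_pos = pos.sum(dim=1).clamp(min=1)
    return -(log_prob.masked_fill(~pos, 0.0).sum(dim=1) / n_pos).mean()

x = torch.randn(32, 64 * 64)         # a batch of face inputs (synthetic)
y = torch.randint(0, 7, (32,))       # dataset-specific expression labels
loss = sup_con_loss(head(encoder(x)), y)
opt.zero_grad(); loss.backward(); opt.step()
```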
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the generated expressions are not perceived as different from the hand-designed ones in anthropomorphism and animacy.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- The world seems different in a social context: a neural network analysis of human experimental data [57.729312306803955]
We show that it is possible to replicate human behavioral data in both individual and social task settings by modifying the precision of prior and sensory signals.
An analysis of the neural activation traces of the trained networks provides evidence that information is coded in fundamentally different ways in the network in the individual and in the social conditions.
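
A hedged, minimal way to read the "precision" manipulation mentioned here: under a Gaussian model, the estimate is a precision-weighted average of prior and sensory evidence, so raising sensory precision pulls the estimate toward the observation. The numbers below are arbitrary and only illustrate the arithmetic, not the paper's network model.

```python
# Precision-weighted combination of a Gaussian prior and observation.
def posterior_mean(mu_prior, prec_prior, obs, prec_obs):
    return (prec_prior * mu_prior + prec_obs * obs) / (prec_prior + prec_obs)

print(posterior_mean(0.0, 1.0, 1.0, 1.0))  # 0.5  (balanced precisions)
print(posterior_mean(0.0, 1.0, 1.0, 4.0))  # 0.8  (sensory signal trusted more)
```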
arXiv Detail & Related papers (2022-03-03T17:19:12Z)
- Affect-DML: Context-Aware One-Shot Recognition of Human Affect using Deep Metric Learning [29.262204241732565]
Existing methods assume that all emotions-of-interest are given a priori as annotated training examples.
We conceptualize one-shot recognition of emotions in context, a new problem aimed at recognizing human affect states at a finer level of granularity from a single support sample.
All variants of our model clearly outperform the random baseline, while leveraging the semantic scene context consistently improves the learnt representations.
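
One-shot recognition via metric learning, as this entry describes it, reduces at inference time to a nearest-support-embedding lookup. The sketch below shows that step only; the embedding network is a random placeholder, and the paper's semantic scene-context conditioning is omitted.

```python
# One-shot prediction: embed query and single support per emotion, pick the
# closest support by cosine similarity. Embedder is a random placeholder.
import torch
import torch.nn.functional as F

def one_shot_predict(embed, query, supports, labels):
    """supports: one exemplar image per emotion; labels: matching names."""
    q = F.normalize(embed(query[None]), dim=1)            # (1, d)
    s = F.normalize(embed(torch.stack(supports)), dim=1)  # (k, d)
    sims = (q @ s.t()).squeeze(0)                         # cosine similarities
    return labels[sims.argmax().item()]

embed = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
supports = [torch.randn(3, 32, 32) for _ in range(3)]
print(one_shot_predict(embed, torch.randn(3, 32, 32), supports,
                       ["joy", "fear", "anger"]))
```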
arXiv Detail & Related papers (2021-11-30T10:35:20Z)
- Domain Adaptation for Facial Expression Classifier via Domain Discrimination and Gradient Reversal [0.0]
Research in the field of Facial Expression Recognition (FER) has attracted increasing interest over the past decade.
We propose a new architecture for the task of FER and examine the impact of domain discrimination loss regularization on the learning process.
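
The gradient-reversal mechanism named in this entry has a standard, compact implementation: identity in the forward pass, negated and scaled gradient in the backward pass, so a domain discriminator trained through it pushes the feature extractor toward domain-invariant features. A minimal PyTorch sketch, with the usual lambda schedule left as a constant:

```python
# Gradient-reversal layer (GRL): identity forward, reversed gradient backward.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # reverse and scale the gradient

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage: features -> grad_reverse -> domain classifier. The domain loss trains
# the discriminator normally while adversarially updating the feature extractor.
feats = torch.randn(8, 128, requires_grad=True)
domain_logits = torch.nn.Linear(128, 2)(grad_reverse(feats))
domain_logits.sum().backward()  # feats.grad now holds the reversed gradient
```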
arXiv Detail & Related papers (2021-06-02T20:58:24Z)
- EEG-based Texture Roughness Classification in Active Tactile Exploration with Invariant Representation Learning Networks [8.021411285905849]
Multiple cortical brain regions are responsible for sensory recognition, perception and motor execution during sensorimotor processing.
The main goal of our work is to discriminate textured surfaces that vary in roughness during active tactile exploration.
We use an adversarial invariant representation learning neural network architecture that performs EEG-based classification of different textured surfaces.
arXiv Detail & Related papers (2021-02-17T19:07:13Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that machines be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
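
The pipeline this entry names, an autoencoder bottleneck followed by a support vector regressor, can be sketched compactly. To stay self-contained, a PCA stands in for the deep convolutional autoencoder and the data are synthetic; only the encode-then-regress structure is the point.

```python
# Encode-then-SVR sketch: PCA stands in for the conv autoencoder bottleneck;
# SVR regresses a continuous affect dimension (e.g. valence). Synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 512))          # stand-in facial-expression features
y = rng.uniform(-1.0, 1.0, size=300)     # continuous valence labels

codes = PCA(n_components=32).fit_transform(X)   # "encoder" bottleneck
reg = SVR(kernel="rbf", C=1.0).fit(codes, y)
print("fit R^2 on training data:", reg.score(codes, y))
```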
arXiv Detail & Related papers (2020-01-31T17:47:16Z)