Inferring User Facial Affect in Work-like Settings
- URL: http://arxiv.org/abs/2111.11862v1
- Date: Mon, 22 Nov 2021 01:23:46 GMT
- Title: Inferring User Facial Affect in Work-like Settings
- Authors: Chaudhary Muhammad Aqdus Ilyas, Siyang Song, Hatice Gunes
- Abstract summary: We aim to infer user facial affect when the user is engaged in multiple work-like tasks under varying difficulty levels.
We first design a study with different conditions and gather multimodal data from 12 subjects.
We then perform several experiments with various machine learning models and find that the display and prediction of facial affect vary from non-working to working settings.
- Score: 5.630425653717262
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Unlike the six basic emotions of happiness, sadness, fear, anger, disgust and
surprise, modelling and predicting dimensional affect in terms of valence
(positivity - negativity) and arousal (intensity) has proven to be more
flexible, applicable and useful for naturalistic and real-world settings. In
this paper, we aim to infer user facial affect when the user is engaged in
multiple work-like tasks under varying difficulty levels (baseline, easy, hard
and stressful conditions), including (i) an office-like setting where they
undertake a task that is less physically demanding but requires greater mental
strain; (ii) an assembly-line-like setting that requires the usage of fine
motor skills; and (iii) an office-like setting representing teleworking and
teleconferencing. In line with this aim, we first design a study with different
conditions and gather multimodal data from 12 subjects. We then perform several
experiments with various machine learning models and find that: (i) the display
and prediction of facial affect vary from non-working to working settings; (ii)
prediction capability can be boosted by using datasets captured in a work-like
context; and (iii) segment-level (spectral representation) information is
crucial in improving the facial affect prediction.
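Finding (iii) refers to summarising each recording segment with a frequency-domain (spectral) descriptor rather than using frame-by-frame values alone. The snippet below is a minimal, illustrative sketch of that idea and is not the authors' pipeline: the 17-dimensional frame features, 64-frame segments, 16 FFT bins and the ridge regressor are assumptions made purely for illustration.
```python
# Minimal sketch of a segment-level spectral representation for facial affect
# regression. NOT the authors' implementation: frame features, segment length,
# number of frequency bins and the ridge regressor are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def spectral_segment_representation(frame_feats, n_bins=16):
    """Summarise a (num_frames, feat_dim) segment of frame-level features.

    For each feature dimension, keep the magnitudes of the first `n_bins`
    FFT coefficients along time, so the descriptor captures how the feature
    varies within the segment rather than only its per-frame average.
    """
    spectrum = np.abs(np.fft.rfft(frame_feats, axis=0))[:n_bins]
    if spectrum.shape[0] < n_bins:  # zero-pad very short segments
        pad = np.zeros((n_bins - spectrum.shape[0], frame_feats.shape[1]))
        spectrum = np.vstack([spectrum, pad])
    return spectrum.flatten()

# Toy usage: 100 segments of 64 frames, each frame described by 17 features
# (e.g., action-unit intensities), regressed onto a continuous valence label.
rng = np.random.default_rng(0)
segments = rng.normal(size=(100, 64, 17))
valence = rng.uniform(-1.0, 1.0, size=100)

X = np.stack([spectral_segment_representation(s) for s in segments])
model = Ridge(alpha=1.0).fit(X, valence)
print("train R^2:", model.score(X, valence))
```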
Related papers
- What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z)
- I am Only Happy When There is Light: The Impact of Environmental Changes on Affective Facial Expressions Recognition [65.69256728493015]
We study the impact of different image conditions on the recognition of arousal from human facial expressions.
Our results show how the interpretation of human affective states can differ greatly in either the positive or negative direction.
arXiv Detail & Related papers (2022-10-28T16:28:26Z)
- Task Formulation Matters When Learning Continually: A Case Study in Visual Question Answering [58.82325933356066]
Continual learning aims to train a model incrementally on a sequence of tasks without forgetting previous knowledge.
We present a detailed study of how different settings affect performance for Visual Question Answering.
arXiv Detail & Related papers (2022-09-30T19:12:58Z)
- Supervised Contrastive Learning for Affect Modelling [2.570570340104555]
We introduce three different supervised contrastive learning approaches for training representations that consider affect information.
Results demonstrate the representation capacity of contrastive learning and its efficiency in boosting the accuracy of affect models.
arXiv Detail & Related papers (2022-08-25T17:40:19Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance on six different datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Understanding top-down attention using task-oriented ablation design [0.22940141855172028]
Top-down attention allows neural networks, both artificial and biological, to focus on the information most relevant for a given task.
We aim to answer this with a computational experiment based on a general framework called task-oriented ablation design.
We compare the performance of two neural networks, one with top-down attention and one without.
arXiv Detail & Related papers (2021-06-08T21:01:47Z)
- Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework [83.21732533130846]
The paper focuses on large in-the-wild databases, i.e., Aff-Wild and Aff-Wild2.
It presents the design of two classes of deep neural networks trained with these databases.
A novel multi-task and holistic framework is presented which is able to jointly learn and effectively generalize and perform affect recognition.
arXiv Detail & Related papers (2021-03-29T17:36:20Z)
- Factors of Influence for Transfer Learning across Diverse Appearance Domains and Task Types [50.1843146606122]
A simple form of transfer learning is common in current state-of-the-art computer vision models.
Previous systematic studies of transfer learning have been limited and the circumstances in which it is expected to work are not fully understood.
In this paper we carry out an extensive experimental exploration of transfer learning across vastly different image domains.
arXiv Detail & Related papers (2021-03-24T16:24:20Z)
- A Multi-resolution Approach to Expression Recognition in the Wild [9.118706387430883]
We propose a multi-resolution approach to solve the Facial Expression Recognition task.
We ground our intuition on the observation that face images are often acquired at different resolutions.
To our aim, we use a ResNet-like architecture, equipped with Squeeze-and-Excitation blocks, trained on the Affect-in-the-Wild 2 dataset.
arXiv Detail & Related papers (2021-03-09T21:21:02Z)