Quantified Facial Temporal-Expressiveness Dynamics for Affect Analysis
- URL: http://arxiv.org/abs/2010.14705v1
- Date: Wed, 28 Oct 2020 02:22:22 GMT
- Title: Quantified Facial Temporal-Expressiveness Dynamics for Affect Analysis
- Authors: Md Taufeeq Uddin, Shaun Canavan
- Abstract summary: We propose quantified facial Temporal-expressiveness Dynamics (TED) to quantify the expressiveness of human faces.
We show that TED can be used for high-level tasks such as summarization of unstructured visual data, and for setting expectations for and interpreting automated affect recognition models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The quantification of visual affect data (e.g., face images) is essential for building and monitoring automated affect modeling systems efficiently. To that end, this work proposes quantified facial Temporal-expressiveness Dynamics (TED) to quantify the expressiveness of human faces. The proposed algorithm leverages multimodal facial features, incorporating both static and dynamic information, to enable accurate measurement of facial expressiveness. We show that TED can be used for high-level tasks such as summarization of unstructured visual data, and for setting expectations for and interpreting automated affect recognition models. To evaluate the benefit of using TED, a case study was conducted on spontaneous pain using the UNBC-McMaster shoulder pain dataset. Experimental results show the efficacy of TED for quantified affect analysis.
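The exact TED formulation is given in the paper; below is a minimal illustrative sketch of the general idea, assuming per-frame facial landmarks and action-unit (AU) intensities are available (e.g., from a tool such as OpenFace). The weighting and normalization choices here are placeholders, not the authors' method.

```python
import numpy as np

def expressiveness_sketch(landmarks, au_intensities, alpha=0.5):
    """Illustrative per-frame expressiveness score in [0, 1].

    landmarks:      (T, 68, 2) array of 2D facial landmarks per frame.
    au_intensities: (T, K) array of action-unit intensities per frame
                    (e.g., OpenFace AUs on a 0-5 scale).
    alpha:          weight balancing static vs. dynamic evidence
                    (illustrative, not from the paper).
    """
    # Static cue: mean AU intensity per frame, scaled to [0, 1].
    static = au_intensities.mean(axis=1) / 5.0

    # Dynamic cue: mean landmark displacement between consecutive frames.
    disp = np.linalg.norm(np.diff(landmarks, axis=0), axis=-1).mean(axis=-1)
    disp = np.concatenate([[0.0], disp])      # pad the first frame
    dynamic = disp / (disp.max() + 1e-8)      # scale to [0, 1]

    # Bounded, continuous score combining static and dynamic information.
    return np.clip(alpha * static + (1 - alpha) * dynamic, 0.0, 1.0)
```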
Related papers
- CCFExp: Facial Image Synthesis with Cycle Cross-Fusion Diffusion Model for Facial Paralysis Individuals [3.2688425993442696]
This study synthesizes a high-quality facial paralysis dataset to address the scarcity of such data.
A novel Cycle Cross-Fusion Expression Generative Model (CCFExp) based on diffusion models is proposed.
We have qualitatively and quantitatively evaluated the proposed method on the commonly used public clinical datasets of facial paralysis.
arXiv Detail & Related papers (2024-09-11T13:46:35Z)
- UniLearn: Enhancing Dynamic Facial Expression Recognition through Unified Pre-Training and Fine-Tuning on Images and Videos [83.48170683672427]
UniLearn is a unified learning paradigm that integrates static facial expression recognition data to enhance the dynamic facial expression recognition (DFER) task.
UniLearn consistently achieves state-of-the-art performance on the FERV39K, MAFW, and DFEW benchmarks, with weighted average recall (WAR) of 53.65%, 58.44%, and 76.68%, respectively.
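UniLearn's exact mechanism is not detailed in this summary; one common way to let static images and video clips share a single video backbone, sketched here as an assumption rather than the paper's method, is to tile each still image along the time axis into a pseudo-clip.

```python
import torch

def image_as_clip(image, num_frames=16):
    """Tile a still image (C, H, W) into a pseudo-clip (T, C, H, W) so
    static FER samples can pass through the same video backbone as DFER
    clips. (Illustrative; not necessarily UniLearn's exact mechanism.)"""
    return image.unsqueeze(0).expand(num_frames, -1, -1, -1).clone()

# A mixed batch: real clips and tiled images feed one shared model.
still = torch.randn(3, 224, 224)
clip = image_as_clip(still)   # (16, 3, 224, 224)
```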
arXiv Detail & Related papers (2024-09-10T01:57:57Z)
- BigSmall: Efficient Multi-Task Learning for Disparate Spatial and Temporal Physiological Measurements [28.573472322978507]
We present BigSmall, an efficient architecture for physiological and behavioral measurement.
We propose a multi-branch network with wrapping temporal shift modules that yields both accuracy and efficiency gains.
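The paper's "wrapping" variant is not specified in this summary; the sketch below shows a generic temporal shift with wrap-around boundaries as one plausible reading, assuming features shaped (N, T, C, H, W).

```python
import torch

def wrapping_temporal_shift(x, fold_div=8):
    """Shift fractions of the channels one step along the time axis,
    wrapping boundary frames around instead of zero-padding (one
    plausible reading of a 'wrapping' temporal shift; BigSmall's exact
    variant may differ). x: (N, T, C, H, W)."""
    fold = x.shape[2] // fold_div
    out = x.clone()
    # Shift the first fold of channels forward in time, wrapped.
    out[:, :, :fold] = torch.roll(x[:, :, :fold], shifts=1, dims=1)
    # Shift the second fold backward in time, wrapped.
    out[:, :, fold:2 * fold] = torch.roll(
        x[:, :, fold:2 * fold], shifts=-1, dims=1)
    # Remaining channels are left untouched.
    return out
```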
arXiv Detail & Related papers (2023-03-21T03:41:57Z)
- Dataset Bias in Human Activity Recognition [57.91018542715725]
This contribution statistically curates the training data to assess to what degree the physical characteristics of humans influence HAR performance.
We evaluate the performance of a state-of-the-art convolutional neural network on two time-series HAR datasets that vary in sensors, activities, and recording conditions.
arXiv Detail & Related papers (2023-01-19T12:33:50Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to capture dataset-specific affective characteristics.
CIAO improves facial expression recognition performance across six datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
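The paper's concrete estimators and data are not reproduced here; below is a minimal sketch of one such post-hoc method, scikit-learn's permutation importance, applied to a toy stand-in model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy stand-in for an opaque effect model: any fitted estimator works.
rng = np.random.default_rng(0)
X = rng.random((500, 5))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor().fit(X_tr, y_tr)

# Permutation importance: shuffle one feature at a time and measure the
# drop in score; model-agnostic, hence usable post hoc on opaque models.
result = permutation_importance(model, X_te, y_te,
                                n_repeats=10, random_state=0)
print(result.importances_mean)   # feature 0 should dominate
```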
arXiv Detail & Related papers (2022-06-16T17:59:05Z)
- Quantified Facial Expressiveness for Affective Behavior Analytics [0.0]
We propose an algorithm that quantifies facial expressiveness as a bounded, continuous score computed from multimodal facial features.
The proposed algorithm can also relate expressiveness to discrete expressions, and can be used for tasks such as facial behavior tracking and quantifying subjectivity in context.
arXiv Detail & Related papers (2021-10-05T00:21:33Z)
- Modeling Dynamics of Facial Behavior for Mental Health Assessment [4.130361751085622]
We explore the possibility of representing the dynamics of facial expressions by adopting algorithms used for word representation in natural language processing.
We cluster a large dataset of temporal facial expressions (5.3M frames) and then apply the Global Vectors (GloVe) algorithm to learn embeddings of the facial clusters.
We evaluate the usefulness of our learned representations on two downstream tasks: schizophrenia symptom severity estimation and depression regression.
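A minimal sketch of the clustering-plus-co-occurrence pipeline this describes, with toy features and placeholder hyperparameters (the paper clusters 5.3M real frames); a GloVe implementation would then factorize the resulting matrix.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy stand-in: per-frame facial features (e.g., AU intensities).
frames = np.random.rand(10000, 17)

# Step 1: quantize frames into a "vocabulary" of facial states.
V = 50   # vocabulary size (placeholder)
kmeans = KMeans(n_clusters=V, n_init=10, random_state=0).fit(frames)
tokens = kmeans.labels_            # one cluster token per frame

# Step 2: build a token co-occurrence matrix over a temporal window,
# the input that GloVe factorizes to learn cluster embeddings.
window = 5                         # placeholder window size
cooc = np.zeros((V, V))
for i, t in enumerate(tokens):
    for j in range(max(0, i - window), i):
        cooc[t, tokens[j]] += 1.0 / (i - j)   # distance-weighted, as in GloVe
        cooc[tokens[j], t] += 1.0 / (i - j)
```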
arXiv Detail & Related papers (2021-08-23T05:08:45Z)
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models pretrained on ImageNet show a significant increase in performance, generalization, and robustness to image distortions.
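A sketch of how such a comparison is typically set up, assuming a torchvision ResNet-50 backbone and 5-grade diabetic retinopathy labels (the study's exact architecture and protocol may differ).

```python
import torch.nn as nn
import torchvision.models as models

# Same architecture, two training procedures: ImageNet init vs. random init.
pretrained = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
scratch = models.resnet50(weights=None)

# Replace the classifier head for 5-class diabetic retinopathy grading,
# then train both identically and compare accuracy and robustness.
for net in (pretrained, scratch):
    net.fc = nn.Linear(net.fc.in_features, 5)
```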
arXiv Detail & Related papers (2021-06-25T08:32:45Z)
- A Multi-term and Multi-task Analyzing Framework for Affective Analysis in-the-wild [0.2216657815393579]
We introduce the affective recognition method that was submitted to the Affective Behavior Analysis in-the-wild (ABAW) 2020 Contest.
Since affective behaviors exhibit many observable features, each with its own time frame, we introduced multiple optimized time windows.
We generated an affective recognition model for each time window and ensembled these models, as sketched below.
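The submission's exact fusion rule is not given in this summary; here is a minimal mean-ensemble sketch over per-window models, with `predict_fns` as hypothetical placeholders for the trained models.

```python
import numpy as np

def ensemble_over_windows(predict_fns, features, weights=None):
    """Average class-probability predictions from models trained on
    different time windows (a simple weighted-mean ensemble; the
    submission's actual fusion rule may differ).

    predict_fns: list of callables, one per time-window model, each
                 mapping features -> (N, num_classes) probabilities.
    """
    probs = np.stack([f(features) for f in predict_fns])    # (M, N, C)
    w = np.ones(len(predict_fns)) if weights is None else np.asarray(weights)
    w = w / w.sum()
    return np.tensordot(w, probs, axes=1)                   # (N, C)
```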
arXiv Detail & Related papers (2020-09-29T09:24:29Z)
- Unsupervised Learning Facial Parameter Regressor for Action Unit Intensity Estimation via Differentiable Renderer [51.926868759681014]
We present a framework to predict the facial parameters based on a bone-driven face model (BDFM) under different views.
The proposed framework consists of a feature extractor, a generator, and a facial parameter regressor.
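A hedged sketch of the analysis-by-synthesis training step this pipeline implies, with `extractor`, `regressor`, and `renderer` as placeholders for the paper's feature extractor, parameter regressor, and differentiable renderer (the real objective likely has additional terms).

```python
import torch
import torch.nn.functional as F

def unsupervised_step(image, extractor, regressor, renderer, optimizer):
    """One illustrative training step: regress face-model parameters,
    re-render the face, and penalize the photometric gap. Because the
    renderer is differentiable, the loss trains the regressor without
    AU intensity labels."""
    params = regressor(extractor(image))   # predicted BDFM parameters
    recon = renderer(params)               # differentiable rendering
    loss = F.l1_loss(recon, image)         # photometric reconstruction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```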
arXiv Detail & Related papers (2020-08-20T09:49:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.