Quantified Facial Expressiveness for Affective Behavior Analytics
- URL: http://arxiv.org/abs/2110.01758v2
- Date: Thu, 7 Oct 2021 14:55:24 GMT
- Title: Quantified Facial Expressiveness for Affective Behavior Analytics
- Authors: Md Taufeeq Uddin, Shaun Canavan
- Abstract summary: We propose an algorithm that quantifies facial expressiveness as a bounded, continuous score computed from multimodal facial features.
The proposed algorithm can compute expressiveness per discrete expression, and can be used for tasks including facial behavior tracking and subjectivity quantification in context.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The quantified measurement of facial expressiveness is crucial to analyze
human affective behavior at scale. Unfortunately, methods for expressiveness
quantification at the video frame-level are largely unexplored, unlike the
study of discrete expression. In this work, we propose an algorithm that
quantifies facial expressiveness as a bounded, continuous score computed from
multimodal facial features, such as action units (AUs), landmarks, head pose,
and gaze. The algorithm weights AUs with high intensities and large temporal
changes more heavily. It can also compute expressiveness per discrete
expression, and can be used for tasks including facial behavior tracking and
subjectivity quantification in
context. Our results on benchmark datasets show the proposed algorithm is
effective in terms of capturing temporal changes and expressiveness, measuring
subjective differences in context, and extracting useful insight.
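The AU-weighting idea in the abstract can be sketched as follows. This is an illustrative, hypothetical implementation, not the paper's exact formulation: the function name, the intensity-times-temporal-change weighting, and the assumed [0, 5] AU intensity range used for normalization are all assumptions.

```python
import numpy as np

def expressiveness_score(au_intensities, prev_au_intensities, eps=1e-8):
    """Illustrative frame-level expressiveness score (hypothetical sketch).

    au_intensities: AU intensities for the current frame (assumed range [0, 5]).
    prev_au_intensities: AU intensities for the previous frame.
    Returns a bounded, continuous score in [0, 1].
    """
    au = np.asarray(au_intensities, dtype=float)
    prev = np.asarray(prev_au_intensities, dtype=float)
    # Weight each AU by its intensity and the magnitude of its temporal change,
    # so high-intensity, fast-changing AUs dominate (the abstract's intuition).
    temporal_change = np.abs(au - prev)
    weights = au * temporal_change
    # Weighted mean of AU intensities; eps guards the neutral-face case.
    raw = np.sum(weights * au) / (np.sum(weights) + eps)
    # Squash to a bounded score in [0, 1] under the assumed intensity range.
    return float(np.clip(raw / 5.0, 0.0, 1.0))
```

A neutral, static face yields a score of 0, while sustained high-intensity, rapidly changing AUs push the score toward 1.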
Related papers
- ACE: Off-Policy Actor-Critic with Causality-Aware Entropy Regularization [52.5587113539404]
We introduce a causality-aware entropy term that effectively identifies and prioritizes actions with high potential impacts for efficient exploration.
Our proposed algorithm, ACE: Off-policy Actor-critic with Causality-aware Entropy regularization, demonstrates a substantial performance advantage across 29 diverse continuous control tasks.
arXiv Detail & Related papers (2024-02-22T13:22:06Z)
- Denoising Diffusion Semantic Segmentation with Mask Prior Modeling [61.73352242029671]
We propose to ameliorate the semantic segmentation quality of existing discriminative approaches with a mask prior modeled by a denoising diffusion generative model.
We evaluate the proposed prior modeling with several off-the-shelf segmentors, and our experimental results on ADE20K and Cityscapes demonstrate that our approach achieves competitive quantitative performance.
arXiv Detail & Related papers (2023-06-02T17:47:01Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six datasets with highly distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Frame-level Prediction of Facial Expressions, Valence, Arousal and Action Units for Mobile Devices [7.056222499095849]
We propose a novel frame-level emotion recognition algorithm that extracts facial features with a single EfficientNet model pre-trained on AffectNet.
Our approach can be implemented even for video analytics on mobile devices.
arXiv Detail & Related papers (2022-03-25T03:53:27Z)
- Modeling Dynamics of Facial Behavior for Mental Health Assessment [4.130361751085622]
We explore the possibility of representing the dynamics of facial expressions by adopting algorithms used for word representation in natural language processing.
We perform clustering on a large dataset of temporal facial expressions (5.3M frames) before applying the Global Vectors (GloVe) algorithm to learn embeddings of the facial clusters.
We evaluate the usefulness of our learned representations on two downstream tasks: schizophrenia symptom severity estimation and depression regression.
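The two-stage pipeline this summary describes (quantize frames into a discrete cluster "vocabulary", then learn embeddings from cluster co-occurrences) can be sketched as follows. The feature dimensionality, cluster count, window size, and single-step centroid assignment are illustrative assumptions; the actual work clusters 5.3M frames and trains full GloVe embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 17))        # toy stand-in for per-frame facial features
k, window = 8, 5                           # hypothetical cluster count and window size

# 1) Quantize frames into k clusters (a single nearest-centroid assignment step
#    for brevity; the paper performs full clustering on the large dataset).
centroids = frames[rng.choice(len(frames), size=k, replace=False)]
dists = np.linalg.norm(frames[:, None, :] - centroids[None, :, :], axis=2)
cluster_ids = dists.argmin(axis=1)         # each frame becomes a discrete "word"

# 2) Count cluster co-occurrences in a sliding temporal window -- the kind of
#    matrix a GloVe-style model factorizes to learn cluster embeddings.
cooc = np.zeros((k, k))
for i, c in enumerate(cluster_ids):
    for j in range(max(0, i - window), i):
        cooc[c, cluster_ids[j]] += 1
        cooc[cluster_ids[j], c] += 1
```

Treating each video as a "sentence" of cluster IDs lets standard word-embedding machinery learn representations of recurring facial configurations.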
arXiv Detail & Related papers (2021-08-23T05:08:45Z)
- Efficient Facial Expression Analysis For Dimensional Affect Recognition Using Geometric Features [4.555179606623412]
We introduce a simple but effective facial expression analysis (FEA) system for dimensional affect.
The proposed approach is robust, efficient, and exhibits comparable performance to contemporary deep learning models.
arXiv Detail & Related papers (2021-06-15T00:28:16Z)
- Progressive Spatio-Temporal Bilinear Network with Monte Carlo Dropout for Landmark-based Facial Expression Recognition with Uncertainty Estimation [93.73198973454944]
The performance of our method is evaluated on three widely used datasets.
It is comparable to that of video-based state-of-the-art methods while having much lower complexity.
arXiv Detail & Related papers (2021-06-08T13:40:30Z)
- Quantified Facial Temporal-Expressiveness Dynamics for Affect Analysis [0.0]
We propose quantified facial Temporal-expressiveness Dynamics (TED) to quantify the expressiveness of human faces.
We show that TED can be used for high-level tasks such as summarizing unstructured visual data, and for setting expectations for, and interpreting the output of, automated affect recognition models.
arXiv Detail & Related papers (2020-10-28T02:22:22Z)
- Micro-Facial Expression Recognition Based on Deep-Rooted Learning Algorithm [0.0]
An effective Micro-Facial Expression Based Deep-Rooted Learning (MFEDRL) classifier is proposed in this paper.
The performance of the algorithm is evaluated using recognition rate and false measures.
arXiv Detail & Related papers (2020-09-12T12:23:27Z)
- Unsupervised Learning Facial Parameter Regressor for Action Unit Intensity Estimation via Differentiable Renderer [51.926868759681014]
We present a framework to predict the facial parameters based on a bone-driven face model (BDFM) under different views.
The proposed framework consists of a feature extractor, a generator, and a facial parameter regressor.
arXiv Detail & Related papers (2020-08-20T09:49:13Z)
- Adversarial Semantic Data Augmentation for Human Pose Estimation [96.75411357541438]
We propose Semantic Data Augmentation (SDA), a method that augments images by pasting segmented body parts with various semantic granularity.
We also propose Adversarial Semantic Data Augmentation (ASDA), which exploits a generative network to dynamically predict tailored pasting configurations.
State-of-the-art results are achieved on challenging benchmarks.
arXiv Detail & Related papers (2020-08-03T07:56:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.