Efficient Facial Expression Analysis For Dimensional Affect Recognition
Using Geometric Features
- URL: http://arxiv.org/abs/2106.07817v1
- Date: Tue, 15 Jun 2021 00:28:16 GMT
- Title: Efficient Facial Expression Analysis For Dimensional Affect Recognition
Using Geometric Features
- Authors: Vassilios Vonikakis and Stefan Winkler
- Abstract summary: We introduce a simple but effective facial expression analysis (FEA) system for dimensional affect.
The proposed approach is robust, efficient, and exhibits comparable performance to contemporary deep learning models.
- Score: 4.555179606623412
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Despite their continued popularity, categorical approaches to affect
recognition have limitations, especially in real-life situations. Dimensional
models of affect offer important advantages for the recognition of subtle
expressions and more fine-grained analysis. We introduce a simple but effective
facial expression analysis (FEA) system for dimensional affect, solely based on
geometric features and Partial Least Squares (PLS) regression. The system
jointly learns to estimate Arousal and Valence ratings from a set of facial
images. The proposed approach is robust, efficient, and exhibits comparable
performance to contemporary deep learning models, while requiring a fraction of
the computational resources.
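Below is a minimal sketch, in the spirit of the described approach, of jointly regressing Arousal and Valence from geometric (landmark-based) features with Partial Least Squares using scikit-learn. The feature layout, the placeholder data, and the number of latent components are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: joint Arousal/Valence regression from geometric features
# with Partial Least Squares (scikit-learn). Landmark extraction and the exact
# geometric features used in the paper are simplified to placeholders here.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# X: geometric features per image, e.g. flattened, normalized 68-point landmarks
# y: dimensional affect labels, columns = [arousal, valence] in [-1, 1]
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 136))             # placeholder features (68 x/y pairs)
y = rng.uniform(-1.0, 1.0, size=(1000, 2))   # placeholder arousal/valence ratings

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# PLS projects the features and both affect dimensions onto shared latent
# components, so Arousal and Valence are learned jointly rather than separately.
pls = PLSRegression(n_components=20)
pls.fit(X_tr, y_tr)

pred = pls.predict(X_te)                     # shape (n_samples, 2)
rmse = np.sqrt(((pred - y_te) ** 2).mean(axis=0))
print(f"RMSE arousal={rmse[0]:.3f}, valence={rmse[1]:.3f}")
```

Training and inference here reduce to a few matrix multiplications, which is consistent with the abstract's point about requiring a fraction of the computational resources of deep models.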
Related papers
- Semantic-Preserving Feature Partitioning for Multi-View Ensemble
Learning [11.415864885658435]
We introduce the Semantic-Preserving Feature Partitioning (SPFP) algorithm, a novel method grounded in information theory.
The SPFP algorithm effectively partitions datasets into multiple semantically consistent views, enhancing the multi-view ensemble learning process.
It maintains model accuracy while significantly improving uncertainty measures in scenarios where high generalization performance is achievable.
arXiv Detail & Related papers (2024-01-11T20:44:45Z)
- Frame-level Prediction of Facial Expressions, Valence, Arousal and Action Units for Mobile Devices [7.056222499095849]
We propose a novel frame-level emotion recognition algorithm that extracts facial features with a single EfficientNet model pre-trained on AffectNet.
Our approach can be deployed even for video analytics on mobile devices (a minimal feature-extraction sketch follows this entry).
arXiv Detail & Related papers (2022-03-25T03:53:27Z)
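A minimal sketch of frame-level feature extraction with a single pretrained EfficientNet backbone, as mentioned in the entry above. It uses torchvision's ImageNet weights as a stand-in, since the AffectNet-pretrained weights are not assumed to be available, and the expression/valence/arousal/AU heads are omitted.

```python
# Minimal sketch: frame-level facial feature extraction with one pretrained
# EfficientNet backbone (ImageNet weights stand in for AffectNet weights).
# The downstream expression/valence/arousal/AU prediction heads are not shown.
import torch
from torchvision import models
from PIL import Image

weights = models.EfficientNet_B0_Weights.IMAGENET1K_V1
backbone = models.efficientnet_b0(weights=weights)
backbone.classifier = torch.nn.Identity()     # keep only the 1280-d embedding
backbone.eval()

preprocess = weights.transforms()             # resize/crop/normalize for this model

@torch.no_grad()
def frame_embedding(face_crop: Image.Image) -> torch.Tensor:
    """Return a single embedding vector for one aligned face crop."""
    x = preprocess(face_crop).unsqueeze(0)    # (1, 3, H, W)
    return backbone(x).squeeze(0)             # (1280,)

# Per-video usage: embed each frame, then feed the sequence (or simple
# statistics such as mean/std pooling) to lightweight per-frame heads.
```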
- Information-Theoretic Odometry Learning [83.36195426897768]
We propose a unified information theoretic framework for learning-motivated methods aimed at odometry estimation.
The proposed framework provides an elegant tool for performance evaluation and understanding in information-theoretic language.
arXiv Detail & Related papers (2022-03-11T02:37:35Z)
- Fair SA: Sensitivity Analysis for Fairness in Face Recognition [1.7149364927872013]
We propose a generic framework for fairness evaluation based on robustness.
We analyze the performance of common face recognition models and empirically show that certain subgroups are at a disadvantage when images are perturbed (a toy robustness-auditing sketch follows this entry).
arXiv Detail & Related papers (2022-02-08T01:16:09Z)
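A toy illustration of the robustness-based fairness idea from the entry above: perturb inputs and compare how accuracy degrades per subgroup. The model, the perturbation, and the subgroup labels are placeholders, not the paper's actual framework.

```python
# Toy illustration of robustness-based fairness auditing: perturb each image,
# then compare how recognition accuracy drops per subgroup. `model`, `perturb`,
# and the subgroup labels are placeholders, not the framework from the paper.
import numpy as np

def subgroup_accuracy_drop(model, images, labels, groups, perturb, rng=None):
    """Return {group: (clean_acc, perturbed_acc)} for a perturbation function."""
    rng = rng or np.random.default_rng(0)
    clean_pred = model.predict(images)
    pert_pred = model.predict(np.stack([perturb(img, rng) for img in images]))
    report = {}
    for g in np.unique(groups):
        m = groups == g
        report[g] = (float((clean_pred[m] == labels[m]).mean()),
                     float((pert_pred[m] == labels[m]).mean()))
    return report

def gaussian_noise(img, rng, sigma=0.05):
    """Simple placeholder perturbation: additive Gaussian pixel noise."""
    return np.clip(img + rng.normal(0.0, sigma, size=img.shape), 0.0, 1.0)

# A subgroup whose accuracy degrades much faster than the others under the
# same perturbation is flagged as disadvantaged in this robustness-based sense.
```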
- Quantified Facial Expressiveness for Affective Behavior Analytics [0.0]
We propose an algorithm that quantifies facial expressiveness with a bounded, continuous expressiveness score computed from multimodal facial features.
The proposed algorithm can compute expressiveness in terms of discrete expressions and can be used for tasks including facial behavior tracking and subjectivity in context.
arXiv Detail & Related papers (2021-10-05T00:21:33Z)
- Progressive Spatio-Temporal Bilinear Network with Monte Carlo Dropout for Landmark-based Facial Expression Recognition with Uncertainty Estimation [93.73198973454944]
The performance of our method is evaluated on three widely used datasets; it is comparable to that of video-based state-of-the-art methods while having much lower complexity (a Monte Carlo dropout sketch follows this entry).
arXiv Detail & Related papers (2021-06-08T13:40:30Z)
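A minimal sketch of the Monte Carlo dropout mechanism referenced in the entry above: dropout stays active at inference, and the spread of repeated stochastic predictions serves as an uncertainty estimate. The small landmark classifier is a placeholder, not the paper's progressive spatio-temporal bilinear network.

```python
# Minimal sketch of Monte Carlo dropout for uncertainty estimation: keep
# dropout active at test time and treat the spread of repeated stochastic
# forward passes as predictive uncertainty. The small MLP over flattened
# landmark sequences is a placeholder model.
import torch
import torch.nn as nn

class LandmarkClassifier(nn.Module):
    def __init__(self, in_dim=136 * 16, n_classes=7, p_drop=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=30):
    """Average softmax over stochastic passes; the std is an uncertainty proxy."""
    model.train()  # keep dropout layers active at inference
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)

model = LandmarkClassifier()
x = torch.randn(4, 136 * 16)  # 4 clips, 16 frames of 68 (x, y) landmarks each
mean_probs, uncertainty = mc_dropout_predict(model, x)
```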
- Unsupervised low-rank representations for speech emotion recognition [78.38221758430244]
We examine the use of linear and non-linear dimensionality reduction algorithms for extracting low-rank feature representations for speech emotion recognition.
We report speech emotion recognition (SER) results for learned representations on two databases using different classification methods (a minimal PCA-based sketch follows this entry).
arXiv Detail & Related papers (2021-04-14T18:30:58Z)
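A minimal sketch of one linear option from the entry above: PCA for low-rank utterance-level features followed by a standard classifier. The random arrays stand in for real acoustic features and emotion labels; kernel PCA or an autoencoder could replace PCA as a non-linear alternative.

```python
# Minimal sketch of a linear low-rank pipeline for speech emotion recognition:
# reduce high-dimensional utterance-level acoustic features with PCA, then
# classify with an SVM. Placeholder random arrays stand in for real data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 384))    # e.g. openSMILE-style utterance descriptors
y = rng.integers(0, 4, size=500)   # e.g. angry / happy / neutral / sad

clf = make_pipeline(StandardScaler(), PCA(n_components=40), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```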
- Loss Bounds for Approximate Influence-Based Abstraction [81.13024471616417]
Influence-based abstraction aims to gain leverage by modeling local subproblems together with the 'influence' that the rest of the system exerts on them.
This paper investigates the performance of such approaches from a theoretical perspective.
We show that neural networks trained with cross entropy are well suited to learn approximate influence representations.
arXiv Detail & Related papers (2020-11-03T15:33:10Z)
- Unsupervised Learning Facial Parameter Regressor for Action Unit Intensity Estimation via Differentiable Renderer [51.926868759681014]
We present a framework to predict the facial parameters based on a bone-driven face model (BDFM) under different views.
The proposed framework consists of a feature extractor, a generator, and a facial parameter regressor.
arXiv Detail & Related papers (2020-08-20T09:49:13Z)
- Deep Dimension Reduction for Supervised Representation Learning [51.10448064423656]
We propose a deep dimension reduction approach to learning representations with essential characteristics.
The proposed approach is a nonparametric generalization of the sufficient dimension reduction method.
We show that the estimated deep nonparametric representation is consistent in the sense that its excess risk converges to zero.
arXiv Detail & Related papers (2020-06-10T14:47:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.