AFFDEX 2.0: A Real-Time Facial Expression Analysis Toolkit
- URL: http://arxiv.org/abs/2202.12059v1
- Date: Thu, 24 Feb 2022 12:27:49 GMT
- Title: AFFDEX 2.0: A Real-Time Facial Expression Analysis Toolkit
- Authors: Mina Bishay, Kenneth Preston, Matthew Strafuss, Graham Page, Jay
Turcot and Mohammad Mavadati
- Abstract summary: AFFDEX 2.0 is a toolkit for analyzing facial expressions in the wild.
It can estimate the 3D head pose, detect facial Action Units (AUs), and recognize basic emotions plus two new emotional states (sentimentality and confusion).
AFFDEX 2.0 can process multiple faces in real time and runs on both Windows and Linux platforms.
- Score: 1.076535942003539
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we introduce AFFDEX 2.0, a toolkit for analyzing facial
expressions in the wild. It is intended for users aiming to: a) estimate the
3D head pose, b) detect facial Action Units (AUs), c) recognize basic emotions
and two new emotional states (sentimentality and confusion), and d) detect
high-level expressive metrics such as blink and attention. The AFFDEX 2.0
models are mainly based on deep learning and are trained on a large-scale
naturalistic dataset comprising thousands of participants from different
demographic groups. AFFDEX 2.0 is an enhanced version of our previous toolkit
[1]: it tracks faces more efficiently under more challenging conditions,
detects facial expressions more accurately, and recognizes the new emotional
states (sentimentality and confusion). AFFDEX 2.0 can process multiple faces
in real time and runs on both Windows and Linux.
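The abstract lists the toolkit's outputs but not its programming interface. The sketch below shows how a real-time loop around such a toolkit might look in Python; `FaceMetrics` and `analyze_frame` are hypothetical stand-ins, not the actual AFFDEX 2.0 SDK API.

```python
# Hedged sketch: the abstract names AFFDEX 2.0's outputs (3D head pose, AUs,
# basic emotions, sentimentality/confusion, blink/attention) but not its API.
# Every name below is an illustrative assumption, not the real SDK interface.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class FaceMetrics:
    head_pose: Dict[str, float]     # e.g. pitch/yaw/roll in degrees (assumed)
    action_units: Dict[str, float]  # AU name -> intensity in [0, 1]
    emotions: Dict[str, float]      # incl. sentimentality and confusion
    blink: bool
    attention: float                # 0..1 attention estimate

def analyze_frame(frame) -> List[FaceMetrics]:
    """Placeholder for a real detector call; AFFDEX 2.0 supports multiple
    faces per frame, so one entry per tracked face would be returned."""
    return []

if __name__ == "__main__":
    import cv2                          # OpenCV webcam capture for the demo
    cap = cv2.VideoCapture(0)
    for _ in range(300):                # bounded demo loop
        ok, frame = cap.read()
        if not ok:
            break
        for face in analyze_frame(frame):
            print(face.head_pose, face.emotions)
    cap.release()
```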
Related papers
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and are not limited to forgery-specific artifacts, and thus generalize better.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos [88.08209394979178]
Dynamic facial expression recognition (DFER) in the wild is still hindered by data limitations.
We introduce a novel Static-to-Dynamic model (S2D) that leverages existing knowledge from static facial expression recognition (SFER) and dynamic information implicitly encoded in extracted facial landmark-aware features.
arXiv Detail & Related papers (2023-12-09T03:16:09Z)
- LibreFace: An Open-Source Toolkit for Deep Facial Expression Analysis [7.185007035384591]
We introduce LibreFace, an open-source toolkit for facial expression analysis.
It offers real-time and offline analysis of facial behavior through deep learning models.
Our model also demonstrates competitive performance with state-of-the-art facial expression analysis methods.
arXiv Detail & Related papers (2023-08-18T00:33:29Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory AdaptatiOn (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six different datasets, each with a distinct affective representation.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Emotion Separation and Recognition from a Facial Expression by Generating the Poker Face with Vision Transformers [57.1091606948826]
We propose a novel FER model, named Poker Face Vision Transformer or PF-ViT, to address these challenges.
PF-ViT aims to separate and recognize the disturbance-agnostic emotion from a static facial image by generating its corresponding poker face.
PF-ViT utilizes vanilla Vision Transformers, and its components are pre-trained as Masked Autoencoders on a large facial expression dataset.
arXiv Detail & Related papers (2022-07-22T13:39:06Z)
- Towards a General Deep Feature Extractor for Facial Expression Recognition [5.012963825796511]
We propose a new deep learning-based approach that learns a visual feature extractor general enough to be applied to any other facial emotion recognition task or dataset.
The resulting model, DeepFEVER, outperforms state-of-the-art results on the AffectNet and Google Facial Expression Comparison datasets.
arXiv Detail & Related papers (2022-01-19T18:42:23Z)
- A Multi-resolution Approach to Expression Recognition in the Wild [9.118706387430883]
We propose a multi-resolution approach to solve the Facial Expression Recognition task.
We ground our intuition on the observation that face images are often acquired at different resolutions.
To this end, we use a ResNet-like architecture equipped with Squeeze-and-Excitation blocks, trained on the Affect-in-the-Wild 2 dataset (a minimal SE-block sketch follows this list).
arXiv Detail & Related papers (2021-03-09T21:21:02Z)
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of face muscle movements.
We determine whether there are time-related differences in expressions among emotional groups by using a functional F-test (a pointwise approximation is sketched after this list).
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
- Real-time Facial Expression Recognition "In The Wild" by Disentangling 3D Expression from Identity [6.974241731162878]
This paper proposes a novel method for human emotion recognition from a single RGB image.
We construct a large-scale dataset of facial videos, rich in facial dynamics, identities, expressions, appearance and 3D pose variations.
Our proposed framework runs at 50 frames per second and is capable of robustly estimating parameters of 3D expression variation.
arXiv Detail & Related papers (2020-05-12T01:32:55Z)
- Learning to Augment Expressions for Few-shot Fine-grained Facial Expression Recognition [98.83578105374535]
We present a novel Fine-grained Facial Expression Database - F2ED.
It includes more than 200k images with 54 facial expressions from 119 persons.
Considering that uneven data distribution and a lack of samples are common in real-world scenarios, we evaluate several few-shot expression learning tasks.
We propose a unified task-driven framework, Compositional Generative Adversarial Network (Comp-GAN), that learns to synthesize facial images.
arXiv Detail & Related papers (2020-01-17T03:26:32Z)
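The multi-resolution entry above references Squeeze-and-Excitation blocks. As a point of reference, here is a minimal sketch of a standard SE block in PyTorch; the channel count and reduction ratio are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of a Squeeze-and-Excitation (SE) block: globally pool each
# channel, pass the summary through a small gating MLP, and reweight the
# feature maps channel-wise. Sizes below are illustrative only.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)        # global spatial average
        self.excite = nn.Sequential(                  # channel gating MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)                # (B, C) channel summary
        w = self.excite(w).view(b, c, 1, 1)           # per-channel weights
        return x * w                                  # reweight feature maps

# Usage: typically inserted after a residual block's convolutions.
feats = torch.randn(2, 64, 32, 32)
out = SEBlock(64)(feats)
assert out.shape == feats.shape
```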
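The functional-statistics entry above applies a functional F-test. A true functional ANOVA operates on smoothed curves; the sketch below substitutes a simple pointwise one-way ANOVA over synthetic AU-intensity curves to illustrate the idea, and is not the paper's method.

```python
# Hedged sketch of the idea behind a functional F-test: compare curves of a
# facial-muscle signal across emotion groups. As a crude pointwise stand-in,
# run a one-way ANOVA at each time step. Data here are synthetic.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
T = 100  # time steps per video
# Three emotion groups, 20 curves each: AU intensity over time (synthetic).
groups = [rng.normal(loc=mu, scale=0.3, size=(20, T))
          for mu in (0.0, 0.1, 0.4)]

# Pointwise F statistic: F(t) is large where group means differ at time t.
F_t = np.array([f_oneway(*(g[:, t] for g in groups)).statistic
                for t in range(T)])
print("mean F over time:", F_t.mean())
```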