Unsupervised learning of Data-driven Facial Expression Coding System (DFECS) using keypoint tracking
- URL: http://arxiv.org/abs/2406.05434v1
- Date: Sat, 8 Jun 2024 10:45:38 GMT
- Title: Unsupervised learning of Data-driven Facial Expression Coding System (DFECS) using keypoint tracking
- Authors: Shivansh Chandra Tripathi, Rahul Garg
- Abstract summary: We propose the unsupervised learning of an automated facial coding system by leveraging computer-vision-based facial keypoint tracking.
Results show that DFECS AUs estimated from the DISFA dataset can account for an average variance of up to 91.29 percent in test datasets.
87.5 percent of DFECS AUs are interpretable, i.e., align with the direction of facial muscle movements.
- Score: 3.0605062268685868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development of existing facial coding systems, such as the Facial Action Coding System (FACS), relied on manual examination of facial expression videos for defining Action Units (AUs). To overcome the labor-intensive nature of this process, we propose the unsupervised learning of an automated facial coding system by leveraging computer-vision-based facial keypoint tracking. In this novel facial coding system called the Data-driven Facial Expression Coding System (DFECS), the AUs are estimated by applying dimensionality reduction to facial keypoint movements from a neutral frame through a proposed Full Face Model (FFM). FFM employs a two-level decomposition using advanced dimensionality reduction techniques such as dictionary learning (DL) and non-negative matrix factorization (NMF). These techniques enhance the interpretability of AUs by introducing constraints such as sparsity and positivity to the encoding matrix. Results show that DFECS AUs estimated from the DISFA dataset can account for an average variance of up to 91.29 percent in test datasets (CK+ and BP4D-Spontaneous) and also surpass the variance explained by keypoint-based equivalents of FACS AUs in these datasets. Additionally, 87.5 percent of DFECS AUs are interpretable, i.e., align with the direction of facial muscle movements. In summary, advancements in automated facial coding systems can accelerate facial expression analysis across diverse fields such as security, healthcare, and entertainment. These advancements offer numerous benefits, including enhanced detection of abnormal behavior, improved pain analysis in healthcare settings, and enriched emotion-driven interactions. To facilitate further research, the code repository of DFECS has been made publicly accessible.
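Since the abstract only names the building blocks (keypoint displacements from a neutral frame, dictionary learning, and NMF with sparsity and positivity constraints on the encoding matrix), a minimal sketch of such a two-level decomposition is given below. It uses scikit-learn's DictionaryLearning and NMF as stand-ins for the paper's DL and NMF steps; the function name fit_ffm, the shift that makes the codes non-negative, and all hyperparameters are illustrative assumptions, not the authors' released implementation.
```python
# Minimal sketch of a two-level keypoint decomposition in the spirit of DFECS/FFM.
# NOT the authors' released code; names and hyperparameters are assumptions.
import numpy as np
from sklearn.decomposition import DictionaryLearning, NMF

def fit_ffm(displacements, n_dl_atoms=16, n_nmf_components=8):
    """displacements: (n_frames, 2 * n_keypoints) array of (x, y) keypoint
    offsets from a neutral frame. Returns candidate AU directions."""
    # Level 1: sparse dictionary learning -- each atom is a global displacement
    # pattern, and the sparse codes let each frame be explained by few atoms.
    dl = DictionaryLearning(n_components=n_dl_atoms,
                            transform_algorithm="lasso_lars",
                            alpha=1.0, random_state=0)
    codes = dl.fit_transform(displacements)   # sparse encoding matrix
    atoms = dl.components_                    # (n_dl_atoms, 2 * n_keypoints)

    # Level 2: NMF over the (shifted) codes adds a positivity constraint on the
    # encoding, the property the abstract credits for improved interpretability.
    codes_nonneg = codes - codes.min()        # NMF requires non-negative input
    nmf = NMF(n_components=n_nmf_components, init="nndsvda",
              max_iter=500, random_state=0)
    activations = nmf.fit_transform(codes_nonneg)
    # Map the second-level components back to keypoint space to inspect them as AUs.
    au_directions = nmf.components_ @ atoms   # (n_nmf_components, 2 * n_keypoints)
    return atoms, au_directions, activations

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(scale=0.01, size=(500, 68 * 2))  # synthetic stand-in for tracked keypoints
    atoms, aus, acts = fit_ffm(X)
    print(atoms.shape, aus.shape, acts.shape)
```
In this reading, the first-level atoms capture sparse global displacement patterns and the second, non-negative level re-expresses them so that each candidate AU contributes additively, which is the kind of constraint the abstract associates with interpretable AUs.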
Related papers
- Discrete Facial Encoding: A Framework for Data-driven Facial Display Discovery [6.096726247356906]
We introduce Discrete Facial Encoding, an unsupervised, data-driven alternative to FACS based on a compact and interpretable dictionary of facial expressions.
Our system consistently outperforms both FACS-based pipelines and strong image and video representation learning models.
Our representation covers a wider variety of facial displays, highlighting its potential as a scalable and effective alternative to FACS for psychological and affective computing applications.
arXiv Detail & Related papers (2025-10-02T04:44:45Z) - A Deep Learning Approach for Facial Attribute Manipulation and Reconstruction in Surveillance and Reconnaissance [5.980822697955566]
Surveillance systems play a critical role in security and reconnaissance, but their performance is often compromised by low-quality images and videos.
Existing AI-based facial analysis models suffer from biases related to skin tone variations and partially occluded faces.
We propose a data-driven platform that enhances surveillance capabilities by generating synthetic training data tailored to compensate for dataset biases.
arXiv Detail & Related papers (2025-06-06T23:09:17Z) - Beyond FACS: Data-driven Facial Expression Dictionaries, with Application to Predicting Autism [3.0274846041592864]
The Facial Action Coding System (FACS) has been used by numerous studies to investigate the links between facial behavior and mental health.
Despite intense efforts spanning three decades, the detection accuracy for many Action Units is considered to be below the threshold needed for behavioral research.
This paper proposes a new coding system that mimics the key properties of FACS.
arXiv Detail & Related papers (2025-05-30T15:06:01Z) - Disentangled Source-Free Personalization for Facial Expression Recognition with Neutral Target Data [49.25159192831934]
Source-free domain adaptation (SFDA) methods are employed to adapt a pre-trained source model using only unlabeled target domain data.
This paper introduces the Disentangled Source-Free Domain Adaptation (DSFDA) method to address the SFDA challenge posed by missing target expression data.
Our method learns to disentangle features related to expressions and identity while generating the missing non-neutral target data.
arXiv Detail & Related papers (2025-03-26T17:53:53Z) - A PCA based Keypoint Tracking Approach to Automated Facial Expressions Encoding [3.0605062268685868]
This paper explores the use of automated techniques to generate Action Units (AUs) for studying facial expressions.
We propose an unsupervised approach based on Principal Component Analysis (PCA) and facial keypoint tracking to generate data-driven AUs; a minimal sketch of this idea appears after the related-papers list.
arXiv Detail & Related papers (2024-06-13T11:40:26Z) - Emotic Masked Autoencoder with Attention Fusion for Facial Expression Recognition [1.4374467687356276]
This paper presents an innovative approach integrating the MAE-Face self-supervised learning (SSL) method and multi-view Fusion Attention mechanism for expression classification.
We suggest easy-to-implement, training-free frameworks aimed at highlighting key facial features, in order to determine whether such features can serve as guides for the model.
The efficacy of this method is validated by improvements in model performance on the Aff-wild2 dataset.
arXiv Detail & Related papers (2024-03-19T16:21:47Z) - SAFER: Situation Aware Facial Emotion Recognition [0.0]
We present SAFER, a novel system for emotion recognition from facial expressions.
It employs state-of-the-art deep learning techniques to extract various features from facial images.
It can adapt to unseen and varied facial expressions, making it suitable for real-world applications.
arXiv Detail & Related papers (2023-06-14T20:42:26Z) - A Survey on Computer Vision based Human Analysis in the COVID-19 Era [58.79053747159797]
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals.
Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications.
These developments triggered the need for novel and improved computer vision techniques capable of (i) providing support to the prevention measures through an automated analysis of visual data, on the one hand, and (ii) facilitating normal operation of existing vision-based services, such as biometric authentication
arXiv Detail & Related papers (2022-11-07T17:20:39Z) - Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z) - CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six different datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - Self-supervised Contrastive Learning of Multi-view Facial Expressions [9.949781365631557]
Facial expression recognition (FER) has emerged as an important component of human-computer interaction systems.
We propose Contrastive Learning of Multi-view facial Expressions (CL-MEx) to exploit facial images captured simultaneously from different angles towards FER.
arXiv Detail & Related papers (2021-08-15T11:23:34Z) - A Computer Vision System to Help Prevent the Transmission of COVID-19 [79.62140902232628]
The COVID-19 pandemic affects every area of daily life globally.
Health organizations advise social distancing, wearing face masks, and avoiding touching the face.
We developed a deep learning-based computer vision system to help prevent the transmission of COVID-19.
arXiv Detail & Related papers (2021-03-16T00:00:04Z) - Unsupervised Learning Facial Parameter Regressor for Action Unit Intensity Estimation via Differentiable Renderer [51.926868759681014]
We present a framework to predict the facial parameters based on a bone-driven face model (BDFM) under different views.
The proposed framework consists of a feature extractor, a generator, and a facial parameter regressor.
arXiv Detail & Related papers (2020-08-20T09:49:13Z) - Joint Deep Learning of Facial Expression Synthesis and Recognition [97.19528464266824]
We propose a novel joint deep learning of facial expression synthesis and recognition method for effective FER.
The proposed method involves a two-stage learning procedure. Firstly, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions.
In order to alleviate the problem of data bias between the real images and the synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm.
arXiv Detail & Related papers (2020-02-06T10:56:00Z)
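The PCA-based related paper above (2024-06-13) describes a simpler, single-level variant of the same keypoint-to-AU idea. A minimal sketch under the same assumptions (scikit-learn, keypoint offsets from a neutral frame; variable names are illustrative, not the paper's released code):
```python
# Minimal sketch of PCA-based data-driven AUs from tracked keypoints; assumes
# scikit-learn. Names and neutral-frame handling are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

def pca_action_units(keypoints, neutral, n_units=10):
    """keypoints: (n_frames, n_keypoints, 2); neutral: (n_keypoints, 2)."""
    disp = (keypoints - neutral).reshape(len(keypoints), -1)  # offsets from the neutral frame
    pca = PCA(n_components=n_units)
    activations = pca.fit_transform(disp)  # per-frame intensities of each data-driven AU
    units = pca.components_                # each row is one AU as a keypoint-displacement direction
    explained = pca.explained_variance_ratio_.sum()
    return units, activations, explained

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(300, 68, 2))  # synthetic stand-in for tracked keypoints
    neutral = frames[0]
    units, acts, var = pca_action_units(frames, neutral)
    print(units.shape, acts.shape, round(var, 3))
```
The summed explained_variance_ratio_ is the same style of variance-explained metric quoted in the main abstract above.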
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.