Modeling Dynamics of Facial Behavior for Mental Health Assessment
- URL: http://arxiv.org/abs/2108.09934v1
- Date: Mon, 23 Aug 2021 05:08:45 GMT
- Title: Modeling Dynamics of Facial Behavior for Mental Health Assessment
- Authors: Minh Tran, Ellen Bradley, Michelle Matvey, Joshua Woolley, Mohammad
Soleymani
- Abstract summary: We explore the possibility of representing the dynamics of facial expressions by adopting algorithms used for word representation in natural language processing.
We perform clustering on a large dataset of temporal facial expressions with 5.3M frames before applying the Global Vector representation (GloVe) algorithm to learn the embeddings of the facial clusters.
We evaluate the usefulness of our learned representations on two downstream tasks: schizophrenia symptom estimation and depression severity regression.
- Score: 4.130361751085622
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Facial action unit (FAU) intensities are popular descriptors for the analysis
of facial behavior. However, FAUs are sparsely represented when only a few are
activated at a time. In this study, we explore the possibility of representing
the dynamics of facial expressions by adopting algorithms used for word
representation in natural language processing. Specifically, we perform
clustering on a large dataset of temporal facial expressions with 5.3M frames
before applying the Global Vector representation (GloVe) algorithm to learn the
embeddings of the facial clusters. We evaluate the usefulness of our learned
representations on two downstream tasks: schizophrenia symptom estimation and
depression severity regression. Experimental results show the potential
effectiveness of our approach for improving the assessment of mental health
symptoms over baseline models that use FAU intensities alone.
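The two-stage pipeline described in the abstract (quantize frames into facial clusters, then learn GloVe embeddings from cluster co-occurrences in time) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature dimensionality, cluster count, context window, and GloVe hyperparameters are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for per-frame facial descriptors (e.g., FAU intensities).
# 500 frames x 17 AUs is purely illustrative; the paper uses 5.3M frames.
frames = rng.random((500, 17))

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: assign each frame to one of k facial clusters."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():  # skip empty clusters
                centers[j] = X[labels == j].mean(0)
    return labels

def cooccurrence(tokens, vocab, window=4):
    """Symmetric co-occurrence counts over the temporal cluster-ID sequence."""
    C = np.zeros((vocab, vocab))
    for i, t in enumerate(tokens):
        for j in range(max(0, i - window), i):
            C[t, tokens[j]] += 1.0
            C[tokens[j], t] += 1.0
    return C

def glove(C, dim=8, epochs=200, lr=0.05, xmax=100.0, alpha=0.75, seed=0):
    """Tiny GloVe: fit w_i . w'_j + b_i + b'_j ~ log X_ij by gradient descent."""
    r = np.random.default_rng(seed)
    V = C.shape[0]
    W = r.normal(0, 0.1, (V, dim))
    Wc = r.normal(0, 0.1, (V, dim))
    b = np.zeros(V)
    bc = np.zeros(V)
    ii, jj = np.nonzero(C)
    x = C[ii, jj]
    f = np.minimum((x / xmax) ** alpha, 1.0)  # GloVe weighting function
    logx = np.log(x)
    for _ in range(epochs):
        err = (W[ii] * Wc[jj]).sum(1) + b[ii] + bc[jj] - logx
        g = f * err
        gW, gWc = g[:, None] * Wc[jj], g[:, None] * W[ii]
        np.add.at(W, ii, -lr * gW)
        np.add.at(Wc, jj, -lr * gWc)
        np.add.at(b, ii, -lr * g)
        np.add.at(bc, jj, -lr * g)
    return W + Wc  # sum the two embedding sets, as in the GloVe paper

k = 10  # number of facial clusters (hypothetical; not the paper's setting)
labels = kmeans(frames, k)
embeddings = glove(cooccurrence(labels, k))
print(embeddings.shape)  # (10, 8): one embedding per facial cluster
```

Once learned, each frame's cluster ID can be replaced by its embedding, turning a video into a dense sequence suitable for the downstream severity-estimation models.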
Related papers
- Faces of the Mind: Unveiling Mental Health States Through Facial Expressions in 11,427 Adolescents [12.51443153354506]
Mood disorders, including depression and anxiety, often manifest through facial expressions.
We analyzed facial videos of 11,427 participants, a dataset two orders of magnitude larger than previous studies.
arXiv Detail & Related papers (2024-05-30T14:02:40Z)
- Robust Light-Weight Facial Affective Behavior Recognition with CLIP [12.368133562194267]
Human affective behavior analysis aims to delve into human expressions and behaviors to deepen our understanding of human emotions.
Existing approaches in expression classification and AU detection often necessitate complex models and substantial computational resources.
We introduce the first lightweight framework adept at efficiently tackling both expression classification and AU detection.
arXiv Detail & Related papers (2024-03-14T23:21:55Z)
- EmoCLIP: A Vision-Language Method for Zero-Shot Video Facial Expression Recognition [10.411186945517148]
We propose a novel vision-language model that uses sample-level text descriptions as natural language supervision.
Our findings show that this approach yields significant improvements when compared to baseline methods.
We evaluate the representations obtained from the network trained using sample-level descriptions on the downstream task of mental health symptom estimation.
arXiv Detail & Related papers (2023-10-25T13:43:36Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to capture specific affective characteristics of different datasets.
CIAO improves facial expression recognition performance over six datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Frame-level Prediction of Facial Expressions, Valence, Arousal and Action Units for Mobile Devices [7.056222499095849]
We propose a novel frame-level emotion recognition algorithm that extracts facial features with a single EfficientNet model pre-trained on AffectNet.
Our approach is efficient enough to be deployed for video analytics even on mobile devices.
arXiv Detail & Related papers (2022-03-25T03:53:27Z)
- Learning Personal Representations from fMRI by Predicting Neurofeedback Performance [52.77024349608834]
We present a deep neural network method for learning a personal representation for individuals performing a self-neuromodulation task, guided by functional MRI (fMRI).
The representation is learned by a self-supervised recurrent neural network that predicts the amygdala activity in the next fMRI frame given recent fMRI frames, conditioned on the learned individual representation.
arXiv Detail & Related papers (2021-12-06T10:16:54Z)
- Quantified Facial Expressiveness for Affective Behavior Analytics [0.0]
We propose an algorithm that quantifies facial expressiveness with a bounded, continuous expressiveness score computed from multimodal facial features.
The proposed algorithm can compute the expressiveness in terms of discrete expression, and can be used to perform tasks including facial behavior tracking and subjectivity in context.
arXiv Detail & Related papers (2021-10-05T00:21:33Z)
- FP-Age: Leveraging Face Parsing Attention for Facial Age Estimation in the Wild [50.8865921538953]
We propose a method to explicitly incorporate facial semantics into age estimation.
We design a face parsing-based network to learn semantic information at different scales.
We show that our method consistently outperforms all existing age estimation methods.
arXiv Detail & Related papers (2021-06-21T14:31:32Z)
- The FaceChannel: A Fast & Furious Deep Neural Network for Facial Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train.
We formalize the FaceChannel, a light-weight neural network that has much fewer parameters than common deep neural networks.
We demonstrate how our model achieves a comparable, if not better, performance to the current state-of-the-art in FER.
arXiv Detail & Related papers (2020-09-15T09:25:37Z)
- Unsupervised Learning Facial Parameter Regressor for Action Unit Intensity Estimation via Differentiable Renderer [51.926868759681014]
We present a framework to predict the facial parameters based on a bone-driven face model (BDFM) under different views.
The proposed framework consists of a feature extractor, a generator, and a facial parameter regressor.
arXiv Detail & Related papers (2020-08-20T09:49:13Z)
- Learning to Augment Expressions for Few-shot Fine-grained Facial Expression Recognition [98.83578105374535]
We present a novel Fine-grained Facial Expression Database - F2ED.
It includes more than 200k images with 54 facial expressions from 119 persons.
Since uneven data distribution and sample scarcity are common in real-world scenarios, we evaluate several few-shot expression learning tasks.
We propose a unified task-driven framework - Compositional Generative Adversarial Network (Comp-GAN) learning to synthesize facial images.
arXiv Detail & Related papers (2020-01-17T03:26:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.