Hugging Rain Man: A Novel Facial Action Units Dataset for Analyzing Atypical Facial Expressions in Children with Autism Spectrum Disorder
- URL: http://arxiv.org/abs/2411.13797v1
- Date: Thu, 21 Nov 2024 02:51:52 GMT
- Title: Hugging Rain Man: A Novel Facial Action Units Dataset for Analyzing Atypical Facial Expressions in Children with Autism Spectrum Disorder
- Authors: Yanfeng Ji, Shutong Wang, Ruyi Xu, Jingying Chen, Xinzhou Jiang, Zhengyu Deng, Yuxuan Quan, Junpeng Liu
- Abstract summary: We introduce a novel dataset, Hugging Rain Man, which includes facial action units (AUs) manually annotated by FACS experts for both children with ASD and typically developing (TD) children.
The dataset comprises a rich collection of posed and spontaneous facial expressions, totaling approximately 130,000 frames, along with 22 AUs, 10 Action Descriptors (ADs), and atypicality ratings.
- Score: 2.3001245059699014
- Abstract: Children with Autism Spectrum Disorder (ASD) often exhibit atypical facial expressions. However, the specific objective facial features that underlie this subjective perception remain unclear. In this paper, we introduce a novel dataset, Hugging Rain Man (HRM), which includes facial action units (AUs) manually annotated by FACS experts for both children with ASD and typically developing (TD) children. The dataset comprises a rich collection of posed and spontaneous facial expressions, totaling approximately 130,000 frames, along with 22 AUs, 10 Action Descriptors (ADs), and atypicality ratings. A statistical analysis of static images from the HRM dataset reveals significant differences between the ASD and TD groups across multiple AUs and ADs when displaying the same emotional expressions, confirming that participants with ASD tend to demonstrate more irregular and diverse expression patterns. Subsequently, a temporal regression method is presented to analyze the atypicality of dynamic sequences, thereby bridging the gap between subjective perception and objective facial characteristics. Furthermore, baseline results for AU detection are provided for future research reference. This work not only contributes to our understanding of the unique facial expression characteristics associated with ASD but also provides potential tools for early ASD screening. Portions of the dataset, features, and pretrained models are accessible at: https://github.com/Jonas-DL/Hugging-Rain-Man
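For readers who want a feel for the static-image analysis, the sketch below runs one per-AU group comparison between ASD and TD frames. It is an illustrative outline only: the file name, column layout, and the choice of chi-square tests with a Bonferroni correction are assumptions, not the authors' published pipeline.

```python
# Illustrative sketch (not the authors' exact analysis): compare per-frame
# AU occurrence between ASD and TD groups, one test per AU, with a
# Bonferroni correction. The CSV name and column layout are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

AUS = ["AU1", "AU2", "AU4", "AU6", "AU12"]  # subset of the 22 annotated AUs

df = pd.read_csv("hrm_static_annotations.csv")  # hypothetical export: one row per frame
alpha = 0.05 / len(AUS)  # Bonferroni-adjusted significance threshold

for au in AUS:
    # 2x2 contingency table: group (ASD/TD) x AU active (False/True)
    table = pd.crosstab(df["group"], df[au] > 0)
    chi2, p, _, _ = chi2_contingency(table)
    flag = "significant" if p < alpha else "n.s."
    print(f"{au}: chi2={chi2:.2f}, p={p:.4g} ({flag})")
```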
Related papers
- Exploring Gaze Pattern in Autistic Children: Clustering, Visualization, and Prediction [9.251838958621684]
We propose a novel method to automatically analyze gaze behaviors in ASD children with superior accuracy.
We first apply and optimize seven clustering algorithms to automatically group gaze points to compare ASD subjects with typically developing peers.
Lastly, using these features as prior knowledge, we train multiple predictive machine learning models to predict and diagnose ASD based on their gaze behaviors.
arXiv Detail & Related papers (2024-09-18T06:56:06Z)
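A minimal sketch of the two-stage idea in the entry above: cluster gaze points, then classify subjects from cluster-occupancy features. The data shapes, cluster count, and random-forest classifier are placeholders, not the paper's settings.

```python
# Hypothetical data layout, not the paper's code: cluster fixation points,
# then use per-subject cluster-occupancy histograms as ASD/TD features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
gaze = rng.random((200, 50, 2))      # 200 subjects x 50 gaze points (x, y)
labels = rng.integers(0, 2, 200)     # 1 = ASD, 0 = TD (synthetic stand-in)

kmeans = KMeans(n_clusters=7, n_init=10, random_state=0)
kmeans.fit(gaze.reshape(-1, 2))      # cluster all gaze points jointly

# Per-subject feature vector: how often each cluster is visited.
feats = np.stack([
    np.bincount(kmeans.predict(subj), minlength=7) for subj in gaze
])
print(cross_val_score(RandomForestClassifier(random_state=0), feats, labels, cv=5))
```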
- Ensemble Modeling of Multiple Physical Indicators to Dynamically Phenotype Autism Spectrum Disorder [3.6630139570443996]
We provide a dataset for training computer vision models to detect Autism Spectrum Disorder (ASD)-related phenotypic markers.
We trained individual LSTM-based models using eye gaze, head positions, and facial landmarks as input features, achieving test AUCs of 86%, 67%, and 78%, respectively.
arXiv Detail & Related papers (2024-08-23T17:55:58Z)
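The following sketch shows one per-modality sequence model of the kind the ensemble above uses, with probabilities averaged across modalities. Dimensions, hidden sizes, and the averaging rule are illustrative assumptions, not the paper's architecture.

```python
# One per-modality LSTM classifier plus a simple probability-averaging
# ensemble; all sizes are illustrative placeholders.
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, time, feat_dim)
        _, (h_n, _) = self.lstm(x)
        return torch.sigmoid(self.head(h_n[-1]))   # (batch, 1) probability

gaze_model = SequenceClassifier(feat_dim=2)        # (x, y) gaze coordinates
landmark_model = SequenceClassifier(feat_dim=136)  # 68 landmarks x (x, y)

x_gaze = torch.randn(4, 100, 2)
x_lmk = torch.randn(4, 100, 136)
# Ensemble by averaging per-modality probabilities (one simple option).
p = (gaze_model(x_gaze) + landmark_model(x_lmk)) / 2
print(p.squeeze(1))
```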
- Contrastive Learning of Person-independent Representations for Facial Action Unit Detection [70.60587475492065]
We formulate the self-supervised AU representation learning signals in two ways.
We contrastively learn the AU representation within a video clip and devise a cross-identity reconstruction mechanism to learn person-independent representations.
Our method outperforms other contrastive learning methods and significantly closes the performance gap between the self-supervised and supervised AU detection approaches.
arXiv Detail & Related papers (2024-03-06T01:49:28Z)
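As a rough illustration of the contrastive signal such self-supervised AU methods build on, here is a toy InfoNCE-style loss. It is a generic formulation, not the paper's exact objective or its cross-identity reconstruction term: embeddings of paired frames from the same clip are pulled together, frames from other clips pushed apart.

```python
# Generic InfoNCE-style contrastive loss (toy sketch, not the paper's code).
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of paired frames from the same clips."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature    # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))    # positive pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce(z1, z2))
```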
- Screening Autism Spectrum Disorder in childrens using Deep Learning Approach : Evaluating the classification model of YOLOv8 by comparing with other models [0.0]
We propose a practical solution for ASD screening from facial images using the YOLOv8 model.
Our model achieved a remarkable 89.64% accuracy in classification and an F1-score of 0.89.
arXiv Detail & Related papers (2023-06-25T18:02:01Z)
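For orientation, fine-tuning a YOLOv8 classifier on a face-image folder can be done with the off-the-shelf Ultralytics API roughly as below; the dataset path, class names, and hyperparameters are placeholders, not the study's configuration.

```python
# Sketch of YOLOv8 classification fine-tuning with the Ultralytics API;
# dataset layout and settings are hypothetical.
from ultralytics import YOLO

model = YOLO("yolov8n-cls.pt")  # pretrained classification checkpoint
# Expects an image-folder dataset: asd_faces/{train,val}/{asd,non_asd}/*.jpg
model.train(data="asd_faces", epochs=20, imgsz=224)

result = model("child_face.jpg")[0]  # run inference on one image
print(result.names[result.probs.top1], result.probs.top1conf)
```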
- Exploiting the Brain's Network Structure for Automatic Identification of ADHD Subjects [70.37277191524755]
We show that the brain can be modeled as a functional network, and certain properties of the networks differ in ADHD subjects from control subjects.
We train our classifier with 776 subjects and test on 171 subjects provided by The Neuro Bureau for the ADHD-200 challenge.
arXiv Detail & Related papers (2023-06-15T16:22:57Z)
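A synthetic-data sketch of the general recipe the entry above describes: threshold a regional correlation matrix into a functional network, then extract graph properties as features. The threshold, atlas size, and feature set are assumptions, not the paper's ADHD-200 pipeline.

```python
# Build a functional network from synthetic regional time series and compute
# simple graph features; all values are illustrative placeholders.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
ts = rng.standard_normal((90, 200))   # 90 brain regions x 200 timepoints
corr = np.corrcoef(ts)                # region-by-region correlation matrix
adj = (np.abs(corr) > 0.3) & ~np.eye(90, dtype=bool)  # threshold, drop self-loops

g = nx.from_numpy_array(adj.astype(int))
features = [
    nx.average_clustering(g),
    nx.density(g),
    np.mean([d for _, d in g.degree()]),
]
print(features)  # per-subject features that would feed a classifier
```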
- Improving Deep Facial Phenotyping for Ultra-rare Disorder Verification Using Model Ensembles [52.77024349608834]
We analyze the influence of replacing a DCNN with a state-of-the-art face recognition approach, iResNet with ArcFace.
Our proposed ensemble model achieves state-of-the-art performance on both seen and unseen disorders.
arXiv Detail & Related papers (2022-11-12T23:28:54Z)
- A Federated Learning Scheme for Neuro-developmental Disorders: Multi-Aspect ASD Detection [2.7221938979891385]
Autism Spectrum Disorder (ASD) is a neuro-developmental syndrome resulting from alterations in the embryonic brain before birth.
We propose a privacy-preserving federated learning scheme to predict ASD in a certain individual based on their behavioral and facial features.
arXiv Detail & Related papers (2022-10-31T13:56:36Z)
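The sketch below shows one FedAvg-style communication round of the kind such privacy-preserving schemes rely on: each site trains locally on its own behavioral/facial features, and only model weights are averaged centrally. The toy linear model and single local step are illustrative, not the paper's protocol.

```python
# One FedAvg-style round with a toy model; everything here is a placeholder.
import copy
import torch
import torch.nn as nn

def local_step(model, x, y, lr=0.01):
    # One local SGD step on a client's private data.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return model.state_dict()

global_model = nn.Linear(16, 1)  # toy stand-in for the real network
clients = [(torch.randn(32, 16), torch.rand(32, 1).round()) for _ in range(3)]

# Local updates on copies, then parameter averaging at the server.
states = [local_step(copy.deepcopy(global_model), x, y) for x, y in clients]
avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
global_model.load_state_dict(avg)
```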
- ViTASD: Robust Vision Transformer Baselines for Autism Spectrum Disorder Facial Diagnosis [6.695640702099725]
Autism spectrum disorder (ASD) is a lifelong neurodevelopmental disorder with very high prevalence around the world.
We propose the use of the Vision Transformer (ViT) for the computational analysis of pediatric ASD.
The presented model, known as ViTASD, distills knowledge from large facial expression datasets and offers model structure transferability.
arXiv Detail & Related papers (2022-10-30T20:38:56Z)
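Knowledge distillation of the sort ViTASD-style training builds on can be summarized by a soft-label matching loss; the snippet below is a generic formulation with placeholder logits and temperature, not the authors' exact objective.

```python
# Generic soft-label distillation loss (sketch only).
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    # Soften both distributions with temperature T and match them with KL.
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T

s, t = torch.randn(4, 2), torch.randn(4, 2)
print(distill_loss(s, t))
```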
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance over six different datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
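The adapter idea in the CIAO entry, a shared encoder with a swappable dataset-specific last layer, can be sketched as below; the module shapes, class counts, and dataset keys are hypothetical, not the paper's implementation.

```python
# Shared facial encoder with one swappable head per dataset (toy sketch).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(48 * 48, 256), nn.ReLU())
heads = nn.ModuleDict({
    "AffectNet": nn.Linear(256, 8),  # 8 emotion classes (placeholder)
    "FERPlus": nn.Linear(256, 8),
})

def predict(x, dataset: str):
    # Shared features, dataset-specific last layer.
    return heads[dataset](encoder(x))

x = torch.randn(2, 1, 48, 48)
print(predict(x, "AffectNet").shape)  # torch.Size([2, 8])
```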
- Unsupervised Learning Facial Parameter Regressor for Action Unit Intensity Estimation via Differentiable Renderer [51.926868759681014]
We present a framework to predict the facial parameters based on a bone-driven face model (BDFM) under different views.
The proposed framework consists of a feature extractor, a generator, and a facial parameter regressor.
arXiv Detail & Related papers (2020-08-20T09:49:13Z)
- Learning to Augment Expressions for Few-shot Fine-grained Facial Expression Recognition [98.83578105374535]
We present a novel Fine-grained Facial Expression Database - F2ED.
It includes more than 200k images with 54 facial expressions from 119 persons.
Since uneven data distribution and a lack of samples are common in real-world scenarios, we evaluate several few-shot expression learning tasks.
We propose a unified task-driven framework, the Compositional Generative Adversarial Network (Comp-GAN), which learns to synthesize facial images.
arXiv Detail & Related papers (2020-01-17T03:26:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.