Automated facial recognition system using deep learning for pain
assessment in adults with cerebral palsy
- URL: http://arxiv.org/abs/2401.12161v1
- Date: Mon, 22 Jan 2024 17:55:16 GMT
- Title: Automated facial recognition system using deep learning for pain
assessment in adults with cerebral palsy
- Authors: Álvaro Sabater-Gárriz, F. Xavier Gaya-Morey, José María
Buades-Rubio, Cristina Manresa-Yee, Pedro Montoya, Inmaculada Riquelme
- Abstract summary: Existing measures, relying on direct observation by caregivers, lack sensitivity and specificity.
Ten neural networks were trained on three pain image databases.
InceptionV3 exhibited promising performance on the CP-PAIN dataset.
- Score: 0.5242869847419834
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background: Pain assessment in individuals with neurological conditions,
especially those with limited self-report ability and altered facial
expressions, presents challenges. Existing measures, relying on direct
observation by caregivers, lack sensitivity and specificity. In cerebral palsy,
pain is a common comorbidity and a reliable evaluation protocol is crucial.
Thus, having an automatic system that recognizes facial expressions could be of
enormous help when diagnosing pain in this type of patient.
Objectives: 1) to build a dataset of facial pain expressions in individuals
with cerebral palsy, and 2) to develop an automated facial recognition system
based on deep learning for pain assessment addressed to this population.
Methods: Ten neural networks were trained on three pain image databases,
including the UNBC-McMaster Shoulder Pain Expression Archive Database, the
Multimodal Intensity Pain Dataset, and the Delaware Pain Database.
Additionally, a curated dataset (CP-PAIN) was created, consisting of 109
preprocessed facial pain expression images from individuals with cerebral
palsy, categorized by two physiotherapists using the Facial Action Coding
System observational scale.
Results: InceptionV3 exhibited promising performance on the CP-PAIN dataset,
achieving an accuracy of 62.67% and an F1 score of 61.12%. Explainable
artificial intelligence techniques revealed consistent essential features for
pain identification across models.
Conclusion: This study demonstrates the potential of deep learning models for
robust pain detection in populations with neurological conditions and
communication disabilities. The creation of a larger dataset specific to
cerebral palsy would further enhance model accuracy, offering a valuable tool
for discerning subtle and idiosyncratic pain expressions. The insights gained
could extend to other complex neurological conditions.
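The accuracy and F1 figures reported in the Results can be reproduced from a binary (pain/no-pain) confusion matrix. The sketch below uses hypothetical prediction counts, not the paper's actual confusion matrix, purely to illustrate how the two metrics relate:

```python
# Illustrative accuracy and F1 computation for a binary pain classifier.
# The counts below are hypothetical examples, not results from the paper.

def accuracy_and_f1(tp, fp, fn, tn):
    """Compute accuracy and F1 score from a binary confusion matrix."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total          # fraction of all correct predictions
    precision = tp / (tp + fp)            # correct pain calls / all pain calls
    recall = tp / (tp + fn)               # correct pain calls / all true pain cases
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, f1

# Hypothetical counts over a 109-image evaluation set:
acc, f1 = accuracy_and_f1(tp=34, fp=20, fn=21, tn=34)
print(f"accuracy={acc:.4f}, f1={f1:.4f}")
```

Accuracy and F1 diverge when classes are imbalanced or errors are asymmetric, which is why the paper reports both.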
Related papers
- OpticalDR: A Deep Optical Imaging Model for Privacy-Protective
Depression Recognition [66.91236298878383]
Depression Recognition (DR) poses a considerable challenge, especially in the context of privacy concerns.
We design a new imaging system that erases the identity information of captured facial images while retaining disease-relevant features.
The transformation is irreversible with respect to identity recovery, while preserving the essential disease-related characteristics necessary for accurate DR.
arXiv Detail & Related papers (2024-02-29T01:20:29Z)
- Pain Analysis using Adaptive Hierarchical Spatiotemporal Dynamic Imaging [16.146223377936035]
We introduce the Adaptive temporal Dynamic Image (AHDI) technique.
AHDI encodes deep changes in facial videos into a single RGB image, permitting the application of simpler 2D models for video representation.
Within this framework, we employ a residual network to derive generalized facial representations.
These representations are optimized for two tasks: estimating pain intensity and differentiating between genuine and simulated pain expressions.
arXiv Detail & Related papers (2023-12-12T01:23:05Z)
- Pain Detection in Masked Faces during Procedural Sedation [0.0]
Pain monitoring is essential to the quality of care for patients undergoing a medical procedure with sedation.
Previous studies have shown the viability of computer vision methods in detecting pain in unoccluded faces.
This study has collected video data from masked faces of 14 patients undergoing procedures in an interventional radiology department.
arXiv Detail & Related papers (2022-11-12T15:55:33Z)
- Intelligent Sight and Sound: A Chronic Cancer Pain Dataset [74.77784420691937]
This paper introduces the first chronic cancer pain dataset, collected as part of the Intelligent Sight and Sound (ISS) clinical trial.
The data collected to date consists of 29 patients, 509 smartphone videos, 189,999 frames, and self-reported affective and activity pain scores.
Using static images and multi-modal data to predict self-reported pain levels, early models reveal significant gaps in the methods currently available for pain prediction.
arXiv Detail & Related papers (2022-04-07T22:14:37Z)
- Non-contact Pain Recognition from Video Sequences with Remote Physiological Measurements Prediction [53.03469655641418]
We present a novel multi-task learning framework which encodes both appearance changes and physiological cues in a non-contact manner for pain recognition.
We establish the state-of-the-art performance of non-contact pain recognition on publicly available pain databases.
arXiv Detail & Related papers (2021-05-18T20:47:45Z)
- Unobtrusive Pain Monitoring in Older Adults with Dementia using Pairwise and Contrastive Training [3.7775543603998907]
Although pain is frequent in old age, older adults are often undertreated for pain.
This is especially the case for long-term care residents with moderate to severe dementia who cannot report their pain because of cognitive impairments that accompany dementia.
We present the first fully automated vision-based technique validated on a dementia cohort.
arXiv Detail & Related papers (2021-01-08T23:28:30Z)
- Pain Assessment based on fNIRS using Bidirectional LSTMs [1.9654272166607836]
We propose the use of functional near-infrared spectroscopy (fNIRS) and deep learning for the assessment of human pain.
The aim of this study is to explore the use of deep learning to automatically learn features from raw fNIRS data, reducing the subjectivity and domain knowledge required in the design of hand-crafted features.
arXiv Detail & Related papers (2020-12-24T12:55:39Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory units, and inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- Pain Intensity Estimation from Mobile Video Using 2D and 3D Facial Keypoints [1.6402428190800593]
Managing post-surgical pain is critical for successful surgical outcomes.
One of the challenges of pain management is accurately assessing the pain level of patients.
We introduce an approach that analyzes 2D and 3D facial keypoints of post-surgical patients to estimate their pain intensity level.
arXiv Detail & Related papers (2020-06-17T00:18:29Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
- Investigating the Impact of Inclusion in Face Recognition Training Data on Individual Face Identification [93.5538147928669]
We audit ArcFace, a state-of-the-art, open source face recognition system, in a large-scale face identification experiment with more than one million distractor images.
We find a Rank-1 face identification accuracy of 79.71% for individuals present in the model's training data and an accuracy of 75.73% for those not present.
arXiv Detail & Related papers (2020-01-09T15:50:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.