Pain Analysis using Adaptive Hierarchical Spatiotemporal Dynamic Imaging
- URL: http://arxiv.org/abs/2312.06920v1
- Date: Tue, 12 Dec 2023 01:23:05 GMT
- Title: Pain Analysis using Adaptive Hierarchical Spatiotemporal Dynamic Imaging
- Authors: Issam Serraoui, Eric Granger, Abdenour Hadid, Abdelmalik Taleb-Ahmed
- Abstract summary: We introduce the Adaptive Hierarchical Spatiotemporal Dynamic Image (AHDI) technique.
AHDI encodes spatiotemporal changes in facial videos into a single RGB image, permitting the application of simpler 2D models for video representation.
Within this framework, we employ a residual network to derive generalized facial representations.
These representations are optimized for two tasks: estimating pain intensity and differentiating between genuine and simulated pain expressions.
- Score: 16.146223377936035
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automatic pain intensity estimation plays a pivotal role in healthcare and
medical fields. While many methods have been developed to gauge human pain
using behavioral or physiological indicators, facial expressions have emerged
as a prominent tool for this purpose. Nevertheless, the dependence on labeled
data for these techniques often renders them expensive and time-consuming. To
tackle this, we introduce the Adaptive Hierarchical Spatio-temporal Dynamic
Image (AHDI) technique. AHDI encodes spatiotemporal changes in facial videos
into a singular RGB image, permitting the application of simpler 2D deep models
for video representation. Within this framework, we employ a residual network
to derive generalized facial representations. These representations are
optimized for two tasks: estimating pain intensity and differentiating between
genuine and simulated pain expressions. For the former, a regression model is
trained using the extracted representations, while for the latter, a binary
classifier identifies genuine versus feigned pain displays. Testing our method
on two widely-used pain datasets, we observed encouraging results for both
tasks. On the UNBC database, we achieved an MSE of 0.27, outperforming the SOTA,
which had an MSE of 0.40. On the BioVid dataset, our model achieved an accuracy
of 89.76%, which is an improvement of 5.37% over the SOTA accuracy. Most
notably, for distinguishing genuine from simulated pain, our accuracy stands at
94.03%, marking a substantial improvement of 8.98%. Our methodology not only
minimizes the need for extensive labeled data but also augments the precision
of pain evaluations, facilitating superior pain management.
Related papers
- Transformer with Leveraged Masked Autoencoder for video-based Pain Assessment [11.016004057765185]
We enhance pain recognition by employing facial video analysis within a Transformer-based deep learning model.
By combining a powerful Masked Autoencoder with a Transformers-based classifier, our model effectively captures pain level indicators through both expressions and micro-expressions.
arXiv Detail & Related papers (2024-09-08T13:14:03Z)
- Handling Geometric Domain Shifts in Semantic Segmentation of Surgical RGB and Hyperspectral Images [67.66644395272075]
We present the first analysis of state-of-the-art semantic segmentation models when faced with geometric out-of-distribution data.
We propose an augmentation technique called "Organ Transplantation" to enhance generalizability.
Our augmentation technique improves SOTA model performance by up to 67% for RGB data and 90% for HSI data, reaching in-distribution-level performance on real OOD test data.
arXiv Detail & Related papers (2024-08-27T19:13:15Z)
- Automated facial recognition system using deep learning for pain assessment in adults with cerebral palsy [0.5242869847419834]
Existing measures, relying on direct observation by caregivers, lack sensitivity and specificity.
Ten neural networks were trained on three pain image databases.
InceptionV3 exhibited promising performance on the CP-PAIN dataset.
arXiv Detail & Related papers (2024-01-22T17:55:16Z)
- Transformer Encoder with Multiscale Deep Learning for Pain Classification Using Physiological Signals [0.0]
Pain is a subjective sensation-driven experience.
Traditional techniques for measuring pain intensity are susceptible to bias and unreliable in some instances.
We develop PainAttnNet, a novel transformer-encoder deep-learning framework for classifying pain intensities with physiological signals as input.
arXiv Detail & Related papers (2023-03-13T04:21:33Z)
- Pain level and pain-related behaviour classification using GRU-based sparsely-connected RNNs [61.080598804629375]
People with chronic pain unconsciously adapt specific body movements to protect themselves from injury or additional pain.
Because there is no dedicated benchmark database for analysing this correlation, we considered one specific circumstance that potentially influences a person's biometrics during daily activities.
We propose a sparsely-connected recurrent neural network (s-RNN) ensemble with gated recurrent units (GRUs) that incorporates multiple autoencoders.
We conducted several experiments which indicate that the proposed method outperforms the state-of-the-art approaches in classifying both pain level and pain-related behaviour.
arXiv Detail & Related papers (2022-12-20T12:56:28Z)
- Pain Detection in Masked Faces during Procedural Sedation [0.0]
Pain monitoring is essential to the quality of care for patients undergoing a medical procedure with sedation.
Previous studies have shown the viability of computer vision methods in detecting pain in unoccluded faces.
This study has collected video data from masked faces of 14 patients undergoing procedures in an interventional radiology department.
arXiv Detail & Related papers (2022-11-12T15:55:33Z)
- Intelligent Sight and Sound: A Chronic Cancer Pain Dataset [74.77784420691937]
This paper introduces the first chronic cancer pain dataset, collected as part of the Intelligent Sight and Sound (ISS) clinical trial.
The data collected to date consists of 29 patients, 509 smartphone videos, 189,999 frames, and self-reported affective and activity pain scores.
Using static images and multi-modal data to predict self-reported pain levels, early models reveal significant gaps in the ability of current methods to predict pain.
arXiv Detail & Related papers (2022-04-07T22:14:37Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient, lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fractures, using the largest and richest dataset assembled to date.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- Non-contact Pain Recognition from Video Sequences with Remote Physiological Measurements Prediction [53.03469655641418]
We present a novel multi-task learning framework which encodes both appearance changes and physiological cues in a non-contact manner for pain recognition.
We establish the state-of-the-art performance of non-contact pain recognition on publicly available pain databases.
arXiv Detail & Related papers (2021-05-18T20:47:45Z)
- Pain Intensity Estimation from Mobile Video Using 2D and 3D Facial Keypoints [1.6402428190800593]
Managing post-surgical pain is critical for successful surgical outcomes.
One of the challenges of pain management is accurately assessing the pain level of patients.
We introduce an approach that analyzes 2D and 3D facial keypoints of post-surgical patients to estimate their pain intensity level.
arXiv Detail & Related papers (2020-06-17T00:18:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.