Pain Intensity Estimation from Mobile Video Using 2D and 3D Facial
Keypoints
- URL: http://arxiv.org/abs/2006.12246v1
- Date: Wed, 17 Jun 2020 00:18:29 GMT
- Title: Pain Intensity Estimation from Mobile Video Using 2D and 3D Facial
Keypoints
- Authors: Matthew Lee, Lyndon Kennedy, Andreas Girgensohn, Lynn Wilcox, John
Song En Lee, Chin Wen Tan, Ban Leong Sng
- Abstract summary: Managing post-surgical pain is critical for successful surgical outcomes.
One of the challenges of pain management is accurately assessing the pain level of patients.
We introduce an approach that analyzes 2D and 3D facial keypoints of post-surgical patients to estimate their pain intensity level.
- Score: 1.6402428190800593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Managing post-surgical pain is critical for successful surgical outcomes. One
of the challenges of pain management is accurately assessing the pain level of
patients. Self-reported numeric pain ratings are limited because they are
subjective, can be affected by mood, and can influence the patient's perception
of pain when making comparisons. In this paper, we introduce an approach that
analyzes 2D and 3D facial keypoints of post-surgical patients to estimate their
pain intensity level. Our approach leverages the previously unexplored
capabilities of a smartphone to capture a dense 3D representation of a person's
face as input for pain intensity level estimation. Our contributions are a data
collection study with post-surgical patients to collect ground-truth labeled
sequences of 2D and 3D facial keypoints for developing a pain estimation
algorithm, a pain estimation model that uses multiple instance learning to
overcome inherent limitations in facial keypoint sequences, and the preliminary
results of the pain estimation model using 2D and 3D features with comparisons
of alternate approaches.
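As a rough illustration of the multiple instance learning formulation described in the abstract, the sketch below treats each recorded video as a bag of short keypoint-sequence windows, encodes each window, and aggregates the windows with attention pooling into a single pain intensity estimate. This is a hypothetical sketch, not the authors' implementation; the window length, the 468-point face mesh, and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class KeypointMILEstimator(nn.Module):
    """Bag-level pain intensity estimation from windows of facial keypoints.

    Hypothetical sketch: each bag is one video; each instance is a short
    window of flattened 2D/3D keypoint coordinates. Attention pooling
    aggregates instance embeddings into one bag embedding before regression.
    """

    def __init__(self, keypoint_dim=468 * 3, window_len=30, hidden=128):
        super().__init__()
        self.instance_encoder = nn.Sequential(
            nn.Linear(keypoint_dim * window_len, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.attention = nn.Sequential(
            nn.Linear(hidden, 64), nn.Tanh(), nn.Linear(64, 1)
        )
        self.regressor = nn.Linear(hidden, 1)  # scalar pain intensity

    def forward(self, bag):
        # bag: (num_windows, window_len, keypoint_dim)
        x = bag.flatten(start_dim=1)               # (num_windows, window_len * keypoint_dim)
        z = self.instance_encoder(x)               # (num_windows, hidden)
        a = torch.softmax(self.attention(z), 0)    # attention weight per window
        bag_embedding = (a * z).sum(dim=0)         # (hidden,)
        return self.regressor(bag_embedding)       # (1,) estimated pain intensity

if __name__ == "__main__":
    model = KeypointMILEstimator()
    video_bag = torch.randn(12, 30, 468 * 3)  # 12 windows of synthetic keypoints
    print(model(video_bag))
```

The MIL framing matters because a pain rating is given per video, while only some windows within the video actually show a pain expression; pooling over windows lets the model ignore uninformative segments.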
Related papers
- Automated facial recognition system using deep learning for pain assessment in adults with cerebral palsy [0.5242869847419834]
Existing measures, relying on direct observation by caregivers, lack sensitivity and specificity.
Ten neural networks were trained on three pain image databases.
InceptionV3 exhibited promising performance on the CP-PAIN dataset.
arXiv Detail & Related papers (2024-01-22T17:55:16Z)
- Pain Analysis using Adaptive Hierarchical Spatiotemporal Dynamic Imaging [16.146223377936035]
We introduce the Adaptive temporal Dynamic Image (AHDI) technique.
AHDI encodes deep changes in facial videos into a single RGB image, permitting the application of simpler 2D models for video representation.
Within this framework, we employ a residual network to derive generalized facial representations.
These representations are optimized for two tasks: estimating pain intensity and differentiating between genuine and simulated pain expressions.
arXiv Detail & Related papers (2023-12-12T01:23:05Z)
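The single-image video encoding used by AHDI in the entry above is closely related to approximate rank pooling ("dynamic images"). The sketch below computes a generic dynamic image with the simplified coefficients alpha_t = 2t - T - 1; it illustrates the underlying idea rather than the paper's AHDI formulation, and the clip length and resolution are placeholders.

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a video clip into a single image via approximate rank pooling.

    frames: array of shape (T, H, W, C), values in [0, 1].
    Returns an (H, W, C) image whose pixels summarize temporal evolution,
    using the simplified coefficients alpha_t = 2t - T - 1 (t = 1..T).
    Generic sketch, not the AHDI method itself.
    """
    frames = np.asarray(frames, dtype=np.float64)
    T = frames.shape[0]
    alphas = 2.0 * np.arange(1, T + 1) - T - 1       # one weight per frame
    di = np.tensordot(alphas, frames, axes=(0, 0))   # weighted sum over time
    # Rescale to [0, 1] so the result can be fed to a standard 2D CNN.
    di -= di.min()
    if di.max() > 0:
        di /= di.max()
    return di

if __name__ == "__main__":
    clip = np.random.rand(16, 112, 112, 3)   # synthetic 16-frame face clip
    print(dynamic_image(clip).shape)          # (112, 112, 3)
```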
- Uncertainty Quantification in Neural-Network Based Pain Intensity Estimation [0.0]
The evaluation of pain intensity is challenging because different individuals experience pain differently.
This study presents a neural network-based method for objective pain interval estimation.
arXiv Detail & Related papers (2023-11-14T22:14:07Z)
- Pain Detection in Masked Faces during Procedural Sedation [0.0]
Pain monitoring is essential to the quality of care for patients undergoing a medical procedure with sedation.
Previous studies have shown the viability of computer vision methods in detecting pain in unoccluded faces.
This study has collected video data from masked faces of 14 patients undergoing procedures in an interventional radiology department.
arXiv Detail & Related papers (2022-11-12T15:55:33Z)
- Intelligent Sight and Sound: A Chronic Cancer Pain Dataset [74.77784420691937]
This paper introduces the first chronic cancer pain dataset, collected as part of the Intelligent Sight and Sound (ISS) clinical trial.
The data collected to date consists of 29 patients, 509 smartphone videos, 189,999 frames, and self-reported affective and activity pain scores.
Early models that use static images and multi-modal data to predict self-reported pain levels reveal significant gaps in the methods currently available for pain prediction.
arXiv Detail & Related papers (2022-04-07T22:14:37Z)
- Leveraging Human Selective Attention for Medical Image Analysis with Limited Training Data [72.1187887376849]
The selective attention mechanism helps the cognition system focus on task-relevant visual clues by ignoring the presence of distractors.
We propose a framework to leverage gaze for medical image analysis tasks with small training data.
Our method is demonstrated to achieve superior performance on both 3D tumor segmentation and 2D chest X-ray classification tasks.
arXiv Detail & Related papers (2021-12-02T07:55:25Z)
- Non-contact Pain Recognition from Video Sequences with Remote Physiological Measurements Prediction [53.03469655641418]
We present a novel multi-task learning framework which encodes both appearance changes and physiological cues in a non-contact manner for pain recognition.
We establish the state-of-the-art performance of non-contact pain recognition on publicly available pain databases.
arXiv Detail & Related papers (2021-05-18T20:47:45Z)
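A minimal sketch of the multi-task idea in the entry above: a shared appearance encoder feeds one head that scores pain and another that regresses a remote physiological (rPPG-style) trace. The clip shape, layer sizes, and signal length are assumptions; this is not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskPainNet(nn.Module):
    """Shared frame encoder with a pain-recognition head and an rPPG head (sketch)."""

    def __init__(self, embed_dim=128, signal_len=64):
        super().__init__()
        self.encoder = nn.Sequential(                  # per-frame appearance encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim), nn.ReLU(),
        )
        self.pain_head = nn.Linear(embed_dim, 1)            # pain vs. no-pain logit
        self.rppg_head = nn.Linear(embed_dim, signal_len)   # remote physiological trace

    def forward(self, clip):
        # clip: (T, 3, H, W); average frame embeddings into a clip embedding.
        z = self.encoder(clip).mean(dim=0)
        return self.pain_head(z), self.rppg_head(z)

if __name__ == "__main__":
    net = MultiTaskPainNet()
    pain_logit, rppg = net(torch.randn(8, 3, 64, 64))
    print(pain_logit.shape, rppg.shape)  # torch.Size([1]) torch.Size([64])
```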
- Unobtrusive Pain Monitoring in Older Adults with Dementia using Pairwise and Contrastive Training [3.7775543603998907]
Although pain is frequent in old age, older adults are often undertreated for pain.
This is especially the case for long-term care residents with moderate to severe dementia who cannot report their pain because of cognitive impairments that accompany dementia.
We present the first fully automated vision-based technique validated on a dementia cohort.
arXiv Detail & Related papers (2021-01-08T23:28:30Z)
- Deep Learning-Based Human Pose Estimation: A Survey [66.01917727294163]
Human pose estimation has drawn increasing attention during the past decade.
It has been utilized in a wide range of applications including human-computer interaction, motion analysis, augmented reality, and virtual reality.
Recent deep learning-based solutions have achieved high performance in human pose estimation.
arXiv Detail & Related papers (2020-12-24T18:49:06Z)
- Volumetric Medical Image Segmentation: A 3D Deep Coarse-to-fine Framework and Its Adversarial Examples [74.92488215859991]
We propose a novel 3D-based coarse-to-fine framework to efficiently tackle these challenges.
The proposed 3D-based framework outperforms its 2D counterparts by a large margin since it can leverage the rich spatial information along all three axes.
We conduct experiments on three datasets, the NIH pancreas dataset, the JHMI pancreas dataset and the JHMI pathological cyst dataset.
arXiv Detail & Related papers (2020-10-29T15:39:19Z)
- Volumetric Attention for 3D Medical Image Segmentation and Detection [53.041572035020344]
A volumetric attention(VA) module for 3D medical image segmentation and detection is proposed.
VA attention, inspired by recent advances in video processing, enables 2.5D networks to leverage context information along the z direction.
Its integration into Mask R-CNN is shown to enable state-of-the-art performance on the Liver Tumor Segmentation (LiTS) Challenge.
arXiv Detail & Related papers (2020-04-04T18:55:06Z)
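The volumetric attention idea in the last entry, letting a 2.5D network weight context along the z (slice) axis, can be illustrated with a small slice-attention block. This is a generic sketch under assumed shapes, not the VA module the paper integrates into Mask R-CNN.

```python
import torch
import torch.nn as nn

class SliceAttention(nn.Module):
    """Weight feature maps of neighboring slices before fusing them (sketch).

    Input: (num_slices, channels, H, W) features from a 2D backbone applied
    slice-by-slice. Output: a single fused (channels, H, W) map in which
    slices along z are weighted by learned attention scores.
    """

    def __init__(self, channels=64):
        super().__init__()
        self.score = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # per-slice descriptor
            nn.Linear(channels, 1),                  # one attention logit per slice
        )

    def forward(self, slice_feats):
        logits = self.score(slice_feats)              # (num_slices, 1)
        weights = torch.softmax(logits, dim=0)        # attention over z
        return (weights[:, :, None, None] * slice_feats).sum(dim=0)

if __name__ == "__main__":
    feats = torch.randn(5, 64, 32, 32)   # feature maps of 5 adjacent CT slices
    fused = SliceAttention()(feats)
    print(fused.shape)                    # torch.Size([64, 32, 32])
```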
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.