EMOPAIN Challenge 2020: Multimodal Pain Evaluation from Facial and
Bodily Expressions
- URL: http://arxiv.org/abs/2001.07739v3
- Date: Mon, 9 Mar 2020 16:14:31 GMT
- Title: EMOPAIN Challenge 2020: Multimodal Pain Evaluation from Facial and
Bodily Expressions
- Authors: Joy O. Egede, Siyang Song, Temitayo A. Olugbade, Chongyang Wang,
Amanda Williams, Hongying Meng, Min Aung, Nicholas D. Lane, Michel Valstar
and Nadia Bianchi-Berthouze
- Abstract summary: EmoPain 2020 Challenge is the first international competition aimed at creating a uniform platform for the comparison of machine learning and multimedia processing methods.
This paper presents a description of the challenge, competition guidelines, benchmarking dataset, and the baseline systems' architecture and performance.
- Score: 10.48692251648146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The EmoPain 2020 Challenge is the first international competition aimed at
creating a uniform platform for the comparison of machine learning and
multimedia processing methods of automatic chronic pain assessment from human
expressive behaviour, and also the identification of pain-related behaviours.
The objective of the challenge is to promote research in the development of
assistive technologies that help improve the quality of life for people with
chronic pain via real-time monitoring and feedback to help manage their
condition and remain physically active. The challenge also aims to encourage
the use of the relatively underutilised, albeit vital, bodily expression signals
for automatic pain and pain-related emotion recognition. This paper presents a
description of the challenge, competition guidelines, benchmarking dataset,
and the baseline systems' architecture and performance on the three sub-tasks:
pain estimation from facial expressions, pain recognition from multimodal
movement, and protective movement behaviour detection.
Related papers
- Transformer with Leveraged Masked Autoencoder for video-based Pain Assessment [11.016004057765185]
We enhance pain recognition by employing facial video analysis within a Transformer-based deep learning model.
By combining a powerful Masked Autoencoder with a Transformers-based classifier, our model effectively captures pain level indicators through both expressions and micro-expressions.
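The masked-autoencoder pretraining idea behind this approach can be sketched as follows. This is a minimal illustration, not the authors' implementation; the 75% mask ratio and toy patch dimensions are assumptions:

```python
import numpy as np

def random_mask(patches: np.ndarray, mask_ratio: float = 0.75, seed: int = 0):
    """Randomly hide a fraction of patches, as in masked-autoencoder pretraining."""
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    n_masked = int(n * mask_ratio)
    idx = rng.permutation(n)
    masked_idx = idx[:n_masked]    # patches the decoder must reconstruct
    visible_idx = idx[n_masked:]   # patches the encoder actually sees
    return visible_idx, masked_idx

def reconstruction_loss(pred: np.ndarray, target: np.ndarray, masked_idx: np.ndarray):
    """MSE computed only on the masked patches, the standard MAE objective."""
    diff = pred[masked_idx] - target[masked_idx]
    return float(np.mean(diff ** 2))

patches = np.arange(16 * 4, dtype=float).reshape(16, 4)  # 16 toy patches
visible, masked = random_mask(patches)
loss = reconstruction_loss(np.zeros_like(patches), patches, masked)
```

The pretrained encoder is then typically reused as the feature extractor for the downstream Transformer-based pain classifier.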
arXiv Detail & Related papers (2024-09-08T13:14:03Z)
- Optimizing Skin Lesion Classification via Multimodal Data and Auxiliary Task Integration [54.76511683427566]
This research introduces a novel multimodal method for classifying skin lesions, integrating smartphone-captured images with essential clinical and demographic information.
A distinctive aspect of this method is the integration of an auxiliary task focused on super-resolution image prediction.
The experimental evaluations have been conducted using the PAD-UFES20 dataset, applying various deep-learning architectures.
arXiv Detail & Related papers (2024-02-16T05:16:20Z)
- Automated facial recognition system using deep learning for pain assessment in adults with cerebral palsy [0.5242869847419834]
Existing measures, relying on direct observation by caregivers, lack sensitivity and specificity.
Ten neural networks were trained on three pain image databases.
InceptionV3 exhibited promising performance on the CP-PAIN dataset.
arXiv Detail & Related papers (2024-01-22T17:55:16Z)
- A Survey on Computer Vision based Human Analysis in the COVID-19 Era [58.79053747159797]
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals.
Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications.
These developments triggered the need for novel and improved computer vision techniques capable of (i) providing support to the prevention measures through an automated analysis of visual data, on the one hand, and (ii) facilitating normal operation of existing vision-based services, such as biometric authentication, on the other.
arXiv Detail & Related papers (2022-11-07T17:20:39Z)
- Intelligent Sight and Sound: A Chronic Cancer Pain Dataset [74.77784420691937]
This paper introduces the first chronic cancer pain dataset, collected as part of the Intelligent Sight and Sound (ISS) clinical trial.
The data collected to date consists of 29 patients, 509 smartphone videos, 189,999 frames, and self-reported affective and activity pain scores.
Early models that use static images and multi-modal data to predict self-reported pain levels reveal significant gaps in the methods currently available for pain prediction.
arXiv Detail & Related papers (2022-04-07T22:14:37Z)
- Non-contact Pain Recognition from Video Sequences with Remote Physiological Measurements Prediction [53.03469655641418]
We present a novel multi-task learning framework which encodes both appearance changes and physiological cues in a non-contact manner for pain recognition.
We establish the state-of-the-art performance of non-contact pain recognition on publicly available pain databases.
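A multi-task framework of this kind can be sketched as a shared feature extractor feeding two task heads, trained with a weighted joint loss. This is a minimal numpy illustration; the layer sizes and loss weight are assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared appearance encoder (stand-in: one linear layer + tanh).
W_shared = rng.normal(size=(32, 16))
# Task heads: pain recognition and remote physiological-signal prediction.
W_pain = rng.normal(size=(16, 1))
W_phys = rng.normal(size=(16, 1))

def forward(x):
    h = np.tanh(x @ W_shared)       # shared representation
    return h @ W_pain, h @ W_phys   # per-task outputs

def joint_loss(x, y_pain, y_phys, lam=0.5):
    """Weighted sum of the two task losses; lam balances the auxiliary task."""
    p, q = forward(x)
    return float(np.mean((p - y_pain) ** 2) + lam * np.mean((q - y_phys) ** 2))

x = rng.normal(size=(8, 32))
p_pain, p_phys = forward(x)
loss = joint_loss(x, np.zeros((8, 1)), np.zeros((8, 1)))
```

Sharing the encoder forces the appearance features to also carry physiological cues, which is the motivation for the auxiliary task.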
arXiv Detail & Related papers (2021-05-18T20:47:45Z)
- One-shot action recognition towards novel assistive therapies [63.23654147345168]
This work is motivated by the automated analysis of medical therapies that involve action imitation games.
The presented approach incorporates a pre-processing step that standardizes heterogeneous motion data conditions.
We evaluate the approach on a real use-case of automated video analysis for therapy support with autistic people.
arXiv Detail & Related papers (2021-02-17T19:41:37Z)
- Unobtrusive Pain Monitoring in Older Adults with Dementia using Pairwise and Contrastive Training [3.7775543603998907]
Although pain is frequent in old age, older adults are often undertreated for pain.
This is especially the case for long-term care residents with moderate to severe dementia who cannot report their pain because of cognitive impairments that accompany dementia.
We present the first fully automated vision-based technique validated on a dementia cohort.
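Pairwise training of this kind typically optimises a margin ranking loss over pairs of frames, pushing the model's score for the higher-pain frame above the score for the lower-pain one. A minimal plain-Python sketch; the margin value and example scores are assumptions:

```python
def margin_ranking_loss(score_high, score_low, margin=1.0):
    """Hinge-style pairwise loss: zero once the higher-pain frame
    outscores the lower-pain frame by at least `margin`."""
    return max(0.0, margin - (score_high - score_low))

# A toy batch of (higher-pain score, lower-pain score) pairs.
pairs = [(2.0, 0.5), (1.2, 1.0), (0.1, 0.9)]
losses = [margin_ranking_loss(h, l) for h, l in pairs]
```

Only pairs that are correctly ordered by more than the margin contribute zero loss; mis-ordered pairs (like the last one) are penalised most.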
arXiv Detail & Related papers (2021-01-08T23:28:30Z)
- Pain Intensity Estimation from Mobile Video Using 2D and 3D Facial Keypoints [1.6402428190800593]
Managing post-surgical pain is critical for successful surgical outcomes.
One of the challenges of pain management is accurately assessing the pain level of patients.
We introduce an approach that analyzes 2D and 3D facial keypoints of post-surgical patients to estimate their pain intensity level.
arXiv Detail & Related papers (2020-06-17T00:18:29Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine should be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
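The two-stage pipeline named in the last entry (autoencoder features, then support vector regression) can be sketched as follows. This is a minimal numpy illustration, not the paper's configuration: the encoder is an untrained stand-in, and the bottleneck size, epsilon value, and all-zero regressor weights are assumptions. The epsilon-insensitive loss is the element that characterises SVR:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: a (pretend, untrained) autoencoder bottleneck as the feature map.
W_enc = rng.normal(size=(64, 8))  # 64-dim input -> 8-dim code

def encode(x):
    return np.tanh(x @ W_enc)

# Stage 2: SVR's epsilon-insensitive loss -- errors within `eps` cost nothing.
def eps_insensitive_loss(pred, target, eps=0.1):
    return float(np.mean(np.maximum(0.0, np.abs(pred - target) - eps)))

x = rng.normal(size=(4, 64))
codes = encode(x)
w = np.zeros(8)                                  # toy all-zero regressor
loss = eps_insensitive_loss(codes @ w, np.ones(4))
```

In the actual two-stage approach, the autoencoder is trained for reconstruction first, and the regressor is then fit on the learned codes to predict continuous emotion values.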
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.