Equine Pain Behavior Classification via Self-Supervised Disentangled
Pose Representation
- URL: http://arxiv.org/abs/2108.13258v1
- Date: Mon, 30 Aug 2021 14:17:46 GMT
- Title: Equine Pain Behavior Classification via Self-Supervised Disentangled
Pose Representation
- Authors: Maheen Rashid, Sofia Broomé, Katrina Ask, Elin Hernlund, Pia Haubro
Andersen, Hedvig Kjellström, Yong Jae Lee
- Abstract summary: Timely detection of horse pain is important for equine welfare.
Horses express pain through their facial and body behavior, but may hide signs of pain from unfamiliar human observers.
This paper proposes a pragmatic equine pain classification system using video of the unobserved horse and weak labels.
- Score: 21.702208866220236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Timely detection of horse pain is important for equine welfare. Horses
express pain through their facial and body behavior, but may hide signs of pain
from unfamiliar human observers. In addition, collecting visual data with
detailed annotation of horse behavior and pain state is both cumbersome and not
scalable. Consequently, a pragmatic equine pain classification system would use
video of the unobserved horse and weak labels. This paper proposes such a
method for equine pain classification by using multi-view surveillance video
footage of unobserved horses with induced orthopaedic pain, with temporally
sparse video level pain labels. To ensure that pain is learned from horse body
language alone, we first train a self-supervised generative model to
disentangle horse pose from its appearance and background before using the
disentangled horse pose latent representation for pain classification. To make
best use of the pain labels, we develop a novel loss that formulates pain
classification as a multi-instance learning problem. Our method achieves pain
classification accuracy better than human expert performance with 60% accuracy.
The learned latent horse pose representation is shown to be viewpoint
covariant, and disentangled from horse appearance. Qualitative analysis of pain
classified segments shows correspondence between the pain symptoms identified
by our model, and equine pain scales used in veterinary practice.
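The multi-instance learning formulation described above treats each video as a bag of clips: a video is labeled painful if at least one clip shows pain. A minimal sketch of this idea, assuming max-pooling over clip-level scores and a standard binary cross-entropy (the function names and pooling choice here are illustrative assumptions, not the paper's actual loss):

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def mil_bce_loss(clip_logits, video_label):
    """Multi-instance binary cross-entropy for a weakly labeled video.

    A video (bag) counts as 'pain' if at least one clip (instance) does,
    so clip probabilities are pooled with a max before computing the loss.
    """
    bag_prob = max(sigmoid(z) for z in clip_logits)  # max-pool over instances
    eps = 1e-7  # clamp to avoid log(0)
    p = min(max(bag_prob, eps), 1.0 - eps)
    return -(video_label * math.log(p) + (1 - video_label) * math.log(1 - p))


# A positive video where a single clip fires strongly incurs low loss,
# while a negative video needs ALL clips to score low.
loss_pos = mil_bce_loss([-3.0, 4.0, -1.0], video_label=1)
loss_neg = mil_bce_loss([-3.0, -4.0, -2.0], video_label=0)
```

The max-pooling makes the sparse video-level label usable: the classifier is only required to find one painful clip per positive video, rather than explain every frame.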
Related papers
- PainSeeker: An Automated Method for Assessing Pain in Rats Through
Facial Expressions [14.003480681631226]
We present a dataset called RatsPain consisting of 1,138 facial images captured from six rats that underwent an orthodontic treatment operation.
We then proposed a novel deep learning method called PainSeeker for automatically assessing pain in rats via facial expressions.
PainSeeker aims to seek pain-related facial local regions that facilitate learning both pain discriminative and head pose robust features from facial expression images.
arXiv Detail & Related papers (2023-11-06T15:49:11Z)
- Find Someone Who: Visual Commonsense Understanding in Human-Centric
Grounding [87.39245901710079]
We present a new commonsense task, Human-centric Commonsense Grounding.
It tests a model's ability to ground individuals given context descriptions of what happened before.
We set up a context-object-aware method as a strong baseline that outperforms previous pre-trained and non-pretrained models.
arXiv Detail & Related papers (2022-12-14T01:37:16Z)
- CLAMP: Prompt-based Contrastive Learning for Connecting Language and
Animal Pose [70.59906971581192]
We introduce a novel prompt-based Contrastive learning scheme for connecting Language and AniMal Pose effectively.
The CLAMP attempts to bridge the gap by adapting the text prompts to the animal keypoints during network training.
Experimental results show that our method achieves state-of-the-art performance under the supervised, few-shot, and zero-shot settings.
arXiv Detail & Related papers (2022-06-23T14:51:42Z)
- Intelligent Sight and Sound: A Chronic Cancer Pain Dataset [74.77784420691937]
This paper introduces the first chronic cancer pain dataset, collected as part of the Intelligent Sight and Sound (ISS) clinical trial.
The data collected to date consists of 29 patients, 509 smartphone videos, 189,999 frames, and self-reported affective and activity pain scores.
Early models that use static images and multi-modal data to predict self-reported pain levels reveal significant gaps in the methods currently available for pain prediction.
arXiv Detail & Related papers (2022-04-07T22:14:37Z)
- Leveraging Real Talking Faces via Self-Supervision for Robust Forgery
Detection [112.96004727646115]
We develop a method to detect face-manipulated videos using real talking faces.
We show that our method achieves state-of-the-art performance on cross-manipulation generalisation and robustness experiments.
Our results suggest that leveraging natural and unlabelled videos is a promising direction for the development of more robust face forgery detectors.
arXiv Detail & Related papers (2022-01-18T17:14:54Z)
- Leveraging Human Selective Attention for Medical Image Analysis with
Limited Training Data [72.1187887376849]
The selective attention mechanism helps the cognition system focus on task-relevant visual clues by ignoring the presence of distractors.
We propose a framework to leverage gaze for medical image analysis tasks with small training data.
Our method is demonstrated to achieve superior performance on both 3D tumor segmentation and 2D chest X-ray classification tasks.
arXiv Detail & Related papers (2021-12-02T07:55:25Z)
- Chronic Pain and Language: A Topic Modelling Approach to Personal Pain
Descriptions [0.688204255655161]
Chronic pain is recognized as a major health problem, with impacts not only at the economic, but also at the social, and individual levels.
Because pain is a private and subjective experience, it is impossible to externally and impartially experience, describe, and interpret chronic pain as a purely noxious stimulus.
We propose and discuss a topic modelling approach to recognize patterns in verbal descriptions of chronic pain, and use these patterns to quantify and qualify experiences of pain.
arXiv Detail & Related papers (2021-09-01T14:31:16Z)
- Sharing Pain: Using Domain Transfer Between Pain Types for Recognition
of Sparse Pain Expressions in Horses [1.749935196721634]
Orthopedic disorders are a common cause for euthanasia among horses.
It is challenging to train a visual pain recognition method with video data depicting such pain.
We show that transferring features from a dataset of horses with acute nociceptive pain can aid the learning to recognize more complex orthopedic pain.
arXiv Detail & Related papers (2021-05-21T12:35:00Z)
- Non-contact Pain Recognition from Video Sequences with Remote
Physiological Measurements Prediction [53.03469655641418]
We present a novel multi-task learning framework which encodes both appearance changes and physiological cues in a non-contact manner for pain recognition.
We establish the state-of-the-art performance of non-contact pain recognition on publicly available pain databases.
arXiv Detail & Related papers (2021-05-18T20:47:45Z)
- Unobtrusive Pain Monitoring in Older Adults with Dementia using Pairwise
and Contrastive Training [3.7775543603998907]
Although pain is frequent in old age, older adults are often undertreated for pain.
This is especially the case for long-term care residents with moderate to severe dementia who cannot report their pain because of cognitive impairments that accompany dementia.
We present the first fully automated vision-based technique validated on a dementia cohort.
arXiv Detail & Related papers (2021-01-08T23:28:30Z)
- How Much Does It Hurt: A Deep Learning Framework for Chronic Pain Score
Assessment [4.463811772756938]
We propose an end-to-end deep learning framework for chronic pain score assessment.
Our framework splits the long time-course data samples into shorter sequences, and uses Consensus Prediction to classify the results.
We evaluate the performance of our framework on two chronic pain score datasets.
arXiv Detail & Related papers (2020-09-22T23:29:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.