Sheep Facial Pain Assessment Under Weighted Graph Neural Networks
- URL: http://arxiv.org/abs/2506.01468v1
- Date: Mon, 02 Jun 2025 09:24:09 GMT
- Title: Sheep Facial Pain Assessment Under Weighted Graph Neural Networks
- Authors: Alam Noor, Luis Almeida, Mohamed Daoudi, Kai Li, Eduardo Tovar,
- Abstract summary: We propose a novel weighted graph neural network (WGNN) model to link sheep's detected facial landmarks and define pain levels. The YOLOv8n detector architecture achieves a mean average precision (mAP) of 59.30% with the sheep facial landmarks dataset.
- Score: 8.13128640016839
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurately recognizing and assessing pain in sheep is key to discerning animal health and mitigating harmful situations. However, such accuracy is limited by the ability to automatically monitor pain in those animals. Facial expression scoring is a widely used and useful method to evaluate pain in both humans and other living beings. Researchers have also analyzed the facial expressions of sheep to assess their health state and concluded that facial landmark detection and pain level prediction are essential. For this purpose, we propose a novel weighted graph neural network (WGNN) model to link sheep's detected facial landmarks and define pain levels. Furthermore, we propose a new sheep facial landmarks dataset that adheres to the parameters of the Sheep Facial Expression Scale (SPFES). Currently, there is no comprehensive performance benchmark that specifically evaluates the use of graph neural networks (GNNs) on sheep facial landmark data to detect and measure pain levels. The YOLOv8n detector architecture achieves a mean average precision (mAP) of 59.30% on the sheep facial landmarks dataset, outperforming seven other detection models. The WGNN framework reaches an accuracy of 92.71% for tracking expressions across multiple facial parts when paired with the lightweight YOLOv8n model, which is capable of on-board device deployment.
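The WGNN links detected facial landmarks into a graph whose weighted edges relate facial parts. A minimal sketch of one weighted message-passing layer is below; the function name, feature shapes, and the degree-normalization scheme are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def wgnn_layer(h, adj_w, W):
    """One weighted graph message-passing step (illustrative sketch).

    h     : (N, F) node features, one row per detected facial landmark
    adj_w : (N, N) weighted adjacency matrix linking landmarks
    W     : (F, F_out) trainable weight matrix
    Returns (N, F_out) updated node features.
    """
    # Normalize edge weights so each node takes a weighted average of neighbors
    deg = adj_w.sum(axis=1, keepdims=True) + 1e-8
    msg = (adj_w / deg) @ h          # weighted neighbor aggregation
    return np.maximum(msg @ W, 0.0)  # linear transform + ReLU

# Toy example: 5 landmarks with 4-dim features, dense weighted graph
rng = np.random.default_rng(0)
h = rng.normal(size=(5, 4))
adj_w = np.abs(rng.normal(size=(5, 5)))
W = rng.normal(size=(4, 8))
out = wgnn_layer(h, adj_w, W)
print(out.shape)  # (5, 8)
```

Stacking several such layers and pooling the node features into a pain-level classifier head would complete the pipeline sketched by the abstract.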
Related papers
- SynPAIN: A Synthetic Dataset of Pain and Non-Pain Facial Expressions [3.0806468055954737]
Existing pain detection datasets suffer from limited ethnic/racial diversity, privacy constraints, and underrepresentation of older adults. We present SynPAIN, a large-scale synthetic dataset containing 10,710 facial expression images. Using commercial generative AI tools, we created demographically balanced synthetic identities with clinically meaningful pain expressions.
arXiv Detail & Related papers (2025-07-25T20:54:04Z)
- GraphAU-Pain: Graph-based Action Unit Representation for Pain Intensity Estimation [14.267177649888994]
Existing data-driven methods of detecting pain from facial expressions are limited in interpretability and severity estimation. By utilizing a graph neural network, our framework offers improved interpretability and significant performance gains. Experiments conducted on the publicly available UNBC dataset demonstrate the effectiveness of GraphAU-Pain.
arXiv Detail & Related papers (2025-05-26T10:35:42Z)
- Automated facial recognition system using deep learning for pain assessment in adults with cerebral palsy [0.5242869847419834]
Existing measures, relying on direct observation by caregivers, lack sensitivity and specificity.
Ten neural networks were trained on three pain image databases.
InceptionV3 exhibited promising performance on the CP-PAIN dataset.
arXiv Detail & Related papers (2024-01-22T17:55:16Z)
- Pain Analysis using Adaptive Hierarchical Spatiotemporal Dynamic Imaging [16.146223377936035]
We introduce the Adaptive temporal Dynamic Image (AHDI) technique.
AHDI encodes deep changes in facial videos into a single RGB image, permitting the application of simpler 2D models for video representation.
Within this framework, we employ a residual network to derive generalized facial representations.
These representations are optimized for two tasks: estimating pain intensity and differentiating between genuine and simulated pain expressions.
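The idea of collapsing a video clip into one representative RGB image can be sketched with a fixed rank-pooling-style temporal weighting. AHDI's weighting is adaptive, so the fixed coefficient scheme and function name below are only illustrative assumptions:

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a video clip of shape (T, H, W, 3) into one RGB image.

    Later frames receive larger coefficients, so the single image
    summarizes temporal change. AHDI adapts this weighting; the fixed
    rank-pooling-style coefficients here are only a sketch.
    """
    T = frames.shape[0]
    coeffs = 2.0 * np.arange(1, T + 1) - T - 1   # increasing weights
    img = np.tensordot(coeffs, frames.astype(np.float64), axes=(0, 0))
    # Rescale to [0, 255] so the result is a displayable RGB image
    img = (img - img.min()) / (img.max() - img.min() + 1e-8) * 255.0
    return img.astype(np.uint8)

# Toy clip: 16 frames of 32x32 RGB noise
clip = np.random.default_rng(0).integers(0, 256, size=(16, 32, 32, 3))
di = dynamic_image(clip)
print(di.shape)  # (32, 32, 3)
```

The resulting image can then be fed to an ordinary 2D network, e.g. the residual network mentioned above.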
arXiv Detail & Related papers (2023-12-12T01:23:05Z)
- Automated Detection of Cat Facial Landmarks [8.435125986009881]
We present a novel dataset of cat facial images annotated with bounding boxes and 48 facial landmarks grounded in cat facial anatomy.
We introduce a landmark detection convolutional neural network-based model that uses a magnifying ensemble method.
Our model shows excellent performance on cat faces and is generalizable to human facial landmark detection.
arXiv Detail & Related papers (2023-10-15T10:44:36Z)
- Graph Neural Networks with Trainable Adjacency Matrices for Fault Diagnosis on Multivariate Sensor Data [69.25738064847175]
It is necessary to consider the behavior of the signals in each sensor separately, to take into account their correlation and hidden relationships with each other.
The graph nodes can be represented as data from the different sensors, and the edges can display the influence of these data on each other.
It was proposed to construct the graph during training of the graph neural network. This allows training models on data where the dependencies between the sensors are not known in advance.
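Making the adjacency matrix trainable can be sketched by treating the edge weights as a learnable logits matrix passed through a row-wise softmax, so the graph structure emerges during training. The parameterization and names below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Learnable adjacency logits for N sensor nodes; during training these
# would be updated by backprop alongside the network weights.
N, F = 4, 3
rng = np.random.default_rng(1)
adj_logits = rng.normal(size=(N, N))   # trainable parameter
x = rng.normal(size=(N, F))            # one feature vector per sensor

adj = softmax(adj_logits, axis=1)      # rows sum to 1: learned edge weights
h = adj @ x                            # message passing over learned graph
print(np.allclose(adj.sum(axis=1), 1.0))  # True
```

Because the softmax keeps every edge weight positive and each row normalized, gradients can reshape the graph freely without any prior knowledge of sensor dependencies.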
arXiv Detail & Related papers (2022-10-20T11:03:21Z)
- Portuguese Man-of-War Image Classification with Convolutional Neural Networks [58.720142291102135]
Portuguese man-of-war (PMW) is a gelatinous organism with long tentacles capable of causing severe burns.
This paper reports on the use of convolutional neural networks for recognizing PMW images from the Instagram social media platform.
arXiv Detail & Related papers (2022-07-04T03:06:45Z)
- Intelligent Sight and Sound: A Chronic Cancer Pain Dataset [74.77784420691937]
This paper introduces the first chronic cancer pain dataset, collected as part of the Intelligent Sight and Sound (ISS) clinical trial.
The data collected to date consists of 29 patients, 509 smartphone videos, 189,999 frames, and self-reported affective and activity pain scores.
Using static images and multi-modal data to predict self-reported pain levels, early models reveal significant gaps in the methods currently available to predict pain.
arXiv Detail & Related papers (2022-04-07T22:14:37Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
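The swap step can be sketched as a per-action permutation of the neural modality across samples, so each behavioral sample may be paired with a different animal's neural trace for the same action. The helper name and data layout are illustrative assumptions:

```python
import numpy as np

def swap_across_animals(neural, actions, rng):
    """Within each action label, permute which sample's neural trace is
    paired with which behavioral sample (illustrative sketch of the
    cross-animal swap augmentation)."""
    out = neural.copy()
    for a in np.unique(actions):
        idx = np.flatnonzero(actions == a)
        out[idx] = neural[rng.permutation(idx)]
    return out

rng = np.random.default_rng(0)
neural = np.arange(6.0).reshape(6, 1)        # toy neural features
actions = np.array([0, 0, 1, 1, 0, 1])       # action label per sample
swapped = swap_across_animals(neural, actions, rng)
print(swapped.shape)  # (6, 1)
```

Only samples sharing an action label are exchanged, so the augmentation preserves the behavior-to-action correspondence while breaking animal identity.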
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
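The patch-extraction-and-aggregation pipeline can be sketched as below. The paper's Wide & Deep network learns the aggregation step, so the mean-pooling aggregator here is only an illustrative stand-in, and all names and sizes are assumptions:

```python
import numpy as np

def extract_patches(slide, patch=8):
    """Tile a slide image (here a tiny toy array) into square patches."""
    H, W = slide.shape[:2]
    return [slide[i:i + patch, j:j + patch]
            for i in range(0, H - patch + 1, patch)
            for j in range(0, W - patch + 1, patch)]

def aggregate(patch_probs, threshold=0.5):
    """Slide-level decision from patch-level cancer probabilities.

    Simple mean pooling; the paper instead learns this aggregation
    with a Wide & Deep network.
    """
    return float(np.mean(patch_probs)) >= threshold

slide = np.zeros((32, 32))        # stand-in for a gigapixel whole-slide image
patches = extract_patches(slide)
print(len(patches))  # 16
```

In practice each patch would be scored by a CNN, and the per-patch probabilities fed to the learned aggregator to produce the final diagnosis.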
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- Facial expression and attributes recognition based on multi-task learning of lightweight neural networks [9.162936410696409]
We examine the multi-task training of lightweight convolutional neural networks for face identification and classification of facial attributes.
It is shown that it is still necessary to fine-tune these networks in order to predict facial expressions.
Several models are presented based on MobileNet, EfficientNet and RexNet architectures.
arXiv Detail & Related papers (2021-03-31T14:21:04Z)
- Fooling the primate brain with minimal, targeted image manipulation [67.78919304747498]
We propose an array of methods for creating minimal, targeted image perturbations that lead to changes in both neuronal activity and perception as reflected in behavior.
Our work shares the same goal as adversarial attacks, namely the manipulation of images with minimal, targeted noise that leads ANN models to misclassify them.
arXiv Detail & Related papers (2020-11-11T08:30:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.