Video-based estimation of pain indicators in dogs
- URL: http://arxiv.org/abs/2209.13296v1
- Date: Tue, 27 Sep 2022 10:38:59 GMT
- Title: Video-based estimation of pain indicators in dogs
- Authors: Hongyi Zhu, Yasemin Salgırlı, Pınar Can, Durmuş Atılgan, Albert Ali Salah
- Abstract summary: We propose a novel video-based, two-stream deep neural network approach for this problem.
We extract and preprocess body keypoints, and compute features from both keypoints and the RGB representation over the video.
We present a unique video-based dog behavior dataset, collected by veterinary professionals, and annotated for presence of pain.
- Score: 2.7103996289794217
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Dog owners are typically capable of recognizing behavioral cues that reveal
subjective states of their dogs, such as pain. But automatic recognition of the
pain state is very challenging. This paper proposes a novel video-based,
two-stream deep neural network approach for this problem. We extract and
preprocess body keypoints, and compute features from both keypoints and the RGB
representation over the video. We propose an approach to deal with
self-occlusions and missing keypoints. We also present a unique video-based dog
behavior dataset, collected by veterinary professionals, and annotated for
presence of pain, and report good classification results with the proposed
approach. This study is one of the first works on machine learning based
estimation of dog pain state.
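The abstract mentions dealing with self-occlusions and missing keypoints before feeding the keypoint stream to the network. One plausible way to do this is temporal interpolation of missing detections; the sketch below is an illustrative guess at such a preprocessing step, not the paper's actual method, and `interpolate_track` is a hypothetical helper name.

```python
def interpolate_track(track):
    """Fill None entries in a 1-D keypoint coordinate track (one value per
    frame) by linear interpolation between the nearest observed frames.
    Gaps at the start or end are filled by repeating the nearest observed
    value. Returns a new list; the input is not modified."""
    n = len(track)
    observed = [i for i, v in enumerate(track) if v is not None]
    if not observed:
        return track[:]  # nothing to interpolate from
    filled = track[:]
    for i in range(n):
        if filled[i] is not None:
            continue
        prev = max((j for j in observed if j < i), default=None)
        nxt = min((j for j in observed if j > i), default=None)
        if prev is None:
            filled[i] = track[nxt]       # leading gap: repeat first value
        elif nxt is None:
            filled[i] = track[prev]      # trailing gap: repeat last value
        else:
            w = (i - prev) / (nxt - prev)
            filled[i] = track[prev] * (1 - w) + track[nxt] * w
    return filled
```

In a two-stream setup, the interpolated keypoint tracks would feed one stream while the RGB frames feed the other, with their features fused before classification.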
Related papers
- PoseBench: Benchmarking the Robustness of Pose Estimation Models under Corruptions [57.871692507044344]
Pose estimation aims to accurately identify anatomical keypoints in humans and animals using monocular images.
Current models are typically trained and tested on clean data, potentially overlooking the corruption encountered during real-world deployment.
We introduce PoseBench, a benchmark designed to evaluate the robustness of pose estimation models against real-world corruption.
arXiv Detail & Related papers (2024-06-20T14:40:17Z) - From Forest to Zoo: Great Ape Behavior Recognition with ChimpBehave [0.0]
We introduce ChimpBehave, a novel dataset featuring over 2 hours of video (approximately 193,000 video frames) of zoo-housed chimpanzees.
ChimpBehave is meticulously annotated with bounding boxes and behavior labels for action recognition.
We benchmark our dataset using a state-of-the-art CNN-based action recognition model.
arXiv Detail & Related papers (2024-05-30T13:11:08Z) - Computer Vision for Primate Behavior Analysis in the Wild [61.08941894580172]
Video-based behavioral monitoring has great potential for transforming how we study animal cognition and behavior.
There is still a fairly large gap between the exciting prospects and what can actually be achieved in practice today.
arXiv Detail & Related papers (2024-01-29T18:59:56Z) - CNN-Based Action Recognition and Pose Estimation for Classifying Animal Behavior from Videos: A Survey [0.0]
Action recognition, classifying activities performed by one or more subjects in a trimmed video, forms the basis of many techniques.
Deep learning models for human action recognition have progressed over the last decade.
Interest in research that incorporates deep learning-based action recognition for behavior classification has increased in recent years.
arXiv Detail & Related papers (2023-01-15T20:54:44Z) - How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios [73.24092762346095]
We introduce two large-scale datasets with over 60,000 videos annotated for emotional response and subjective wellbeing.
The Video Cognitive Empathy dataset contains annotations for distributions of fine-grained emotional responses, allowing models to gain a detailed understanding of affective states.
The Video to Valence dataset contains annotations of relative pleasantness between videos, which enables predicting a continuous spectrum of wellbeing.
arXiv Detail & Related papers (2022-10-18T17:58:25Z) - Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
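The cross-animal swap described above can be pictured as an augmentation that re-pairs modalities within groups of samples sharing an action label. The sketch below is a minimal illustration under that assumption; the sample dictionary keys and the function name are hypothetical, not the paper's API.

```python
import random

def swap_behavior_across_animals(samples, seed=0):
    """Illustrative cross-animal swap augmentation: within each group of
    samples sharing an action label, shuffle which animal's behavioral
    recording is paired with which animal's neural recording. Each sample
    is a dict with keys 'action', 'neural', and 'behavior'."""
    rng = random.Random(seed)
    by_action = {}
    for s in samples:
        by_action.setdefault(s["action"], []).append(s)
    augmented = []
    for action, group in by_action.items():
        behaviors = [s["behavior"] for s in group]
        rng.shuffle(behaviors)  # re-pair modalities within the same action
        for s, b in zip(group, behaviors):
            augmented.append({"action": action,
                              "neural": s["neural"],
                              "behavior": b})
    return augmented
```

Because swaps only happen within an action group, the augmented pairs remain semantically consistent while breaking animal-specific correlations between the two modalities.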
arXiv Detail & Related papers (2021-12-02T12:45:46Z) - Non-contact Pain Recognition from Video Sequences with Remote Physiological Measurements Prediction [53.03469655641418]
We present a novel multi-task learning framework which encodes both appearance changes and physiological cues in a non-contact manner for pain recognition.
We establish the state-of-the-art performance of non-contact pain recognition on publicly available pain databases.
arXiv Detail & Related papers (2021-05-18T20:47:45Z) - Self-supervised Video Representation Learning by Uncovering Spatio-temporal Statistics [74.6968179473212]
This paper proposes a novel pretext task to address the self-supervised learning problem.
We compute a series of spatio-temporal statistical summaries, such as the spatial location and dominant direction of the largest motion.
A neural network is built and trained to yield the statistical summaries given the video frames as inputs.
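The pretext labels described above (location and dominant direction of the largest motion) could be computed roughly as follows. This is a hedged sketch assuming per-block motion vectors are already available (e.g. from frame differencing); the function name and input format are assumptions, not the paper's implementation.

```python
import math

def motion_summary(block_motions):
    """block_motions: dict mapping (row, col) block location to a (dx, dy)
    motion vector. Returns the block location with the largest motion
    magnitude and that motion's direction quantized into 8 bins
    (bin 0 = rightward, increasing counter-clockwise)."""
    loc, (dx, dy) = max(block_motions.items(),
                        key=lambda kv: kv[1][0] ** 2 + kv[1][1] ** 2)
    angle = math.atan2(dy, dx) % (2 * math.pi)
    # Offset by half a bin so each bin is centered on a compass direction.
    direction_bin = int((angle + math.pi / 8) / (math.pi / 4)) % 8
    return loc, direction_bin
```

A network trained to regress such summaries from raw frames must implicitly learn motion-sensitive features, which is the point of the pretext task.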
arXiv Detail & Related papers (2020-08-31T08:31:56Z) - Pain Intensity Estimation from Mobile Video Using 2D and 3D Facial Keypoints [1.6402428190800593]
Managing post-surgical pain is critical for successful surgical outcomes.
One of the challenges of pain management is accurately assessing the pain level of patients.
We introduce an approach that analyzes 2D and 3D facial keypoints of post-surgical patients to estimate their pain intensity level.
arXiv Detail & Related papers (2020-06-17T00:18:29Z) - Visual Identification of Individual Holstein-Friesian Cattle via Deep Metric Learning [8.784100314325395]
Holstein-Friesian cattle exhibit individually-characteristic black and white coat patterns visually akin to those arising from Turing's reaction-diffusion systems.
This work takes advantage of these natural markings in order to automate visual detection and biometric identification of individual Holstein-Friesians via convolutional neural networks and deep metric learning techniques.
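Deep metric learning of the kind mentioned above is commonly trained with a triplet margin loss. The sketch below shows the standard formulation on plain coordinate lists; it is not the paper's exact setup, and in practice the embeddings would come from a CNN rather than being given directly.

```python
def euclidean(a, b):
    """Euclidean distance between two embedding vectors given as lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet margin loss: pull the anchor embedding toward an
    embedding of the same individual (positive) and push it away from a
    different individual (negative) by at least `margin`. The loss is zero
    once the negative is sufficiently farther away than the positive."""
    return max(euclidean(anchor, positive)
               - euclidean(anchor, negative) + margin, 0.0)
```

Once trained, identification reduces to a nearest-neighbor lookup in the learned embedding space, so new individuals can be enrolled without retraining the network.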
arXiv Detail & Related papers (2020-06-16T14:41:55Z) - Identifying Individual Dogs in Social Media Images [1.14219428942199]
The work described here is part of a joint project with Pet2Net, a social network focused on pets and their owners.
In order to detect and recognize individual dogs we combine transfer learning and object detection approaches on Inception v3 and SSD Inception v2 architectures.
We show that this combined approach can achieve 94.59% accuracy in identifying individual dogs.
arXiv Detail & Related papers (2020-03-14T21:11:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.