CowScreeningDB: A public benchmark dataset for lameness detection in dairy cows
- URL: http://arxiv.org/abs/2405.15550v1
- Date: Fri, 24 May 2024 13:36:00 GMT
- Title: CowScreeningDB: A public benchmark dataset for lameness detection in dairy cows
- Authors: Shahid Ismail, Moises Diaz, Cristina Carmona-Duarte, Jose Manuel Vilar, Miguel A. Ferrer,
- Abstract summary: This dataset was sourced from 43 cows at a dairy located in Gran Canaria, Spain.
It consists of a multi-sensor dataset built on data collected using an Apple Watch 6 during the normal daily routine of a dairy cow.
Aside from publicly sharing the dataset, we also share a machine-learning technique that classifies the cows as healthy or lame using the raw sensory data.
- Score: 3.506897386829711
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Lameness is one of the costliest pathological problems affecting dairy animals. It is usually assessed by trained veterinary clinicians who observe features such as gait symmetry or gait parameters such as step counts in real time. With the development of artificial intelligence, various modular systems have been proposed to minimize subjectivity in lameness assessment. However, the major limitation in their development is the unavailability of a public dataset: existing datasets are either commercial or privately held. To tackle this limitation, we introduce CowScreeningDB, created from sensory data sourced from 43 cows at a dairy located in Gran Canaria, Spain. It is a multi-sensor dataset built on data collected using an Apple Watch 6 during the normal daily routine of a dairy cow. The documented collection environment, sampling technique, sensor specifications, and the applications used for data conversion and storage make the dataset transparent, so techniques for lameness detection in dairy cows can be developed and objectively compared against it. Aside from publicly sharing the dataset, we also share a machine-learning technique that classifies the cows as healthy or lame using the raw sensory data, validating the major objective of establishing the relationship between sensor data and lameness.
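The abstract does not specify how the raw Apple Watch inertial stream is turned into a healthy/lame label. A minimal sketch of the usual first step, segmenting the raw tri-axial signal into fixed-length windows and extracting simple statistical features, might look like the following; the sampling rate, window length, and feature set here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def window_features(signal, fs=50, win_s=2.0):
    """Split a (n_samples, 3) tri-axial accelerometer stream into
    fixed-length windows and compute simple per-window statistics.
    fs (Hz) and win_s (seconds) are assumed values for illustration."""
    win = int(fs * win_s)
    n = signal.shape[0] // win
    feats = []
    for i in range(n):
        w = signal[i * win:(i + 1) * win]
        mag = np.linalg.norm(w, axis=1)  # per-sample acceleration magnitude
        feats.append([mag.mean(), mag.std(), np.abs(np.diff(mag)).mean()])
    return np.array(feats)  # shape: (n_windows, 3)
```

Feature vectors of this kind would then feed a standard classifier; the paper's actual model operates on the raw sensory data, so treat this only as a generic baseline sketch.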
Related papers
- CattleEyeView: A Multi-task Top-down View Cattle Dataset for Smarter
Precision Livestock Farming [6.291219495092237]
We introduce CattleEyeView dataset, the first top-down view multi-task cattle video dataset.
The dataset contains 753 distinct top-down cow instances in 30,703 frames.
We perform benchmark experiments to evaluate the model's performance for each task.
arXiv Detail & Related papers (2023-12-14T09:18:02Z) - PEOPL: Characterizing Privately Encoded Open Datasets with Public Labels [59.66777287810985]
We introduce information-theoretic scores for privacy and utility, which quantify the average performance of an unfaithful user.
We then theoretically characterize primitives in building families of encoding schemes that motivate the use of random deep neural networks.
arXiv Detail & Related papers (2023-03-31T18:03:53Z) - DynImp: Dynamic Imputation for Wearable Sensing Data Through Sensory and
Temporal Relatedness [78.98998551326812]
We argue that traditional methods have rarely made use of both the time-series dynamics of the data and the relatedness of the features from different sensors.
We propose a model, termed DynImp, to handle missingness at different time points using nearest neighbors along the feature axis.
We show that the method can exploit the multi-modality features from related sensors and also learn from history time-series dynamics to reconstruct the data under extreme missingness.
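DynImp itself is a learned model; as a toy stand-in for the nearest-neighbor idea in its summary, one can fill each row's missing entries from the k most similar complete rows, measured over the observed features. Everything here (the distance, k, and the mean-fill rule) is an illustrative assumption, not the paper's method.

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill NaNs in each row of X using the mean of the k nearest
    complete rows (Euclidean distance over that row's observed
    features). A toy sketch of neighbor-based imputation only."""
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]  # rows with no missing values
    for i in range(X.shape[0]):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        obs = ~miss
        d = np.linalg.norm(complete[:, obs] - X[i, obs], axis=1)
        nn = complete[np.argsort(d)[:k]]  # k nearest complete rows
        X[i, miss] = nn[:, miss].mean(axis=0)
    return X
```

Under extreme missingness a learned model like DynImp can also exploit temporal dynamics, which this memoryless sketch ignores.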
arXiv Detail & Related papers (2022-09-26T21:59:14Z) - Segmentation Enhanced Lameness Detection in Dairy Cows from RGB and
Depth Video [8.906235809404189]
Early lameness detection helps farmers address illnesses early and avoid negative effects caused by the degeneration of cows' condition.
We collected a dataset of short clips of cows exiting a milking station and annotated the degree of lameness of the cows.
We proposed a lameness detection method that leverages pre-trained neural networks to extract discriminative features from videos and assign a binary score to each cow indicating its condition: "healthy" or "lame".
arXiv Detail & Related papers (2022-06-09T12:16:31Z) - Persistent Animal Identification Leveraging Non-Visual Markers [71.14999745312626]
We aim to locate and provide a unique identifier for each mouse in a cluttered home-cage environment through time.
This is a very challenging problem due to (i) the lack of distinguishing visual features for each mouse, and (ii) the close confines of the scene with constant occlusion.
Our approach achieves 77% accuracy on this animal identification problem, and is able to reject spurious detections when the animals are hidden.
arXiv Detail & Related papers (2021-12-13T17:11:32Z) - T-LEAP: occlusion-robust pose estimation of walking cows using temporal
information [0.0]
Lameness, a prevalent health disorder in dairy cows, is commonly detected by analyzing the gait of cows.
A cow's gait can be tracked in videos using pose estimation models, which learn to automatically localize anatomical landmarks in images and videos.
Most animal pose estimation models are static, that is, videos are processed frame by frame and do not use any temporal information.
arXiv Detail & Related papers (2021-04-16T10:50:56Z) - Pretrained equivariant features improve unsupervised landmark discovery [69.02115180674885]
We formulate a two-step unsupervised approach that overcomes this challenge by first learning powerful pixel-based features.
Our method produces state-of-the-art results in several challenging landmark detection datasets.
arXiv Detail & Related papers (2021-04-07T05:42:11Z) - Deep Learning-based Cattle Activity Classification Using Joint
Time-frequency Data Representation [2.472770436480857]
In this paper, a sequential deep neural network is used to develop a behavioural model and to classify cattle behaviour and activities.
The key focus of this paper is the exploration of a joint time-frequency domain representation of the sensor data.
Our exploration is based on a real-world data set with over 3 million samples, collected from sensors with a tri-axial accelerometer, magnetometer and gyroscope.
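The joint time-frequency representation mentioned above is not detailed in this summary; a common realization is a short-time Fourier transform of each sensor channel. The window and hop sizes below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def stft_mag(x, win=64, hop=32):
    """Magnitude spectrogram of a 1-D sensor channel via a
    Hann-windowed short-time Fourier transform. Returns an array
    of shape (n_frames, win // 2 + 1): time on one axis, frequency
    on the other, i.e. a joint time-frequency representation."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))
```

Stacking such spectrograms for the accelerometer, magnetometer, and gyroscope axes yields an image-like input that a sequential deep network can consume; the paper's exact representation may differ.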
arXiv Detail & Related papers (2020-11-06T14:24:55Z) - DecAug: Augmenting HOI Detection via Decomposition [54.65572599920679]
Current algorithms suffer from insufficient training samples and category imbalance within datasets.
We propose an efficient and effective data augmentation method called DecAug for HOI detection.
Experiments show that our method brings up to 3.3 mAP and 1.6 mAP improvements on the V-COCO and HICO-DET datasets, respectively.
arXiv Detail & Related papers (2020-10-02T13:59:05Z) - Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
To tackle this, we propose to apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z) - Machine learning approaches for identifying prey handling activity in
otariid pinnipeds [12.814241588031685]
This paper focuses on the identification of prey handling activity in seals.
Data taken into consideration are streams of 3D accelerometers and depth sensors values collected by devices attached directly on seals.
We propose an automatic model based on Machine Learning (ML) algorithms.
arXiv Detail & Related papers (2020-02-10T15:30:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.