Accelerometer-Based Multivariate Time-Series Dataset for Calf Behavior Classification
- URL: http://arxiv.org/abs/2409.00053v1
- Date: Tue, 20 Aug 2024 08:11:54 GMT
- Title: Accelerometer-Based Multivariate Time-Series Dataset for Calf Behavior Classification
- Authors: Oshana Dissanayake, Sarah E. McPherson, Joseph Allyndree, Emer Kennedy, Padraig Cunningham, Lucile Riaboff
- Abstract summary: ActBeCalf is a ready-to-use dataset for classifying pre-weaned calf behaviour from accelerometer time series.
30 dairy calves were equipped with a 3D-accelerometer sensor attached to a neck-collar from one week after birth for 13 weeks.
- Score: 0.7366868731714773
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Gaining new insights into how pre-weaned calves adapt to routine challenges (transport, group relocation, etc.) and diseases (respiratory diseases, diarrhea, etc.) is a promising way to improve calf welfare on dairy farms. A classic approach to automatically monitoring behavior is to equip animals with accelerometers attached to neck collars and to develop machine learning models from the accelerometer time series. However, to be used for model development, the data must be labeled. Obtaining these labels requires annotating behaviors from direct observation or videos, a time-consuming and labor-intensive process. To address this challenge, we propose the ActBeCalf (Accelerometer Time-Series for Calf Behaviour classification) dataset: 30 pre-weaned dairy calves (Holstein Friesian and Jersey) were equipped with a 3D-accelerometer sensor attached to a neck-collar from one week after birth for 13 weeks. The calves were simultaneously filmed with a camera in each pen. At the end of the trial, behaviors were manually annotated from the videos by 3 observers using the Behavioral Observation Research Interactive Software (BORIS) and an ethogram of 23 behaviors. ActBeCalf contains 27.4 hours of accelerometer data accurately aligned with calf behaviors. The dataset includes the main behaviors, like lying, standing, walking, and running, as well as less prominent behaviors, such as sniffing, social interaction, and grooming. Finally, to demonstrate its reliability, ActBeCalf was used for behavior classification with machine learning models: (i) two classes of behaviors [active and inactive; model 1] and (ii) four classes of behaviors [running, lying, drinking milk, and an 'other' class; model 2]. We obtained a balanced accuracy of 92% [model 1] and 84% [model 2]. ActBeCalf is a comprehensive and ready-to-use dataset for classifying pre-weaned calf behaviour from acceleration time series.
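To make the pipeline concrete, here is a minimal sketch of how a two-class (active/inactive) model like model 1 could be trained and scored with balanced accuracy. It is an illustration under stated assumptions, not the authors' released code: the file name, column names, feature set, and behaviour-to-class mapping below are hypothetical.

```python
# Illustrative sketch only: "actbecalf_windows.csv", its column names, and
# the behaviour-to-class mapping are assumptions, not the released format.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical export: one row per accelerometer window with summary
# features from the X/Y/Z axes plus the annotated behaviour and calf id.
df = pd.read_csv("actbecalf_windows.csv")

# Collapse the 23-behaviour ethogram into the two classes of model 1.
ACTIVE = {"walking", "running", "grooming", "sniffing", "social_interaction"}
df["label"] = np.where(df["behaviour"].isin(ACTIVE), "active", "inactive")

feature_cols = [c for c in df.columns if c.startswith(("mean_", "std_", "energy_"))]
X = df[feature_cols].to_numpy()
y = df["label"].to_numpy()
groups = df["calf_id"].to_numpy()

# Split by calf so windows from one animal never land in both sets.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups))

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[train_idx], y[train_idx])

# Balanced accuracy averages per-class recall, which matters here because
# inactive behaviours (e.g. lying) dominate the recording time.
print(balanced_accuracy_score(y[test_idx], clf.predict(X[test_idx])))
```

Splitting by calf id rather than by window avoids leakage between training and test sets, mirroring the per-animal splits described in the related work below.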
Related papers
- A Comparison of Deep Learning and Established Methods for Calf Behaviour Monitoring [0.0]
We report on research on animal activity recognition in support of welfare monitoring.
The data comes from collar-mounted accelerometer sensors worn by Holstein and Jersey calves.
A key requirement for detecting changes in behaviour is the ability to classify activity into well-defined behavioural classes.
arXiv Detail & Related papers (2024-08-23T13:02:24Z)
- Development of a digital tool for monitoring the behaviour of pre-weaned calves using accelerometer neck-collars [0.0]
Thirty pre-weaned calves were equipped with a 3-D accelerometer attached to a neck-collar for two months and filmed simultaneously.
The behaviours were annotated, resulting in 27.4 hours of observation aligned with the accelerometer data.
Two machine learning models were tuned using data from 80% of the calves.
arXiv Detail & Related papers (2024-06-25T08:11:22Z)
- Evaluating ROCKET and Catch22 features for calf behaviour classification from accelerometer data using Machine Learning models [0.7366868731714773]
30 Irish Holstein Friesian and Jersey pre-weaned calves were monitored with accelerometer sensors, yielding 27.4 hours of annotated behaviours.
Hand-crafted features are commonly used in Machine Learning models, while ROCKET and Catch22 features are specifically designed for time-series classification problems.
This study aims to compare the performance of ROCKET and Catch22 features to hand-crafted features; a minimal ROCKET pipeline is sketched below.
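For context, a minimal ROCKET pipeline of the kind compared in this study might look like the following, using sktime's Rocket transformer; the data shapes and labels are placeholders, and this is not the authors' exact configuration.

```python
# Minimal ROCKET pipeline sketch (not the authors' exact setup).
# Assumes windowed accelerometer data of shape (n_windows, 3, n_timepoints),
# one window per labelled behaviour segment with X/Y/Z channels.
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sktime.transformations.panel.rocket import Rocket

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 3, 75))  # placeholder accelerometer windows
y_train = rng.integers(0, 2, size=100)   # placeholder behaviour labels

# ROCKET applies thousands of random convolutional kernels and pools each
# kernel's output into two features (max and proportion of positive values).
rocket = Rocket(num_kernels=10_000, random_state=0)
features = rocket.fit_transform(X_train)

# A linear ridge classifier on top of ROCKET features is the standard pairing.
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
clf.fit(features, y_train)
```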
arXiv Detail & Related papers (2024-04-28T12:23:01Z)
- Refining Pre-Trained Motion Models [56.18044168821188]
We take on the challenge of improving state-of-the-art supervised models with self-supervised training.
We focus on obtaining a "clean" training signal from real-world unlabelled video.
We show that our method yields reliable gains over fully-supervised methods in real videos.
arXiv Detail & Related papers (2024-01-01T18:59:33Z) - Early Action Recognition with Action Prototypes [62.826125870298306]
We propose a novel model that learns a prototypical representation of the full action for each class.
We decompose the video into short clips, where a visual encoder extracts features from each clip independently.
A decoder then aggregates the features from all clips in an online fashion for the final class prediction.
arXiv Detail & Related papers (2023-12-11T18:31:13Z) - CVB: A Video Dataset of Cattle Visual Behaviors [13.233877352490923]
Existing datasets for cattle behavior recognition are mostly small, lack well-defined labels, or are collected in unrealistic controlled environments.
We introduce a new dataset, called Cattle Visual Behaviors (CVB), that consists of 502 video clips, each fifteen seconds long, captured in natural lighting conditions, and annotated with eleven visually perceptible behaviors of grazing cattle.
arXiv Detail & Related papers (2023-05-26T00:44:11Z) - TempNet: Temporal Attention Towards the Detection of Animal Behaviour in
Videos [63.85815474157357]
We propose an efficient computer vision- and deep learning-based method for the detection of biological behaviours in videos.
TempNet uses an encoder bridge and residual blocks to maintain model performance with a two-stage encoder: spatial, then temporal.
We demonstrate its application to the detection of sablefish (Anoplopoma fimbria) startle events.
arXiv Detail & Related papers (2022-11-17T23:55:12Z) - Sedentary Behavior Estimation with Hip-worn Accelerometer Data:
Segmentation, Classification and Thresholding [1.9402357545481315]
Previous methods for estimating sedentary behavior from hip-worn accelerometer data are often invalid or suboptimal under free-living conditions and in the presence of subject-to-subject variation.
We propose a local Markov switching model that takes this situation into account, and introduce a general procedure for posture classification and sedentary behavior analysis that fits the model naturally; a rough illustrative sketch follows.
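As a rough stand-in for the switching idea, the sketch below segments a synthetic hip-acceleration stream with an off-the-shelf Gaussian HMM from hmmlearn; the paper's local Markov switching model is more specialized, so treat this only as an illustration of the general approach.

```python
# Hedged stand-in: the paper proposes a *local* Markov switching model;
# this sketch uses a plain Gaussian HMM to show the general idea of
# segmenting hip-worn accelerometer data into latent posture states.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Placeholder 1 Hz vertical-axis acceleration stream (n_samples, 1).
acc = np.concatenate([rng.normal(0.2, 0.05, 300),   # sitting-like segment
                      rng.normal(0.9, 0.10, 300)])  # upright-like segment
X = acc.reshape(-1, 1)

# Two latent states (e.g. sedentary vs. upright); the transition matrix
# plays the role of the "switching" between postures over time.
hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50,
                  random_state=0)
hmm.fit(X)
states = hmm.predict(X)  # per-sample posture segmentation

# Fraction of time spent in the same state as the first sample.
print(f"fraction in first posture state: {np.mean(states == states[0]):.2f}")
```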
arXiv Detail & Related papers (2022-07-05T05:01:11Z) - HAR-GCNN: Deep Graph CNNs for Human Activity Recognition From Highly
Unlabeled Mobile Sensor Data [61.79595926825511]
Acquiring balanced datasets containing accurate activity labels requires humans to correctly annotate and potentially interfere with the subjects' normal activities in real-time.
We propose HAR-GCNN, a deep graph CNN model that leverages the correlation between chronologically adjacent sensor measurements to predict the correct labels for unclassified activities.
HAR-GCNN shows superior performance relative to previously used baseline methods, improving classification accuracy by about 25% and up to 68% on different datasets.
arXiv Detail & Related papers (2022-03-07T01:23:46Z) - Self-supervised Pretraining with Classification Labels for Temporal
Activity Detection [54.366236719520565]
Temporal Activity Detection aims to predict activity classes per frame.
Due to the expensive frame-level annotations required for detection, the scale of detection datasets is limited.
This work proposes a novel self-supervised pretraining method for detection leveraging classification labels.
arXiv Detail & Related papers (2021-11-26T18:59:28Z) - Multi-level Motion Attention for Human Motion Prediction [132.29963836262394]
We study the use of different types of attention, computed at joint, body part, and full pose levels.
Our experiments on Human3.6M, AMASS and 3DPW validate the benefits of our approach for both periodical and non-periodical actions.
arXiv Detail & Related papers (2021-06-17T08:08:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.