Multi-modal Sensor Data Fusion for In-situ Classification of Animal
Behavior Using Accelerometry and GNSS Data
- URL: http://arxiv.org/abs/2206.12078v1
- Date: Fri, 24 Jun 2022 04:54:03 GMT
- Title: Multi-modal Sensor Data Fusion for In-situ Classification of Animal
Behavior Using Accelerometry and GNSS Data
- Authors: Reza Arablouei, Ziwei Wang, Greg J. Bishop-Hurley, Jiajun Liu
- Abstract summary: We examine using data from multiple sensing modes, i.e., accelerometry and global navigation satellite system (GNSS), for classifying animal behavior.
We develop multi-modal animal behavior classification algorithms using two real-world datasets collected via smart cattle collar and ear tags.
- Score: 16.47484520898938
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We examine using data from multiple sensing modes, i.e., accelerometry and
global navigation satellite system (GNSS), for classifying animal behavior. We
extract three new features from the GNSS data, namely, the distance from the
water point, median speed, and median estimated horizontal position error. We
consider two approaches for combining the information available from the
accelerometry and GNSS data. The first approach is based on concatenating the
features extracted from both sensor data and feeding the concatenated feature
vector into a multi-layer perceptron (MLP) classifier. The second approach is
based on fusing the posterior probabilities predicted by two MLP classifiers
each taking the features extracted from the data of one sensor as input. We
evaluate the performance of the developed multi-modal animal behavior
classification algorithms using two real-world datasets collected via smart
cattle collar and ear tags. The leave-one-animal-out cross-validation results
show that both approaches improve the classification performance appreciably
compared with using the data from only one sensing mode, in particular, for the
infrequent but important behaviors of walking and drinking. The algorithms
developed based on both approaches require only modest computational and
memory resources and are hence suitable for implementation on the embedded systems of
our collar and ear tags. However, the multi-modal animal behavior
classification algorithm based on posterior probability fusion is preferable to
the one based on feature concatenation, as it delivers better classification
accuracy, has lower computational and memory complexity, is more robust to
sensor data failure, and enjoys better modularity.
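The three GNSS-derived features named in the abstract can be illustrated with a small sketch. The snippet below is not the authors' code: the fix format, the median aggregation used for the water-point distance, and the water-point coordinates are assumptions made only for illustration.

```python
# Illustrative sketch (not the authors' code): the three GNSS-derived features
# from the abstract -- distance from the water point, median speed, and median
# estimated horizontal position error -- computed over one window of GNSS fixes.
import math
from statistics import median

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS-84 points, in metres."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def gnss_features(fixes, water_point):
    """fixes: dicts with 'lat', 'lon', 'speed' (m/s), 'ehpe' (m, estimated
    horizontal position error); water_point: (lat, lon) of the water trough."""
    dist_to_water = median(
        haversine_m(f["lat"], f["lon"], water_point[0], water_point[1]) for f in fixes
    )
    median_speed = median(f["speed"] for f in fixes)
    median_ehpe = median(f["ehpe"] for f in fixes)
    return [dist_to_water, median_speed, median_ehpe]
```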
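The two fusion approaches described in the abstract can likewise be sketched. The following is a minimal illustration using scikit-learn MLPs and pre-computed per-window feature matrices; the hidden-layer sizes and the averaging rule used for posterior fusion are placeholder choices, not the paper's actual configuration.

```python
# Illustrative sketch (not the paper's exact models): the two fusion schemes
# from the abstract, using scikit-learn MLP classifiers.
import numpy as np
from sklearn.neural_network import MLPClassifier

def fit_concatenation(X_acc, X_gnss, y):
    """Approach 1: concatenate accelerometry and GNSS features, train one MLP."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    clf.fit(np.hstack([X_acc, X_gnss]), y)
    return clf

def fit_posterior_fusion(X_acc, X_gnss, y):
    """Approach 2: train one MLP per sensing mode."""
    clf_acc = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_acc, y)
    clf_gnss = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X_gnss, y)
    return clf_acc, clf_gnss

def predict_posterior_fusion(clf_acc, clf_gnss, X_acc, X_gnss):
    """Fuse the per-modality posterior probabilities (here: a simple average)."""
    proba = 0.5 * (clf_acc.predict_proba(X_acc) + clf_gnss.predict_proba(X_gnss))
    return clf_acc.classes_[np.argmax(proba, axis=1)]
```

The second approach also degrades gracefully when one modality is unavailable, since each classifier can still produce posteriors on its own; this is consistent with the robustness to sensor data failure noted in the abstract.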
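Leave-one-animal-out cross-validation, as used in the evaluation, amounts to grouping windows by animal ID and holding out one animal per fold. Below is a minimal sketch with scikit-learn's LeaveOneGroupOut, assuming the data layout of the previous snippets and using macro F1 as the summary metric (an assumption, not necessarily the paper's reported metric).

```python
# Illustrative sketch (assumed data layout and metric): leave-one-animal-out
# cross-validation via scikit-learn's LeaveOneGroupOut, with animal ID as group.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import f1_score

def loao_evaluate(fit_fn, predict_fn, X_acc, X_gnss, y, animal_ids):
    """Train on all animals but one, test on the held-out animal, repeat."""
    scores = []
    for train, test in LeaveOneGroupOut().split(X_acc, y, groups=animal_ids):
        model = fit_fn(X_acc[train], X_gnss[train], y[train])
        y_hat = predict_fn(model, X_acc[test], X_gnss[test])
        scores.append(f1_score(y[test], y_hat, average="macro"))
    return float(np.mean(scores))

# Example wiring for the posterior-fusion sketch above (hypothetical variables):
# score = loao_evaluate(fit_posterior_fusion,
#                       lambda m, Xa, Xg: predict_posterior_fusion(*m, Xa, Xg),
#                       X_acc, X_gnss, y, animal_ids)
```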
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z) - DiffusionEngine: Diffusion Model is Scalable Data Engine for Object
Detection [41.436817746749384]
Diffusion Model is a scalable data engine for object detection.
DiffusionEngine (DE) provides high-quality detection-oriented training pairs in a single stage.
arXiv Detail & Related papers (2023-09-07T17:55:01Z) - Domain Adaptive Synapse Detection with Weak Point Annotations [63.97144211520869]
We present AdaSyn, a framework for domain adaptive synapse detection with weak point annotations.
In the WASPSYN challenge at ISBI 2023, our method ranks first.
arXiv Detail & Related papers (2023-08-31T05:05:53Z) - TempNet: Temporal Attention Towards the Detection of Animal Behaviour in
Videos [63.85815474157357]
We propose an efficient computer vision- and deep learning-based method for the detection of biological behaviours in videos.
TempNet uses an encoder bridge and residual blocks to maintain model performance with a two-stage (spatial, then temporal) encoder.
We demonstrate its application to the detection of sablefish (Anoplopoma fimbria) startle events.
arXiv Detail & Related papers (2022-11-17T23:55:12Z) - Animal Behavior Classification via Deep Learning on Embedded Systems [10.160218445628836]
We develop an end-to-end deep-neural-network-based algorithm for classifying animal behavior using accelerometry data.
We implement the proposed algorithm on the embedded system of the collar tag's AIoT device to perform in-situ classification of animal behavior.
arXiv Detail & Related papers (2021-11-24T06:26:15Z) - Riemannian classification of EEG signals with missing values [67.90148548467762]
This paper proposes two strategies to handle missing data for the classification of electroencephalograms.
The first approach estimates the covariance from imputed data with the $k$-nearest neighbors algorithm; the second relies on the observed data by leveraging the observed-data likelihood within an expectation-maximization algorithm.
As the results show, the proposed strategies outperform classification based on the observed data alone and maintain high accuracy even as the missing-data ratio increases.
arXiv Detail & Related papers (2021-10-19T14:24:50Z) - DecAug: Augmenting HOI Detection via Decomposition [54.65572599920679]
Current algorithms suffer from insufficient training samples and category imbalance within datasets.
We propose an efficient and effective data augmentation method called DecAug for HOI detection.
Experiments show that our method brings improvements of up to 3.3 mAP and 1.6 mAP on the V-COCO and HICO-DET datasets, respectively.
arXiv Detail & Related papers (2020-10-02T13:59:05Z) - Single-stage intake gesture detection using CTC loss and extended prefix
beam search [8.22379888383833]
Accurate detection of individual intake gestures is a key step towards automatic dietary monitoring.
We propose a single-stage approach which directly decodes the probabilities learned from sensor data into sparse intake detections.
arXiv Detail & Related papers (2020-08-07T06:04:25Z) - SL-DML: Signal Level Deep Metric Learning for Multimodal One-Shot Action
Recognition [0.0]
We propose a metric learning approach to reduce the action recognition problem to a nearest neighbor search in embedding space.
We encode signals into images and extract features using a deep residual CNN.
The resulting encoder transforms features into an embedding space in which closer distances encode similar actions while higher distances encode different actions.
arXiv Detail & Related papers (2020-04-23T11:28:27Z) - EHSOD: CAM-Guided End-to-end Hybrid-Supervised Object Detection with
Cascade Refinement [53.69674636044927]
We present EHSOD, an end-to-end hybrid-supervised object detection system.
It can be trained in one shot on both fully and weakly-annotated data.
It achieves comparable results on multiple object detection benchmarks with only 30% fully-annotated data.
arXiv Detail & Related papers (2020-02-18T08:04:58Z) - Machine learning approaches for identifying prey handling activity in
otariid pinnipeds [12.814241588031685]
This paper focuses on the identification of prey handling activity in seals.
The data considered are streams of 3D accelerometer and depth sensor values collected by devices attached directly to seals.
We propose an automatic model based on Machine Learning (ML) algorithms.
arXiv Detail & Related papers (2020-02-10T15:30:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.