Animal Behavior Classification via Accelerometry Data and Recurrent
Neural Networks
- URL: http://arxiv.org/abs/2111.12843v1
- Date: Wed, 24 Nov 2021 23:28:25 GMT
- Title: Animal Behavior Classification via Accelerometry Data and Recurrent
Neural Networks
- Authors: Liang Wang, Reza Arablouei, Flavio A. P. Alvarenga, Greg J.
Bishop-Hurley
- Abstract summary: We study the classification of animal behavior using accelerometry data through various recurrent neural network (RNN) models.
We evaluate the classification performance and complexity of the considered models.
We also include two state-of-the-art convolutional neural network (CNN)-based time-series classification models in the evaluations.
- Score: 11.099308746733028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the classification of animal behavior using accelerometry data
through various recurrent neural network (RNN) models. We evaluate the
classification performance and complexity of the considered models, which
feature long short-term memory (LSTM) or gated recurrent unit (GRU)
architectures with varying depths and widths, using four datasets acquired from
cattle via collar or ear tags. We also include two state-of-the-art
convolutional neural network (CNN)-based time-series classification models in
the evaluations. The results show that the RNN-based models can achieve similar
or higher classification accuracy compared with the CNN-based models while
having lower computational and memory requirements. We also observe that the
models with GRU architecture generally outperform the ones with LSTM
architecture in terms of classification accuracy despite being less complex. A
single-layer uni-directional GRU model with 64 hidden units appears to offer a
good balance between accuracy and complexity, making it suitable for
implementation on edge/embedded devices.
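To make the recommended model concrete, the following is a minimal numpy sketch of a single-layer, uni-directional GRU with 64 hidden units followed by a linear softmax head, as the abstract describes. This is not the authors' implementation; the input dimensionality (3-axis accelerometry), window length, and number of behavior classes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_cls, T = 3, 64, 5, 100  # assumed: 3-axis input, 5 classes, 100-sample window

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# GRU parameters: update gate (z), reset gate (r), candidate state (h).
Wz, Uz, bz = rng.normal(0, 0.1, (n_hid, n_in)), rng.normal(0, 0.1, (n_hid, n_hid)), np.zeros(n_hid)
Wr, Ur, br = rng.normal(0, 0.1, (n_hid, n_in)), rng.normal(0, 0.1, (n_hid, n_hid)), np.zeros(n_hid)
Wh, Uh, bh = rng.normal(0, 0.1, (n_hid, n_in)), rng.normal(0, 0.1, (n_hid, n_hid)), np.zeros(n_hid)
Wo, bo = rng.normal(0, 0.1, (n_cls, n_hid)), np.zeros(n_cls)

def gru_classify(x):
    """x: (T, n_in) window of accelerometer samples -> class probabilities."""
    h = np.zeros(n_hid)
    for t in range(x.shape[0]):                 # uni-directional: left-to-right only
        z = sigmoid(Wz @ x[t] + Uz @ h + bz)    # update gate
        r = sigmoid(Wr @ x[t] + Ur @ h + br)    # reset gate
        h_tilde = np.tanh(Wh @ x[t] + Uh @ (r * h) + bh)
        h = (1 - z) * h + z * h_tilde           # convex blend of old and candidate state
    logits = Wo @ h + bo                        # classify from the final hidden state
    e = np.exp(logits - logits.max())           # numerically stable softmax
    return e / e.sum()

probs = gru_classify(rng.normal(size=(T, n_in)))
```

The single recurrent loop over the window is what keeps the compute and memory footprint small enough for edge devices, compared with the wider receptive fields of the CNN baselines.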
Related papers
- A model for multi-attack classification to improve intrusion detection
performance using deep learning approaches [0.0]
The objective here is to create a reliable intrusion detection mechanism to help identify malicious attacks.
A deep-learning-based solution framework consisting of three approaches is developed.
The first approach is a long short-term memory recurrent neural network (LSTM-RNN) trained with seven optimizers: Adamax, SGD, Adagrad, Adam, RMSprop, Nadam, and Adadelta.
The models self-learn the features and classify the attack classes in a multi-attack classification setting.
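All seven training algorithms named above are gradient-based optimizers. As an illustrative sketch (not this paper's code), the following contrasts two of them, plain SGD and Adam, on a toy 1-D objective f(w) = (w - 3)^2; the learning rates and step counts are arbitrary assumptions.

```python
import numpy as np

def grad(w):
    return 2.0 * (w - 3.0)  # df/dw for f(w) = (w - 3)^2, minimized at w = 3

def run_sgd(lr=0.1, steps=200):
    w = 0.0
    for _ in range(steps):
        w -= lr * grad(w)               # fixed-size step along the negative gradient
    return w

def run_adam(lr=0.1, steps=200, b1=0.9, b2=0.999, eps=1e-8):
    w, m, v = 0.0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g       # first-moment (mean) estimate
        v = b2 * v + (1 - b2) * g * g   # second-moment (uncentered variance) estimate
        m_hat = m / (1 - b1 ** t)       # bias correction for zero initialization
        v_hat = v / (1 - b2 ** t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter adaptive step
    return w

w_sgd, w_adam = run_sgd(), run_adam()
```

Adagrad, RMSprop, Adamax, Nadam, and Adadelta are variations on the same per-parameter scaling idea shown in `run_adam`.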
arXiv Detail & Related papers (2023-10-25T05:38:44Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Set-based Neural Network Encoding Without Weight Tying [91.37161634310819]
We propose a neural network weight encoding method for network property prediction.
Our approach is capable of encoding neural networks in a model zoo of mixed architecture.
We introduce two new tasks for neural network property prediction: cross-dataset and cross-architecture.
arXiv Detail & Related papers (2023-05-26T04:34:28Z)
- Continuous time recurrent neural networks: overview and application to
forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous-time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregularly timed observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- Keep It Simple: CNN Model Complexity Studies for Interference
Classification Tasks [7.358050500046429]
We study the trade-off amongst dataset size, CNN model complexity, and classification accuracy under various levels of classification difficulty.
Our study, based on three wireless datasets, shows that a simpler CNN model with fewer parameters can perform just as well as a more complex model.
arXiv Detail & Related papers (2023-03-06T17:53:42Z)
- NAR-Former: Neural Architecture Representation Learning towards Holistic
Attributes Prediction [37.357949900603295]
We propose a neural architecture representation model that can be used to estimate attributes holistically.
Experiment results show that our proposed framework can be used to predict the latency and accuracy attributes of both cell architectures and whole deep neural networks.
arXiv Detail & Related papers (2022-11-15T10:15:21Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- In-situ animal behavior classification using knowledge distillation and
fixed-point quantization [6.649514998517633]
We take a deep and complex convolutional neural network, known as residual neural network (ResNet), as the teacher model.
We implement both unquantized and quantized versions of the developed KD-based models on the embedded systems of our purpose-built collar and ear tag devices.
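The two ingredients named in this entry, knowledge distillation (KD) and fixed-point quantization, can be sketched in a few lines of numpy. This is a hedged illustration, not the paper's code: the temperature, logit values, and fractional bit width are all assumed for the example.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    p = softmax(teacher_logits / T)   # soft targets from the (e.g. ResNet) teacher
    q = softmax(student_logits / T)   # student predictions, softened the same way
    return float(T * T * np.sum(p * np.log(p / q)))  # T^2 rescales gradient magnitude

def quantize_fixed_point(w, frac_bits=8):
    """Round weights to a fixed-point grid with step size 2^-frac_bits."""
    scale = 2.0 ** frac_bits
    return np.round(w * scale) / scale

teacher = np.array([2.0, 0.5, -1.0])  # illustrative logits, not real model outputs
student = np.array([1.5, 0.7, -0.8])
loss = kd_loss(student, teacher)
wq = quantize_fixed_point(np.array([0.1234, -0.5678]))
```

Minimizing `kd_loss` alongside the usual cross-entropy transfers the teacher's knowledge to a small student, and `quantize_fixed_point` shows why the quantized variant trades a little accuracy for integer-friendly arithmetic on the collar and ear tag hardware.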
arXiv Detail & Related papers (2022-09-09T06:07:17Z)
- Modelling Neuronal Behaviour with Time Series Regression: Recurrent
Neural Networks on C. Elegans Data [0.0]
We show how the nervous system of C. elegans can be modelled and simulated with data-driven models using different neural network architectures.
We show that GRU models with a hidden layer size of 4 units are able to reproduce with high accuracy the system's response to very different stimuli.
arXiv Detail & Related papers (2021-07-01T10:39:30Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked
Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network
Architectures [179.66117325866585]
We investigate a design space that is usually overlooked, i.e. adjusting the channel configurations of predefined networks.
We find that this adjustment can be achieved by shrinking widened baseline networks and leads to superior performance.
Experiments are conducted on various networks and datasets for image classification, visual tracking and image restoration.
arXiv Detail & Related papers (2020-06-29T17:59:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.