Lightweight Hopfield Neural Networks for Bioacoustic Detection and Call Monitoring of Captive Primates
- URL: http://arxiv.org/abs/2511.11615v1
- Date: Tue, 04 Nov 2025 17:46:03 GMT
- Title: Lightweight Hopfield Neural Networks for Bioacoustic Detection and Call Monitoring of Captive Primates
- Authors: Wendy Lomas, Andrew Gascoyne, Colin Dubreuil, Stefano Vaglio, Liam Naughton
- Abstract summary: We present a transparent, lightweight and fast-to-train associative memory AI model with Hopfield neural network architecture. Adapted from a model developed to detect bat echolocation calls, this model monitors captive endangered black-and-white ruffed lemur Varecia variegata vocalisations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Passive acoustic monitoring is a sustainable method of monitoring wildlife and environments, but it generates large datasets and, currently, a processing backlog. Academic research into automating this process has focused on resource-intensive convolutional neural networks, which require large pre-labelled datasets for training and lack flexibility in application. We present a viable alternative relevant in both wild and captive settings: a transparent, lightweight and fast-to-train associative memory AI model with Hopfield neural network (HNN) architecture. Adapted from a model developed to detect bat echolocation calls, this model monitors vocalisations of captive endangered black-and-white ruffed lemurs (Varecia variegata). Lemur social calls of interest for welfare monitoring are stored in the HNN in order to detect other call instances across the larger acoustic dataset. We make significant model improvements by storing an additional signal caused by movement and achieve an overall accuracy of 0.94. The model can perform $340$ classifications per second, processing over 5.5 hours of audio data per minute, on a standard laptop running other applications. It has broad applicability and trains in milliseconds. Our lightweight solution reduces data-to-insight turnaround times and can accelerate decision making in both captive and wild settings.
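The associative-memory detection idea in the abstract can be illustrated with a toy Hopfield network. This is a hypothetical sketch, not the authors' code: the pattern sizes, names, and binary "call templates" below are illustrative assumptions, and a real system would derive templates from audio features.

```python
# Minimal Hopfield associative memory sketch (illustrative, not the paper's
# implementation): store +1/-1 patterns via the Hebbian rule, then recall
# the nearest stored pattern from a noisy probe.

def train(patterns):
    """Hebbian learning: w[i][j] = sum over patterns of p[i]*p[j], zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, probe, steps=10):
    """Synchronous sign updates until the state stops changing."""
    s = list(probe)
    for _ in range(steps):
        new = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
               for i in range(len(s))]
        if new == s:
            break
        s = new
    return s

# Two toy 8-element "templates" standing in for stored call signatures.
call_a = [1, 1, 1, 1, -1, -1, -1, -1]
call_b = [1, -1, 1, -1, 1, -1, 1, -1]
w = train([call_a, call_b])

noisy = list(call_a)
noisy[0] = -1                      # flip one element to simulate noise
print(recall(w, noisy) == call_a)  # the probe converges back to call_a
```

Detection in this framing amounts to checking whether an incoming window converges to (or lies close to) a stored template, which is why training reduces to a single pass of outer products and takes milliseconds.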
Related papers
- Self-Supervised Learning via Flow-Guided Neural Operator on Time-Series Data [57.85958428020496]
Flow-Guided Neural Operator (FGNO) is a novel framework combining operator learning with flow matching for SSL training. FGNO learns mappings in functional spaces by using the Short-Time Fourier Transform to unify different time resolutions. Unlike prior generative SSL methods that use noisy inputs during inference, we propose using clean inputs for representation extraction while learning representations with noise.
arXiv Detail & Related papers (2026-02-12T18:54:57Z) - Identification of Capture Phases in Nanopore Protein Sequencing Data Using a Deep Learning Model [0.0]
We develop a lightweight one-dimensional convolutional neural network (1D CNN) to detect capture phases in down-sampled signal windows. Our best model, CaptureNet-Deep, achieved an F1 score of 0.94 and precision of 93.39% on held-out test data. These results show that efficient, real-time capture detection is possible using simple, interpretable architectures.
arXiv Detail & Related papers (2025-11-03T06:51:53Z) - First-of-its-kind AI model for bioacoustic detection using a lightweight associative memory Hopfield neural network [0.0]
A growing issue within conservation bioacoustics is the task of analysing the vast amount of data generated from passive acoustic monitoring devices. Our model formulation addresses the key issues encountered when using current AI models for bioacoustic analysis. It uses associative memory via a transparent, explainable Hopfield neural network to store signals and detect similar signals.
arXiv Detail & Related papers (2025-07-14T16:37:20Z) - Neuromorphic Wireless Split Computing with Resonate-and-Fire Neurons [69.73249913506042]
This paper investigates a wireless split computing architecture that employs resonate-and-fire (RF) neurons to process time-domain signals directly. By resonating at tunable frequencies, RF neurons extract time-localized spectral features while maintaining low spiking activity. Experimental results show that the proposed RF-SNN architecture achieves comparable accuracy to conventional LIF-SNNs and ANNs.
arXiv Detail & Related papers (2025-06-24T21:14:59Z) - Neural Conformal Control for Time Series Forecasting [54.96087475179419]
We introduce a neural network conformal prediction method for time series that enhances adaptivity in non-stationary environments. Our approach acts as a neural controller designed to achieve desired target coverage, leveraging auxiliary multi-view data with neural network encoders. We empirically demonstrate significant improvements in coverage and probabilistic accuracy, and find that our method is the only one that combines good calibration with consistency in prediction intervals.
arXiv Detail & Related papers (2024-12-24T03:56:25Z) - Towards Vision Mixture of Experts for Wildlife Monitoring on the Edge [13.112893692624768]
The TinyML community is actively proposing methods to save communication bandwidth and reduce excessive cloud storage costs.
We explore similar per patch conditional computation for the first time for mobile vision transformers.
We evaluate the model on Cornell Sap Sucker Woods 60, a fine grained bird species discrimination dataset.
arXiv Detail & Related papers (2024-11-12T14:36:06Z) - EAS-SNN: End-to-End Adaptive Sampling and Representation for Event-based Detection with Recurrent Spiking Neural Networks [14.046487518350792]
Spiking Neural Networks (SNNs) operate in an event-driven manner through sparse spike communication.
We introduce Residual Potential Dropout (RPD) and Spike-Aware Training (SAT) to regulate potential distribution.
Our method yields a 4.4% mAP improvement on the Gen1 dataset, while requiring 38% fewer parameters and only three time steps.
arXiv Detail & Related papers (2024-03-19T09:34:11Z) - ResFields: Residual Neural Fields for Spatiotemporal Signals [61.44420761752655]
ResFields is a novel class of networks specifically designed to effectively represent complex temporal signals.
We conduct comprehensive analysis of the properties of ResFields and propose a matrix factorization technique to reduce the number of trainable parameters.
We demonstrate the practical utility of ResFields by showcasing its effectiveness in capturing dynamic 3D scenes from sparse RGBD cameras.
arXiv Detail & Related papers (2023-09-06T16:59:36Z) - Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are a deep learning model that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z) - Deep Impulse Responses: Estimating and Parameterizing Filters with Deep Networks [76.830358429947]
Impulse response estimation in high noise and in-the-wild settings is a challenging problem.
We propose a novel framework for parameterizing and estimating impulse responses based on recent advances in neural representation learning.
arXiv Detail & Related papers (2022-02-07T18:57:23Z) - Deep Learning-based Cattle Activity Classification Using Joint Time-frequency Data Representation [2.472770436480857]
In this paper, a sequential deep neural network is used to develop a behavioural model and to classify cattle behaviour and activities.
The key focus of this paper is the exploration of a joint time-frequency domain representation of the sensor data.
Our exploration is based on a real-world data set with over 3 million samples, collected from sensors with a tri-axial accelerometer, magnetometer and gyroscope.
arXiv Detail & Related papers (2020-11-06T14:24:55Z) - A Generative Learning Approach for Spatio-temporal Modeling in Connected Vehicular Network [55.852401381113786]
This paper proposes LaMI (Latency Model Inpainting), a novel framework that generates a comprehensive spatio-temporal quality map of the wireless access latency of connected vehicles.
LaMI adopts the idea from image inpainting and synthesizing and can reconstruct the missing latency samples by a two-step procedure.
In particular, it first discovers the spatial correlation between samples collected in various regions using a patching-based approach and then feeds the original and highly correlated samples into a Variational Autoencoder (VAE).
arXiv Detail & Related papers (2020-03-16T03:43:59Z)
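Several of the papers above (the cattle-activity work in particular) build on joint time-frequency representations of sensor signals. As a hypothetical illustration only, such a representation can be sketched as magnitude spectra of overlapping windows; the window length, hop size, and toy signal below are assumptions, not details from any of the listed papers.

```python
import cmath

def spectrogram(signal, win=8, hop=4):
    """Joint time-frequency representation: magnitude DFT of overlapping
    windows, returned as a list of per-window spectra (time x frequency)."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        chunk = signal[start:start + win]
        spectrum = []
        for k in range(win // 2 + 1):   # keep non-negative frequencies only
            coeff = sum(x * cmath.exp(-2j * cmath.pi * k * n / win)
                        for n, x in enumerate(chunk))
            spectrum.append(abs(coeff))
        frames.append(spectrum)
    return frames

# A toy accelerometer-like trace: a constant segment followed by an
# oscillating segment, so energy shifts from low to high frequency bins.
sig = [1.0] * 16 + [(-1.0) ** n for n in range(16)]
tf = spectrogram(sig)
```

Feeding such time-frequency frames, rather than raw samples, to a classifier is the core design choice these papers share: periodic activity that is hard to separate in the time domain becomes a localized peak in the frequency axis.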
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.