Exploring Spiking Neural Networks for Binary Classification in Multivariate Time Series at the Edge
- URL: http://arxiv.org/abs/2510.20997v1
- Date: Thu, 23 Oct 2025 20:52:11 GMT
- Title: Exploring Spiking Neural Networks for Binary Classification in Multivariate Time Series at the Edge
- Authors: James Ghawaly, Andrew Nicholson, Catherine Schuman, Dalton Diez, Aaron Young, Brett Witherspoon,
- Abstract summary: We present a general framework for training spiking neural networks (SNNs) to perform binary classification on multivariate time series. We apply it to the task of detecting low signal-to-noise ratio radioactive sources in gamma-ray spectral data. The resulting SNNs, with as few as 49 neurons and 66 synapses, achieve a 51.8% true positive rate (TPR) at a false alarm rate of 1/hr. Hardware deployment on the microCaspian neuromorphic platform demonstrates 2mW power consumption and 20.2ms latency.
- Score: 0.9282545044546486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a general framework for training spiking neural networks (SNNs) to perform binary classification on multivariate time series, with a focus on step-wise prediction and high precision at low false alarm rates. The approach uses the Evolutionary Optimization of Neuromorphic Systems (EONS) algorithm to evolve sparse, stateful SNNs by jointly optimizing their architectures and parameters. Inputs are encoded into spike trains, and predictions are made by thresholding a single output neuron's spike counts. We also incorporate simple voting ensemble methods to improve performance and robustness. To evaluate the framework, we apply it with application-specific optimizations to the task of detecting low signal-to-noise ratio radioactive sources in gamma-ray spectral data. The resulting SNNs, with as few as 49 neurons and 66 synapses, achieve a 51.8% true positive rate (TPR) at a false alarm rate of 1/hr, outperforming PCA (42.7%) and deep learning (49.8%) baselines. A three-model any-vote ensemble increases TPR to 67.1% at the same false alarm rate. Hardware deployment on the microCaspian neuromorphic platform demonstrates 2mW power consumption and 20.2ms inference latency. We also demonstrate generalizability by applying the same framework, without domain-specific modification, to seizure detection in EEG recordings. An ensemble achieves 95% TPR with a 16% false positive rate, comparable to recent deep learning approaches with a significant reduction in parameter count.
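The decision rule described in the abstract — alarm when a single output neuron's spike count crosses a threshold, and an "any-vote" ensemble that alarms if any member model alarms — can be sketched as follows. This is a minimal illustration, not the authors' code; the function names, counts, and thresholds are all assumptions.

```python
from typing import Sequence


def predict_step(spike_count: int, threshold: int) -> bool:
    """Step-wise binary prediction: positive if the output neuron
    fired at least `threshold` times in the current window."""
    return spike_count >= threshold


def any_vote(spike_counts: Sequence[int], thresholds: Sequence[int]) -> bool:
    """Any-vote ensemble: alarm if ANY member model alarms.

    This raises TPR at the cost of a higher false-alarm rate, which is
    then controlled by tuning each member's threshold."""
    return any(predict_step(c, t) for c, t in zip(spike_counts, thresholds))


# Hypothetical example: three models observe the same time-series window.
counts = [2, 7, 1]      # spikes emitted by each model's output neuron
thresholds = [5, 5, 5]  # per-model decision thresholds
print(any_vote(counts, thresholds))  # True: the second model crosses its threshold
```

A majority-vote variant would simply replace `any(...)` with a count of positive votes compared against `len(spike_counts) // 2 + 1`.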
Related papers
- Efficient Memristive Spiking Neural Networks Architecture with Supervised In-Situ STDP Method [0.0]
Memristor-based Spiking Neural Networks (SNNs) with temporal spike encoding enable ultra-low-energy computation. This paper presents a circuit-level memristive spiking neural network (SNN) architecture trained using a proposed novel supervised in-situ learning algorithm.
arXiv Detail & Related papers (2025-07-28T17:09:48Z)
- End-to-End Implicit Neural Representations for Classification [57.55927378696826]
Implicit neural representations (INRs) encode a signal in neural network parameters and show excellent results for signal reconstruction. INR-based classification still significantly underperforms pixel-based methods like CNNs. This work presents an end-to-end strategy for initializing SIRENs together with a learned learning-rate scheme.
arXiv Detail & Related papers (2025-03-23T16:02:23Z)
- STAL: Spike Threshold Adaptive Learning Encoder for Classification of Pain-Related Biosignal Data [2.0738462952016232]
This paper presents the first application of spiking neural networks (SNNs) for the classification of chronic lower back pain (CLBP) using the EmoPain dataset.
We introduce Spike Threshold Adaptive Learning (STAL), a trainable encoder that effectively converts continuous biosignals into spike trains.
We also propose an ensemble of Spiking Recurrent Neural Network (SRNN) classifiers for the multi-stream processing of sEMG and IMU data.
arXiv Detail & Related papers (2024-07-11T10:15:52Z)
- DT-DDNN: A Physical Layer Security Attack Detector in 5G RF Domain for CAVs [10.215216950059874]
Jamming attacks pose substantial risks to the 5G network. This work presents a novel deep learning-based technique for detecting jammers in CAV networks. Results show that the proposed method achieves a 96.4% detection rate at extra-low jamming power.
arXiv Detail & Related papers (2024-03-05T04:29:31Z)
- High-speed Low-consumption sEMG-based Transient-state micro-Gesture Recognition [6.649481653007372]
The accuracy of the proposed SNN is 83.85% and 93.52% on the two datasets respectively.
The methods can be used for precise, high-speed, and low-power micro-gesture recognition tasks.
arXiv Detail & Related papers (2024-03-04T08:59:12Z)
- Deep Multi-Scale Representation Learning with Attention for Automatic Modulation Classification [11.32380278232938]
We find empirical performance improvements by using large kernel sizes in convolutional neural network based automatic modulation classification (AMC).
We propose a multi-scale feature network with large kernel size and SE mechanism (SE-MSFN) in this paper.
SE-MSFN achieves state-of-the-art classification performance on the public well-known RADIOML 2018.01A dataset.
arXiv Detail & Related papers (2022-08-31T07:26:09Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- Involution: Inverting the Inherence of Convolution for Visual Recognition [72.88582255910835]
We present a novel atomic operation for deep neural networks by inverting the principles of convolution, coined as involution.
The proposed involution operator could be leveraged as fundamental bricks to build the new generation of neural networks for visual recognition.
Our involution-based models improve the performance of convolutional baselines using ResNet-50 by up to 1.6% top-1 accuracy, 2.5% and 2.4% bounding box AP, and 4.7% mean IoU absolutely.
arXiv Detail & Related papers (2021-03-10T18:40:46Z) - Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
- Towards a Competitive End-to-End Speech Recognition for CHiME-6 Dinner Party Transcription [73.66530509749305]
In this paper, we argue that, even in difficult cases, some end-to-end approaches show performance close to the hybrid baseline.
We experimentally compare and analyze CTC-Attention versus RNN-Transducer approaches along with RNN versus Transformer architectures.
Our best end-to-end model, based on RNN-Transducer together with improved beam search, reaches quality only 3.8% WER absolute worse than the LF-MMI TDNN-F CHiME-6 Challenge baseline.
arXiv Detail & Related papers (2020-04-22T19:08:33Z)
- End-to-End Multi-speaker Speech Recognition with Transformer [88.22355110349933]
We replace the RNN-based encoder-decoder in the speech recognition model with a Transformer architecture.
We also modify the self-attention component to be restricted to a segment rather than the whole sequence in order to reduce computation.
arXiv Detail & Related papers (2020-02-10T16:29:26Z)
- Sound Event Detection with Depthwise Separable and Dilated Convolutions [23.104644393058123]
State-of-the-art sound event detection (SED) methods usually employ a series of convolutional neural networks (CNNs) to extract useful features from the input audio signal.
We propose the replacement of the CNNs with depthwise separable convolutions and the replacement of the RNNs with dilated convolutions.
We achieve an 85% reduction in the number of parameters and a 78% reduction in average training time per epoch, together with a 4.6% increase in the average frame-wise F1 score and a 3.8% reduction in the average error rate.
arXiv Detail & Related papers (2020-02-02T19:50:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.