Efficient Uncertainty Estimation in Spiking Neural Networks via
MC-dropout
- URL: http://arxiv.org/abs/2304.10191v1
- Date: Thu, 20 Apr 2023 10:05:57 GMT
- Title: Efficient Uncertainty Estimation in Spiking Neural Networks via
MC-dropout
- Authors: Tao Sun, Bojian Yin, Sander Bohte
- Abstract summary: Spiking neural networks (SNNs) have gained attention as models of sparse and event-driven communication of biological neurons.
We propose an efficient Monte Carlo (MC) dropout-based approach for uncertainty estimation in SNNs.
- Score: 3.692069129522824
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking neural networks (SNNs) have gained attention as models of sparse and
event-driven communication of biological neurons, and as such have shown
increasing promise for energy-efficient applications in neuromorphic hardware.
As with classical artificial neural networks (ANNs), predictive uncertainties
are important for decision making in high-stakes applications, such as
autonomous vehicles, medical diagnosis, and high-frequency trading. Yet,
discussion of uncertainty estimation in SNNs is limited, and approaches for
uncertainty estimation in ANNs are not directly applicable to SNNs. Here, we
propose an efficient Monte Carlo (MC) dropout-based approach for uncertainty
estimation in SNNs. Our approach exploits the time-step mechanism of SNNs to
enable MC-dropout in a computationally efficient manner, without introducing
significant overhead during training or inference, while maintaining high
accuracy and uncertainty quality.
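To make the idea concrete, here is a minimal PyTorch-style sketch (an illustration, not the authors' implementation): a leaky integrate-and-fire (LIF) network samples a fresh dropout mask at every simulation time step, so the per-step readouts of a single forward pass can be treated as Monte Carlo samples. Class and parameter names (`SpikingMCDropoutNet`, `n_steps`, `beta`, the soft reset) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpikingMCDropoutNet(nn.Module):
    """Illustrative LIF network: a new dropout mask per time step, so a
    single forward pass yields n_steps Monte Carlo samples."""

    def __init__(self, n_in, n_hidden, n_out, n_steps=20, p=0.2, beta=0.9):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden)
        self.fc2 = nn.Linear(n_hidden, n_out)
        self.n_steps, self.p, self.beta = n_steps, p, beta

    def forward(self, x):
        mem = torch.zeros(x.size(0), self.fc1.out_features, device=x.device)
        readouts = []
        for _ in range(self.n_steps):
            # training=True keeps dropout active at inference and resamples
            # the mask at every step, as MC-dropout requires.
            h = F.dropout(self.fc1(x), p=self.p, training=True)
            mem = self.beta * mem + h          # leaky integration
            spk = (mem >= 1.0).float()         # threshold crossing -> spike
            mem = mem - spk                    # soft reset
            readouts.append(F.softmax(self.fc2(spk), dim=-1))
        return torch.stack(readouts)           # (n_steps, batch, n_out)

net = SpikingMCDropoutNet(784, 256, 10)
samples = net(torch.rand(32, 784))             # one pass, many MC samples
mean_prob = samples.mean(dim=0)                # predictive distribution
dispersion = samples.var(dim=0).sum(dim=-1)    # simple uncertainty score
```

Training such a network would additionally require a surrogate gradient for the hard threshold; the sketch only illustrates the inference-time sampling that makes the uncertainty estimate cheap.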
Related papers
- Uncertainty Quantification in Working Memory via Moment Neural Networks [8.064442892805843]
Humans possess a finely tuned sense of uncertainty that helps anticipate potential errors.
This study applies moment neural networks to explore the neural mechanism of uncertainty quantification in working memory.
arXiv Detail & Related papers (2024-11-21T15:05:04Z)
- Training Spiking Neural Networks via Augmented Direct Feedback Alignment [3.798885293742468]
Spiking neural networks (SNNs) are promising solutions for implementing neural networks in neuromorphic devices.
However, the non-differentiable nature of spiking neurons makes SNNs challenging to train.
In this paper, we propose using augmented direct feedback alignment (aDFA), a gradient-free approach based on random projection, to train SNNs.
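As background for this entry, here is a minimal NumPy sketch of plain direct feedback alignment, the mechanism aDFA builds on: the output error reaches the hidden layer through a fixed random matrix instead of the transposed forward weights. The augmentation that aDFA adds is not shown, and all shapes and rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 784, 256, 10, 0.05

W1 = rng.normal(0.0, 0.1, (n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))  # forward weights, layer 2
B = rng.normal(0.0, 0.1, (n_hid, n_out))   # fixed random feedback matrix

def dfa_step(x, y):
    """One DFA update: x is an input vector, y a one-hot target."""
    global W1, W2
    h = np.tanh(W1 @ x)               # hidden activation
    y_hat = W2 @ h                    # linear readout
    e = y_hat - y                     # output error
    dh = (B @ e) * (1.0 - h ** 2)     # error projected to the hidden layer
                                      # through the random matrix B
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)
    return float((e ** 2).mean())
```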
arXiv Detail & Related papers (2024-09-12T06:22:44Z)
- BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation [20.34272550256856]
Spiking neural networks (SNNs) mimic biological neural systems to convey information via discrete spikes.
Our work achieves state-of-the-art performance for training SNNs on both static and neuromorphic datasets.
arXiv Detail & Related papers (2024-07-12T08:17:24Z)
- Stochastic Spiking Neural Networks with First-to-Spike Coding [7.955633422160267]
Spiking Neural Networks (SNNs) are known for their bio-plausibility and energy efficiency.
In this work, we explore the merger of novel computing and information encoding schemes in SNN architectures.
We investigate the trade-offs of our proposal in terms of accuracy, inference latency, spiking sparsity, and energy consumption across datasets.
arXiv Detail & Related papers (2024-04-26T22:52:23Z)
- Efficient and Effective Time-Series Forecasting with Spiking Neural Networks [47.371024581669516]
Spiking neural networks (SNNs) provide a unique pathway for capturing the intricacies of temporal data.
Applying SNNs to time-series forecasting is challenging due to difficulties in effective temporal alignment, complexities in encoding processes, and the absence of standardized guidelines for model selection.
We propose a framework for SNNs in time-series forecasting tasks, leveraging the efficiency of spiking neurons in processing temporal information.
arXiv Detail & Related papers (2024-02-02T16:23:50Z)
- Inherent Redundancy in Spiking Neural Networks [24.114844269113746]
Spiking Neural Networks (SNNs) are a promising energy-efficient alternative to conventional artificial neural networks.
In this work, we focus on three key questions regarding inherent redundancy in SNNs.
We propose an Advance Spatial Attention (ASA) module to harness SNNs' inherent redundancy.
arXiv Detail & Related papers (2023-08-16T08:58:25Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers (see the sketch after this entry).
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
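For readers unfamiliar with INNs, the sketch below (an assumption-laden illustration, not the paper's model) shows the kind of implicit layer the analysis targets: the layer output is a fixed point z = tanh(Wz + Ux + b), computed here by naive iteration under a contraction assumption on W.

```python
import numpy as np

def implicit_layer(x, W, U, b, tol=1e-6, max_iter=500):
    """Return z solving z = tanh(W @ z + U @ x + b) by fixed-point
    iteration; assumes the map is a contraction (e.g., ||W|| < 1)."""
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + U @ x + b)
        if np.max(np.abs(z_next - z)) < tol:
            break
        z = z_next
    return z_next

rng = np.random.default_rng(0)
W = 0.4 * rng.normal(size=(8, 8)) / np.sqrt(8)  # scaled toward contraction
U = rng.normal(size=(8, 4))
z = implicit_layer(rng.normal(size=4), W, U, np.zeros(8))
```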
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in terms of low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.