Towards Chip-in-the-loop Spiking Neural Network Training via
Metropolis-Hastings Sampling
- URL: http://arxiv.org/abs/2402.06284v1
- Date: Fri, 9 Feb 2024 09:49:05 GMT
- Title: Towards Chip-in-the-loop Spiking Neural Network Training via
Metropolis-Hastings Sampling
- Authors: Ali Safa, Vikrant Jaltare, Samira Sebt, Kameron Gano, Johannes
Leugering, Georges Gielen, Gert Cauwenberghs
- Abstract summary: This paper studies the use of Metropolis-Hastings sampling for training Spiking Neural Network (SNN) hardware subject to strong unknown non-idealities.
Our results show that the proposed approach outperforms backprop, achieving up to $27\%$ higher accuracy when subject to strong hardware non-idealities.
- Score: 0.9025833922570009
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper studies the use of Metropolis-Hastings sampling for training
Spiking Neural Network (SNN) hardware subject to strong unknown non-idealities,
and compares the proposed approach to the common use of the backpropagation of
error (backprop) algorithm and surrogate gradients, widely used to train SNNs
in the literature. Simulations are conducted within a chip-in-the-loop training
context, where an SNN subject to unknown distortion must be trained to detect
cancer from measurements, within a biomedical application context. Our results
show that the proposed approach strongly outperforms backprop, achieving up to
$27\%$ higher accuracy when subject to strong hardware non-idealities.
Furthermore, our results also show that the proposed approach outperforms
backprop in terms of SNN generalization, needing more than $10\times$ less
training data to achieve effective accuracy. These findings make the proposed
training approach well-suited for SNN implementations in analog subthreshold
circuits and other emerging technologies where unknown hardware non-idealities
can jeopardize backprop.
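For intuition, the following minimal Python sketch shows how such Metropolis-Hastings, chip-in-the-loop training can be organized: propose a random perturbation of the weights, measure the resulting loss on the hardware, and accept or reject the proposal probabilistically. The `evaluate_on_chip` stand-in, the Gaussian random-walk proposal, and the temperature parameter are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def evaluate_on_chip(weights, data, labels):
    """Hypothetical stand-in: program `weights` onto the (non-ideal)
    SNN hardware, run `data` through it, and return the training loss."""
    raise NotImplementedError

def mh_train(w, data, labels, n_steps=1000, step_size=0.02,
             temperature=0.1, seed=0):
    rng = np.random.default_rng(seed)
    loss = evaluate_on_chip(w, data, labels)
    for _ in range(n_steps):
        # Random-walk proposal around the current weights.
        w_prop = w + step_size * rng.standard_normal(w.shape)
        loss_prop = evaluate_on_chip(w_prop, data, labels)
        # Metropolis-Hastings acceptance: a lower loss is always
        # accepted, a higher one occasionally, which lets the sampler
        # escape local minima created by unknown hardware distortions.
        if rng.random() < np.exp(-(loss_prop - loss) / temperature):
            w, loss = w_prop, loss_prop
    return w
```

Because every proposal is scored by running the actual chip, the sampler never needs gradients of the distorted hardware forward pass, which is what makes this style of training robust to non-idealities that break backprop.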
Related papers
- Exploring the Potential of Spiking Neural Networks in UWB Channel Estimation [7.52520480528178]
This letter explores the potential of Spiking Neural Networks (SNNs) for channel estimation. We develop a fully unsupervised SNN solution that attains 80% test accuracy. Compared with complex deep learning methods, our SNN implementation is inherently suited to neuromorphic deployment.
arXiv Detail & Related papers (2025-12-30T04:10:18Z) - S$^2$NN: Sub-bit Spiking Neural Networks [53.08060832135342]
Spiking Neural Networks (SNNs) offer an energy-efficient paradigm for machine intelligence. Despite recent advances in binary SNNs, the storage and computational demands remain substantial for large-scale networks. We propose Sub-bit Spiking Neural Networks (S$^2$NNs) that represent weights with less than one bit.
arXiv Detail & Related papers (2025-09-29T04:17:44Z) - A Self-Ensemble Inspired Approach for Effective Training of Binary-Weight Spiking Neural Networks [66.80058515743468]
Training Spiking Neural Networks (SNNs) and Binary Neural Networks (BNNs) is challenging because of the non-differentiable spike generation function. We present a novel perspective on the dynamics of SNNs and their close connection to BNNs through an analysis of the backpropagation process. Specifically, we leverage a structure of multiple shortcuts and a knowledge distillation-based training technique to improve the training of (binary-weight) SNNs.
arXiv Detail & Related papers (2025-08-18T04:11:06Z) - Evidential Uncertainty Probes for Graph Neural Networks [3.5169632430086315]
We propose a plug-and-play framework for uncertainty quantification in Graph Neural Networks (GNNs).
Our Evidential Probing Network (EPN) uses a lightweight Multi-Layer Perceptron (MLP) head to extract evidence from learned representations.
EPN-reg achieves state-of-the-art performance in accurate and efficient uncertainty quantification, making it suitable for real-world deployment.
arXiv Detail & Related papers (2025-03-11T07:00:54Z) - Backpropagation-free Spiking Neural Networks with the Forward-Forward Algorithm [0.13499500088995461]
Spiking Neural Networks (SNNs) offer a biologically inspired computational paradigm that emulates neuronal activity through discrete spike-based processing. Despite their advantages, training SNNs with traditional backpropagation (BP) remains challenging due to computational inefficiencies and a lack of biological plausibility. This study explores the Forward-Forward (FF) algorithm as an alternative learning framework for SNNs.
arXiv Detail & Related papers (2025-02-19T12:44:26Z) - Training Spiking Neural Networks via Augmented Direct Feedback Alignment [3.798885293742468]
Spiking neural networks (SNNs) are promising candidates for implementation on neuromorphic devices.
However, the non-differentiable nature of SNN neurons makes them challenging to train.
In this paper, we propose using augmented direct feedback alignment (aDFA), a gradient-free approach based on random projection, to train SNNs (a rough sketch of the underlying feedback-alignment idea follows this list).
arXiv Detail & Related papers (2024-09-12T06:22:44Z) - Quantization-aware Interval Bound Propagation for Training Certifiably
Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Efficient Bayes Inference in Neural Networks through Adaptive Importance
Sampling [19.518237361775533]
In Bayesian Neural Networks (BNNs), a complete posterior distribution of the unknown weight and bias parameters of the network is produced during the training stage.
This feature is useful in countless machine learning applications.
It is particularly appealing in areas where decision-making has a crucial impact, such as medical healthcare or autonomous driving.
arXiv Detail & Related papers (2022-10-03T14:59:23Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Sketching Curvature for Efficient Out-of-Distribution Detection for Deep
Neural Networks [32.629801680158685]
Sketching Curvature of OoD Detection (SCOD) is an architecture-agnostic framework for equipping trained Deep Neural Networks with task-relevant uncertainty estimates.
We demonstrate that SCOD achieves comparable or better OoD detection performance with lower computational burden relative to existing baselines.
arXiv Detail & Related papers (2021-02-24T21:34:40Z) - Selfish Sparse RNN Training [13.165729746380816]
We propose an approach to train sparse RNNs with a fixed parameter count in a single run, without compromising performance.
We achieve state-of-the-art sparse training results on the Penn TreeBank and Wikitext-2 datasets.
arXiv Detail & Related papers (2021-01-22T10:45:40Z) - A Meta-Learning Approach to the Optimal Power Flow Problem Under
Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Continual Learning in Recurrent Neural Networks [67.05499844830231]
We evaluate the effectiveness of continual learning methods for processing sequential data with recurrent neural networks (RNNs).
We shed light on the particularities that arise when applying weight-importance methods, such as elastic weight consolidation, to RNNs.
We show that the performance of weight-importance methods is not directly affected by the length of the processed sequences, but rather by high working memory requirements.
arXiv Detail & Related papers (2020-06-22T10:05:12Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
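As referenced in the aDFA entry above, here is a rough NumPy sketch of plain direct feedback alignment: the output error reaches the hidden layer through a fixed random matrix rather than through the transposed forward weights, so the non-differentiable spiking forward pass never has to be differentiated. The layer sizes, the tanh stand-in for the spiking nonlinearity, and the learning rate are illustrative assumptions, not the paper's augmented (aDFA) variant.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 784, 256, 10
lr = 1e-3

W1 = 0.1 * rng.standard_normal((n_hid, n_in))   # input -> hidden
W2 = 0.1 * rng.standard_normal((n_out, n_hid))  # hidden -> output
B = rng.standard_normal((n_hid, n_out))         # fixed random feedback

x = rng.standard_normal(n_in)   # one dummy input
y = np.eye(n_out)[3]            # one dummy one-hot target

h = np.tanh(W1 @ x)             # stand-in for the spiking layer
out = W2 @ h
e = out - y                     # output error

# DFA: project the error straight to the hidden layer through B,
# instead of backpropagating it through W2.T and the spike function.
dh = (B @ e) * (1.0 - h ** 2)
W2 -= lr * np.outer(e, h)
W1 -= lr * np.outer(dh, x)
```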