Exploring Tradeoffs in Spiking Neural Networks
- URL: http://arxiv.org/abs/2212.09500v2
- Date: Thu, 18 May 2023 08:48:06 GMT
- Title: Exploring Tradeoffs in Spiking Neural Networks
- Authors: Florian Bacho and Dominique Chu
- Abstract summary: Spiking Neural Networks (SNNs) have emerged as a promising alternative to traditional Deep Neural Networks for low-power computing.
We show that relaxing the spike constraint provides higher performance while also benefiting from faster convergence, similar sparsity, comparable prediction latency, and better robustness to noise compared to TTFS SNNs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Spiking Neural Networks (SNNs) have emerged as a promising alternative to
traditional Deep Neural Networks for low-power computing. However, the
effectiveness of SNNs is not solely determined by their performance but also by
their energy consumption, prediction speed, and robustness to noise. The recent
method Fast & Deep, along with others, achieves fast and energy-efficient
computation by constraining neurons to fire at most once. Known as
Time-To-First-Spike (TTFS), this constraint however restricts the capabilities
of SNNs in many aspects. In this work, we explore the relationships between
performance, energy consumption, speed and stability when using this
constraint. More precisely, we highlight the existence of tradeoffs where
performance and robustness are gained at the cost of sparsity and prediction
latency. To improve these tradeoffs, we propose a relaxed version of Fast &
Deep that allows for multiple spikes per neuron. Our experiments show that
relaxing the spike constraint provides higher performance while also benefiting
from faster convergence, similar sparsity, comparable prediction latency, and
better robustness to noise compared to TTFS SNNs. By highlighting the
limitations of TTFS and demonstrating the advantages of unconstrained SNNs, we
provide valuable insight for the development of effective learning strategies
for neuromorphic computing.
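To make the constraint concrete, here is a minimal sketch of a discrete-time leaky integrate-and-fire neuron with the fire-at-most-once (TTFS) constraint toggled by a flag. This is an illustration of the general idea under assumed parameters (membrane time constant, unit threshold), not the authors' Fast & Deep implementation.

```python
# Minimal sketch: a discrete-time leaky integrate-and-fire neuron.
# With ttfs=True the neuron is silenced after its first spike
# (Time-To-First-Spike constraint); with ttfs=False it resets and may
# fire again, as in the relaxed model described in the abstract.
import numpy as np

def simulate_lif(input_current, tau=20.0, v_th=1.0, dt=1.0, ttfs=True):
    """Return the spike times of one neuron driven by `input_current`."""
    decay = np.exp(-dt / tau)      # membrane leak per time step
    v, spikes = 0.0, []
    for t, i_t in enumerate(input_current):
        v = decay * v + i_t        # leaky integration of the input
        if v >= v_th:
            spikes.append(t * dt)  # record the spike time
            if ttfs:
                break              # TTFS: at most one spike, then silent
            v = 0.0                # relaxed: reset and keep integrating
    return spikes

# The same input yields one spike under TTFS and several without it.
rng = np.random.default_rng(0)
drive = rng.uniform(0.0, 0.3, size=100)
print("TTFS   :", simulate_lif(drive, ttfs=True))
print("relaxed:", simulate_lif(drive, ttfs=False))
```

Under TTFS every neuron contributes at most one spike per inference, which caps spike counts but forces all information into single spike times; the relaxed variant trades that hard cap for the richer multi-spike codes the abstract credits with better accuracy and noise robustness.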
Related papers
- Robust Spiking Neural Networks Against Adversarial Attacks [49.08210314590693]
Spiking Neural Networks (SNNs) represent a promising paradigm for energy-efficient neuromorphic computing.
In this study, we theoretically demonstrate that threshold-neighboring spiking neurons are the key factors limiting the robustness of directly trained SNNs.
We find that these neurons set the upper limits for the maximum potential strength of adversarial attacks and are prone to state-flipping under minor disturbances.
arXiv Detail & Related papers (2026-02-24T05:06:12Z) - Efficient Eye-based Emotion Recognition via Neural Architecture Search of Time-to-First-Spike-Coded Spiking Neural Networks [52.617096567601344]
Time-to-first-spike (TTFS)-coded spiking neural networks (SNNs) offer a promising solution for eye-based emotion recognition.
TTFS-ER is the first neural architecture search framework tailored to TTFS SNNs for eye-based emotion recognition.
When deployed on neuromorphic hardware, TNAS-ER attains a low latency of 48 ms and an energy consumption of 0.05 J.
arXiv Detail & Related papers (2025-12-02T06:35:49Z) - Spiking Neural Networks: The Future of Brain-Inspired Computing [0.0]
Spiking Neural Networks (SNNs) represent the latest generation of neural computation.
SNNs operate using distinct spike events, making them inherently more energy-efficient and temporally dynamic.
This study presents a comprehensive analysis of SNN design models, training algorithms, and multi-dimensional performance metrics.
arXiv Detail & Related papers (2025-10-31T11:14:59Z) - Proxy Target: Bridging the Gap Between Discrete Spiking Neural Networks and Continuous Control [59.65431931190187]
Spiking Neural Networks (SNNs) offer low-latency and energy-efficient decision making on neuromorphic hardware.
Most algorithms for continuous control are designed for Artificial Neural Networks (ANNs).
We show that this mismatch destabilizes SNN training and degrades performance.
We propose a novel proxy target framework to bridge the gap between discrete SNNs and continuous-control algorithms.
arXiv Detail & Related papers (2025-05-30T03:08:03Z) - Towards Low-latency Event-based Visual Recognition with Hybrid Step-wise Distillation Spiking Neural Networks [50.32980443749865]
Spiking neural networks (SNNs) have garnered significant attention for their low power consumption and high biological plausibility.
Current SNNs struggle to balance accuracy and latency on neuromorphic datasets.
We propose the Hybrid Step-wise Distillation (HSD) method, tailored for neuromorphic datasets.
arXiv Detail & Related papers (2024-09-19T06:52:34Z) - LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks
with TTFS Coding [55.64533786293656]
We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks (a generic sketch of this activation-to-latency encoding appears after this list).
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
arXiv Detail & Related papers (2023-10-23T14:26:16Z) - Inherent Redundancy in Spiking Neural Networks [24.114844269113746]
Spiking Neural Networks (SNNs) are a promising energy-efficient alternative to conventional artificial neural networks.
In this work, we focus on three key questions regarding inherent redundancy in SNNs.
We propose an Advance Spatial Attention (ASA) module to harness SNNs' redundancy.
arXiv Detail & Related papers (2023-08-16T08:58:25Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance with low latency.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach can greatly reduce the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z) - Training Energy-Efficient Deep Spiking Neural Networks with
Time-to-First-Spike Coding [29.131030799324844]
Spiking neural networks (SNNs) mimic the operations in the human brain.
The high energy consumption of deep neural networks (DNNs) has become a serious problem in deep learning.
This paper presents training methods for energy-efficient deep SNNs with TTFS coding.
arXiv Detail & Related papers (2021-06-04T16:02:27Z) - Sparse Spiking Gradient Descent [2.741266294612776]
We present the first sparse SNN backpropagation algorithm, which achieves the same or better accuracy than current state-of-the-art methods.
We show the effectiveness of our method on real datasets of varying complexity.
arXiv Detail & Related papers (2021-05-18T20:00:55Z) - You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference
to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired, network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z) - T2FSNN: Deep Spiking Neural Networks with Time-to-first-spike Coding [26.654533157221973]
This paper introduces the concept of time-to-first-spike coding into deep SNNs using the kernel-based dynamic threshold and dendrite to overcome the drawback.
According to our results, the proposed methods can reduce the inference latency and the number of spikes to 22% and less than 1%, respectively, of those of burst coding.
arXiv Detail & Related papers (2020-03-26T04:39:12Z) - Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects
of Discrete Input Encoding and Non-Linear Activations [9.092733355328251]
Spiking Neural Network (SNN) is a potential candidate for inherent robustness against adversarial attacks.
In this work, we demonstrate that the adversarial accuracy of SNNs under gradient-based attacks is higher than that of their non-spiking counterparts.
arXiv Detail & Related papers (2020-03-23T17:20:24Z)
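Several of the entries above (LC-TTFS, T2FSNN, and the TTFS training papers) build on the same coding idea: each ANN activation is carried by a single spike whose latency is inversely related to the activation's magnitude. The sketch below illustrates that generic activation-to-latency encoding under assumed conventions (a linear map onto a fixed time window, silence for zero activations); it is not the conversion rule of any specific paper listed here.

```python
# Generic TTFS encoding sketch: larger activations spike earlier.
import numpy as np

def ttfs_encode(activations, t_max=100.0):
    """Map non-negative ANN activations to spike latencies in [0, t_max]."""
    a = np.asarray(activations, dtype=float)
    a_max = a.max()
    if a_max <= 0:
        return np.full_like(a, np.inf)    # no activity: no spikes at all
    t = t_max * (1.0 - a / a_max)         # linear value-to-latency map
    t[a <= 0] = np.inf                    # silent neurons carry "zero"
    return t

def ttfs_decode(spike_times, a_max, t_max=100.0):
    """Invert the encoding: early spikes decode to large activations."""
    t = np.asarray(spike_times, dtype=float)
    a = a_max * (1.0 - t / t_max)
    a[np.isinf(t)] = 0.0                  # a neuron that never fires is zero
    return a

acts = np.array([0.0, 0.2, 0.5, 1.0])
times = ttfs_encode(acts)                 # [inf, 80., 50., 0.]
print(ttfs_decode(times, a_max=acts.max()))  # recovers [0., 0.2, 0.5, 1.]
```

Because each value occupies exactly one spike, this code is extremely sparse; the cost, as the main abstract argues, is that prediction latency is tied to the encoding window and the representation is brittle to timing noise.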
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.