Advancing Spatio-Temporal Processing in Spiking Neural Networks through Adaptation
- URL: http://arxiv.org/abs/2408.07517v1
- Date: Wed, 14 Aug 2024 12:49:58 GMT
- Title: Advancing Spatio-Temporal Processing in Spiking Neural Networks through Adaptation
- Authors: Maximilian Baronig, Romain Ferrand, Silvester Sabathiel, Robert Legenstein
- Abstract summary: In this article, we analyze the dynamical, computational, and learning properties of adaptive LIF neurons and networks thereof.
We show that the superiority of networks of adaptive LIF neurons extends to the prediction and generation of complex time series.
- Score: 6.233189707488025
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Efficient implementations of spiking neural networks on neuromorphic hardware promise orders of magnitude less power consumption than their non-spiking counterparts. The standard neuron model for spike-based computation on such neuromorphic systems has long been the leaky integrate-and-fire (LIF) neuron. As a promising advancement, a computationally light augmentation of the LIF neuron model with an adaptation mechanism experienced a recent upswing in popularity, caused by demonstrations of its superior performance on spatio-temporal processing tasks. The root of the superiority of these so-called adaptive LIF neurons, however, is not well understood. In this article, we thoroughly analyze the dynamical, computational, and learning properties of adaptive LIF neurons and networks thereof. We find that the frequently observed stability problems during training of such networks can be overcome by applying an alternative discretization method that results in provably better stability properties than the commonly used Euler-Forward method. With this discretization, we achieve new state-of-the-art performance on common event-based benchmark datasets. We also show that the superiority of networks of adaptive LIF neurons extends to the prediction and generation of complex time series. Our further analysis of the computational properties of networks of adaptive LIF neurons shows that they are particularly well suited to exploit the spatio-temporal structure of input sequences. Furthermore, these networks are surprisingly robust to shifts of the mean input strength and input spike rate, even when these shifts were not observed during training. As a consequence, high-performance networks can be obtained without any normalization techniques such as batch normalization or batch normalization through time.
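The abstract names neither the neuron equations nor the improved discretization scheme. As a minimal sketch, assuming the common two-variable adaptive LIF (adLIF) dynamics tau_u * du/dt = -u - w + I and tau_w * dw/dt = a*u - w, with a reset and a spike-triggered adaptation jump b, the contrast below uses the symplectic (semi-implicit) Euler method as one alternative consistent with the stability claim; all parameter names and constants are illustrative:

```python
def adlif_step_euler_forward(u, w, s_in, dt=1e-3, tau_u=20e-3,
                             tau_w=100e-3, a=1.0, b=1.0, theta=1.0):
    """One Euler-Forward step of an adaptive LIF neuron (sketch).

    u: membrane potential, w: adaptation variable, s_in: weighted input.
    Both state variables are advanced from the OLD state, the scheme
    whose training instabilities the paper reports.
    """
    u_new = u + dt * (-u - w + s_in) / tau_u
    w_new = w + dt * (a * u - w) / tau_w
    if u_new >= theta:                 # spike
        return 0.0, w_new + b, 1       # reset u, bump adaptation
    return u_new, w_new, 0


def adlif_step_symplectic_euler(u, w, s_in, dt=1e-3, tau_u=20e-3,
                                tau_w=100e-3, a=1.0, b=1.0, theta=1.0):
    """Symplectic (semi-implicit) Euler variant: w is advanced from the
    ALREADY UPDATED u, a scheme known to have better stability for
    coupled oscillatory systems such as (u, w)."""
    u_new = u + dt * (-u - w + s_in) / tau_u
    w_new = w + dt * (a * u_new - w) / tau_w   # note u_new, not u
    if u_new >= theta:
        return 0.0, w_new + b, 1
    return u_new, w_new, 0
```

The only difference between the two functions is whether the adaptation variable sees the old or the already-updated membrane potential; that single change is what enlarges the stable region of the discretized dynamics.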
Related papers
- Time-independent Spiking Neuron via Membrane Potential Estimation for Efficient Spiking Neural Networks [4.142699381024752]
The computational inefficiency of spiking neural networks (SNNs) is primarily due to the sequential updates of the membrane potential.
We propose Membrane Potential Estimation Parallel Spiking Neurons (MPE-PSN), a parallel computation method for spiking neurons.
Our approach exhibits promise for enhancing computational efficiency, particularly under conditions of elevated neuron density.
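The summary names the method but not its mechanics. As a generic illustration of why membrane potential estimation enables parallelism (a sketch of the general idea, not the authors' exact MPE-PSN algorithm): between resets, the LIF membrane potential is a causal exponential filter of the input, so all T time steps can be computed at once:

```python
import numpy as np

def lif_potentials_parallel(inputs: np.ndarray, beta: float = 0.9) -> np.ndarray:
    """Estimate no-reset LIF membrane potentials for all T steps at once.

    u[t] = sum_{k <= t} beta**(t - k) * inputs[k], i.e. a causal
    exponential filter expressed as one (T, T) matrix product instead
    of T sequential state updates. Illustrative sketch only.
    """
    t = np.arange(len(inputs))
    kernel = np.tril(beta ** (t[:, None] - t[None, :]))  # lower-triangular
    return kernel @ inputs

# Thresholding the estimated potentials then yields spikes in parallel:
u = lif_potentials_parallel(np.random.rand(128))
spikes = (u >= 1.0).astype(np.float32)
```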
arXiv Detail & Related papers (2024-09-08T05:14:22Z)
- Neuroevolving Electronic Dynamical Networks [0.0]
Neuroevolution is a method of applying an evolutionary algorithm to refine the performance of artificial neural networks through natural selection.
Fitness evaluation of continuous time recurrent neural networks (CTRNNs) can be time-consuming and computationally expensive.
Field programmable gate arrays (FPGAs) have emerged as an increasingly popular solution, due to their high performance and low power consumption.
arXiv Detail & Related papers (2024-04-06T10:54:35Z)
- Accelerating SNN Training with Stochastic Parallelizable Spiking Neurons [1.7056768055368383]
Spiking neural networks (SNNs) are able to learn features while using less energy, especially on neuromorphic hardware.
The most widely used neuron in deep learning is the Leaky Integrate-and-Fire (LIF) neuron.
arXiv Detail & Related papers (2023-06-22T04:25:27Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
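snnTorch is the authors' open-source PyTorch-based SNN library; the snippet below shows its standard LIF usage on any backend (the IPU-optimized release is assumed to expose the same interface, an assumption not stated in the entry above):

```python
import torch
import snntorch as snn

# Standard snnTorch leaky integrate-and-fire usage; batch of 1, 32 inputs.
lif = snn.Leaky(beta=0.9)   # membrane decay per time step
mem = lif.init_leaky()      # initial membrane potential state

for step in range(10):
    x = torch.rand(1, 32)   # dummy input current at this time step
    spk, mem = lif(x, mem)  # returns output spikes and updated potential
```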
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Momentum Diminishes the Effect of Spectral Bias in Physics-Informed Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
arXiv Detail & Related papers (2022-06-29T19:03:10Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
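As a toy illustration of such a cell: a liquid time-constant (LTC) unit is a linear first-order system whose effective time constant is modulated by a learned gate. The fused explicit/implicit update below follows the form reported in the LTC paper as best recalled here; the sigmoid gate, shapes, and all constants are illustrative assumptions:

```python
import numpy as np

def ltc_cell_step(x, I, W, b, tau=1.0, A=1.0, dt=0.1):
    """One fused explicit/implicit Euler step of an LTC cell (sketch):

        dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A

    The gate f shifts the effective time constant with the input,
    hence "liquid". Sigmoid gate and shapes are illustrative.
    """
    f = 1.0 / (1.0 + np.exp(-(W @ np.concatenate([x, I]) + b)))
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

x = np.zeros(4)                        # hidden state of 4 units
W = 0.1 * np.random.randn(4, 4 + 2)    # gate weights over [state, input]
b = np.zeros(4)
for _ in range(20):
    x = ltc_cell_step(x, I=np.random.rand(2), W=W, b=b)
```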
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks [0.9790524827475205]
We show how a novel type of adaptive spiking recurrent neural network (SRNN) is able to achieve state-of-the-art performance.
We calculate a >100x energy improvement for our SRNNs over classical RNNs on the harder tasks.
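The entry does not show how the energy figure is obtained. A back-of-envelope calculation in the style commonly used for such SNN-versus-RNN comparisons counts spike-driven accumulates (AC) against dense multiply-accumulates (MAC); all numbers below are illustrative assumptions, not taken from the paper:

```python
# Per-operation energies often cited from Horowitz (2014), 45 nm CMOS.
E_AC, E_MAC = 0.9e-12, 4.6e-12   # joules per accumulate / multiply-accumulate
n_syn = 1_000_000                # synapses
T = 100                          # time steps
spike_rate = 0.02                # fraction of neurons spiking per step

rnn_energy = n_syn * T * E_MAC               # dense MAC at every step
snn_energy = n_syn * T * spike_rate * E_AC   # AC only on spike events
print(f"improvement: {rnn_energy / snn_energy:.0f}x")   # ~256x here
```

Under these assumptions a 2% spike rate alone yields a roughly 250x advantage, which makes a >100x claim plausible.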
arXiv Detail & Related papers (2020-05-24T01:04:53Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.