Eventprop training for efficient neuromorphic applications
- URL: http://arxiv.org/abs/2503.04341v1
- Date: Thu, 06 Mar 2025 11:38:46 GMT
- Title: Eventprop training for efficient neuromorphic applications
- Authors: Thomas Shoesmith, James C. Knight, Balázs Mészáros, Jonathan Timcheck, Thomas Nowotny
- Abstract summary: We present a pipeline for training spiking neural networks on GPUs, using the efficient event-driven Eventprop algorithm implemented in mlGeNN. Our benchmarking on keyword spotting tasks indicates that there is almost no loss in accuracy between GPU and Loihi 2 implementations. Classifying a sample on Loihi 2 is up to 10X faster and uses 200X less energy than on an NVIDIA Jetson Orin Nano.
- Score: 0.9138341348704225
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuromorphic computing can reduce the energy requirements of neural networks and holds the promise to 'repatriate' AI workloads back from the cloud to the edge. However, training neural networks on neuromorphic hardware has remained elusive. Here, we instead present a pipeline for training spiking neural networks on GPUs, using the efficient event-driven Eventprop algorithm implemented in mlGeNN, and deploying them on Intel's Loihi 2 neuromorphic chip. Our benchmarking on keyword spotting tasks indicates that there is almost no loss in accuracy between GPU and Loihi 2 implementations and that classifying a sample on Loihi 2 is up to 10X faster and uses 200X less energy than on an NVIDIA Jetson Orin Nano.
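For context, training in this pipeline happens entirely in mlGeNN on the GPU; only the trained network is deployed to Loihi 2. The sketch below is patterned on mlGeNN's published Eventprop examples; classes such as `EventPropCompiler` exist in mlGeNN, but the layer sizes, initializer values, and the placeholder `spikes`/`labels` arrays here are illustrative assumptions, not the paper's configuration.

```python
from ml_genn import Connection, Network, Population
from ml_genn.compilers import EventPropCompiler
from ml_genn.connectivity import Dense
from ml_genn.initializers import Normal
from ml_genn.neurons import LeakyIntegrate, LeakyIntegrateFire, SpikeInput
from ml_genn.optimisers import Adam
from ml_genn.synapses import Exponential

NUM_INPUT, NUM_HIDDEN, NUM_CLASSES = 80, 512, 35   # hypothetical keyword-spotting sizes
EXAMPLE_TIMESTEPS, BATCH_SIZE = 1000, 32

network = Network()
with network:
    # spike source -> hidden LIF layer -> non-spiking readout
    inp = Population(SpikeInput(max_spikes=BATCH_SIZE * 15000), NUM_INPUT)
    hid = Population(LeakyIntegrateFire(v_thresh=1.0, tau_mem=20.0), NUM_HIDDEN)
    out = Population(LeakyIntegrate(tau_mem=20.0, readout="avg_var"), NUM_CLASSES)

    Connection(inp, hid, Dense(Normal(sd=0.1)), Exponential(5.0))
    Connection(hid, out, Dense(Normal(sd=0.1)), Exponential(5.0))

compiler = EventPropCompiler(example_timesteps=EXAMPLE_TIMESTEPS,
                             losses="sparse_categorical_crossentropy",
                             optimiser=Adam(1e-3), batch_size=BATCH_SIZE)
compiled_net = compiler.compile(network)

# 'spikes' (preprocessed input spike trains) and 'labels' (integer classes)
# are placeholders for a real dataset such as Spiking Speech Commands
with compiled_net:
    metrics, _ = compiled_net.train({inp: spikes}, {out: labels},
                                    num_epochs=50, shuffle=True)
```

After training, the learned weights are what get mapped onto Loihi 2; the deployment step itself (typically via Intel's Lava software stack) is separate and not shown here.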
Related papers
- A Complete Pipeline for deploying SNNs with Synaptic Delays on Loihi 2 [3.1563988360892505]
Spiking Neural Networks are attracting increased attention as a more energy-efficient alternative to traditional Artificial Neural Networks for edge computing. We present a complete pipeline: efficient event-based training of SNNs with synaptic delays on GPU and deployment on Intel's Loihi 2 neuromorphic chip.
arXiv Detail & Related papers (2025-10-15T17:05:55Z)
- Hardware-Aware Fine-Tuning of Spiking Q-Networks on the SpiNNaker2 Neuromorphic Platform [1.210742213461011]
Spiking Neural Networks (SNNs) promise orders-of-magnitude lower power consumption and low-latency inference on neuromorphic hardware for a wide range of robotic tasks. We present an energy-efficient implementation of a reinforcement learning (RL) algorithm using quantized SNNs to solve two classical control tasks. The network is trained using the Q-learning algorithm, then fine-tuned and quantized to low-bit (8-bit) precision for embedded deployment on the SpiNNaker2 neuromorphic chip.
arXiv Detail & Related papers (2025-07-31T13:49:44Z)
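Deployments like the one above typically pass trained weights through symmetric low-bit quantization before mapping them onto the chip. The sketch below shows a generic 8-bit scheme in NumPy; it is an illustration under that assumption, not the paper's exact procedure.

```python
import numpy as np

def quantize_symmetric(w, bits=8):
    """Symmetric per-tensor quantization of weights to signed integers."""
    qmax = 2 ** (bits - 1) - 1                      # 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax                # one scale per tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale             # used during fine-tuning

w = np.random.default_rng(1).normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_symmetric(w)
max_err = np.max(np.abs(w - dequantize(q, scale)))  # bounded by ~scale / 2
```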
- Sigma-Delta Neural Network Conversion on Loihi 2 [2.2718043506526873]
We use Loihi 2's graded spikes to develop a method for converting ANNs to spiking networks. We evaluate the performance of this network on Loihi 2 and compare it to NVIDIA's Jetson Xavier edge AI platform.
arXiv Detail & Related papers (2025-05-09T20:37:27Z)
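Sigma-delta encoding transmits only the quantized change of each activation since the last message, which is what makes Loihi 2's graded spikes a natural fit. The NumPy sketch below illustrates the encoder idea only; the class and threshold value are hypothetical, not taken from the paper.

```python
import numpy as np

class SigmaDeltaEncoder:
    """Transmit quantized changes in activation instead of raw values."""

    def __init__(self, shape, threshold=0.05):
        self.residual = np.zeros(shape)  # accumulated untransmitted change
        self.threshold = threshold

    def encode(self, delta):
        # accumulate change; emit graded 'spikes' where it exceeds threshold
        self.residual += delta
        n = np.round(self.residual / self.threshold)
        self.residual -= n * self.threshold
        return n * self.threshold        # graded message, mostly zeros

enc = SigmaDeltaEncoder((4,))
prev = np.zeros(4)
for x in [np.array([0.10, 0.0, 0.30, 0.0]),
          np.array([0.12, 0.0, 0.31, 0.5])]:
    msg = enc.encode(x - prev)           # only units that changed communicate
    prev = x
```

A matching decoder simply accumulates the received messages, so unchanged activations cost no traffic between frames.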
- Event-based backpropagation on the neuromorphic platform SpiNNaker2 [1.0597501054401728]
EventProp is an algorithm for event-based backpropagation in spiking neural networks (SNNs). Our implementation computes multi-layer networks of leaky integrate-and-fire neurons using discretized versions of the differential equations and their adjoints. We demonstrate a proof-of-concept of batch-parallelized, on-chip training of SNNs using the Yin Yang dataset.
arXiv Detail & Related papers (2024-12-19T16:31:42Z)
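"Discretized versions of the differential equations" here means stepping the leaky integrate-and-fire dynamics on a fixed time grid. A minimal exponential-Euler forward step might look as follows; parameter values are illustrative, and this omits SpiNNaker2's fixed-point arithmetic and the adjoint (backward) pass.

```python
import numpy as np

def lif_step(v, i_syn, in_spikes, w, dt=1.0, tau_mem=20.0, tau_syn=5.0, v_th=1.0):
    """One exponential-Euler step of leaky integrate-and-fire dynamics."""
    i_syn = i_syn * np.exp(-dt / tau_syn) + w.T @ in_spikes          # synaptic current
    v = v * np.exp(-dt / tau_mem) + (1.0 - np.exp(-dt / tau_mem)) * i_syn
    out_spikes = (v >= v_th).astype(v.dtype)                         # threshold crossing
    v = np.where(out_spikes > 0, 0.0, v)                             # reset-to-zero on spike
    return v, i_syn, out_spikes

rng = np.random.default_rng(0)
w = rng.normal(scale=0.3, size=(80, 128))        # 80 inputs -> 128 neurons
v, i_syn = np.zeros(128), np.zeros(128)
for t in range(100):                             # 100 ms at dt = 1 ms
    in_spikes = (rng.random(80) < 0.02).astype(float)
    v, i_syn, out = lif_step(v, i_syn, in_spikes, w)
```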
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends a recently proposed implicit-differentiation training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
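Implicit differentiation at an equilibrium avoids storing the forward trajectory: once the network settles to z* = f(z*), the gradient follows from a second fixed-point iteration on a vector-Jacobian product. The NumPy toy below shows generic implicit differentiation for a dense tanh network, not SPIDE's spike-based variant.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.3, size=(n, n))    # small scale keeps the map contractive
x = rng.normal(size=n)
f = lambda z: np.tanh(W @ z + x)

z = np.zeros(n)                           # forward: settle to z* = f(z*)
for _ in range(200):
    z = f(z)

g = z.copy()                              # dL/dz* for L = 0.5 * ||z*||^2
fprime = 1.0 - z ** 2                     # tanh'(W z* + x) at the equilibrium
J = fprime[:, None] * W                   # df/dz at z*

u = g.copy()                              # solve u = g + J^T u by iteration,
for _ in range(200):                      # i.e. u = (I - J^T)^{-1} g
    u = g + J.T @ u

dL_dW = np.outer(u * fprime, z)           # implicit-function-theorem gradient
```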
- Loss shaping enhances exact gradient learning with Eventprop in spiking neural networks [0.1350479308585481]
Eventprop is an algorithm for gradient descent on exact gradients in spiking neural networks.
We implement Eventprop in the GPU-enhanced Neural Networks (GeNN) framework.
We train spiking neural networks on the Spiking Heidelberg Digits and Spiking Speech Commands datasets.
arXiv Detail & Related papers (2022-12-02T15:20:58Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Energy-Efficient Deployment of Machine Learning Workloads on Neuromorphic Hardware [0.11744028458220425]
Several edge deep learning hardware accelerators have been released that specifically focus on reducing the power and area consumed by deep neural networks (DNNs).
Spiking neural networks (SNNs), which operate on discrete time-series data, have been shown to achieve substantial power reductions when deployed on specialized neuromorphic event-based/asynchronous hardware.
In this work, we provide a general guide to converting pre-trained DNNs into SNNs while also presenting techniques to improve the deployment of converted SNNs on neuromorphic hardware.
arXiv Detail & Related papers (2022-10-10T20:27:19Z)
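A standard ingredient of such conversion guides is data-based weight normalization: rescale each layer's weights by the ratio of the maximum activations recorded in the previous and current layers on a calibration set, so firing rates in the converted SNN stay within the spiking dynamic range. The sketch below shows this classic technique; it is not necessarily the exact method of this paper.

```python
import numpy as np

def normalize_for_snn(weights, max_acts):
    """Data-based weight normalization for ANN-to-SNN conversion.

    weights[l]: weight matrix of layer l.
    max_acts[l]: maximum ReLU activation of layer l (lambda_l) recorded
    on a calibration set. Each layer is rescaled by lambda_{l-1} / lambda_l.
    """
    normed, prev_max = [], 1.0           # inputs assumed in [0, 1]
    for w, lam in zip(weights, max_acts):
        normed.append(w * (prev_max / lam))
        prev_max = lam
    return normed

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(16, 32)), rng.normal(size=(32, 10))]
lams = [5.2, 3.7]                        # hypothetical calibration statistics
Ws_snn = normalize_for_snn(Ws, lams)
```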
- A Resource-efficient Spiking Neural Network Accelerator Supporting Emerging Neural Encoding [6.047137174639418]
Spiking neural networks (SNNs) have recently gained momentum due to their low-power, multiplication-free computing.
For large models, SNNs require very long spike trains (up to 1,000 time steps) to reach accuracy similar to their artificial neural network (ANN) counterparts.
We present a novel hardware architecture that can efficiently support SNNs with emerging neural encoding.
arXiv Detail & Related papers (2022-06-06T10:56:25Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces classification time by three orders of magnitude, with only a small 4.5% drop in accuracy compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
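The underlying identity is that any M-bit unsigned weight is a weighted sum of {-1, +1} bit-planes, so one quantized layer becomes several binary, XNOR-friendly branches. A toy decomposition, illustrative of the encoding only:

```python
import numpy as np

def decompose_pm1(q, bits):
    """Split integer weights in [0, 2**bits - 1] into {-1, +1} matrices S_i
    such that q = sum_i 2**i * (S_i + 1) / 2."""
    return [2 * ((q >> i) & 1) - 1 for i in range(bits)]

q = np.random.default_rng(2).integers(0, 16, size=(3, 3))   # 4-bit weights
planes = decompose_pm1(q, 4)                                # four {-1,+1} matrices
recon = sum((2 ** i) * (s + 1) // 2 for i, s in enumerate(planes))
assert np.array_equal(recon, q)                             # exact reconstruction
```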
- A Spiking Neural Network for Image Segmentation [3.4998703934432682]
We convert the deep Artificial Neural Network (ANN) architecture U-Net to a Spiking Neural Network (SNN) architecture using the Nengo framework.
Both rate-based and spike-based models are trained and optimized for benchmarking performance and power.
The neuromorphic implementation on the Intel Loihi neuromorphic chip is over 2x more energy-efficient than conventional hardware.
arXiv Detail & Related papers (2021-06-16T16:23:18Z)
- ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training [68.63354877166756]
ActNN is a memory-efficient training framework that stores randomly quantized activations for backpropagation.
ActNN reduces the memory footprint of activations by 12x and enables training with a 6.6x to 14x larger batch size.
arXiv Detail & Related papers (2021-04-29T05:50:54Z)
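The saving comes from storing activations for the backward pass at very low precision with unbiased stochastic rounding, then dequantizing only when gradients are computed. The sketch below is a generic per-tensor 2-bit version; ActNN's actual scheme quantizes per group and allocates bits adaptively.

```python
import numpy as np

def compress(x, bits=2, rng=np.random.default_rng(0)):
    """Quantize activations with unbiased stochastic rounding."""
    levels = 2 ** bits - 1
    lo, hi = float(x.min()), float(x.max())
    scaled = (x - lo) / (hi - lo + 1e-12) * levels
    q = np.floor(scaled + rng.random(x.shape)).astype(np.uint8)  # E[q] = scaled
    return q, lo, hi                       # store 2-bit codes plus the range

def decompress(q, lo, hi, bits=2):
    levels = 2 ** bits - 1
    return q.astype(np.float32) / levels * (hi - lo) + lo

x = np.random.default_rng(3).normal(size=(4, 4)).astype(np.float32)
q, lo, hi = compress(x)
x_hat = decompress(q, lo, hi)              # unbiased reconstruction of x
```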
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve a 28-78% speed-up over the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)