Spyx: A Library for Just-In-Time Compiled Optimization of Spiking Neural Networks
- URL: http://arxiv.org/abs/2402.18994v1
- Date: Thu, 29 Feb 2024 09:46:44 GMT
- Title: Spyx: A Library for Just-In-Time Compiled Optimization of Spiking Neural Networks
- Authors: Kade M. Heckel and Thomas Nowotny
- Abstract summary: Spiking Neural Networks (SNNs) promise to enhance energy efficiency through a reduced, low-power hardware footprint.
This paper introduces Spyx, a new and lightweight SNN simulation and optimization library designed in JAX.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the role of artificial intelligence becomes increasingly pivotal in modern
society, the efficient training and deployment of deep neural networks have
emerged as critical areas of focus. Recent advancements in attention-based
large neural architectures have spurred the development of AI accelerators,
facilitating the training of extensive, multi-billion parameter models. Despite
their effectiveness, these powerful networks often incur high execution costs
in production environments. Neuromorphic computing, inspired by biological
neural processes, offers a promising alternative. By utilizing temporally sparse
computations, Spiking Neural Networks (SNNs) promise to enhance energy efficiency
through a reduced, low-power hardware footprint. However, training SNNs can be
challenging because their recurrent nature does not map easily onto the massive
parallelism of modern AI accelerators. To facilitate the investigation of SNN
architectures and dynamics, researchers have
sought to bridge Python-based deep learning frameworks such as PyTorch or
TensorFlow with custom-implemented compute kernels. This paper introduces Spyx,
a new and lightweight SNN simulation and optimization library designed in JAX.
By pre-staging data in the expansive VRAM of contemporary accelerators and
employing extensive JIT compilation, Spyx allows for SNN optimization to be
executed as a unified, low-level program on NVIDIA GPUs or Google TPUs. This
approach achieves optimal hardware utilization, surpassing the performance of
many existing SNN training frameworks while maintaining considerable
flexibility.
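The training approach the abstract describes (the full dataset pre-staged in accelerator memory and the entire optimization step JIT-compiled into a single XLA program) can be illustrated with a small, self-contained JAX sketch. This is not the Spyx API: the leaky integrate-and-fire model, the fast-sigmoid surrogate gradient, the soft reset, and all shapes and hyper-parameters below are illustrative assumptions.

```python
# Minimal sketch of JIT-compiled surrogate-gradient SNN training in JAX.
# NOT the Spyx API: the LIF model, the fast-sigmoid surrogate (slope 10),
# the soft reset and all shapes/hyper-parameters are illustrative assumptions.
import jax
import jax.numpy as jnp

@jax.custom_vjp
def spike(v):
    # Heaviside step in the forward pass (non-differentiable).
    return (v > 0.0).astype(jnp.float32)

def spike_fwd(v):
    return spike(v), v

def spike_bwd(v, g):
    # Fast-sigmoid surrogate gradient in the backward pass.
    return (g / (1.0 + 10.0 * jnp.abs(v)) ** 2,)

spike.defvjp(spike_fwd, spike_bwd)

def lif_forward(params, x_seq):
    # x_seq: (T, batch, n_in) spike trains already resident in device memory.
    w = params["w"]
    beta, v_th = 0.9, 1.0  # assumed membrane leak and firing threshold

    def step(v, x_t):
        v = beta * v + x_t @ w          # leaky integration of synaptic input
        s = spike(v - v_th)             # spike with surrogate gradient
        return v - s * v_th, s          # soft reset, emit spikes

    v0 = jnp.zeros((x_seq.shape[1], w.shape[1]))
    _, spikes = jax.lax.scan(step, v0, x_seq)  # time loop fused by XLA
    return spikes

def loss_fn(params, x_seq, y):
    # Rate-coded readout: mean firing rate per output unit vs. one-hot target.
    rates = lif_forward(params, x_seq).mean(axis=0)
    return jnp.mean((rates - y) ** 2)

@jax.jit  # forward, backward and update compile to a single device program
def train_step(params, x_seq, y, lr=1e-2):
    loss, grads = jax.value_and_grad(loss_fn)(params, x_seq, y)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params, loss

# Pre-stage a toy dataset on the accelerator, as the abstract describes.
key = jax.random.PRNGKey(0)
x = jax.device_put((jax.random.uniform(key, (50, 32, 100)) < 0.1).astype(jnp.float32))
y = jax.device_put(jax.nn.one_hot(jax.random.randint(key, (32,), 0, 10), 10))
params = {"w": 0.1 * jax.random.normal(key, (100, 10))}

for _ in range(5):
    params, loss = train_step(params, x, y)
```

Because the `lax.scan` time loop and the parameter update are traced once under `jax.jit`, XLA fuses the unrolled forward pass, backward pass and update into one low-level device program, which is the effect the abstract attributes to Spyx on GPUs and TPUs.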
Related papers
- Towards Scalable GPU-Accelerated SNN Training via Temporal Fusion [8.995682796140429]
Spiking Neural Networks (SNNs) emerge as a transformative development in artificial intelligence.
SNNs show promising efficiency on specialized sparse-computational hardware, but their practical training often relies on conventional GPUs.
We present a novel temporal fusion method, specifically designed to expedite the propagation dynamics of SNNs on GPU platforms.
arXiv Detail & Related papers (2024-08-01T04:41:56Z)
- LitE-SNN: Designing Lightweight and Efficient Spiking Neural Network through Spatial-Temporal Compressive Network Search and Joint Optimization [48.41286573672824]
Spiking Neural Networks (SNNs) mimic the information-processing mechanisms of the human brain and are highly energy-efficient.
We propose a new approach named LitE-SNN that incorporates both spatial and temporal compression into the automated network design process.
arXiv Detail & Related papers (2024-01-26T05:23:11Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Energy-Efficient Deployment of Machine Learning Workloads on Neuromorphic Hardware [0.11744028458220425]
Several edge deep learning hardware accelerators have been released that specifically focus on reducing the power and area consumed by deep neural networks (DNNs).
Spiking neural networks (SNNs), which operate on discrete time-series data, have been shown to achieve substantial power reductions when deployed on specialized neuromorphic event-based/asynchronous hardware.
In this work, we provide a general guide to converting pre-trained DNNs into SNNs while also presenting techniques to improve the deployment of converted SNNs on neuromorphic hardware (a generic conversion sketch follows after this list).
arXiv Detail & Related papers (2022-10-10T20:27:19Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces classification time by three orders of magnitude, with only a 4.5% impact on accuracy compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- E3NE: An End-to-End Framework for Accelerating Spiking Neural Networks with Emerging Neural Encoding on FPGAs [6.047137174639418]
End-to-end framework E3NE automates the generation of efficient SNN inference logic for FPGAs.
E3NE uses less than 50% of hardware resources and 20% less power, while reducing the latency by an order of magnitude.
arXiv Detail & Related papers (2021-11-19T04:01:19Z) - Optimizing Memory Placement using Evolutionary Graph Reinforcement
Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
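The ANN-to-SNN conversion mentioned in two of the entries above can be sketched in a generic form: copy the trained weights into integrate-and-fire neurons, normalise them by the observed maximum activations, and rate-code the inputs so that spike counts approximate the original ReLU outputs. This is a common baseline scheme, not the method of either paper; the normalisation heuristic, the layer, and all constants and shapes below are assumptions.

```python
# Generic sketch of rate-based ANN-to-SNN conversion, referenced from the
# related-papers list above. NOT the procedure of any specific paper: the
# max-activation weight normalisation, the integrate-and-fire layer and all
# constants/shapes are illustrative assumptions.
import jax
import jax.numpy as jnp

def convert_layer(w, b, max_act_in, max_act_out):
    # Rescale a trained ReLU layer so that integrate-and-fire neurons
    # fire in a usable rate range (simple max-activation normalisation).
    return w * (max_act_in / max_act_out), b / max_act_out

def if_layer(x_seq, w, b, v_th=1.0):
    # Integrate-and-fire layer driven by a rate-coded input spike train.
    def step(v, x_t):
        v = v + x_t @ w + b                      # integrate, no leak
        s = (v >= v_th).astype(jnp.float32)      # fire at threshold
        return v - s * v_th, s                   # subtract-reset
    v0 = jnp.zeros((x_seq.shape[1], w.shape[1]))
    _, spikes = jax.lax.scan(step, v0, x_seq)
    return spikes

# Toy usage: one trained layer, inputs rate-coded over T time steps.
key = jax.random.PRNGKey(0)
w, b = 0.1 * jax.random.normal(key, (64, 10)), jnp.zeros(10)
w_snn, b_snn = convert_layer(w, b, max_act_in=1.0, max_act_out=2.5)

T = 100
analog = jax.random.uniform(key, (8, 64))                                   # inputs in [0, 1]
x_seq = (jax.random.uniform(key, (T, 8, 64)) < analog).astype(jnp.float32)  # Bernoulli rate code
rates = if_layer(x_seq, w_snn, b_snn).mean(axis=0)                          # approximate ReLU outputs
```

Longer simulation windows T trade inference latency for a closer match between the measured firing rates and the original ReLU activations.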
This list is automatically generated from the titles and abstracts of the papers on this site.