An Asynchronous Multi-core Accelerator for SNN inference
- URL: http://arxiv.org/abs/2407.20947v1
- Date: Tue, 30 Jul 2024 16:25:38 GMT
- Title: An Asynchronous Multi-core Accelerator for SNN inference
- Authors: Zhuo Chen, De Ma, Xiaofei Jin, Qinghui Xing, Ouwen Jin, Xin Du, Shuibing He, Gang Pan
- Abstract summary: Spiking Neural Networks (SNNs) are extensively utilized in brain-inspired computing and neuroscience research.
Our architecture achieves a 1.86x speedup and a 1.55x increase in energy efficiency compared to state-of-the-art synchronization architectures.
- Score: 26.81434114127108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) are extensively utilized in brain-inspired computing and neuroscience research. To enhance the speed and energy efficiency of SNNs, several many-core accelerators have been developed. However, maintaining the accuracy of SNNs often necessitates frequent explicit synchronization among all cores, which presents a challenge to overall efficiency. In this paper, we propose an asynchronous architecture for Spiking Neural Networks (SNNs) that eliminates the need for inter-core synchronization, thus enhancing speed and energy efficiency. This approach leverages the pre-determined dependencies of neuromorphic cores established during compilation. Each core is equipped with a scheduler that monitors the status of its dependencies, allowing it to safely advance to the next timestep without waiting for other cores. This eliminates the necessity for global synchronization and minimizes core waiting time despite inherent workload imbalances. Comprehensive evaluations using five different SNN workloads show that our architecture achieves a 1.86x speedup and a 1.55x increase in energy efficiency compared to state-of-the-art synchronization architectures.
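The scheduling idea in the abstract (compile-time dependencies, per-core progress tracking, no global barrier) can be sketched as follows. This is an illustrative reconstruction only, not the paper's actual hardware design; all names (`AsyncCore`, `dep_progress`, `try_step`) are hypothetical.

```python
# Hypothetical sketch: each core knows at compile time which cores feed it,
# tracks how far each dependency has progressed, and advances to its next
# timestep as soon as all dependencies have caught up -- no global barrier.

class AsyncCore:
    def __init__(self, core_id, dependencies):
        self.core_id = core_id
        self.timestep = 0                               # last timestep completed
        self.dep_progress = {d: 0 for d in dependencies}

    def on_dependency_advanced(self, dep_id, timestep):
        # A dependency core reports that it has finished `timestep`.
        self.dep_progress[dep_id] = timestep

    def can_advance(self):
        # Timestep t+1 is safe once every dependency has completed t+1,
        # i.e. its spikes for that timestep are available.
        t_next = self.timestep + 1
        return all(done >= t_next for done in self.dep_progress.values())

    def try_step(self):
        # Attempt one timestep; returns False if still waiting on a dependency.
        if not self.can_advance():
            return False
        self.timestep += 1
        # ... integrate inputs, update membrane potentials, emit spikes ...
        return True

# A core with no dependencies runs freely; a consumer core waits only on
# the specific cores it depends on, not on the whole chip.
a = AsyncCore("A", dependencies=[])
b = AsyncCore("B", dependencies=["A"])
b.try_step()                            # False: A has not finished timestep 1
a.try_step()                            # True: A advances independently
b.on_dependency_advanced("A", a.timestep)
b.try_step()                            # True: dependency satisfied
```

Workload imbalance is absorbed naturally: a fast core simply runs ahead until one of its dependencies lags, rather than stalling at a chip-wide barrier each timestep.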
Related papers
- Spiking Neural Networks: The Future of Brain-Inspired Computing [0.0]
Spiking Neural Networks (SNNs) represent the latest generation of neural computation.
SNNs operate using distinct spike events, making them inherently more energy-efficient and temporally dynamic.
This study presents a comprehensive analysis of SNN design models, training algorithms, and multi-dimensional performance metrics.
arXiv Detail & Related papers (2025-10-31T11:14:59Z)
- One-Timestep is Enough: Achieving High-performance ANN-to-SNN Conversion via Scale-and-Fire Neurons [7.289542889212981]
Spiking Neural Networks (SNNs) are energy-efficient alternatives to Artificial Neural Networks (ANNs).
We propose a theoretical and practical framework for single-timestep ANN2SNN.
We achieve 88.8% top-1 accuracy on ImageNet-1K at $T=1$, surpassing existing conversion methods.
arXiv Detail & Related papers (2025-10-27T14:35:14Z)
- Proxy Target: Bridging the Gap Between Discrete Spiking Neural Networks and Continuous Control [59.65431931190187]
Spiking Neural Networks (SNNs) offer low-latency and energy-efficient decision making on neuromorphic hardware.
Most algorithms for continuous control are designed for Artificial Neural Networks (ANNs).
We show that this mismatch destabilizes SNN training and degrades performance.
We propose a novel proxy target framework to bridge the gap between discrete SNNs and continuous-control algorithms.
arXiv Detail & Related papers (2025-05-30T03:08:03Z)
- SpikeX: Exploring Accelerator Architecture and Network-Hardware Co-Optimization for Sparse Spiking Neural Networks [3.758294848902233]
We propose a novel systolic-array SNN accelerator architecture, called SpikeX, to take on the challenges and opportunities stemming from unstructured sparsity.
SpikeX reduces memory access and increases data sharing and hardware utilization, targeting computations spanning both time and space.
arXiv Detail & Related papers (2025-05-18T08:07:44Z)
- STAA-SNN: Spatial-Temporal Attention Aggregator for Spiking Neural Networks [17.328954271272742]
Spiking Neural Networks (SNNs) have gained significant attention due to their biological plausibility and energy efficiency.
However, the performance gap between SNNs and Artificial Neural Networks (ANNs) remains a substantial challenge hindering the widespread adoption of SNNs.
We propose a Spatial-Temporal Attention Aggregator SNN framework, which dynamically focuses on and captures both spatial and temporal dependencies.
arXiv Detail & Related papers (2025-03-04T15:02:32Z)
- Efficient Logit-based Knowledge Distillation of Deep Spiking Neural Networks for Full-Range Timestep Deployment [10.026742974971189]
Spiking Neural Networks (SNNs) are emerging as a brain-inspired alternative to traditional Artificial Neural Networks (ANNs).
Despite this, SNNs often suffer from lower accuracy than ANNs and face deployment challenges due to the number of inference timesteps.
We propose a novel distillation framework for deep SNNs that optimizes performance across full-range timesteps without specific retraining.
arXiv Detail & Related papers (2025-01-27T10:22:38Z)
- Overcoming the Limitations of Layer Synchronization in Spiking Neural Networks [0.11522790873450185]
A truly asynchronous system would allow all neurons to concurrently evaluate their thresholds and emit spikes upon receiving any presynaptic current.
We present a study that documents and quantifies this problem on three datasets in our simulation environment, which implements network asynchrony.
We show that models trained with layer synchronization either perform sub-optimally in the absence of synchronization or fail to benefit from any energy and latency reductions.
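The fully asynchronous evaluation described in this entry can be sketched with a simple event-driven integrate-and-fire model: each incoming presynaptic spike immediately updates its target neuron, which checks its threshold on every event instead of waiting for a layer-wide synchronization step. This is a minimal illustrative sketch, not the paper's simulator; all names (`EventNeuron`, `run`, the weights) are hypothetical.

```python
# Minimal event-driven sketch (illustrative only): neurons evaluate their
# threshold the moment any presynaptic current arrives, with no layer-wise
# synchronization barrier between updates.
from collections import deque

class EventNeuron:
    def __init__(self, name, threshold=1.0):
        self.name = name
        self.v = 0.0                   # membrane potential
        self.threshold = threshold
        self.out = []                  # (target neuron, synaptic weight)
        self.spikes = 0

    def receive(self, current, queue):
        # Integrate the incoming current and check the threshold right away.
        self.v += current
        if self.v >= self.threshold:
            self.v -= self.threshold   # soft reset after spiking
            self.spikes += 1
            for target, weight in self.out:
                queue.append((target, weight))   # propagate asynchronously

def run(injections):
    # injections: iterable of (neuron, input current) external events,
    # processed in arrival order until the event queue drains.
    queue = deque(injections)
    while queue:
        neuron, current = queue.popleft()
        neuron.receive(current, queue)

a = EventNeuron("a")
b = EventNeuron("b")
a.out.append((b, 0.6))
run([(a, 0.7), (a, 0.5)])   # a crosses threshold on the second event
```

After the run, `a` has fired once (0.7 + 0.5 exceeds the threshold of 1.0) and `b` has integrated the resulting 0.6 of current without firing, all without any global timestep boundary.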
arXiv Detail & Related papers (2024-08-09T14:39:23Z)
- Graph Neural Networks Gone Hogwild [14.665528337423249]
Graph neural networks (GNNs) appear to be powerful tools to learn state representations for agents in distributed, decentralized multi-agent systems.
GNNs generate catastrophically incorrect predictions when nodes update asynchronously during inference.
We identify "implicitly-defined" GNNs as a class of architectures which is provably robust to asynchronous "hogwild" inference.
We propose a novel implicitly-defined GNN architecture, which we call an "energy GNN".
arXiv Detail & Related papers (2024-06-29T17:11:09Z)
- Training a General Spiking Neural Network with Improved Efficiency and Minimum Latency [4.503744528661997]
Spiking Neural Networks (SNNs) operate in an event-driven manner and employ binary spike representation.
This paper proposes a general training framework that enhances feature learning and activation efficiency within a limited time step.
arXiv Detail & Related papers (2024-01-05T09:54:44Z)
- Best of Both Worlds: Hybrid SNN-ANN Architecture for Event-based Optical Flow Estimation [12.611797572621398]
Spiking Neural Networks (SNNs), with their asynchronous event-driven compute, show great potential for extracting features from event streams.
We propose a novel SNN-ANN hybrid architecture that combines the strengths of both.
arXiv Detail & Related papers (2023-06-05T15:26:02Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Energy-Efficient Deployment of Machine Learning Workloads on Neuromorphic Hardware [0.11744028458220425]
Several edge deep learning hardware accelerators have been released that specifically focus on reducing the power and area consumed by deep neural networks (DNNs).
Spiking neural networks (SNNs), which operate on discrete time-series data, have been shown to achieve substantial power reductions when deployed on specialized neuromorphic event-based/asynchronous hardware.
In this work, we provide a general guide to converting pre-trained DNNs into SNNs while also presenting techniques to improve the deployment of converted SNNs on neuromorphic hardware.
arXiv Detail & Related papers (2022-10-10T20:27:19Z)
- A Resource-efficient Spiking Neural Network Accelerator Supporting Emerging Neural Encoding [6.047137174639418]
Spiking neural networks (SNNs) have recently gained momentum due to their low-power, multiplication-free computing.
However, SNNs require very long spike trains (up to 1000) to reach an accuracy similar to their artificial neural network (ANN) counterparts for large models.
We present a novel hardware architecture that can efficiently support SNN with emerging neural encoding.
arXiv Detail & Related papers (2022-06-06T10:56:25Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around 40% of the available hardware resources.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of time-to-first-spike (TTFS)-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.