Open the box of digital neuromorphic processor: Towards effective
algorithm-hardware co-design
- URL: http://arxiv.org/abs/2303.15224v1
- Date: Mon, 27 Mar 2023 14:03:11 GMT
- Title: Open the box of digital neuromorphic processor: Towards effective
algorithm-hardware co-design
- Authors: Guangzhi Tang, Ali Safa, Kevin Shidqi, Paul Detterer, Stefano
Traferro, Mario Konijnenburg, Manolis Sifalakis, Gert-Jan van Schaik,
Amirreza Yousefzadeh
- Abstract summary: We present a practical approach to enable algorithm designers to accurately benchmark SNN algorithms.
We show the energy efficiency of SNN algorithms for video processing and online learning.
- Score: 0.08431877864777441
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sparse and event-driven spiking neural network (SNN) algorithms are the ideal
candidate solution for energy-efficient edge computing. Yet, with the growing
complexity of SNN algorithms, it is not easy to properly benchmark and optimize
their computational cost without hardware in the loop. Although digital
neuromorphic processors have been widely adopted to benchmark SNN algorithms,
their black-box nature is problematic for algorithm-hardware co-optimization.
In this work, we open the black box of the digital neuromorphic processor for
algorithm designers by presenting the neuron processing instruction set and
detailed energy consumption of the SENeCA neuromorphic architecture. For
convenient benchmarking and optimization, we provide the energy cost of the
essential neuromorphic components in SENeCA, including neuron models and
learning rules. Moreover, we exploit SENeCA's hierarchical memory and
demonstrate its advantage over existing neuromorphic processors. We show the energy
efficiency of SNN algorithms for video processing and online learning, and
demonstrate the potential of our work for optimizing algorithm designs.
Overall, we present a practical approach to enable algorithm designers to
accurately benchmark SNN algorithms and pave the way towards effective
algorithm-hardware co-design.
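To make the benchmarking idea above concrete, the sketch below estimates the energy of a single sparse, event-driven SNN layer from its event counts and per-operation energy costs. The per-operation figures and the function itself are illustrative assumptions, not SENeCA's published numbers or tooling; the costs reported in the paper for neuron models, learning rules, and memory accesses would be substituted in their place.

```python
# Hypothetical sketch of event-driven energy benchmarking for one SNN layer.
# All energy figures are placeholders, not SENeCA's published numbers.

# Assumed per-operation energy costs in picojoules (illustrative only).
E_SYNOP_PJ = 2.0       # one synaptic accumulation triggered by an input spike
E_NEURON_PJ = 5.0      # one neuron-model update (e.g. LIF leak + threshold)
E_MEM_ACCESS_PJ = 1.0  # one weight fetch from local (hierarchical) memory


def estimate_layer_energy_pj(num_input_spikes: int,
                             fanout: int,
                             num_neurons: int,
                             timesteps: int) -> float:
    """Rough energy estimate for one sparse, event-driven SNN layer.

    Synaptic work scales with the number of input spikes times their fanout,
    while neuron updates scale with the number of neurons and timesteps.
    """
    synaptic_ops = num_input_spikes * fanout
    neuron_updates = num_neurons * timesteps
    return (synaptic_ops * (E_SYNOP_PJ + E_MEM_ACCESS_PJ)
            + neuron_updates * E_NEURON_PJ)


if __name__ == "__main__":
    # Example: 10% input activity on a 1024 -> 512 fully connected layer
    # over 16 timesteps.
    spikes = int(0.10 * 1024 * 16)
    energy_pj = estimate_layer_energy_pj(spikes, fanout=512,
                                         num_neurons=512, timesteps=16)
    print(f"Estimated layer energy: {energy_pj / 1e6:.3f} uJ")
```

Counting synaptic operations separately from neuron updates is what lets sparsity show up directly in the estimate: fewer input spikes mean proportionally fewer accumulations and weight fetches.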
Related papers
- NAR-*ICP: Neural Execution of Classical ICP-based Pointcloud Registration Algorithms [7.542220697870245]
This study explores the intersection of neural networks and classical robotics algorithms through the Neural Algorithmic Reasoning framework.
We propose a Graph Neural Network (GNN)-based learning framework, NAR-*ICP, which learns the intermediate algorithmic steps of classical ICP-based pointcloud registration algorithms.
We evaluate our approach across diverse datasets, from real-world to synthetic, demonstrating its flexibility in handling complex and noisy inputs.
arXiv Detail & Related papers (2024-10-14T19:33:46Z) - Recent Advances in Scalable Energy-Efficient and Trustworthy Spiking
Neural Networks: from Algorithms to Technology [11.479629320025673]
Spiking neural networks (SNNs) have become an attractive alternative to deep neural networks for a broad range of signal processing applications.
We describe advances in algorithmic and optimization innovations to efficiently train and scale low-latency and energy-efficient SNNs.
We discuss the potential path forward for research in building deployable SNN systems.
arXiv Detail & Related papers (2023-12-02T19:47:00Z) - Free-Space Optical Spiking Neural Network [0.0]
We introduce the Free-space Optical deep Spiking Convolutional Neural Network (OSCNN).
This novel approach draws inspiration from computational models of the human eye.
Our results demonstrate promising performance with minimal latency and power consumption compared to their electronic ONN counterparts.
arXiv Detail & Related papers (2023-11-08T09:41:14Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking
Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis on the incorporation of algorithms based on neural networks into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost, and generalization to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z) - Quantized Neural Networks via {-1, +1} Encoding Decomposition and
Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z) - Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding
Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z) - Run-time Mapping of Spiking Neural Networks to Neuromorphic Hardware [0.44446524844395807]
We propose a design methodology to partition and map the neurons and synapses of online learning SNN-based applications to neuromorphic architectures at run-time.
Our algorithm reduces SNN mapping time by an average 780x compared to a state-of-the-art design-time based SNN partitioning approach with only 6.25% lower solution quality.
arXiv Detail & Related papers (2020-06-11T19:56:55Z) - Spiking Neural Networks Hardware Implementations and Challenges: a
Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z) - A Supervised Learning Algorithm for Multilayer Spiking Neural Networks
Based on Temporal Coding Toward Energy-Efficient VLSI Processor Design [2.6872737601772956]
Spiking neural networks (SNNs) are brain-inspired mathematical models with the ability to process information in the form of spikes.
We propose a novel supervised learning algorithm for SNNs based on temporal coding.
arXiv Detail & Related papers (2020-01-08T03:37:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.