Hardware/Software co-design with ADC-Less In-memory Computing Hardware
for Spiking Neural Networks
- URL: http://arxiv.org/abs/2211.02167v2
- Date: Sun, 4 Jun 2023 23:24:36 GMT
- Title: Hardware/Software co-design with ADC-Less In-memory Computing Hardware
for Spiking Neural Networks
- Authors: Marco Paul E. Apolinario, Adarsh Kumar Kosta, Utkarsh Saxena, Kaushik
Roy
- Abstract summary: Spiking Neural Networks (SNNs) are bio-plausible models that hold great potential for realizing energy-efficient implementations of sequential tasks on resource-constrained edge devices.
We propose a hardware/software co-design methodology to deploy SNNs into an ADC-Less IMC architecture, using sense amplifiers as 1-bit ADCs in place of conventional HP-ADCs.
Our proposed framework incurs minimal accuracy degradation by performing hardware-aware training and is able to scale beyond simple image classification tasks to more complex sequential regression tasks.
- Score: 4.7519630770389405
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) are bio-plausible models that hold great
potential for realizing energy-efficient implementations of sequential tasks on
resource-constrained edge devices. However, commercial edge platforms based on
standard GPUs are not optimized to deploy SNNs, resulting in high energy and
latency. While analog In-Memory Computing (IMC) platforms can serve as
energy-efficient inference engines, they are hampered by the immense energy,
latency, and area requirements of high-precision ADCs (HP-ADC), overshadowing
the benefits of in-memory computations. We propose a hardware/software
co-design methodology to deploy SNNs into an ADC-Less IMC architecture using
sense amplifiers as 1-bit ADCs, replacing conventional HP-ADCs and alleviating
the above issues. Our proposed framework incurs minimal accuracy degradation by
performing hardware-aware training and is able to scale beyond simple image
classification tasks to more complex sequential regression tasks. Experiments
on complex tasks of optical flow estimation and gesture recognition show that
progressively increasing the hardware awareness during SNN training allows the
model to adapt and learn the errors due to the non-idealities associated with
ADC-Less IMC. Also, the proposed ADC-Less IMC offers significant energy and
latency improvements, $2-7\times$ and $8.9-24.6\times$, respectively, depending
on the SNN model and the workload, compared to HP-ADC IMC.
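As context for the co-design idea, here is a minimal PyTorch sketch of how a sense-amplifier readout could be modeled during hardware-aware training: each crossbar tile's analog partial sum is thresholded to a single bit, and a straight-through estimator lets gradients flow so the network can learn around the quantization error. The tile size, threshold, and straight-through backward pass are illustrative assumptions, not the authors' exact implementation.

```python
import torch

class SenseAmpADC(torch.autograd.Function):
    """1-bit ADC model: a sense amplifier thresholds each analog partial
    sum to a binary value. The straight-through backward pass is an
    assumption that makes the non-differentiable readout trainable."""

    @staticmethod
    def forward(ctx, partial_sum, threshold):
        return (partial_sum >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # straight-through: pass gradients unchanged


def crossbar_matvec(weights, spikes, tile_rows=64, threshold=0.0):
    """Matrix-vector product split into crossbar tiles of `tile_rows`
    inputs; each tile's analog partial sum is read out through a 1-bit
    sense amp and the binary outputs are accumulated digitally."""
    out = torch.zeros(weights.shape[0])
    for start in range(0, weights.shape[1], tile_rows):
        tile = weights[:, start:start + tile_rows]          # one crossbar tile
        partial = tile @ spikes[start:start + tile_rows]    # analog MVM
        out = out + SenseAmpADC.apply(partial, threshold)   # 1-bit readout
    return out


# Toy usage: binary input spikes through one fully connected SNN layer
w = torch.randn(128, 256, requires_grad=True)
s = (torch.rand(256) > 0.8).float()
y = crossbar_matvec(w, s)
y.sum().backward()  # hardware-aware training sees the 1-bit quantization
```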
Related papers
- Approximate ADCs for In-Memory Computing [5.1793930906065775]
In-memory computing (IMC) architectures for deep learning (DL) accelerators leverage energy-efficient and highly parallel matrix-vector multiplication (MVM) operations.
Recently reported designs reveal that the ADCs required for reading out the MVM results consume more than 85% of the total compute power and also dominate the area.
In this work, we present a peripheral-aware design of IMC cores to mitigate such overheads.
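Taking the 85% figure at face value, an Amdahl's-law-style estimate shows why the ADCs dominate any IMC power win; the reduction factors below are hypothetical:

```python
# Amdahl-style estimate: total power = non-ADC share + ADC share / reduction.
# The 0.85 ADC share is the figure quoted above; reductions are hypothetical.
adc_share = 0.85
for reduction in (2, 4, 10):
    remaining = (1 - adc_share) + adc_share / reduction
    print(f"{reduction}x cheaper ADCs -> {1 / remaining:.2f}x total power win")
```

Even a 10x cheaper ADC yields only about a 4.3x total win, which is why approximate and ADC-less readout schemes target this term directly.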
arXiv Detail & Related papers (2024-08-11T05:59:59Z)
- StoX-Net: Stochastic Processing of Partial Sums for Efficient In-Memory Computing DNN Accelerators [5.245727758971415]
Crossbar-based in-memory computing (IMC) has emerged as a promising platform for hardware acceleration of deep neural networks (DNNs).
However, the energy and latency of IMC systems are dominated by the large overhead of the peripheral analog-to-digital converters (ADCs).
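As a rough illustration of stochastic partial-sum processing, the sketch below binarizes each analog partial sum with a probability that grows with its value, so averaging repeated reads recovers a higher-precision estimate; the sigmoid probability model is an assumption, not StoX-Net's actual circuit.

```python
import torch

def stochastic_readout(partial_sum, scale=1.0):
    """Stochastically binarize analog partial sums: the probability of
    reading a '1' grows with the partial-sum value, mimicking a noisy
    1-bit comparator in place of a multi-bit ADC (illustrative model)."""
    return torch.bernoulli(torch.sigmoid(scale * partial_sum))

# Averaging a few stochastic reads trades latency for precision
ps = torch.randn(128)                      # analog partial sums (toy values)
reads = torch.stack([stochastic_readout(ps) for _ in range(16)])
estimate = reads.mean(dim=0)               # converges to sigmoid(ps)
```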
arXiv Detail & Related papers (2024-07-17T07:56:43Z)
- Are SNNs Truly Energy-efficient? $-$ A Hardware Perspective [7.539212567508529]
Spiking Neural Networks (SNNs) have gained attention for their energy-efficient machine learning capabilities.
This work studies two hardware benchmarking platforms for large-scale SNN inference, namely SATA and SpikeSim.
arXiv Detail & Related papers (2023-09-06T22:23:22Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both inference accuracy and mean squared error without requiring additional training data.
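A minimal sketch of the shared-backbone, multi-head pattern (layer sizes and the averaging ensemble are hypothetical placeholders, not the MEMTL architecture):

```python
import torch
import torch.nn as nn

class MultiHeadEnsemble(nn.Module):
    """One shared backbone feeds several prediction heads (PHs); the
    ensemble output averages the heads. Sizes are illustrative."""

    def __init__(self, in_dim=32, hidden=64, out_dim=8, num_heads=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            nn.Linear(hidden, out_dim) for _ in range(num_heads)
        )

    def forward(self, x):
        features = self.backbone(x)                       # shared computation
        preds = torch.stack([h(features) for h in self.heads])
        return preds.mean(dim=0)                          # head ensemble

model = MultiHeadEnsemble()
scores = model(torch.randn(1, 32))  # e.g., offloading-decision scores
```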
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- AnalogNAS: A Neural Network Design Framework for Accurate Inference with Analog In-Memory Computing [7.596833322764203]
Inference at the edge requires low-latency, compact, and power-efficient models.
Analog/mixed-signal in-memory computing hardware accelerators can easily transcend the memory wall of von Neumann architectures.
We propose AnalogNAS, a framework for automated Deep Neural Network (DNN) design targeting deployment on analog In-Memory Computing (IMC) inference accelerators.
arXiv Detail & Related papers (2023-05-17T07:39:14Z)
- Fluid Batching: Exit-Aware Preemptive Serving of Early-Exit Neural Networks on Edge NPUs [74.83613252825754]
"Smart ecosystems" are forming in which sensing happens concurrently rather than in standalone devices.
This is shifting the on-device inference paradigm towards deploying neural processing units (NPUs) at the edge.
We propose a novel early-exit scheduling scheme that allows preemption at run time to account for the dynamicity introduced by the sample arrival and early-exiting processes.
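For context, here is a minimal sketch of the underlying early-exit mechanism: a sample leaves at the first exit head whose softmax confidence clears a threshold. The confidence heuristic and layer shapes are assumptions; the paper's contribution, preemptive exit-aware scheduling on top of this mechanism, is not shown.

```python
import torch
import torch.nn as nn

def early_exit_inference(blocks, exits, x, confidence=0.9):
    """Run backbone blocks in order; stop at the first exit head whose
    max softmax probability clears `confidence` (a common heuristic)."""
    for stage, (block, exit_head) in enumerate(zip(blocks, exits)):
        x = block(x)
        probs = torch.softmax(exit_head(x), dim=-1)
        conf, label = probs.max(dim=-1)
        if conf.item() >= confidence:
            return label.item(), stage        # exited early
    return label.item(), stage                # fell through to final exit

# Hypothetical 3-stage backbone with one exit head per stage
blocks = nn.ModuleList(nn.Sequential(nn.Linear(16, 16), nn.ReLU()) for _ in range(3))
exits = nn.ModuleList(nn.Linear(16, 10) for _ in range(3))
label, stage = early_exit_inference(blocks, exits, torch.randn(1, 16))
```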
arXiv Detail & Related papers (2022-09-27T15:04:01Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces classification time by three orders of magnitude compared to its full-precision software counterpart, with a small 4.5% impact on accuracy.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete actions (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
With a latency- and accuracy-aware reward design, such a co-inference scheme can adapt well to complex environments such as dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- A Heterogeneous In-Memory Computing Cluster For Flexible End-to-End Inference of Real-World Deep Neural Networks [12.361842554233558]
Deployment of modern TinyML tasks on small battery-constrained IoT devices requires high computational energy efficiency.
Analog In-Memory Computing (IMC) using non-volatile memory (NVM) promises major efficiency improvements in deep neural network (DNN) inference.
We present a heterogeneous tightly-coupled architecture integrating 8 RISC-V cores, an in-memory computing accelerator (IMA), and digital accelerators.
arXiv Detail & Related papers (2022-01-04T11:12:01Z)
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network for dividing the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations while the low-frequency part is assigned cheap operations, relieving the computational burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
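A minimal sketch of such a DCT-domain split, assuming a simple zonal partition in which the top-left `keep` x `keep` coefficients form the low-frequency part (the partition rule is an illustrative assumption):

```python
import numpy as np
from scipy.fft import dctn, idctn

def split_by_frequency(patch, keep=8):
    """Split a patch into low- and high-frequency parts in the DCT
    domain; the low part goes to the cheap branch and the high part
    to the expensive one. Zonal masking is an illustrative choice."""
    coeffs = dctn(patch, norm="ortho")
    low = np.zeros_like(coeffs)
    low[:keep, :keep] = coeffs[:keep, :keep]   # keep low frequencies
    high = coeffs - low                        # remainder is high frequency
    return idctn(low, norm="ortho"), idctn(high, norm="ortho")

patch = np.random.rand(32, 32).astype(np.float32)
low_part, high_part = split_by_frequency(patch)
# The two parts sum back to the input, so the split is lossless
assert np.allclose(low_part + high_part, patch, atol=1e-5)
```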
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
- PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning [57.20262984116752]
We introduce a new dimension, fine-grained pruning patterns inside the coarse-grained structures, revealing a previously unknown point in design space.
With the higher accuracy enabled by fine-grained pruning patterns, the unique insight is to use the compiler to re-gain and guarantee high hardware efficiency.
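A toy sketch of pattern-based kernel pruning: each 3x3 kernel is masked by whichever pattern, from a small predefined set, best preserves its weight magnitude, so the compiler only has a handful of regular sparsity cases to optimize. The specific patterns and the magnitude heuristic are illustrative assumptions.

```python
import numpy as np

# Hypothetical 3x3 pruning patterns (1 = keep weight); a small fixed set
# lets the compiler generate one efficient code path per pattern.
PATTERNS = np.array([
    [[0, 1, 0], [1, 1, 1], [0, 1, 0]],
    [[1, 0, 1], [0, 1, 0], [1, 0, 1]],
    [[1, 1, 0], [1, 1, 0], [0, 1, 0]],
    [[0, 1, 1], [0, 1, 1], [0, 1, 0]],
], dtype=np.float32)

def apply_best_pattern(kernel):
    """Mask a 3x3 kernel with the pattern that preserves the most
    weight magnitude (a common selection heuristic, assumed here)."""
    scores = [np.abs(kernel * p).sum() for p in PATTERNS]
    return kernel * PATTERNS[int(np.argmax(scores))]

kernel = np.random.randn(3, 3).astype(np.float32)
pruned = apply_best_pattern(kernel)   # fine-grained pattern, regular shape
```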
arXiv Detail & Related papers (2020-01-01T04:52:07Z)