SCONNA: A Stochastic Computing Based Optical Accelerator for Ultra-Fast,
Energy-Efficient Inference of Integer-Quantized CNNs
- URL: http://arxiv.org/abs/2302.07036v1
- Date: Tue, 14 Feb 2023 13:35:15 GMT
- Authors: Sairam Sri Vatsavai, Venkata Sai Praneeth Karempudi, Ishan Thakkar,
Ahmad Salehi, and Todd Hastings
- Abstract summary: A CNN inference task uses convolution operations that are typically transformed into vector-dot-product (VDP) operations.
Several photonic microring resonator (MRR)-based hardware architectures have been proposed to accelerate integer-quantized CNNs.
Existing photonic MRR-based analog accelerators exhibit a very strong trade-off between the achievable input/weight precision and VDP operation size.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A CNN inference task uses convolution operations that are
typically transformed into vector-dot-product (VDP) operations. Several
photonic microring resonator (MRR)-based hardware architectures have been
proposed to accelerate integer-quantized CNNs with remarkably higher throughput
and energy efficiency compared to their electronic counterparts. However, the
existing photonic MRR-based analog accelerators exhibit a very strong trade-off
between the achievable input/weight precision and VDP operation size, which
severely restricts their achievable VDP operation size for the quantized
input/weight precision of 4 bits and higher. The restricted VDP operation size
ultimately suppresses computing throughput, severely diminishing the achievable
performance benefits. To address this shortcoming, we present, for the first
time, a merger of stochastic computing and MRR-based CNN accelerators. To
leverage the innate precision flexibility of stochastic computing, we invent an
MRR-based optical stochastic multiplier (OSM). We employ multiple OSMs in a
cascaded manner using dense wavelength division multiplexing, to forge a novel
Stochastic Computing based Optical Neural Network Accelerator (SCONNA). SCONNA
achieves significantly high throughput and energy efficiency for accelerating
inferences of high-precision quantized CNNs. Our evaluation for the inference
of four modern CNNs at 8-bit input/weight precision indicates that SCONNA
provides improvements of up to 66.5x, 90x, and 91x in frames-per-second (FPS),
FPS/W and FPS/W/mm2, respectively, on average over two photonic MRR-based
analog CNN accelerators from prior work, with Top-1 accuracy drop of only up to
0.4% for large CNNs and up to 1.5% for small CNNs. We developed a
transaction-level, event-driven, Python-based simulator for the evaluation of
SCONNA and other accelerators (https://github.com/uky-UCAT/SC_ONN_SIM.git).
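Two mechanisms in the abstract benefit from a concrete illustration: the lowering of a convolution to VDP operations, and stochastic multiplication, whose precision scales with bitstream length rather than a fixed hardware bit-width (the "innate precision flexibility" cited above). The following minimal Python sketch models both in software under our own assumptions about names, shapes, and stream lengths; in SCONNA the AND step is realized optically by the MRR-based OSM and many multipliers run in parallel over dense wavelength division multiplexing, none of which this sketch captures.

    import numpy as np

    rng = np.random.default_rng(0)

    def to_stream(p, n):
        # Unipolar stochastic encoding: a value p in [0, 1] becomes an
        # n-bit stream in which each bit is 1 with probability p.
        return (rng.random(n) < p).astype(np.uint8)

    def sc_multiply(a, b, n=8192):
        # Multiplying two independent unipolar streams reduces to a bitwise
        # AND; decoding is the fraction of 1s. Longer streams give higher precision.
        return float(np.mean(to_stream(a, n) & to_stream(b, n)))

    def sc_vdp(xs, ws, n=8192):
        # A vector-dot-product (VDP) assembled from stochastic multiplies,
        # with partial products accumulated digitally.
        return sum(sc_multiply(x, w, n) for x, w in zip(xs, ws))

    def conv2d_as_sc_vdp(x, w, n=8192):
        # im2col-style lowering: each output pixel of a 2-D convolution is
        # one VDP between a flattened input patch and the flattened kernel.
        H, W = x.shape
        K, _ = w.shape
        w_vec = w.reshape(-1)
        out = np.empty((H - K + 1, W - K + 1))
        for i in range(H - K + 1):
            for j in range(W - K + 1):
                patch = x[i:i + K, j:j + K].reshape(-1)
                out[i, j] = sc_vdp(patch, w_vec, n)
        return out

    # Inputs scaled into [0, 1], as unipolar stochastic encoding requires.
    x = rng.random((5, 5))
    w = rng.random((3, 3))
    print(sc_multiply(0.5, 0.75))      # ~0.375; error shrinks as n grows
    print(conv2d_as_sc_vdp(x, w))      # approximates the exact 3x3 convolution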
Related papers
- Hardware-Software Co-optimised Fast and Accurate Deep Reconfigurable Spiking Inference Accelerator Architecture Design Methodology [2.968768532937366]
Spiking Neural Networks (SNNs) have emerged as a promising approach to improve the energy efficiency of machine learning models.
We develop a hardware-software co-optimisation strategy to port software-trained deep neural networks (DNNs) to reduced-precision spiking models.
arXiv Detail & Related papers (2024-10-07T05:04:13Z)
- CNN Mixture-of-Depths [4.150676163661315]
We introduce Mixture-of-Depths (MoD) for Convolutional Neural Networks (CNNs).
arXiv Detail & Related papers (2024-09-25T15:19:04Z)
- An Optical XNOR-Bitcount Based Accelerator for Efficient Inference of Binary Neural Networks [0.0]
We invent a single-MRR-based optical XNOR gate (OXG).
We present a novel design of a bitcount circuit, which we refer to as a Photo-Charge Accumulator (PCA).
Our evaluation for the inference of four modern BNNs indicates that OXBNN provides improvements of up to 62x and 7.6x in frames-per-second (FPS) and FPS/W (energy efficiency), respectively.
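As background for this entry: in a binary neural network (BNN) with weights and activations in {-1, +1} stored as single bits, a vector dot product reduces to an XNOR followed by a bitcount, which is the identity the OXG and PCA realize optically. Below is a minimal software sketch of that identity (dot = 2 * matches - n); the function and variable names are our own illustration, not the paper's hardware.

    import numpy as np

    def bnn_dot(a_bits, w_bits):
        # Binary dot product with values in {-1, +1} encoded as bits
        # (bit 1 -> +1, bit 0 -> -1). XNOR marks positions with matching
        # signs; the bitcount of matches gives dot = 2 * matches - n.
        n = len(a_bits)
        matches = int(np.sum(~(a_bits ^ w_bits) & 1))  # XNOR, then bitcount
        return 2 * matches - n

    a = np.array([1, 0, 1, 1], dtype=np.uint8)  # encodes +1, -1, +1, +1
    w = np.array([1, 1, 0, 1], dtype=np.uint8)  # encodes +1, +1, -1, +1
    print(bnn_dot(a, w))  # (+1) + (-1) + (-1) + (+1) = 0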
arXiv Detail & Related papers (2023-02-03T20:56:01Z)
- Attention-based Feature Compression for CNN Inference Offloading in Edge Computing [93.67044879636093]
This paper studies the computational offloading of CNN inference in device-edge co-inference systems.
We propose a novel autoencoder-based CNN architecture (AECNN) for effective feature extraction at the end device.
Experiments show that AECNN can compress the intermediate data by more than 256x with only about 4% accuracy loss.
arXiv Detail & Related papers (2022-11-24T18:10:01Z)
- Design of High-Throughput Mixed-Precision CNN Accelerators on FPGA [0.0]
Layer-wise mixed-precision quantization allows for more efficient results while inflating the design space.
We present an in-depth quantitative methodology to efficiently explore the design space considering the limited hardware resources of a given FPGA.
Our resulting hardware accelerators implement truly mixed-precision operations that enable efficient execution of layer-wise and channel-wise quantized CNNs.
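For context on what layer-wise quantization involves, here is a minimal sketch of symmetric uniform quantization with a per-layer bit-width; it illustrates the general technique only, and the layer names, shapes, and bit-width assignments are hypothetical rather than taken from the paper.

    import numpy as np

    def quantize_layer(w, bits):
        # Symmetric uniform quantization: map float weights to signed
        # integers of the given bit-width, one scale factor per layer.
        qmax = 2 ** (bits - 1) - 1
        scale = np.max(np.abs(w)) / qmax
        q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
        return q, scale  # dequantize with q * scale

    rng = np.random.default_rng(0)
    layers = {"conv1": rng.normal(size=(16, 9)), "conv2": rng.normal(size=(32, 144))}
    bit_widths = {"conv1": 8, "conv2": 4}  # layer-wise mixed precision
    for name, w in layers.items():
        q, scale = quantize_layer(w, bit_widths[name])
        err = float(np.mean(np.abs(w - q * scale)))
        print(name, bit_widths[name], "bits, mean abs error:", round(err, 4))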
arXiv Detail & Related papers (2022-08-09T15:32:51Z)
- Photonic Reconfigurable Accelerators for Efficient Inference of CNNs with Mixed-Sized Tensors [0.22843885788439797]
Photonic Microring Resonator (MRR) based hardware accelerators have been shown to provide disruptive speedup and energy-efficiency improvements.
Previous MRR-based CNN accelerators fail to provide efficient adaptability for CNNs with mixed-sized tensors.
We present a novel way of introducing reconfigurability in the MRR-based CNN accelerators.
arXiv Detail & Related papers (2022-07-12T03:18:00Z)
- Single-Shot Optical Neural Network [55.41644538483948]
'Weight-stationary' analog optical and electronic hardware has been proposed to reduce the compute resources required by deep neural networks.
We present a scalable, single-shot-per-layer weight-stationary optical processor.
arXiv Detail & Related papers (2022-05-18T17:49:49Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) Soft Actor-Critic for discrete (SAC-d) method, which generates the exit point and compressing bits by soft policy iterations.
Based on the latency- and accuracy-aware reward design, such a computation can adapt well to complex environments like dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Transformers [105.74546828182834]
We show a hardware-efficient dynamic inference regime, named dynamic weight slicing, which adaptively slices a part of the network parameters for inputs with diverse difficulty levels.
We present dynamic slimmable network (DS-Net) and dynamic slice-able network (DS-Net++) by input-dependently adjusting filter numbers of CNNs and multiple dimensions in both CNNs and transformers.
arXiv Detail & Related papers (2021-09-21T09:57:21Z)
- OMPQ: Orthogonal Mixed Precision Quantization [64.59700856607017]
Mixed precision quantization takes advantage of hardware's multiple bit-width arithmetic operations to unleash the full potential of network quantization.
We propose to optimize a proxy metric, the concept of network orthogonality, which is highly correlated with the loss of the integer programming.
This approach reduces the search time and required data amount by orders of magnitude, with little compromise on quantization accuracy.
arXiv Detail & Related papers (2021-09-16T10:59:33Z)
- FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often have a large number of parameters and require heavy computation costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
arXiv Detail & Related papers (2021-03-08T03:09:37Z)