ROBIN: A Robust Optical Binary Neural Network Accelerator
- URL: http://arxiv.org/abs/2107.05530v1
- Date: Mon, 12 Jul 2021 16:00:32 GMT
- Title: ROBIN: A Robust Optical Binary Neural Network Accelerator
- Authors: Febin P. Sunny, Asif Mirza, Mahdi Nikdast, Sudeep Pasricha
- Abstract summary: Domain specific neural network accelerators have garnered attention because of their improved energy efficiency and inference performance.
We present a novel optical-domain BNN accelerator, named ROBIN, which intelligently integrates heterogeneous microring resonator optical devices.
Our analysis shows that ROBIN can outperform the best-known optical BNN accelerators and also many electronic accelerators.
- Score: 3.8137985834223507
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Domain specific neural network accelerators have garnered attention because
of their improved energy efficiency and inference performance compared to CPUs
and GPUs. Such accelerators are thus well suited for resource-constrained
embedded systems. However, mapping sophisticated neural network models on these
accelerators still entails significant energy and memory consumption, along
with high inference time overhead. Binarized neural networks (BNNs), which
utilize single-bit weights, represent an efficient way to implement and deploy
neural network models on accelerators. In this paper, we present a novel
optical-domain BNN accelerator, named ROBIN, which intelligently integrates
heterogeneous microring resonator optical devices with complementary
capabilities to efficiently implement the key functionalities in BNNs. We
perform detailed fabrication-process variation analyses at the optical device
level, explore efficient corrective tuning for these devices, and integrate
circuit-level optimization to counter thermal variations. As a result, our
proposed ROBIN architecture possesses the desirable traits of being robust,
energy-efficient, low-latency, and high-throughput when executing BNN models.
Our analysis shows that ROBIN can outperform the best-known optical BNN
accelerators and also many electronic accelerators. Specifically, our
energy-efficient ROBIN design exhibits energy-per-bit values that are ~4x lower
than electronic BNN accelerators and ~933x lower than a recently proposed
photonic BNN accelerator, while a performance-efficient ROBIN design shows ~3x
and ~25x better performance than electronic and photonic BNN accelerators,
respectively.
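Although ROBIN's implementation is optical, the arithmetic that BNN accelerators map to hardware is simple to state: with weights and activations constrained to {-1, +1}, a dot product reduces to a bitwise XNOR followed by a popcount. The sketch below is illustrative only; the function names are our own and are not taken from the paper.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} via the sign function (0 maps to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bnn_dot_xnor_popcount(w_bits, a_bits):
    """Dot product of two {-1,+1} vectors stored as bits {0,1}.

    Encoding -1 -> 0 and +1 -> 1, each product term is +1 exactly when
    the bits agree, i.e. XNOR(w, a) == 1. With p = popcount(XNOR) and
    vector length n, the dot product is p - (n - p) = 2p - n.
    """
    n = len(w_bits)
    xnor = ~(w_bits ^ a_bits) & 1   # 1 where the bits agree
    return 2 * int(xnor.sum()) - n

# Sanity check against the plain +/-1 dot product:
rng = np.random.default_rng(0)
w = binarize(rng.standard_normal(16))
a = binarize(rng.standard_normal(16))
w_bits = ((w + 1) // 2).astype(np.uint8)  # {-1,+1} -> {0,1}
a_bits = ((a + 1) // 2).astype(np.uint8)
assert bnn_dot_xnor_popcount(w_bits, a_bits) == int(np.dot(w, a))
```

This XNOR-plus-popcount reduction is why single-bit weights remove the multipliers entirely; an optical design like ROBIN realizes the same two steps with microring resonator devices rather than logic gates.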
Related papers
- Spiker+: a framework for the generation of efficient Spiking Neural Networks FPGA accelerators for inference at the edge [49.42371633618761]
Spiker+ is a framework for generating efficient, low-power, and low-area customized Spiking Neural Network (SNN) accelerators on FPGAs for inference at the edge.
Spiker+ is tested on two benchmark datasets: MNIST and the Spiking Heidelberg Digits (SHD).
arXiv Detail & Related papers (2024-01-02T10:42:42Z)
- EPIM: Efficient Processing-In-Memory Accelerators based on Epitome [78.79382890789607]
We introduce the Epitome, a lightweight neural operator offering convolution-like functionality.
On the software side, we evaluate epitomes' latency and energy on PIM accelerators.
We introduce a PIM-aware layer-wise design method to enhance their hardware efficiency.
arXiv Detail & Related papers (2023-11-12T17:56:39Z)
- FireFly v2: Advancing Hardware Support for High-Performance Spiking Neural Network with a Spatiotemporal FPGA Accelerator [8.0611988136866]
Spiking Neural Networks (SNNs) are a promising alternative to Artificial Neural Networks (ANNs).
Specialized SNN hardware offers clear advantages over general-purpose devices in terms of power and performance.
FireFly v2, an FPGA SNN accelerator, addresses the issue of non-spike operations in current state-of-the-art SNN algorithms.
arXiv Detail & Related papers (2023-09-28T04:17:02Z)
- SupeRBNN: Randomized Binary Neural Network Using Adiabatic Superconductor Josephson Devices [44.440915387556544]
Adiabatic quantum-flux-parametron (AQFP) devices serve as excellent carriers for binary neural network (BNN) computations.
We propose SupeRBNN, an AQFP-based randomized BNN acceleration framework.
We show that our design achieves an energy efficiency approximately 7.8x10^4 times higher than that of a ReRAM-based BNN framework.
arXiv Detail & Related papers (2023-09-21T16:14:42Z)
- An Optical XNOR-Bitcount Based Accelerator for Efficient Inference of Binary Neural Networks [0.0]
We invent a single-MRR-based optical XNOR gate (OXG).
We present a novel bitcount circuit design, which we refer to as the Photo-Charge Accumulator (PCA).
Our evaluation on the inference of four modern BNNs indicates that OXBNN provides improvements of up to 62x in frames-per-second (FPS) and up to 7.6x in FPS/W (energy efficiency).
arXiv Detail & Related papers (2023-02-03T20:56:01Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents a hardware accelerator for an SNN with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on FPGAs validate the great potential of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z)
- O-HAS: Optical Hardware Accelerator Search for Boosting Both Acceleration Performance and Development Speed [13.41883640945134]
O-HAS consists of two integrated enablers: (1) an O-Cost Predictor, which can accurately yet efficiently predict an optical accelerator's energy and latency based on the DNN model parameters and the optical accelerator design; and (2) an O-Search Engine, which can automatically explore the large design space of optical DNN accelerators.
Experiments and ablation studies consistently validate the effectiveness of both the O-Cost Predictor and the O-Search Engine.
arXiv Detail & Related papers (2021-08-17T09:50:14Z)
- High-Performance FPGA-based Accelerator for Bayesian Neural Networks [5.86877988129171]
This work proposes a novel FPGA-based hardware architecture to accelerate Bayesian neural networks (BNNs) inferred through Monte Carlo Dropout.
Compared with other state-of-the-art Bayesian neural network accelerators, the proposed accelerator achieves up to 4x higher energy efficiency and 9x better compute efficiency.
arXiv Detail & Related papers (2021-05-12T06:20:44Z)
- Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators [105.60654479548356]
We show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) significantly improves robustness against random or adversarial bit errors in quantized DNN weights.
This leads to high energy savings for low-voltage operation as well as low-precision quantization, and also improves the security of DNN accelerators.
arXiv Detail & Related papers (2021-04-16T19:11:14Z)
- Bit Error Robustness for Energy-Efficient DNN Accelerators [93.58572811484022]
We show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) improves robustness against random bit errors.
This leads to high energy savings from both low-voltage operation and low-precision quantization.
arXiv Detail & Related papers (2020-06-24T18:23:10Z)
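The last two entries both revolve around random bit error training (RandBET) on fixed-point-quantized weights. To make the fault model concrete, here is a minimal sketch of injecting independent random bit flips into quantized int8 weights; the function names and parameters are our own illustration, not taken from the papers.

```python
import numpy as np

def quantize_fixed_point(w, bits=8, w_max=1.0):
    """Symmetric fixed-point quantization of real weights to signed ints
    (a generic scheme, assumed here for illustration)."""
    q_max = 2 ** (bits - 1) - 1
    scale = q_max / w_max
    q = np.clip(np.round(w * scale), -q_max, q_max)
    return q.astype(np.int8), scale

def inject_random_bit_errors(q, p, rng=None):
    """Flip each stored bit of int8 weights independently with
    probability p, modeling low-voltage memory faults."""
    rng = rng or np.random.default_rng()
    u = q.view(np.uint8)             # reinterpret two's-complement bytes
    flips = np.zeros_like(u)
    for b in range(8):
        flips |= (rng.random(u.shape) < p).astype(np.uint8) << b
    return (u ^ flips).view(np.int8)

# A RandBET-style training loop would quantize the weights, inject such
# errors in the forward pass, and backpropagate (e.g. with a
# straight-through estimator) so the network learns to tolerate flips.
```

With p = 0 the weights are returned unchanged, and with p = 1 every bit is inverted, so the injection is easy to sanity-check before plugging it into a training loop.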
This list is automatically generated from the titles and abstracts of the papers in this site.