High-Performance FPGA-based Accelerator for Bayesian Recurrent Neural
Networks
- URL: http://arxiv.org/abs/2106.06048v1
- Date: Fri, 4 Jun 2021 14:30:39 GMT
- Title: High-Performance FPGA-based Accelerator for Bayesian Recurrent Neural
Networks
- Authors: Martin Ferianc, Zhiqiang Que, Hongxiang Fan, Wayne Luk and Miguel
Rodrigues
- Abstract summary: We propose an FPGA-based hardware design to accelerate Bayesian LSTM-based RNNs.
Compared with GPU implementation, our FPGA-based design can achieve up to 10 times speedup with nearly 106 times higher energy efficiency.
- Score: 2.0631735969348064
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks have demonstrated strong performance in a wide range of
tasks. Especially in time-series analysis, recurrent architectures based on
long short-term memory (LSTM) cells have shown an excellent capability to
model time dependencies in real-world data. However, standard recurrent
architectures cannot estimate their uncertainty, which is essential for
safety-critical applications such as medicine. In contrast, Bayesian
recurrent neural networks (RNNs) are able to provide uncertainty estimation
with improved accuracy. Nonetheless, Bayesian RNNs are computationally and
memory demanding, which limits their practicality despite their advantages. To
address this issue, we propose an FPGA-based hardware design to accelerate
Bayesian LSTM-based RNNs. To further improve the overall algorithmic-hardware
performance, a co-design framework is proposed to explore the optimal
algorithmic-hardware configurations for Bayesian RNNs. We conduct extensive
experiments on health-related tasks to demonstrate the improvement of our
design and the effectiveness of our framework. Compared with GPU
implementation, our FPGA-based design can achieve up to 10 times speedup with
nearly 106 times higher energy efficiency. To the best of our knowledge, this
is the first work targeting the acceleration of Bayesian RNNs on FPGAs.
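A note on where the cost comes from: a Bayesian RNN is evaluated several times per input, with each pass drawing a different realisation of the weights (for example via Monte Carlo dropout), and the spread of the resulting predictions provides the uncertainty estimate. The sketch below only illustrates that sampling loop, not the accelerator or co-design framework from the paper; the model class, layer sizes, sample count, and helper names (`MCDropoutLSTM`, `mc_predict`) are hypothetical and assume PyTorch.

```python
# Illustrative sketch (not the paper's design): Monte Carlo sampling over a
# dropout-based LSTM to obtain a predictive mean and an uncertainty estimate.
import torch
import torch.nn as nn

class MCDropoutLSTM(nn.Module):
    def __init__(self, in_dim=8, hidden_dim=64, out_dim=1, p=0.25):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.dropout = nn.Dropout(p)   # stays stochastic at inference time
        self.head = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):
        h, _ = self.lstm(x)            # h: (batch, time, hidden)
        h = self.dropout(h[:, -1, :])  # dropout mask on the last time step
        return self.head(h)

@torch.no_grad()
def mc_predict(model, x, num_samples=30):
    """Run num_samples stochastic forward passes and summarise them."""
    model.train()  # keep dropout active so each pass samples a new mask
    samples = torch.stack([model(x) for _ in range(num_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # prediction, uncertainty

x = torch.randn(16, 50, 8)  # 16 sequences, 50 time steps, 8 features each
mean, std = mc_predict(MCDropoutLSTM(), x)
```

Each input therefore costs roughly `num_samples` standard LSTM passes, and those passes are independent of one another, which is the kind of repeated, regular workload an FPGA pipeline is well placed to exploit.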
Related papers
- Enhancing Dropout-based Bayesian Neural Networks with Multi-Exit on FPGA [20.629635991749808]
This paper proposes an algorithm and hardware co-design framework that can generate field-programmable gate array (FPGA)-based accelerators for efficient BayesNNs.
At the algorithm level, we propose novel multi-exit dropout-based BayesNNs with reduced computational and memory overheads.
At the hardware level, this paper introduces a transformation framework that can generate FPGA-based accelerators for the proposed efficient BayesNNs.
arXiv Detail & Related papers (2024-06-20T17:08:42Z) - SupeRBNN: Randomized Binary Neural Network Using Adiabatic
Superconductor Josephson Devices [44.440915387556544]
Adiabatic quantum-flux-parametron (AQFP) devices serve as excellent carriers for binary neural network (BNN) computations.
We propose SupeRBNN, an AQFP-based randomized BNN acceleration framework.
We show that our design achieves an energy efficiency approximately 7.8x10^4 times higher than that of the ReRAM-based BNN framework.
arXiv Detail & Related papers (2023-09-21T16:14:42Z) - When Monte-Carlo Dropout Meets Multi-Exit: Optimizing Bayesian Neural
Networks on FPGA [11.648544516949533]
We propose a novel multi-exit Monte-Carlo Dropout (MCD)-based BayesNN that achieves well-calibrated predictions with low algorithmic complexity.
Our experiments demonstrate that our auto-generated accelerator achieves higher energy efficiency than CPU, GPU, and other state-of-the-art hardware implementations.
arXiv Detail & Related papers (2023-08-13T21:42:31Z) - Exploiting FPGA Capabilities for Accelerated Biomedical Computing [0.0]
This study presents advanced neural network architectures for enhanced ECG signal analysis using Field Programmable Gate Arrays (FPGAs).
We utilize the MIT-BIH Arrhythmia Database for training and validation, introducing Gaussian noise to improve robustness.
The study ultimately offers a guide for optimizing neural network performance on FPGAs for various applications.
arXiv Detail & Related papers (2023-07-16T01:20:17Z) - Reconfigurable Distributed FPGA Cluster Design for Deep Learning
Accelerators [59.11160990637615]
We propose a distributed system based on low-power embedded FPGAs designed for edge computing applications.
The proposed system can simultaneously execute diverse Neural Network (NN) models, arrange the graph in a pipeline structure, and manually allocate greater resources to the most computationally intensive layers of the NN graph.
arXiv Detail & Related papers (2023-05-24T16:08:55Z) - End-to-end codesign of Hessian-aware quantized neural networks for FPGAs
and ASICs [49.358119307844035]
We develop an end-to-end workflow for the training and implementation of co-designed neural networks (NNs).
This makes efficient NN implementations in hardware accessible to non-experts in a single open-sourced workflow.
We demonstrate the workflow in a particle physics application involving trigger decisions that must operate at the 40 MHz collision rate of the Large Hadron Collider (LHC).
We implement an optimized mixed-precision NN for high-momentum particle jets in simulated LHC proton-proton collisions.
arXiv Detail & Related papers (2023-04-13T18:00:01Z) - LL-GNN: Low Latency Graph Neural Networks on FPGAs for High Energy
Physics [45.666822327616046]
This work presents a novel reconfigurable architecture for Low Latency Graph Neural Network (LL-GNN) designs for particle detectors.
The LL-GNN design advances the next generation of trigger systems by enabling sophisticated algorithms to process experimental data efficiently.
arXiv Detail & Related papers (2022-09-28T12:55:35Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around 40% of the available hardware resources.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its software, full-precision counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z) - High-Performance FPGA-based Accelerator for Bayesian Neural Networks [5.86877988129171]
This work proposes a novel FPGA-based hardware architecture to accelerate BNNs inferred through Monte Carlo Dropout.
Compared with other state-of-the-art BNN accelerators, the proposed accelerator can achieve up to 4 times higher energy efficiency and 9 times better compute efficiency.
arXiv Detail & Related papers (2021-05-12T06:20:44Z)