FPGA deep learning acceleration based on convolutional neural network
- URL: http://arxiv.org/abs/2012.03672v1
- Date: Tue, 17 Nov 2020 16:20:44 GMT
- Title: FPGA deep learning acceleration based on convolutional neural network
- Authors: Xiong Jun
- Abstract summary: This paper proposes a convolutional neural network (CNN) hardware accelerator based on a field-programmable gate array (FPGA).
The proposed accelerator achieves an energy efficiency of 32.73 GOPS/W, 34% higher than the existing solution, and a performance of 317.86 GOPS.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To address the large amount of computation and long computation time of
convolutional neural networks (CNNs), this paper proposes a CNN hardware
accelerator based on a field-programmable gate array (FPGA). First, through
in-depth analysis of the forward operation of the convolutional layer and
exploration of the parallelism available in that operation, a hardware
architecture is designed that combines input-channel parallelism,
output-channel parallelism, and a deep pipeline over convolution windows. Then,
within this architecture, a fully parallel multiply-add tree module is designed
to accelerate the convolution operation, along with an efficient window buffer
module that implements the pipelined processing of convolution windows. The
final experimental results show that the proposed accelerator achieves an
energy efficiency of 32.73 GOPS/W, 34% higher than the existing solution, and
a performance of 317.86 GOPS.
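To make the dataflow concrete, the following is a minimal software model of this architecture: a streaming window buffer emits one convolution window per step, and a fully parallel multiply-add tree reduces the products pairwise, as a hardware adder tree would. The 3x3 kernel, array shapes, and single-channel view are illustrative assumptions; the paper's actual RTL design is not reproduced here.

```python
# Minimal software model of the described architecture: a streaming window
# buffer feeds 3x3 convolution windows to a fully parallel multiply-add tree.
import numpy as np

def adder_tree_sum(products):
    """Reduce a flat vector of products pairwise, as a hardware adder tree would."""
    values = list(products)
    while len(values) > 1:
        if len(values) % 2:           # odd count: pad with zero for pairing
            values.append(0.0)
        values = [values[i] + values[i + 1] for i in range(0, len(values), 2)]
    return values[0]

def window_buffer(feature_map, k=3):
    """Yield k x k windows in raster order, mimicking a line-buffer pipeline."""
    h, w = feature_map.shape
    for r in range(h - k + 1):
        for c in range(w - k + 1):
            yield r, c, feature_map[r:r + k, c:c + k]

def conv2d_streamed(feature_map, kernel):
    k = kernel.shape[0]
    h, w = feature_map.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for r, c, win in window_buffer(feature_map, k):
        # One window per step: k*k multipliers in parallel, then the adder tree.
        out[r, c] = adder_tree_sum((win * kernel).ravel())
    return out

x = np.random.rand(8, 8)
g = np.random.rand(3, 3)
reference = np.array([[np.sum(x[i:i + 3, j:j + 3] * g) for j in range(6)]
                      for i in range(6)])
assert np.allclose(conv2d_streamed(x, g), reference)
```

The pairwise reduction is what an adder tree buys in hardware: nine products are summed in four adder levels instead of eight serial additions, which is what allows the one-window-per-cycle pipelining described in the abstract.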
Related papers
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
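As a rough illustration of the "TC" stream's first step in the entry above, the sketch below applies a Continuous Wavelet Transform to turn a 1D behavioral signal into a 2D (scale x time) tensor that a convolutional stream can consume. The Morlet wavelet, scale grid, and chirp signal are assumptions for illustration, not TCCT-Net's exact preprocessing.

```python
# Turn a 1D signal into a 2D (scale x time) tensor via a simple Morlet CWT.
import numpy as np

def morlet(t, w0=5.0):
    """Real part of a (non-normalized) Morlet wavelet."""
    return np.cos(w0 * t) * np.exp(-t ** 2 / 2.0)

def cwt_2d(signal, scales):
    """Correlate the signal with dilated wavelets; rows = scales, cols = time."""
    coeffs = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1) / s
        kernel = morlet(t) / np.sqrt(s)
        coeffs[i] = np.convolve(signal, kernel, mode="same")
    return coeffs  # 2D tensor, ready for a convolutional stream

signal = np.sin(2 * np.pi * np.linspace(0, 4, 256) ** 2)  # a chirp
tensor = cwt_2d(signal, scales=np.arange(1, 17))
print(tensor.shape)  # (16, 256)
```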
- An FPGA-Based Accelerator Enabling Efficient Support for CNNs with Arbitrary Kernel Sizes
Convolutional neural networks (CNNs) with large kernels have demonstrated impressive performance in various vision-based applications.
An FPGA-based inference accelerator is proposed for the efficient deployment of CNNs with arbitrary kernel sizes.
The proposed hardware accelerator, evaluated on Intel Arria 10 FPGA, achieves up to 3.91 times better DSP efficiency than prior art on the same network.
arXiv Detail & Related papers (2024-02-22T05:52:55Z)
- DeepPCR: Parallelizing Sequential Operations in Neural Networks
We introduce DeepPCR, a novel algorithm which parallelizes typically sequential operations in order to speed up inference and training of neural networks.
DeepPCR is based on interpreting a sequence of $L$ steps as the solution of a specific system of equations, which we recover using the Parallel Cyclic Reduction algorithm.
To verify the theoretical lower complexity of the algorithm, and to identify regimes for speedup, we test the effectiveness of DeepPCR in parallelizing the forward and backward pass in multi-layer perceptrons.
arXiv Detail & Related papers (2023-09-28T10:15:30Z)
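A toy sketch of DeepPCR's core idea, under the simplifying assumption that each step is the scalar affine recurrence x_k = a_k * x_{k-1} + b_k: written as a bidiagonal linear system, the L steps can be solved by a cyclic-reduction-style doubling scheme in O(log L) parallel sweeps instead of L serial ones. The paper's actual systems arise from network forward and backward passes and are not reproduced here.

```python
# Solve x_k = a_k * x_{k-1} + b_k for all k in O(log L) vectorized sweeps.
import numpy as np

def solve_sequential(a, b, x0=0.0):
    x, out = x0, []
    for ak, bk in zip(a, b):
        x = ak * x + bk
        out.append(x)
    return np.array(out)

def solve_doubling(a, b, x0=0.0):
    a, b = a.copy(), b.copy()
    b[0] += a[0] * x0   # fold the initial condition into the first equation
    a[0] = 0.0
    step, n = 1, len(a)
    while step < n:
        # Substitute the relation for x_{k-step} into the relation for x_k,
        # doubling the distance each relation spans (recursive doubling).
        b[step:] = b[step:] + a[step:] * b[:-step]
        a[step:] = a[step:] * a[:-step]
        step *= 2
    return b  # all coefficients a_k have collapsed to 0; b_k == x_k

a = np.random.rand(16)
b = np.random.rand(16)
assert np.allclose(solve_sequential(a, b, 1.0), solve_doubling(a, b, 1.0))
```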
- Reconfigurable Distributed FPGA Cluster Design for Deep Learning Accelerators
We propose a distributed system based on low-power embedded FPGAs designed for edge computing applications.
The proposed system can simultaneously execute diverse Neural Network (NN) models, arrange the graph in a pipeline structure, and manually allocate greater resources to the most computationally intensive layers of the NN graph.
arXiv Detail & Related papers (2023-05-24T16:08:55Z)
- Lightweight and Progressively-Scalable Networks for Semantic Segmentation
Multi-scale learning frameworks have been regarded as a capable class of models to boost semantic segmentation.
In this paper, we thoroughly analyze the design of convolutional blocks and the ways of interactions across multiple scales.
We devise Lightweight and Progressively-Scalable Networks (LPS-Net), which expand network complexity in a greedy manner.
arXiv Detail & Related papers (2022-07-27T16:00:28Z)
- Receptive Field-based Segmentation for Distributed CNN Inference Acceleration in Collaborative Edge Computing
We study inference acceleration using distributed convolutional neural networks (CNNs) in a collaborative edge computing network.
We propose a novel collaborative edge computing scheme that uses fused-layer parallelization to partition a CNN model into multiple blocks of convolutional layers.
arXiv Detail & Related papers (2022-07-22T18:38:11Z)
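Fused-layer partitioning of the kind described in the entry above hinges on knowing which input region each output tile depends on. Below is a small helper that back-projects an output tile through a stack of (kernel, stride, padding) convolutional layers to its input receptive field; the three-layer stack in the example is an illustrative assumption, not taken from the paper.

```python
# Back-project an output index range through a conv stack to the input range
# it depends on (its receptive field).

def input_region(out_lo, out_hi, layers):
    """layers: list of (kernel, stride, padding), ordered from input to output."""
    lo, hi = out_lo, out_hi
    for kernel, stride, padding in reversed(layers):
        lo = lo * stride - padding
        hi = hi * stride - padding + kernel - 1
    return lo, hi  # may extend past the borders, where padding supplies zeros

# Input pixels needed by output columns 0..7 after three 3x3, stride-1 layers:
stack = [(3, 1, 1), (3, 1, 1), (3, 1, 1)]
print(input_region(0, 7, stack))  # (-3, 10): an 8-wide tile needs 14 inputs
```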
- BaPipe: Exploration of Balanced Pipeline Parallelism for DNN Training
BaPipe is a pipeline parallelism training framework for distributed deep learning.
It automatically explores pipeline parallelism training methods and balanced partition strategies for distributed training.
BaPipe provides up to 3.2x speedup and 4x memory reduction on various platforms.
arXiv Detail & Related papers (2020-12-23T08:57:39Z)
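One sub-problem behind balanced partition strategies like BaPipe's can be sketched as follows: split a sequence of per-layer costs into contiguous stages so that the slowest stage, which sets the pipeline's throughput, is as fast as possible. The dynamic program below is a minimal illustration with made-up layer costs; BaPipe's actual search space is richer.

```python
# Split per-layer costs into S contiguous stages, minimizing the bottleneck.
from functools import lru_cache

def balanced_partition(costs, stages):
    @lru_cache(maxsize=None)
    def best(i, s):
        """Minimal bottleneck for costs[i:] split into s contiguous stages."""
        if s == 1:
            return sum(costs[i:])
        # Try every cut point j, keeping enough layers for the remaining stages.
        return min(max(sum(costs[i:j]), best(j, s - 1))
                   for j in range(i + 1, len(costs) - s + 2))
    return best(0, stages)

layer_costs = (4, 2, 7, 1, 3, 5, 2, 6)     # e.g., measured per-layer latencies
print(balanced_partition(layer_costs, 4))  # -> 8, e.g. [4,2][7,1][3,5][2,6]
```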
- NOMA in UAV-aided cellular offloading: A machine learning approach
A novel framework is proposed for cellular offloading with the aid of multiple unmanned aerial vehicles (UAVs).
The non-orthogonal multiple access (NOMA) technique is employed at each UAV to further improve the spectrum efficiency of the wireless network.
A mutual deep Q-network (MDQN) algorithm is proposed to jointly determine the optimal 3D trajectory and power allocation of UAVs.
arXiv Detail & Related papers (2020-10-18T17:38:48Z)
- Accelerating Deep Neuroevolution on Distributed FPGAs for Reinforcement Learning Problems
We report record training times (running at about 1 million frames per second) for Atari 2600 games using deep neuroevolution implemented on distributed FPGAs.
These results are the first application demonstration on the IBM Neural Computer.
arXiv Detail & Related papers (2020-05-10T00:41:39Z)
- Minimal Filtering Algorithms for Convolutional Neural Networks
We develop fully parallel hardware-oriented algorithms for implementing the basic filtering operation for M=3,5,7,9, and 11.
A fully parallel hardware implementation of the proposed algorithms in each case gives approximately 30 percent savings in the number of embedded multipliers.
arXiv Detail & Related papers (2020-04-12T13:18:25Z)
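For flavor, the classic example of minimal filtering is Winograd's F(2,3), which computes two outputs of a 3-tap filter with 4 multiplications instead of 6. The paper above derives its own fully parallel algorithms for M = 3, 5, 7, 9, and 11; the sketch below shows only the textbook case, not the paper's construction.

```python
# Winograd F(2,3): two outputs of a 3-tap FIR filter with 4 multiplications.

def f23(d, g):
    """Filter g (3 taps) over input d (4 samples), using 4 multiplies."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return m1 + m2 + m3, m2 - m3 - m4

d = [1.0, 2.0, 3.0, 4.0]
g = [0.5, -1.0, 2.0]
direct = (d[0] * g[0] + d[1] * g[1] + d[2] * g[2],   # 6 multiplications
          d[1] * g[0] + d[2] * g[1] + d[3] * g[2])
assert f23(d, g) == direct
```

Savings of this kind in multiplication count translate directly into fewer embedded DSP multipliers on an FPGA, which is the resource the paper's roughly 30 percent figure refers to.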
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.