AutoDNNchip: An Automated DNN Chip Predictor and Builder for Both FPGAs and ASICs
- URL: http://arxiv.org/abs/2001.03535v4
- Date: Wed, 10 Jun 2020 23:50:57 GMT
- Title: AutoDNNchip: An Automated DNN Chip Predictor and Builder for Both FPGAs and ASICs
- Authors: Pengfei Xu, Xiaofan Zhang, Cong Hao, Yang Zhao, Yongan Zhang, Yue Wang, Chaojian Li, Zetong Guan, Deming Chen, Yingyan Lin
- Abstract summary: AutoDNNchip is a chip generator that can automatically
generate both FPGA- and ASIC-based DNN chip implementations for a designated
application and dataset. Our Chip Predictor's predicted performance differs
from real-measured results by < 10% when validated. Accelerators generated by
AutoDNNchip achieve better performance (up to a 3.86X improvement) than
expert-crafted state-of-the-art accelerators.
- Score: 36.490296335959485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent breakthroughs in Deep Neural Networks (DNNs) have fueled a
growing demand for DNN chips. However, designing DNN chips is non-trivial
because: (1) mainstream DNNs have millions of parameters and operations; (2)
the design space is large due to the numerous design choices of dataflows,
processing elements, memory hierarchy, etc.; and (3) an algorithm/hardware
co-design is needed to allow the same DNN functionality to have a different
decomposition, which would require different hardware IPs to meet the
application specifications. Therefore, DNN chips take a long time to design and
require cross-disciplinary experts. To enable fast and effective DNN chip
design, we propose AutoDNNchip, a DNN chip generator that can automatically
generate both FPGA- and ASIC-based DNN chip implementations, given DNNs from
machine learning frameworks (e.g., PyTorch), for a designated application and
dataset.
Specifically, AutoDNNchip consists of two integrated enablers: (1) a Chip
Predictor, built on top of a graph-based accelerator representation, which can
accurately and efficiently predict a DNN accelerator's energy, throughput, and
area based on the DNN model parameters, hardware configuration,
technology-based IPs, and platform constraints; and (2) a Chip Builder, which
can automatically explore the design space of DNN chips (including IP
selection, block configuration, resource balancing, etc.), optimize chip design
via the Chip Predictor, and then generate optimized synthesizable RTL to
achieve the target design metrics. Experimental results show that our Chip
Predictor's predictions differ from real-measured results by < 10% when
validated on 15 DNN models and 4 platforms (edge FPGA/TPU/GPU and ASIC).
Furthermore, accelerators generated by AutoDNNchip achieve better performance
(up to a 3.86X improvement) than expert-crafted state-of-the-art accelerators.
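
To make the two-enabler flow concrete, below is a minimal, illustrative Python
sketch of predictor-guided design-space exploration. It is not AutoDNNchip's
actual API: the HwConfig fields, the toy cost formulas in predict_metrics, and
the candidate grid are all hypothetical stand-ins for the Chip Predictor's
analytical models and the Chip Builder's search.

# Minimal sketch of predictor-in-the-loop design-space exploration, in the
# spirit of AutoDNNchip's Chip Builder + Chip Predictor. All names and cost
# numbers are hypothetical, not AutoDNNchip's API.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class HwConfig:
    pe_rows: int          # processing-element array rows
    pe_cols: int          # processing-element array columns
    sram_kb: int          # on-chip buffer size

def predict_metrics(cfg: HwConfig, macs: float) -> dict:
    """Toy analytical stand-in for a Chip Predictor: estimates latency,
    energy, and area from the hardware configuration and workload size."""
    peak_macs_per_cycle = cfg.pe_rows * cfg.pe_cols
    latency_cycles = macs / peak_macs_per_cycle         # compute-bound toy model
    energy = macs * 1.0 + cfg.sram_kb * 50.0            # arbitrary unit costs
    area = cfg.pe_rows * cfg.pe_cols * 0.01 + cfg.sram_kb * 0.002
    return {"latency": latency_cycles, "energy": energy, "area": area}

def explore(macs: float, area_budget: float) -> HwConfig:
    """Score a small candidate grid with the predictor and keep the fastest
    configuration that fits the area budget (the Builder's role)."""
    best, best_lat = None, float("inf")
    for rows, cols, kb in product([8, 16, 32], [8, 16, 32], [128, 256, 512]):
        cfg = HwConfig(rows, cols, kb)
        m = predict_metrics(cfg, macs)
        if m["area"] <= area_budget and m["latency"] < best_lat:
            best, best_lat = cfg, m["latency"]
    return best

print(explore(macs=3.8e9, area_budget=12.0))  # e.g., a ResNet-50-scale workload

The actual tool goes further and emits optimized synthesizable RTL once the
search converges; the sketch stops at picking a configuration.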
Related papers
- FireFly v2: Advancing Hardware Support for High-Performance Spiking Neural Network with a Spatiotemporal FPGA Accelerator [8.0611988136866]
Spiking Neural Networks (SNNs) are expected to be a promising alternative to Artificial Neural Networks (ANNs).
Specialized SNN hardware offers clear advantages over general-purpose devices in terms of power and performance.
FireFly v2, an FPGA SNN accelerator, can address the issue of non-spike operation in current SOTA SNN algorithms.
arXiv Detail & Related papers (2023-09-28T04:17:02Z) - Two-Timescale End-to-End Learning for Channel Acquisition and Hybrid
Precoding [94.40747235081466]
We propose an end-to-end deep learning-based joint transceiver design algorithm for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems.
We develop a DNN architecture that maps the received pilots into feedback bits at the receiver, and then further maps the feedback bits into the hybrid precoder at the transmitter (a toy sketch of this two-stage mapping follows this entry).
arXiv Detail & Related papers (2021-10-22T20:49:02Z) - Sub-bit Neural Networks: Learning to Compress and Accelerate Binary
- Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on an FPGA validate the great potential of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z)
- SECDA: Efficient Hardware/Software Co-Design of FPGA-based DNN Accelerators for Edge Inference [0.0]
We propose SECDA, a new hardware/software co-design methodology to reduce the design time of optimized Deep Neural Network (DNN) inference accelerators on edge devices with FPGAs.
We use SECDA to efficiently develop two different DNN accelerator designs on a PYNQ-Z1 board, a platform that includes an edge FPGA.
We evaluate the two accelerator designs with four common DNN models, achieving an average performance speedup across models of up to 3.5X, with a 2.9X reduction in energy consumption, over CPU-only inference.
arXiv Detail & Related papers (2021-10-01T15:20:29Z)
- H2Learn: High-Efficiency Learning Accelerator for High-Accuracy Spiking Neural Networks [25.768116231283045]
We propose H2Learn, a novel architecture that can achieve high efficiency for BPTT-based SNN learning.
Compared with the modern NVIDIA V100 GPU, H2Learn achieves 7.38x area saving, 5.74-10.20x speedup, and 5.25-7.12x energy saving on several benchmark datasets.
arXiv Detail & Related papers (2021-07-25T07:37:17Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks (a toy decomposition sketch follows this entry).
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
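
A toy NumPy illustration of the {-1, +1} decomposition idea from the entry
above, using a 2-bit level set {-3, -1, +1, +3} of my own choosing; the
paper's general scheme and acceleration pipeline are not reproduced here:

# Decompose 2-bit quantized weights into two {-1,+1} binary branches.
import numpy as np

rng = np.random.default_rng(0)
levels = np.array([-3, -1, 1, 3])                 # 2-bit uniform levels
W = rng.choice(levels, size=(4, 4))               # a quantized weight matrix

# Decompose W = 2*B2 + B1 with B1, B2 in {-1,+1}:
B2 = np.where(W > 0, 1, -1)                       # sign branch (scale 2)
B1 = np.where(np.abs(W) == 3, B2, -B2)            # refines |W| to 1 or 3

assert np.array_equal(2 * B2 + B1, W)
x = rng.standard_normal(4)
# A QNN layer then runs as two binary (XNOR/popcount-friendly) branches:
assert np.allclose(W @ x, 2 * (B2 @ x) + (B1 @ x))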
- DNA: Differentiable Network-Accelerator Co-Search [36.68587348474986]
We propose DNA, a Differentiable Network-Accelerator co-search framework for automatically searching for matched networks and accelerators.
Specifically, DNA integrates two enablers: (1) a generic design space for DNN accelerators that is compatible with DNN frameworks such as PyTorch, enabling algorithmic exploration.
Experiments and ablation studies show that the matched networks and accelerators generated by DNA consistently outperform state-of-the-art (SOTA) DNNs and accelerators.
arXiv Detail & Related papers (2020-10-28T05:57:16Z)
- SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation [97.78417228445883]
We present SmartExchange, an algorithm-hardware co-design framework for energy-efficient inference of deep neural networks (DNNs).
We develop a novel algorithm to enforce a specially favorable DNN weight structure, where each layerwise weight matrix can be stored as the product of a small basis matrix and a large sparse coefficient matrix whose non-zero elements are all powers of 2 (a toy sketch of this structure follows this entry).
We further design a dedicated accelerator to fully utilize the SmartExchange-enforced weights to improve both energy efficiency and latency performance.
arXiv Detail & Related papers (2020-05-07T12:12:49Z)
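
A toy NumPy sketch of the SmartExchange-style weight structure described
above, where W is stored as a small basis B times a sparse coefficient matrix
C whose non-zeros are powers of 2; the sizes, sparsity level, and exponent
range are illustrative assumptions, not the paper's training algorithm:

# W (n x m) = B (n x r) @ C (r x m), with r << n and sparse power-of-2 C.
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 64, 64, 4                      # r << n: the basis is small

B = rng.standard_normal((n, r))          # small dense basis matrix
C = np.zeros((r, m))
mask = rng.random((r, m)) < 0.2          # ~80% of coefficients are zero
C[mask] = np.sign(rng.standard_normal(mask.sum())) * \
          2.0 ** rng.integers(-3, 2, size=mask.sum())  # +/- powers of 2

W = B @ C                                # the weight matrix actually used

# Multiplying by a power-of-2 coefficient is just an exponent shift, so a
# dedicated accelerator can replace most multiplications with shifts/adds
# and fetch only B plus C's sparse non-zeros instead of all of W.
x = rng.standard_normal(m)
assert np.allclose(W @ x, B @ (C @ x))   # rebuild-on-the-fly computation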
- DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architectures [30.689015188050405]
The recent breakthroughs in deep neural networks (DNNs) have spurred a tremendously increased demand for DNN accelerators.
DNN-Chip Predictor is an analytical performance predictor that can accurately predict DNN accelerators' energy, throughput, and latency prior to their actual implementation (a generic roofline-style sketch follows this entry).
arXiv Detail & Related papers (2020-02-26T02:59:18Z)
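
As a flavor of what an analytical predictor computes before any hardware
exists, here is a generic roofline-style latency estimate in Python; the
formula and the example numbers are illustrative and are not DNN-Chip
Predictor's actual model:

# A layer is bound by either compute or memory traffic, whichever is slower.
def conv_latency_s(macs: float, bytes_moved: float,
                   peak_macs_per_s: float, dram_bw_bytes_per_s: float) -> float:
    t_compute = macs / peak_macs_per_s           # time if fully compute-bound
    t_memory = bytes_moved / dram_bw_bytes_per_s # time if fully memory-bound
    return max(t_compute, t_memory)

# e.g., a conv layer: ~231M MACs moving ~6.4 MB on a 2 TMAC/s, 25 GB/s device
print(conv_latency_s(2.31e8, 6.4e6, 2e12, 25e9))  # memory-bound: ~2.56e-4 s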
- PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning [57.20262984116752]
We introduce a new dimension, fine-grained pruning patterns inside the coarse-grained structures, revealing a previously unknown point in the design space (a toy pattern-pruning sketch follows this entry).
With the higher accuracy enabled by fine-grained pruning patterns, the unique insight is to use the compiler to regain and guarantee high hardware efficiency.
arXiv Detail & Related papers (2020-01-01T04:52:07Z)
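
A toy NumPy sketch of pattern-based kernel pruning in the spirit of the entry
above; the 4-entry pattern library and the pick-by-magnitude rule are
illustrative assumptions, not PatDNN's actual pattern set or compiler
pipeline:

# Each 3x3 kernel keeps the positions of one of a few predefined patterns.
import numpy as np

# A hypothetical library of 4 patterns, each keeping 4 of 9 positions.
PATTERNS = np.array([
    [[1, 1, 0], [1, 1, 0], [0, 0, 0]],
    [[0, 1, 1], [0, 1, 1], [0, 0, 0]],
    [[0, 0, 0], [1, 1, 0], [1, 1, 0]],
    [[0, 0, 0], [0, 1, 1], [0, 1, 1]],
], dtype=np.float64)

def prune_kernel(k: np.ndarray) -> np.ndarray:
    """Assign the 3x3 kernel the pattern preserving the most weight magnitude."""
    scores = [(np.abs(k) * p).sum() for p in PATTERNS]
    return k * PATTERNS[int(np.argmax(scores))]

rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3))
print(prune_kernel(kernel))   # 5 entries zeroed, in a compiler-friendly shape

Because every kernel ends up in one of a few known shapes, a compiler can emit
a dense inner loop per pattern instead of handling arbitrary sparsity, which
is how the approach regains hardware efficiency.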
This list is automatically generated from the titles and abstracts of the papers on this site.