MetaML: Automating Customizable Cross-Stage Design-Flow for Deep
Learning Acceleration
- URL: http://arxiv.org/abs/2306.08746v1
- Date: Wed, 14 Jun 2023 21:06:07 GMT
- Title: MetaML: Automating Customizable Cross-Stage Design-Flow for Deep
Learning Acceleration
- Authors: Zhiqiang Que, Shuo Liu, Markus Rognlien, Ce Guo, Jose G. F. Coutinho,
Wayne Luk
- Abstract summary: This paper introduces a novel optimization framework for deep neural network (DNN) hardware accelerators.
We introduce novel optimization and transformation tasks for building design-flow architectures.
Our results demonstrate considerable reductions of up to 92% in DSP usage and 89% in LUT usage for two networks.
- Score: 5.2487252195308844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a novel optimization framework for deep neural network
(DNN) hardware accelerators, enabling the rapid development of customized and
automated design flows. More specifically, our approach aims to automate the
selection and configuration of low-level optimization techniques, encompassing
DNN and FPGA low-level optimizations. We introduce novel optimization and
transformation tasks for building design-flow architectures, which are highly
customizable and flexible, thereby enhancing the performance and efficiency of
DNN accelerators. Our results demonstrate considerable reductions of up to 92%
in DSP usage and 89% in LUT usage for two networks, while maintaining accuracy
and eliminating the need for human effort or domain expertise. In comparison to
state-of-the-art approaches, our design achieves higher accuracy and utilizes
three times fewer DSP resources, underscoring the advantages of our proposed
framework.
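
To make the idea of a customizable design flow concrete, here is a minimal sketch of optimization and transformation tasks composed into a flow with an accuracy budget. All names (DesignState, quantize, prune, run_flow) and the toy cost numbers are illustrative assumptions, not the paper's actual API.

```python
# A minimal sketch, assuming hypothetical names: optimization/transformation
# tasks are plain callables chained into a design flow with an accuracy budget.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DesignState:
    """Toy stand-in for a DNN design plus its FPGA resource estimates."""
    bitwidth: int = 16
    sparsity: float = 0.0
    dsp_usage: float = 1.0   # normalized to the unoptimized baseline
    lut_usage: float = 1.0
    accuracy: float = 0.93

Task = Callable[[DesignState], DesignState]

def quantize(state: DesignState) -> DesignState:
    """Transformation task: halve the bitwidth, trading accuracy for DSPs."""
    state.bitwidth = max(1, state.bitwidth // 2)
    state.dsp_usage *= 0.5
    state.accuracy -= 0.003
    return state

def prune(state: DesignState) -> DesignState:
    """Optimization task: raise sparsity, reducing LUT usage."""
    state.sparsity = min(0.9, state.sparsity + 0.3)
    state.lut_usage *= 1.0 - 0.4 * state.sparsity
    state.accuracy -= 0.002
    return state

def run_flow(tasks: List[Task], state: DesignState,
             min_accuracy: float = 0.90) -> DesignState:
    """Apply tasks in order, rejecting any step that breaks the accuracy budget."""
    for task in tasks:
        candidate = task(DesignState(**vars(state)))
        if candidate.accuracy >= min_accuracy:
            state = candidate
    return state

print(run_flow([quantize, prune, quantize], DesignState()))
```

The accuracy guard mirrors the paper's "maintaining accuracy" constraint; a real flow would re-evaluate the model on data rather than rely on such toy cost models.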
Related papers
- DCP: Learning Accelerator Dataflow for Neural Network via Propagation [52.06154296196845]
This work proposes an efficient data-centric approach, named Dataflow Code Propagation (DCP), to automatically find the optimal dataflow for DNN layers in seconds without human effort.
DCP learns a neural predictor to efficiently update the dataflow codes towards the desired gradient directions to minimize various optimization objectives.
For example, without using additional training data, DCP surpasses the GAMMA method that performs a full search using thousands of samples.
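
As a rough illustration of the core idea (a learned cost predictor whose gradients steer the dataflow code), the sketch below relaxes a dataflow code to a continuous vector and descends on a small predictor's output. The encoding, predictor, and objective are simplified assumptions, not DCP's actual design.

```python
# A simplified sketch, not DCP's actual model: dataflow choices are relaxed to
# a continuous 8-dim code, and a small MLP plays the learned cost predictor.
import torch

torch.manual_seed(0)

predictor = torch.nn.Sequential(              # hypothetical latency predictor
    torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
for p in predictor.parameters():              # training on measured samples
    p.requires_grad_(False)                   # is omitted; weights are frozen

code = torch.randn(8, requires_grad=True)     # current (relaxed) dataflow code
opt = torch.optim.Adam([code], lr=0.05)

for step in range(200):
    opt.zero_grad()
    loss = predictor(code).sum()              # predicted latency for this code
    loss.backward()                           # gradient w.r.t. the code itself
    opt.step()                                # move the code, not the weights

print("optimized code:", code.detach())
```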
arXiv Detail & Related papers (2024-10-09T05:16:44Z)
- Hardware-Software Co-optimised Fast and Accurate Deep Reconfigurable Spiking Inference Accelerator Architecture Design Methodology [2.968768532937366]
Spiking Neural Networks (SNNs) have emerged as a promising approach to improve the energy efficiency of machine learning models.
We develop a hardware-software co-optimisation strategy to port software-trained deep neural networks (DNN) to reduced-precision spiking models.
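
A common baseline for such ports (not necessarily this paper's exact flow) is rate-based conversion: quantise the trained weights and replace ReLU units with integrate-and-fire neurons whose firing rate approximates the activation. A minimal sketch under those assumptions:

```python
# Illustrative rate-coding conversion of one ReLU layer to integrate-and-fire
# (IF) spiking neurons; a generic textbook scheme, not this paper's exact
# co-optimisation flow. Weight quantisation stands in for "reduced precision".
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.5, size=(4, 8))       # trained float weights
x = rng.uniform(0, 1, size=8)             # input activations in [0, 1]

# Reduced precision: quantise weights to 4-bit signed fixed point.
scale = np.abs(W).max() / 7
W_q = np.clip(np.round(W / scale), -8, 7) * scale

relu_out = np.maximum(W_q @ x, 0.0)       # original ANN layer output

# IF neurons: the membrane potential integrates W_q @ x each timestep and
# emits a spike (resetting by subtraction) when it crosses threshold 1.0.
T, v, spikes = 256, np.zeros(4), np.zeros(4)
for _ in range(T):
    v += W_q @ x
    fired = v >= 1.0
    spikes += fired
    v[fired] -= 1.0

print("ANN output :", relu_out.round(3))
print("SNN rates  :", (spikes / T).round(3))  # ~= ReLU output (clipped at 1)
```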
arXiv Detail & Related papers (2024-10-07T05:04:13Z)
- ARCO: Adaptive Multi-Agent Reinforcement Learning-Based Hardware/Software Co-Optimization Compiler for Improved Performance in DNN Accelerator Design [4.825037489691159]
ARCO is an adaptive Multi-Agent Reinforcement Learning (MARL)-based co-optimizing compilation framework.
The framework incorporates three specialized actor-critic agents within MARL, each dedicated to a distinct aspect of compilation/optimization.
arXiv Detail & Related papers (2024-07-11T05:22:04Z)
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
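
A heavily simplified sketch of the controller-guided idea follows: a vector of mask logits (standing in for ATO's controller network) gates channels and is trained jointly with the target network under a sparsity penalty. All sizes, data, and coefficients are assumptions.

```python
# A heavily simplified sketch: a vector of mask logits stands in for ATO's
# controller network; sizes, penalty weight, and data are all assumptions.
import torch

torch.manual_seed(0)
n_channels = 16
fc1 = torch.nn.Linear(32, n_channels)
fc2 = torch.nn.Linear(n_channels, 1)
controller = torch.nn.Parameter(torch.zeros(n_channels))   # mask logits

params = list(fc1.parameters()) + list(fc2.parameters()) + [controller]
opt = torch.optim.Adam(params, lr=1e-2)
x = torch.randn(256, 32)
y = x[:, :1]                                  # a learnable synthetic target

for step in range(300):
    opt.zero_grad()
    mask = torch.sigmoid(controller)          # soft channel keep-probabilities
    h = torch.relu(fc1(x)) * mask             # the controller gates channels
    loss = torch.nn.functional.mse_loss(fc2(h), y)
    loss = loss + 0.05 * mask.mean()          # sparsity pressure on the mask
    loss.backward()
    opt.step()

kept = int((torch.sigmoid(controller) > 0.5).sum())
print(f"channels kept after training: {kept}/{n_channels}")
```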
arXiv Detail & Related papers (2024-03-21T02:33:37Z)
- PLiNIO: A User-Friendly Library of Gradient-based Methods for Complexity-aware DNN Optimization [3.460496851517031]
PLiNIO is an open-source library implementing a comprehensive set of state-of-the-art DNN design automation techniques.
We show that PLiNIO achieves up to 94.34% memory reduction for a 1% accuracy drop compared to a baseline architecture.
arXiv Detail & Related papers (2023-07-18T07:11:14Z)
- VeLO: Training Versatile Learned Optimizers by Scaling Up [67.90237498659397]
We leverage the same scaling approach behind the success of deep learning to learn versatile optimizers.
We train an optimizer for deep learning which is itself a small neural network that ingests gradients and outputs parameter updates.
We open source our learned optimizers, the meta-training code, the associated training and test data, and an extensive benchmark suite with baselines at velo-code.io.
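
To illustrate what a learned optimizer is, the sketch below applies a tiny per-parameter MLP that maps (gradient, momentum) features to updates. Its weights are hand-set to mimic a momentum-style rule, standing in for what meta-training would produce; this is not VeLO's architecture.

```python
# Illustration only, not VeLO's architecture: a tiny per-parameter MLP maps
# (gradient, momentum) features to an update; its weights are hand-set to
# mimic a momentum-style rule, standing in for meta-trained parameters.
import numpy as np

def learned_optimizer(grad, mom, W1, b1, W2, b2):
    """Per-parameter MLP: features (grad, momentum) -> parameter update."""
    feats = np.stack([grad, mom], axis=-1)     # (..., 2)
    hidden = np.tanh(feats @ W1 + b1)          # (..., 2)
    return (hidden @ W2 + b2).squeeze(-1)      # (...,)

# Hand-set "meta-trained" weights: update ~= -0.1*(tanh(2g) + tanh(2m)).
W1 = np.array([[2.0, 0.0], [0.0, 2.0]]); b1 = np.zeros(2)
W2 = np.array([[-0.1], [-0.1]]);         b2 = np.zeros(1)

# Inner task: minimize the quadratic f(theta) = ||theta||^2 / 2.
rng = np.random.default_rng(0)
theta, mom = rng.normal(0.0, 1.0, 10), np.zeros(10)
for step in range(100):
    grad = theta                               # gradient of the quadratic
    mom = 0.9 * mom + 0.1 * grad
    theta = theta + learned_optimizer(grad, mom, W1, b1, W2, b2)

print("final loss:", round(0.5 * float(theta @ theta), 6))
```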
arXiv Detail & Related papers (2022-11-17T18:39:07Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its software, full-precision counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration [71.80326738527734]
We propose a general, fine-grained structured pruning scheme and corresponding compiler optimizations.
We show that our pruning scheme mapping methods, together with the general fine-grained structured pruning scheme, outperform the state-of-the-art DNN optimization framework.
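
For intuition, here is a generic sketch of fine-grained structured pruning: whole 1x4 weight blocks are zeroed by L2 norm, giving the regularity a mobile compiler can exploit. The block shape and 50% ratio are assumptions, not this paper's scheme.

```python
# Generic sketch of fine-grained *structured* pruning: rather than removing
# arbitrary weights, zero out whole 1x4 blocks by L2 norm, a regularity that
# mobile compilers can exploit. Block shape and 50% ratio are assumptions.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
BLOCK = 4                                   # prune in 1x4 blocks along columns

blocks = W.reshape(8, 16 // BLOCK, BLOCK)   # (rows, n_blocks, BLOCK)
norms = np.linalg.norm(blocks, axis=-1)     # importance score per block
threshold = np.median(norms)                # keep the strongest 50% of blocks
mask = (norms >= threshold)[..., None]      # broadcast over the block dim

W_pruned = (blocks * mask).reshape(8, 16)
print("sparsity:", 1.0 - np.count_nonzero(W_pruned) / W_pruned.size)
```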
arXiv Detail & Related papers (2021-11-22T23:53:14Z)
- A Construction Kit for Efficient Low Power Neural Network Accelerator Designs [11.807678100385164]
This work provides a survey of neural network accelerator optimization approaches that have been used in recent works.
It presents the list of optimizations and their quantitative effects as a construction kit, allowing the design choices for each building block to be assessed separately.
arXiv Detail & Related papers (2021-06-24T07:53:56Z)
- Multi-Exit Semantic Segmentation Networks [78.44441236864057]
We propose a framework for converting state-of-the-art segmentation models to MESS networks: specially trained CNNs that employ parametrised early exits along their depth to save computation during inference on easier samples.
We co-optimise the number, placement and architecture of the attached segmentation heads, along with the exit policy, to adapt to the device capabilities and application-specific requirements.
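
A toy version of the early-exit mechanism, using a classifier instead of segmentation heads: each exit head predicts, and inference stops once softmax confidence clears a threshold. The architecture and threshold are invented for illustration.

```python
# A toy stand-in, with invented architecture: a classifier with one exit head
# per stage; inference stops once softmax confidence clears the threshold
# (set artificially low here because the network is untrained).
import torch

torch.manual_seed(0)

class EarlyExitNet(torch.nn.Module):
    def __init__(self, dim=32, n_classes=10, n_stages=3):
        super().__init__()
        self.stages = torch.nn.ModuleList(
            torch.nn.Sequential(torch.nn.Linear(dim, dim), torch.nn.ReLU())
            for _ in range(n_stages))
        # One lightweight exit head attached after every stage.
        self.exits = torch.nn.ModuleList(
            torch.nn.Linear(dim, n_classes) for _ in range(n_stages))

    @torch.no_grad()
    def forward(self, x, threshold=0.25):
        for depth, (stage, head) in enumerate(zip(self.stages, self.exits)):
            x = stage(x)
            probs = torch.softmax(head(x), dim=-1)
            if probs.max() >= threshold:        # exit policy: confidence test
                break                           # easy sample: stop early
        return probs.argmax().item(), depth

pred, depth = EarlyExitNet()(torch.randn(32))
print(f"predicted class {pred} using {depth + 1}/3 stages")
```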
arXiv Detail & Related papers (2021-06-07T11:37:03Z)
- Automated Design Space Exploration for optimised Deployment of DNN on Arm Cortex-A CPUs [13.628734116014819]
Deep learning on embedded devices has prompted the development of numerous methods to optimise the deployment of deep neural networks (DNNs).
There is a lack of research on cross-level optimisation as the space of approaches becomes too large to test and obtain a globally optimised solution.
We present a set of results for state-of-the-art DNNs on a range of Arm Cortex-A CPU platforms achieving up to 4x improvement in performance and over 2x reduction in memory.
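
A minimal stand-in for such design space exploration: random search over hypothetical cross-level knobs, with a stubbed latency measurement in place of on-device benchmarking. The knobs and the latency model are invented for illustration.

```python
# Minimal random-search DSE sketch over cross-level deployment options; the
# knobs and the latency model are stand-ins for real measurements on device.
import random

random.seed(0)
SPACE = {
    "quantization": ["fp32", "fp16", "int8"],       # network-level choice
    "threads":      [1, 2, 4],                      # runtime-level choice
    "tile_size":    [16, 32, 64],                   # kernel-level choice
}

def measure_latency_ms(cfg):
    """Stub for an on-device benchmark run (here: a made-up analytic model)."""
    base = {"fp32": 40.0, "fp16": 24.0, "int8": 15.0}[cfg["quantization"]]
    return base / cfg["threads"] + abs(cfg["tile_size"] - 32) * 0.1

best_cfg, best_lat = None, float("inf")
for _ in range(50):                                  # random search budget
    cfg = {k: random.choice(v) for k, v in SPACE.items()}
    lat = measure_latency_ms(cfg)
    if lat < best_lat:
        best_cfg, best_lat = cfg, lat

print(f"best config {best_cfg} -> {best_lat:.1f} ms")
```

Replacing the stub with actual on-device measurements, and random search with a smarter strategy, is where the cited work's contribution lies.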
arXiv Detail & Related papers (2020-06-09T11:00:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.