AdaPT: Fast Emulation of Approximate DNN Accelerators in PyTorch
- URL: http://arxiv.org/abs/2203.04071v1
- Date: Tue, 8 Mar 2022 13:31:16 GMT
- Title: AdaPT: Fast Emulation of Approximate DNN Accelerators in PyTorch
- Authors: Dimitrios Danopoulos, Georgios Zervakis, Kostas Siozios, Dimitrios Soudris, Jörg Henkel
- Abstract summary: We present AdaPT, a fast emulation framework that extends PyTorch to support approximate inference and approximation-aware retraining.
We evaluate the framework on several DNN models and application fields including CNNs, LSTMs, and GANs for a number of approximate multipliers with distinct bitwidth values.
The results show substantial error recovery from approximate re-training and reduced inference time up to 53.9x with respect to the baseline approximate implementation.
- Score: 4.445835362642506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current state-of-the-art employs approximate multipliers to address the
highly increased power demands of DNN accelerators. However, evaluating the
accuracy of approximate DNNs is cumbersome due to the lack of adequate support
for approximate arithmetic in DNN frameworks. We address this inefficiency by
presenting AdaPT, a fast emulation framework that extends PyTorch to support
approximate inference as well as approximation-aware retraining. AdaPT can be
seamlessly deployed and is compatible with most DNNs. We evaluate the
framework on several DNN models and application fields including CNNs, LSTMs,
and GANs for a number of approximate multipliers with distinct bitwidth values.
The results show substantial error recovery from approximate re-training and
reduced inference time up to 53.9x with respect to the baseline approximate
implementation.
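The core idea described in the abstract, emulating an approximate multiplier in software by routing every multiplication through a precomputed lookup table over quantized operands, can be sketched as follows. This is a simplified illustration, not AdaPT's actual implementation: the truncation-based multiplier, function names, and sizes here are hypothetical stand-ins for a real approximate-multiplier design.

```python
import random

def build_lut(trunc_bits=2):
    # Hypothetical approximate 8-bit multiplier: drop the low `trunc_bits`
    # bits of every exact product. A real design (e.g. from a multiplier
    # library) would fill this 256x256 table with its own products.
    return [[((a * b) >> trunc_bits) << trunc_bits
             for b in range(-128, 128)]
            for a in range(-128, 128)]

def approx_dot(xs, ws, lut):
    # Emulate a dot product in which every multiply goes through the LUT,
    # as an approximate-accelerator emulator would; index offset is +128
    # because operands range over [-128, 127].
    return sum(lut[x + 128][w + 128] for x, w in zip(xs, ws))

random.seed(0)
xs = [random.randrange(-128, 128) for _ in range(64)]
ws = [random.randrange(-128, 128) for _ in range(64)]

lut = build_lut(trunc_bits=2)
exact = sum(x * w for x, w in zip(xs, ws))
approx = approx_dot(xs, ws, lut)
print(exact, approx, exact - approx)
```

In a framework like the one described, such a LUT lookup would replace the multiplications inside convolution and linear layers on quantized tensors, which is what makes both approximate inference and approximation-aware retraining possible without hardware in the loop.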
Related papers
- Scalable Subsampling Inference for Deep Neural Networks [0.0]
A non-asymptotic error bound has been developed to measure the performance of the fully connected DNN estimator.
A non-random subsampling technique--scalable subsampling--is applied to construct a 'subagged' DNN estimator.
The proposed confidence/prediction intervals appear to work well in finite samples.
arXiv Detail & Related papers (2024-05-14T02:11:38Z)
- Attention-based Feature Compression for CNN Inference Offloading in Edge Computing [93.67044879636093]
This paper studies the computational offloading of CNN inference in device-edge co-inference systems.
We propose a novel autoencoder-based CNN architecture (AECNN) for effective feature extraction at the end device.
Experiments show that AECNN can compress the intermediate data by more than 256x with only about 4% accuracy loss.
arXiv Detail & Related papers (2022-11-24T18:10:01Z)
- Receptive Field-based Segmentation for Distributed CNN Inference Acceleration in Collaborative Edge Computing [93.67044879636093]
We study inference acceleration using distributed convolutional neural networks (CNNs) in a collaborative edge computing network.
We propose a novel collaborative edge computing using fused-layer parallelization to partition a CNN model into multiple blocks of convolutional layers.
arXiv Detail & Related papers (2022-07-22T18:38:11Z)
- Automated machine learning for borehole resistivity measurements [0.0]
Deep neural networks (DNNs) offer a real-time solution for the inversion of borehole resistivity measurements.
It is possible to use extremely large DNNs to approximate the operators, but this demands considerable training time.
In this work, we propose a scoring function that accounts for the accuracy and size of the DNNs.
arXiv Detail & Related papers (2022-07-20T12:27:22Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-03-16T16:33:13Z)
- Hardware Approximate Techniques for Deep Neural Network Accelerators: A Survey [4.856755747052137]
Deep Neural Networks (DNNs) are very popular because of their high performance on various cognitive tasks in Machine Learning (ML).
Recent advancements in DNNs have achieved beyond-human accuracy in many tasks, but at the cost of high computational complexity.
This article provides a comprehensive survey and analysis of hardware approximation techniques for DNN accelerators.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Two-Timescale End-to-End Learning for Channel Acquisition and Hybrid Precoding [94.40747235081466]
We propose an end-to-end deep learning-based joint transceiver design algorithm for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems.
We develop a DNN architecture that maps the received pilots into feedback bits at the receiver, and then further maps the feedback bits into the hybrid precoder at the transmitter.
arXiv Detail & Related papers (2021-10-22T20:49:02Z)
- Positive/Negative Approximate Multipliers for DNN Accelerators [3.1921317895626493]
We present a filter-oriented approximation method to map the weights to the appropriate modes of the approximate multiplier.
Our approach achieves 18.33% energy gains on average across 7 NNs on 4 different datasets for a maximum accuracy drop of only 1%.
arXiv Detail & Related papers (2021-07-20T09:36:24Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- TaxoNN: A Light-Weight Accelerator for Deep Neural Network Training [2.5025363034899732]
We present a novel approach to add the training ability to a baseline DNN accelerator (inference only) by splitting the SGD algorithm into simple computational elements.
Based on this approach we propose TaxoNN, a light-weight accelerator for DNN training.
Our experimental results show that TaxoNN incurs, on average, a 0.97% higher misclassification rate compared to a full-precision implementation.
arXiv Detail & Related papers (2020-10-11T09:04:19Z)
- PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning [57.20262984116752]
We introduce a new dimension, fine-grained pruning patterns inside the coarse-grained structures, revealing a previously unknown point in design space.
With the higher accuracy enabled by fine-grained pruning patterns, the unique insight is to use the compiler to re-gain and guarantee high hardware efficiency.
arXiv Detail & Related papers (2020-01-01T04:52:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.