DNNShifter: An Efficient DNN Pruning System for Edge Computing
- URL: http://arxiv.org/abs/2309.06973v1
- Date: Wed, 13 Sep 2023 14:05:50 GMT
- Title: DNNShifter: An Efficient DNN Pruning System for Edge Computing
- Authors: Bailey J. Eccles, Philip Rodgers, Peter Kilpatrick, Ivor Spence,
Blesson Varghese
- Abstract summary: Deep neural networks (DNNs) underpin many machine learning applications.
Production-quality DNN models achieve high inference accuracy by training millions of DNN parameters, which carries a significant resource footprint.
This presents a challenge for resources operating at the extreme edge of the network, such as mobile and embedded devices that have limited computational and memory resources.
Existing pruning methods cannot produce models of similar quality to their unpruned counterparts without significant time costs and overheads, or are limited to offline use cases.
Our work rapidly derives suitable model variants while maintaining the accuracy of the original model. The model variants can be swapped quickly when system and network conditions change to match workload demand.
- Score: 1.853502789996996
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) underpin many machine learning applications.
Production-quality DNN models achieve high inference accuracy by training
millions of DNN parameters, which carries a significant resource footprint. This
presents a challenge for resources operating at the extreme edge of the
network, such as mobile and embedded devices that have limited computational
and memory resources. To address this, models are pruned to create lightweight,
more suitable variants for these devices. Existing pruning methods cannot
produce models of similar quality to their unpruned counterparts without
significant time costs and overheads, or are limited to offline use cases.
Our work rapidly derives suitable model variants while maintaining the
accuracy of the original model. The model variants can be swapped quickly when
system and network conditions change to match workload demand. This paper
presents DNNShifter, an end-to-end DNN training, spatial pruning, and model
switching system that addresses the challenges mentioned above. At the heart of
DNNShifter is a novel methodology that prunes sparse models using structured
pruning. The pruned model variants generated by DNNShifter are smaller and
thus faster than their dense and sparse predecessors, making them suitable for
inference at the edge while retaining accuracy close to that of the original
dense model. DNNShifter generates a portfolio of model variants that
can be swiftly interchanged depending on operational conditions. DNNShifter
produces pruned model variants up to 93x faster than conventional training
methods. Compared to sparse models, the pruned model variants are up to 5.14x
smaller and have a 1.67x inference latency speedup, with no compromise to
sparse model accuracy. In addition, DNNShifter has up to 11.9x lower overhead
for switching models and up to 3.8x lower memory utilisation than existing
approaches.
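
The abstract's central step, applying structured pruning to already-sparse models, can be illustrated with a short sketch. The following is a minimal PyTorch example, not the authors' implementation: it assumes a convolutional layer whose filters have been zeroed by unstructured pruning and physically removes those filters, together with the matching input channels of the next layer, to obtain a smaller dense model.

```python
# Minimal sketch (not DNNShifter's actual code): convert unstructured sparsity
# into structured pruning by removing convolution filters that are entirely
# zero. Real networks also need BatchNorm and skip connections adjusted.
import torch
import torch.nn as nn

def prune_zero_filters(conv: nn.Conv2d, next_conv: nn.Conv2d):
    """Drop output channels of `conv` whose weights are all zero and the
    corresponding input channels of `next_conv`; returns new, smaller layers."""
    with torch.no_grad():
        keep = (conv.weight.abs().sum(dim=(1, 2, 3)) > 0).nonzero(as_tuple=True)[0]

        pruned = nn.Conv2d(conv.in_channels, len(keep), conv.kernel_size,
                           stride=conv.stride, padding=conv.padding,
                           bias=conv.bias is not None)
        pruned.weight.copy_(conv.weight[keep])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep])

        shrunk_next = nn.Conv2d(len(keep), next_conv.out_channels,
                                next_conv.kernel_size, stride=next_conv.stride,
                                padding=next_conv.padding,
                                bias=next_conv.bias is not None)
        shrunk_next.weight.copy_(next_conv.weight[:, keep])
        if next_conv.bias is not None:
            shrunk_next.bias.copy_(next_conv.bias)
    return pruned, shrunk_next

# Simulate unstructured pruning by zeroing half of conv1's filters, then check
# that the structurally pruned pair computes the same function.
conv1 = nn.Conv2d(3, 16, 3, padding=1)
conv2 = nn.Conv2d(16, 32, 3, padding=1)
with torch.no_grad():
    conv1.weight[8:] = 0
    conv1.bias[8:] = 0
p1, p2 = prune_zero_filters(conv1, conv2)
x = torch.randn(1, 3, 32, 32)
assert torch.allclose(conv2(torch.relu(conv1(x))), p2(torch.relu(p1(x))), atol=1e-5)
print(p1.weight.shape)  # torch.Size([8, 3, 3, 3]) -- smaller, dense, and faster
```

Repeating this at several sparsity levels yields a portfolio of variants; the sketch below shows one way a runtime could choose among them.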
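
The abstract also describes swapping variants as system and network conditions change. The selection policy below is purely illustrative: the Variant fields, file names, and numbers are hypothetical and not taken from the paper. It simply loads the most accurate variant that fits the current memory and latency budget.

```python
# Illustrative sketch of portfolio-based model switching. The metadata values
# and selection rule are hypothetical examples, not DNNShifter's policy.
from dataclasses import dataclass

@dataclass
class Variant:
    path: str          # serialised model checkpoint
    size_mb: float     # memory footprint when loaded
    latency_ms: float  # measured inference latency on the target device
    accuracy: float    # validation accuracy

def select_variant(portfolio, mem_budget_mb, latency_budget_ms):
    """Pick the most accurate variant that satisfies both budgets; if none
    fits, fall back to the smallest variant rather than failing."""
    feasible = [v for v in portfolio
                if v.size_mb <= mem_budget_mb and v.latency_ms <= latency_budget_ms]
    if not feasible:
        return min(portfolio, key=lambda v: v.size_mb)
    return max(feasible, key=lambda v: v.accuracy)

# Hypothetical portfolio produced at three pruning levels.
portfolio = [
    Variant("model_dense.pt", 520.0, 40.0, 0.936),
    Variant("model_p50.pt",   130.0, 18.0, 0.931),
    Variant("model_p80.pt",    35.0,  9.0, 0.918),
]
best = select_variant(portfolio, mem_budget_mb=150, latency_budget_ms=20)
print(best.path)  # -> model_p50.pt
```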
Related papers
- Update Compression for Deep Neural Networks on the Edge [33.57905298104467]
An increasing number of AI applications involve the execution of deep neural networks (DNNs) on edge devices.
Many practical reasons motivate the need to update the DNN model on the edge device post-deployment.
We develop a simple approach based on matrix factorisation to compress the model update.
arXiv Detail & Related papers (2022-03-09T04:20:43Z)
- Load-balanced Gather-scatter Patterns for Sparse Deep Neural Networks [20.374784902476318]
Pruning, which introduces zeros into model weights, has been shown to be an effective way to trade off model accuracy against computation efficiency.
Some modern processors are equipped with fast on-chip scratchpad memories and gather/scatter engines that perform indirect load and store operations on such memories.
In this work, we propose a set of novel sparse patterns, named gather-scatter (GS) patterns, to utilize the scratchpad memories and gather/scatter engines to speed up neural network inferences.
arXiv Detail & Related papers (2021-12-20T22:55:45Z)
- LegoDNN: Block-grained Scaling of Deep Neural Networks for Mobile Vision [27.74191483754982]
We present LegoDNN, a block-grained scaling solution for running multi-DNN workloads in mobile vision systems.
LegoDNN guarantees short model training times by only extracting and training a small number of common blocks.
We show that LegoDNN provides 1,296x to 279,936x more options in model sizes without increasing training time.
arXiv Detail & Related papers (2021-12-18T06:04:03Z)
- Omni-sparsity DNN: Fast Sparsity Optimization for On-Device Streaming E2E ASR via Supernet [24.62661549442265]
We propose Omni-sparsity DNN, where a single neural network can be pruned to generate optimized models for a large range of model sizes.
Our results show great saving on training time and resources with similar or better accuracy on LibriSpeech compared to individually pruned models.
arXiv Detail & Related papers (2021-10-15T20:28:27Z)
- Fully Spiking Variational Autoencoder [66.58310094608002]
Spiking neural networks (SNNs) can be run on neuromorphic devices with ultra-high speed and ultra-low energy consumption.
In this study, we build a variational autoencoder (VAE) with SNN to enable image generation.
arXiv Detail & Related papers (2021-09-26T06:10:14Z)
- Effective Model Sparsification by Scheduled Grow-and-Prune Methods [73.03533268740605]
We propose a novel scheduled grow-and-prune (GaP) methodology without pre-training the dense models.
Experiments have shown that such models can match or beat the quality of highly optimized dense models at 80% sparsity on a variety of tasks.
arXiv Detail & Related papers (2021-06-18T01:03:13Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- Procrustes: a Dataflow and Accelerator for Sparse Deep Neural Network Training [0.5219568203653523]
We develop a sparse DNN training accelerator that produces pruned models with the same accuracy as dense models without first training, then pruning, and finally retraining, a dense model.
Compared to training the equivalent unpruned models using a state-of-the-art DNN accelerator without sparse training support, Procrustes consumes up to 3.26x less energy and offers up to 4x speedup across a range of models, while pruning weights by an order of magnitude and maintaining unpruned accuracy.
arXiv Detail & Related papers (2020-09-23T07:39:55Z)
- An Image Enhancing Pattern-based Sparsity for Real-time Inference on Mobile Devices [58.62801151916888]
We introduce a new sparsity dimension, namely pattern-based sparsity, which comprises pattern and connectivity sparsity and is both highly accurate and hardware friendly.
Our approach on the new pattern-based sparsity naturally fits into compiler optimization for highly efficient DNN execution on mobile platforms.
arXiv Detail & Related papers (2020-01-20T16:17:36Z)
- PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning [57.20262984116752]
We introduce a new dimension, fine-grained pruning patterns inside the coarse-grained structures, revealing a previously unknown point in design space.
With the higher accuracy enabled by fine-grained pruning patterns, the unique insight is to use the compiler to re-gain and guarantee high hardware efficiency.
arXiv Detail & Related papers (2020-01-01T04:52:07Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)