Archtree: on-the-fly tree-structured exploration for latency-aware
pruning of deep neural networks
- URL: http://arxiv.org/abs/2311.10549v1
- Date: Fri, 17 Nov 2023 14:24:12 GMT
- Authors: Rémi Ouazan Reboul, Edouard Yvinec, Arnaud Dapogny, Kevin Bailly
- Abstract summary: Archtree is a novel method for latency-driven structured pruning of deep neural networks (DNNs).
It performs on-the-fly latency estimation on the target hardware, yielding pruned models whose latencies fit the specified budget more closely.
Empirical results show that Archtree better preserves the original model accuracy while fitting the latency budget more closely than existing state-of-the-art methods.
- Score: 20.564198591600647
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have become ubiquitous in addressing a number of
problems, particularly in computer vision. However, DNN inference is
computationally intensive, which can be prohibitive e.g. when considering edge
devices. A popular solution to this problem is DNN pruning, in particular
structured pruning, where coherent computational blocks (e.g. channels for
convolutional networks) are removed: as an exhaustive search of the space of
pruned sub-models is intractable in practice, channels are typically removed
iteratively based on an importance estimation heuristic. Recently, promising
latency-aware pruning methods were proposed, where channels are removed until
the network reaches a target budget of wall-clock latency pre-emptively
estimated on specific hardware. In this paper, we present Archtree, a novel
method for latency-driven structured pruning of DNNs. Archtree explores
multiple candidate pruned sub-models in parallel in a tree-like fashion,
allowing for a better exploration of the search space. Furthermore, it performs
on-the-fly latency estimation on the target hardware, yielding pruned models
whose latencies fit the specified budget more closely. Empirical results on several DNN
architectures and target hardware show that Archtree better preserves the
original model accuracy while better fitting the latency budget as compared to
existing state-of-the-art methods.
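To make the exploration idea concrete, here is a minimal, hypothetical sketch of latency-budgeted pruning with a tree-like (beam) search. Everything in it is a stand-in: the "model" is just a tuple of per-layer channel counts, the latency and accuracy proxies are toy linear functions (in Archtree, latency is measured on-the-fly on the target hardware), and the beam search only illustrates exploring several candidate pruned sub-models in parallel, not the paper's actual algorithm.

```python
import heapq

# Toy stand-ins (assumptions, not Archtree's actual components): a "model"
# is a tuple of per-layer channel counts, "latency" is a linear proxy
# (the paper measures it on the target hardware), and "importance" is a
# fixed per-layer score used to rank pruning moves.
LAYER_IMPORTANCE = [3.0, 1.0, 2.0]     # higher = costlier to prune
LATENCY_PER_CHANNEL = [0.5, 0.2, 0.4]  # toy latency model


def latency(model):
    """Proxy for the wall-clock latency of a pruned sub-model."""
    return sum(c * w for c, w in zip(model, LATENCY_PER_CHANNEL))


def quality(model):
    """Proxy for preserved accuracy: importance-weighted channels kept."""
    return sum(c * s for c, s in zip(model, LAYER_IMPORTANCE))


def beam_prune(model, budget, beam_width=2):
    """Tree-like pruning: keep the best `beam_width` candidates per depth,
    expanding each by removing one channel from one layer."""
    frontier = [model]
    while frontier:
        done = [m for m in frontier if latency(m) <= budget]
        if done:  # at least one candidate fits the budget: return the best
            return max(done, key=quality)
        children = set()
        for m in frontier:
            for i in range(len(m)):
                if m[i] > 1:  # keep at least one channel per layer
                    child = list(m)
                    child[i] -= 1
                    children.add(tuple(child))
        frontier = heapq.nlargest(beam_width, children, key=quality)
    return None  # budget unreachable even at one channel per layer
```

A wider beam keeps more candidate sub-models alive at each depth, trading search time for a better chance of preserving accuracy under the budget.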
Related papers
- Flexible Channel Dimensions for Differentiable Architecture Search [50.33956216274694]
We propose a novel differentiable neural architecture search method with an efficient dynamic channel allocation algorithm.
We show that the proposed framework is able to find DNN architectures that are equivalent to previous methods in task accuracy and inference latency.
arXiv Detail & Related papers (2023-06-13T15:21:38Z)
- FSCNN: A Fast Sparse Convolution Neural Network Inference System [31.474696818171953]
Convolutional neural networks (CNNs) have achieved remarkable success, but typically come with high computation costs and numerous redundant weight parameters.
To reduce FLOPs, structured pruning is a popular approach that removes entire hidden structures by introducing coarse-grained sparsity.
We present an efficient convolution neural network inference system to accelerate its forward pass by utilizing the fine-grained sparsity of compressed CNNs.
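As a toy illustration of how fine-grained sparsity cuts FLOPs (using the standard CSR format, not FSCNN's actual system), a sparse matrix-vector product touches only the surviving nonzero weights:

```python
def dense_to_csr(matrix):
    """Convert a dense matrix (list of rows) to CSR arrays:
    nonzero values, their column indices, and per-row offsets."""
    values, col_idx, row_ptr = [], [], [0]
    for row in matrix:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr


def csr_matvec(values, col_idx, row_ptr, x):
    """Compute y = A @ x, multiplying only the stored nonzeros."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y
```

For a weight matrix pruned to density d, the inner loop runs roughly d times as many multiply-adds as the dense product, which is the FLOP reduction such systems exploit.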
arXiv Detail & Related papers (2022-12-17T06:44:58Z)
- MAPLE-X: Latency Prediction with Explicit Microprocessor Prior Knowledge [87.41163540910854]
Deep neural network (DNN) latency characterization is a time-consuming process.
We propose MAPLE-X which extends MAPLE by incorporating explicit prior knowledge of hardware devices and DNN architecture latency.
arXiv Detail & Related papers (2022-05-25T11:08:20Z)
- Two-Timescale End-to-End Learning for Channel Acquisition and Hybrid Precoding [94.40747235081466]
We propose an end-to-end deep learning-based joint transceiver design algorithm for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems.
We develop a DNN architecture that maps the received pilots into feedback bits at the receiver, and then further maps the feedback bits into the hybrid precoder at the transmitter.
arXiv Detail & Related papers (2021-10-22T20:49:02Z)
- Architecture Aware Latency Constrained Sparse Neural Networks [35.50683537052815]
In this paper, we design an architecture aware latency constrained sparse framework to prune and accelerate CNN models.
We also propose a novel sparse convolution algorithm for efficient computation.
Our system-algorithm co-design framework can achieve much better frontier among network accuracy and latency on resource-constrained mobile devices.
arXiv Detail & Related papers (2021-09-01T03:41:31Z)
- Deterministic Iteratively Built KD-Tree with KNN Search for Exact Applications [2.7325238096808318]
K-Nearest Neighbors (KNN) search is a fundamental algorithm in artificial intelligence software, with applications in robotics and autonomous vehicles.
Similar to binary trees, kd-trees become unbalanced as new data is added in online applications, which can lead to rapid degradation in search performance unless the tree is rebuilt.
We will present a "forest of interval kd-trees" which reduces the number of tree rebuilds, without compromising the exactness of query results.
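The rebuild-avoidance idea can be sketched with a classic static-to-dynamic scheme (an assumption-laden simplification, not the paper's interval kd-trees): inserts accumulate in a small buffer, each full buffer is rebuilt into a fresh balanced kd-tree, and exact queries scan the buffer plus every tree.

```python
import math


def build_kdtree(points, depth=0):
    """Build a balanced 2-d kd-tree; a node is (point, left, right)."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))


def kd_nearest(node, query, depth=0, best=None):
    """Exact nearest neighbour within a single kd-tree."""
    if node is None:
        return best
    point, left, right = node
    if best is None or math.dist(point, query) < math.dist(best, query):
        best = point
    axis = depth % 2
    near, far = (left, right) if query[axis] < point[axis] else (right, left)
    best = kd_nearest(near, query, depth + 1, best)
    # Visit the far subtree only if the splitting plane could hide a closer point.
    if abs(query[axis] - point[axis]) < math.dist(best, query):
        best = kd_nearest(far, query, depth + 1, best)
    return best


class KDForest:
    """Buffer inserts and rebuild full buffers into fresh balanced trees,
    so no single tree degrades as data arrives online."""

    def __init__(self, buffer_cap=8):
        self.trees, self.buffer, self.cap = [], [], buffer_cap

    def insert(self, point):
        self.buffer.append(point)
        if len(self.buffer) >= self.cap:
            self.trees.append(build_kdtree(self.buffer))
            self.buffer = []

    def nearest(self, query):
        # Exactness is preserved by checking the unsorted buffer and every tree.
        best = min(self.buffer, key=lambda p: math.dist(p, query), default=None)
        for tree in self.trees:
            best = kd_nearest(tree, query, best=best)
        return best
```

Each tree is built once and never mutated, so searches stay balanced; the cost is that a query must visit every tree plus the buffer.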
arXiv Detail & Related papers (2021-06-07T17:09:22Z)
- Spectral Pruning for Recurrent Neural Networks [0.0]
Pruning techniques for neural networks with a recurrent architecture, such as the recurrent neural network (RNN), are strongly desired for their application to edge-computing devices.
In this paper, we propose an appropriate pruning algorithm for RNNs inspired by "spectral pruning", and provide the generalization error bounds for compressed RNNs.
arXiv Detail & Related papers (2021-05-23T00:30:59Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- Depthwise Non-local Module for Fast Salient Object Detection Using a Single Thread [136.2224792151324]
We propose a new deep learning algorithm for fast salient object detection.
The proposed algorithm achieves competitive accuracy and high inference efficiency simultaneously with a single CPU thread.
arXiv Detail & Related papers (2020-01-22T15:23:48Z)
- PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning [57.20262984116752]
We introduce a new dimension, fine-grained pruning patterns inside the coarse-grained structures, revealing a previously unknown point in design space.
With the higher accuracy enabled by fine-grained pruning patterns, the unique insight is to use the compiler to re-gain and guarantee high hardware efficiency.
arXiv Detail & Related papers (2020-01-01T04:52:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.