Knapsack Pruning with Inner Distillation
- URL: http://arxiv.org/abs/2002.08258v3
- Date: Wed, 3 Jun 2020 10:09:33 GMT
- Title: Knapsack Pruning with Inner Distillation
- Authors: Yonathan Aflalo and Asaf Noy and Ming Lin and Itamar Friedman and Lihi Zelnik
- Abstract summary: We propose a novel pruning method that optimizes the final accuracy of the pruned network.
We prune the network channels while maintaining the high-level structure of the network.
Our method leads to state-of-the-art pruning results on ImageNet, CIFAR-10 and CIFAR-100 using ResNet backbones.
- Score: 11.04321604965426
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural network pruning reduces the computational cost of an
over-parameterized network to improve its efficiency. Popular methods vary from
$\ell_1$-norm sparsification to Neural Architecture Search (NAS). In this work,
we propose a novel pruning method that optimizes the final accuracy of the
pruned network and distills knowledge from the over-parameterized parent
network's inner layers. To enable this approach, we formulate the network
pruning as a Knapsack Problem which optimizes the trade-off between the
importance of neurons and their associated computational cost. Then we prune
the network channels while maintaining the high-level structure of the network.
The pruned network is fine-tuned under the supervision of the parent network
using its inner network knowledge, a technique we refer to as the Inner
Knowledge Distillation. Our method leads to state-of-the-art pruning results on
ImageNet, CIFAR-10 and CIFAR-100 using ResNet backbones. To prune complex
network structures such as convolutions with skip-links and depth-wise
convolutions, we propose a block grouping approach to cope with these
structures. Through this we produce compact architectures with the same FLOPs
as EfficientNet-B0 and MobileNetV3 but with higher accuracy, by $1\%$ and
$0.3\%$ respectively on ImageNet, and faster runtime on GPU.
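To make the knapsack formulation concrete, here is a minimal sketch of channel selection under a FLOP budget. It is not the authors' implementation: the per-channel importance scores and per-channel FLOP costs are assumed to be given, and a greedy value-per-cost heuristic stands in for whatever exact or approximate knapsack solver the paper uses.

```python
# Hedged sketch: structured pruning as a 0/1 knapsack over channels.
# Assumptions (not taken from the paper's code): `importance[i]` and
# `flop_cost[i]` are precomputed per channel; selection is greedy by
# importance gained per FLOP spent.
from typing import List


def knapsack_channel_selection(importance: List[float],
                               flop_cost: List[float],
                               flop_budget: float) -> List[int]:
    """Return indices of channels kept under the FLOP budget."""
    order = sorted(range(len(importance)),
                   key=lambda i: importance[i] / flop_cost[i],
                   reverse=True)
    kept, spent = [], 0.0
    for i in order:
        if spent + flop_cost[i] <= flop_budget:
            kept.append(i)
            spent += flop_cost[i]
    return sorted(kept)


# Toy usage: six equally expensive channels, budget of half the FLOPs.
scores = [0.9, 0.1, 0.7, 0.05, 0.6, 0.3]
costs = [10.0] * 6
print(knapsack_channel_selection(scores, costs, flop_budget=30.0))  # [0, 2, 4]
```

The Inner Knowledge Distillation step can likewise be sketched as a task loss plus a feature-matching term against the parent network's inner layers. The MSE matching term, the weight `alpha`, and the assumption that the parent's feature maps are sliced to the kept channels (so shapes match) are illustrative choices, not the paper's exact loss.

```python
# Hedged sketch of an inner-distillation fine-tuning objective (PyTorch).
# `student_feats` / `teacher_feats` are matched lists of intermediate
# activations; the parent (teacher) is treated as frozen via detach().
import torch
import torch.nn.functional as F


def inner_distillation_loss(student_logits: torch.Tensor,
                            targets: torch.Tensor,
                            student_feats: list,
                            teacher_feats: list,
                            alpha: float = 1.0) -> torch.Tensor:
    task_loss = F.cross_entropy(student_logits, targets)
    feat_loss = sum(F.mse_loss(s, t.detach())
                    for s, t in zip(student_feats, teacher_feats))
    return task_loss + alpha * feat_loss
```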
Related papers
- A Generalization of Continuous Relaxation in Structured Pruning [0.3277163122167434]
Trends indicate that deeper and larger neural networks with an increasing number of parameters achieve higher accuracy than smaller neural networks.
We generalize structured pruning with algorithms for network augmentation, pruning, sub-network collapse and removal.
The resulting CNN executes efficiently on GPU hardware without computationally expensive sparse matrix operations.
arXiv Detail & Related papers (2023-08-28T14:19:13Z)
- Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise and pointwise convolution) in efficient architectures.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
arXiv Detail & Related papers (2022-10-23T18:37:22Z)
- BCNet: Searching for Network Width with Bilaterally Coupled Network [56.14248440683152]
We introduce a new supernet called Bilaterally Coupled Network (BCNet) to address this issue.
In BCNet, each channel is trained fairly and is responsible for the same number of network widths, so each width can be evaluated more accurately.
Our method achieves state-of-the-art or competitive performance compared with other baseline methods.
arXiv Detail & Related papers (2021-05-21T18:54:03Z)
- Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, showing better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
- Channel Planting for Deep Neural Networks using Knowledge Distillation [3.0165431987188245]
We present a novel incremental training algorithm for deep neural networks called planting.
Our planting approach can search for the optimal network architecture with a smaller number of parameters while improving network performance.
We evaluate the effectiveness of the proposed method on different datasets such as CIFAR-10/100 and STL-10.
arXiv Detail & Related papers (2020-11-04T16:29:59Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using a fixed path through the network, DG-Net aggregates features dynamically in each node, which gives the network more representational ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- Structured Convolutions for Efficient Neural Network Design [65.36569572213027]
We tackle model efficiency by exploiting redundancy in the implicit structure of the building blocks of convolutional neural networks.
We show how this decomposition can be applied to 2D and 3D kernels as well as the fully-connected layers.
arXiv Detail & Related papers (2020-08-06T04:38:38Z)
- Growing Efficient Deep Networks by Structured Continuous Sparsification [34.7523496790944]
We develop an approach to growing deep network architectures over the course of training.
Our method can start from a small, simple seed architecture and dynamically grow and prune both layers and filters.
We achieve $49.7\%$ inference FLOPs and $47.4\%$ training FLOPs savings compared to a baseline ResNet-50 on ImageNet.
arXiv Detail & Related papers (2020-07-30T10:03:47Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)