CATRO: Channel Pruning via Class-Aware Trace Ratio Optimization
- URL: http://arxiv.org/abs/2110.10921v2
- Date: Thu, 30 Mar 2023 05:05:23 GMT
- Title: CATRO: Channel Pruning via Class-Aware Trace Ratio Optimization
- Authors: Wenzheng Hu, Zhengping Che, Ning Liu, Mingyang Li, Jian Tang,
Changshui Zhang, Jianqiang Wang
- Abstract summary: We propose a novel channel pruning method via Class-Aware Trace Ratio Optimization (CATRO) to reduce the computational burden and accelerate model inference.
We show that CATRO achieves higher accuracy than other state-of-the-art channel pruning algorithms at similar computation cost, or comparable accuracy at lower cost.
Because of its class-aware property, CATRO is well suited to adaptively pruning efficient networks for various classification subtasks, easing the deployment and use of deep networks in real-world applications.
- Score: 61.71504948770445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep convolutional neural networks have been shown to carry high
parametric and computational redundancy in many application scenarios, and an
increasing number of works have explored model pruning to obtain lightweight
and efficient networks. However, most existing pruning approaches are driven by
empirical heuristics and rarely consider the joint impact of channels, leading
to suboptimal performance without guarantees. In this paper, we propose a novel
channel pruning method via Class-Aware Trace Ratio Optimization (CATRO) to
reduce the computational burden and accelerate the model inference. Utilizing
class information from a few samples, CATRO measures the joint impact of
multiple channels by feature space discriminations and consolidates the
layer-wise impact of preserved channels. By formulating channel pruning as a
submodular set function maximization problem, CATRO solves it efficiently via a
two-stage greedy iterative optimization procedure. More importantly, we present
theoretical justifications for the convergence of CATRO and the performance of
pruned networks. Experimental results demonstrate that CATRO achieves higher
accuracy than other state-of-the-art channel pruning algorithms at similar
computation cost, or comparable accuracy at lower cost. In addition, because of
its class-aware property, CATRO is well suited to adaptively pruning efficient
networks for various classification subtasks, easing the deployment and use of
deep networks in real-world applications.
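As a rough illustration of the class-aware trace-ratio idea, the sketch below greedily grows a channel subset whose between-class scatter is large relative to its within-class scatter on a handful of labeled feature samples. This is a simplified reading, not the paper's implementation: the two-stage procedure, submodular analysis, and layer-wise consolidation are omitted, and all names are illustrative.

```python
import numpy as np

def trace_ratio(features, labels, channels):
    """Class-discrimination score of a channel subset: trace of the
    between-class scatter over trace of the within-class scatter,
    both restricted to the selected channels."""
    X = features[:, channels]                      # (n_samples, |subset|)
    mu = X.mean(axis=0)
    s_b, s_w = 0.0, 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]
        mu_c = Xc.mean(axis=0)
        s_b += len(Xc) * np.sum((mu_c - mu) ** 2)  # between-class scatter
        s_w += np.sum((Xc - mu_c) ** 2)            # within-class scatter
    return s_b / max(s_w, 1e-12)

def greedy_select(features, labels, k):
    """Greedily grow a channel subset that maximizes the trace ratio."""
    selected = []
    remaining = list(range(features.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda ch: trace_ratio(features, labels,
                                                         selected + [ch]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

For a layer whose activations are pooled to a (n_samples, n_channels) matrix, greedy_select(features, labels, k) would return k channel indices to preserve.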
Related papers
- Rewarded meta-pruning: Meta Learning with Rewards for Channel Pruning [19.978542231976636] (2023-01-26)
This paper proposes a novel method to reduce the parameters and FLOPs for computational efficiency in deep learning models.
We introduce accuracy and efficiency coefficients to control the trade-off between the accuracy of the network and its computing efficiency.
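The accuracy/efficiency trade-off described here can be pictured as a scalar reward; the coefficients and FLOPs normalization below are assumptions for illustration, not the paper's exact formulation.

```python
def pruning_reward(accuracy, flops, base_flops, alpha=1.0, beta=0.5):
    """Hypothetical meta-learning reward: alpha weights accuracy, while
    beta penalizes the fraction of FLOPs the pruned network retains."""
    return alpha * accuracy - beta * (flops / base_flops)
```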
- Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration [71.80326738527734] (2021-11-22)
We propose a general, fine-grained structured pruning scheme and corresponding compiler optimizations.
We show that our pruning scheme mapping methods, together with the general fine-grained structured pruning scheme, outperform the state-of-the-art DNN optimization framework.
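One way to picture scheme mapping: score every candidate pruning scheme per layer (e.g., by measured on-device latency at matched accuracy) and keep the best. The scoring function here is a placeholder, not the paper's search procedure.

```python
def map_best_schemes(layers, schemes, score):
    """Assign each layer the pruning scheme with the highest score;
    `score` is a user-supplied, hypothetical latency/accuracy metric."""
    return {layer: max(schemes, key=lambda s: score(layer, s))
            for layer in layers}
```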
- Manifold Regularized Dynamic Network Pruning [102.24146031250034] (2021-03-10)
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, showing better performance in terms of both accuracy and computational cost.
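A loose reading of the manifold idea is that instances close in feature space should switch on similar channel subsets; the toy regularizer below rewards that agreement. It is a sketch under that assumption, not the paper's formulation.

```python
import torch

def manifold_consistency(masks, feats):
    """masks: (B, C) instance-wise channel gates in [0, 1];
    feats: (B, D) instance embeddings. Rewards mask agreement
    between instances that feature similarity deems close."""
    sim = torch.softmax(feats @ feats.T, dim=1)   # (B, B) instance similarity
    agree = masks @ masks.T / masks.shape[1]      # (B, B) mask overlap
    return -(sim * agree).sum()                   # minimize to align masks
```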
- DAIS: Automatic Channel Pruning via Differentiable Annealing Indicator Search [55.164053971213576] (2020-11-04)
Convolutional neural networks have achieved great success on computer vision tasks despite their large computation overhead.
Structured (channel) pruning is usually applied to reduce the model redundancy while preserving the network structure.
Existing structured pruning methods require hand-crafted rules which may lead to tremendous pruning space.
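The annealing indicator can be sketched as a sigmoid gate whose temperature is driven toward zero during search, so relaxed channel indicators harden into binary keep/drop decisions; the schedule and shapes below are assumptions.

```python
import torch

def annealed_indicator(alpha, temperature):
    """Relaxed 0/1 channel indicator; as temperature -> 0 the sigmoid
    sharpens and the gate approaches a hard keep/drop decision."""
    return torch.sigmoid(alpha / temperature)

# Hypothetical annealing schedule: geometric temperature decay per epoch.
alphas = torch.zeros(64, requires_grad=True)   # one learnable logit per channel
for epoch in range(90):
    t = 1.0 * (0.95 ** epoch)
    gates = annealed_indicator(alphas, t)      # scale feature maps by gates
```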
- AutoPruning for Deep Neural Network with Dynamic Channel Masking [28.018077874687343] (2020-10-22)
We propose a learning-based auto pruning algorithm for deep neural networks.
A two-objective problem that targets both the weights and the best channels for each layer is first formulated.
An alternating optimization approach is then proposed to derive the optimal channel numbers and weights simultaneously.
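The alternating scheme can be sketched on a toy layer: fix the mask and fit the weights, then fix the weights and update a sparsity-penalized soft mask. All tensors and hyperparameters below are illustrative, not the paper's setup.

```python
import torch

w = torch.randn(16, 8, requires_grad=True)      # 16 output channels, 8 inputs
m = torch.full((16,), 0.5, requires_grad=True)  # soft per-channel mask
x, y = torch.randn(32, 8), torch.randn(32, 16)
opt_w = torch.optim.SGD([w], lr=1e-2)
opt_m = torch.optim.SGD([m], lr=1e-2)

for step in range(200):
    # (1) fix the mask, fit the weights
    loss = ((x @ w.t()) * m.detach() - y).pow(2).mean()
    opt_w.zero_grad(); loss.backward(); opt_w.step()
    # (2) fix the weights, sparsify the mask (L1 pushes channels toward 0)
    loss = ((x @ w.t().detach()) * m - y).pow(2).mean() + 1e-3 * m.abs().sum()
    opt_m.zero_grad(); loss.backward(); opt_m.step()
```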
- Operation-Aware Soft Channel Pruning using Differentiable Masks [51.04085547997066] (2020-07-08)
We propose a data-driven algorithm, which compresses deep neural networks in a differentiable way by exploiting the characteristics of operations.
We perform extensive experiments and achieve outstanding performance in terms of the accuracy of output networks.
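Soft pruning with differentiable masks can be pictured as a learnable sigmoid gate on each output channel; this minimal module omits the operation-aware treatment of batch normalization and activations that the paper adds.

```python
import torch
import torch.nn as nn

class SoftMaskedConv(nn.Module):
    """3x3 conv whose output channels are scaled by a learnable soft mask."""
    def __init__(self, cin, cout):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, kernel_size=3, padding=1)
        self.gate_logits = nn.Parameter(torch.zeros(cout))
    def forward(self, x):
        mask = torch.sigmoid(self.gate_logits).view(1, -1, 1, 1)
        return self.conv(x) * mask   # channels with mask near 0 are prunable
```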
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584] (2020-04-06)
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
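Treating accuracy as a function of FLOPs suggests a simple per-layer utilization score: accuracy change per unit of compute spent. Both helpers here are a hypothetical rendering of that idea, not the paper's criterion.

```python
def conv_flops(cin, cout, k, h, w):
    """Multiply-accumulate count of a k x k conv on an h x w output map."""
    return cin * cout * k * k * h * w

def flops_utilization(acc_delta, flops_delta):
    """Accuracy gained (or lost) per extra FLOP when a layer is widened;
    low-ratio layers are candidates to shrink, high-ratio layers to grow."""
    return acc_delta / max(flops_delta, 1)
```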
- Gradual Channel Pruning while Training using Feature Relevance Scores for Convolutional Neural Networks [6.534515590778012] (2020-02-23)
Pruning is one of the predominant approaches used for deep network compression.
We present a simple yet effective methodology for gradual channel pruning while training, using a novel data-driven metric.
We demonstrate the effectiveness of the proposed methodology on architectures such as VGG and ResNet.
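A gradual while-training loop can be sketched as: score the surviving channels with a data-driven relevance metric, then drop a small fraction each step. Mean absolute activation stands in for the paper's feature relevance score here; names and the pruning fraction are assumptions.

```python
import numpy as np

def channel_relevance(activations):
    """Stand-in relevance: mean |activation| per channel, (N, C, H, W) -> (C,)."""
    return np.abs(activations).mean(axis=(0, 2, 3))

def gradual_prune(alive, relevance, frac=0.05):
    """Drop the least-relevant fraction of the surviving channels."""
    k = max(1, int(frac * len(alive)))
    order = np.argsort(relevance[alive])   # ascending relevance
    return [alive[i] for i in order[k:]]   # keep all but the k weakest
```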