Training Compact CNNs for Image Classification using Dynamic-coded
Filter Fusion
- URL: http://arxiv.org/abs/2107.06916v1
- Date: Wed, 14 Jul 2021 18:07:38 GMT
- Title: Training Compact CNNs for Image Classification using Dynamic-coded
Filter Fusion
- Authors: Mingbao Lin, Rongrong Ji, Bohong Chen, Fei Chao, Jianzhuang Liu, Wei
Zeng, Yonghong Tian, Qi Tian
- Abstract summary: We present a novel filter pruning method, dubbed dynamic-coded filter fusion (DCFF).
We derive compact CNNs in a computation-economical and regularization-free manner for efficient image classification.
Our DCFF derives a compact VGGNet-16 with only 72.77M FLOPs and 1.06M parameters while reaching a top-1 accuracy of 93.47% on CIFAR-10.
- Score: 139.71852076031962
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The mainstream approach for filter pruning is usually either to force a
hard-coded importance estimation upon a computation-heavy pretrained model to
select "important" filters, or to impose a hyperparameter-sensitive sparse
constraint on the loss objective to regularize the network training. In this
paper, we present a novel filter pruning method, dubbed dynamic-coded filter
fusion (DCFF), to derive compact CNNs in a computation-economical and
regularization-free manner for efficient image classification. Each filter in
our DCFF is first given an inter-similarity distribution with a temperature
parameter as a filter proxy, on top of which a fresh Kullback-Leibler
divergence-based dynamic-coded criterion is proposed to evaluate the filter
importance. In contrast to simply keeping high-score filters in other methods,
we propose the concept of filter fusion, i.e., the weighted averages using the
assigned proxies, as our preserved filters. We obtain a one-hot
inter-similarity distribution as the temperature parameter approaches infinity.
Thus, the relative importance of each filter can vary along with the training
of the compact CNN, leading to dynamically changeable fused filters, without
either dependency on a pretrained model or the introduction of sparse
constraints. Extensive experiments on classification benchmarks demonstrate the
superiority of our DCFF over the compared counterparts. For example, our DCFF
derives a compact VGGNet-16 with only 72.77M FLOPs and 1.06M parameters while
reaching top-1 accuracy of 93.47% on CIFAR-10. A compact ResNet-50 is obtained
with 63.8% FLOPs and 58.6% parameter reductions, retaining 75.60% top-1
accuracy on ILSVRC-2012. Our code, narrower models and training logs are
available at https://github.com/lmbxmu/DCFF.
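To make the mechanism above concrete, below is a minimal NumPy sketch of the three ingredients the abstract describes: a per-filter inter-similarity ("proxy") distribution controlled by a temperature, a Kullback-Leibler-divergence-based importance score, and filter fusion as proxy-weighted averages of filters. The softmax parameterization, the KL reference distribution, the temperature values, and all function names are illustrative assumptions rather than the authors' exact formulation; see the paper and the linked repository for the real definitions.

```python
# Hedged sketch of the DCFF ideas summarized in the abstract (not the official code):
# (1) build an inter-similarity proxy distribution per filter with a temperature,
# (2) score filters with a KL-divergence-based criterion, and
# (3) keep proxy-weighted averages ("fused" filters) instead of raw top filters.
import numpy as np


def filter_proxies(filters: np.ndarray, temperature: float) -> np.ndarray:
    """Per-filter inter-similarity distributions.

    filters: (n, d) array, each row a flattened convolutional filter.
    Returns p of shape (n, n), where p[i] is a distribution over all filters.
    Assumption: p[i, j] = softmax_j(-temperature * ||f_i - f_j||^2), so each
    distribution collapses to one-hot (on filter i itself) as temperature grows.
    """
    dists = np.square(filters[:, None, :] - filters[None, :, :]).sum(-1)  # (n, n)
    logits = -temperature * dists
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)


def kl_importance(p: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """KL-based importance per filter (a representative choice, not necessarily
    the paper's exact criterion): divergence of each filter's proxy distribution
    from the average proxy distribution."""
    q = p.mean(axis=0, keepdims=True)                      # reference distribution
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)


def fuse_filters(filters: np.ndarray, p: np.ndarray, keep: int) -> np.ndarray:
    """Keep the `keep` highest-scoring filters, but store each as a weighted
    average of all filters using its proxy distribution (filter fusion)."""
    scores = kl_importance(p)
    kept = np.argsort(scores)[::-1][:keep]
    return p[kept] @ filters                               # (keep, d) fused filters


# Toy usage: 8 random 3x3x16 filters pruned down to 4 fused filters.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3 * 3 * 16))
for t in (0.001, 0.01, 1.0):                               # growing temperature
    P = filter_proxies(W, temperature=t)
    fused = fuse_filters(W, P, keep=4)
    print(f"t={t:g}  max proxy prob={P.max():.3f}  fused filters shape={fused.shape}")
```

With this parameterization, raising the temperature drives each proxy distribution toward a one-hot vector on the filter itself, so the fused filters gradually reduce to ordinary preserved filters, matching the limiting behavior stated in the abstract.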
Related papers
- Filter Pruning for Efficient CNNs via Knowledge-driven Differential
Filter Sampler [103.97487121678276]
Filter pruning simultaneously accelerates the computation and reduces the memory overhead of CNNs.
We propose a novel Knowledge-driven Differential Filter Sampler (KDFS) with Masked Filter Modeling (MFM) framework for filter pruning.
arXiv Detail & Related papers (2023-07-01T02:28:41Z)
- Pruning by Active Attention Manipulation [49.61707925611295]
Filter pruning of a CNN is typically achieved by applying discrete masks on the CNN's filter weights or activation maps, post-training.
Here, we present a new filter-importance-scoring concept named pruning by active attention manipulation (PAAM).
PAAM learns analog filter scores from the filter weights by optimizing a cost function regularized by an additive term in the scores.
arXiv Detail & Related papers (2022-10-20T09:17:02Z)
- A Passive Similarity based CNN Filter Pruning for Efficient Acoustic Scene Classification [23.661189257759535]
We present a method to develop low-complexity convolutional neural networks (CNNs) for acoustic scene classification (ASC).
We propose a passive filter pruning framework, where a few convolutional filters from the CNNs are eliminated to yield compressed CNNs.
The proposed method is simple and reduces computations per inference by 27% with 25% fewer parameters, at less than a 1% drop in accuracy.
arXiv Detail & Related papers (2022-03-29T17:00:06Z)
- Pruning Networks with Cross-Layer Ranking & k-Reciprocal Nearest Filters [151.2423480789271]
A novel pruning method, termed CLR-RNF, is proposed for filter-level network pruning.
We conduct image classification on CIFAR-10 and ImageNet to demonstrate the superiority of our CLR-RNF over the state-of-the-arts.
arXiv Detail & Related papers (2022-02-15T04:53:24Z)
- Network Compression via Central Filter [9.585818883354449]
We propose a novel filter pruning method, Central Filter (CF), which suggests a filter is approximately equal to a set of other filters after appropriate adjustments.
CF yields state-of-the-art performance on various benchmark networks and datasets.
arXiv Detail & Related papers (2021-12-10T12:51:04Z)
- Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance.
arXiv Detail & Related papers (2021-12-02T12:04:59Z)
- Data Agnostic Filter Gating for Efficient Deep Networks [72.4615632234314]
Current filter pruning methods mainly leverage feature maps to generate importance scores for filters and prune those with smaller scores.
In this paper, we propose a data-agnostic filter pruning method that uses an auxiliary network named Dagger module to induce pruning.
In addition, to prune filters under given FLOPs constraints, we leverage an explicit FLOPs-aware regularization to directly promote pruning filters toward the target FLOPs.
arXiv Detail & Related papers (2020-10-28T15:26:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.