A Passive Similarity based CNN Filter Pruning for Efficient Acoustic
Scene Classification
- URL: http://arxiv.org/abs/2203.15751v1
- Date: Tue, 29 Mar 2022 17:00:06 GMT
- Title: A Passive Similarity based CNN Filter Pruning for Efficient Acoustic
Scene Classification
- Authors: Arshdeep Singh, Mark D. Plumbley
- Abstract summary: We present a method to develop low-complexity convolutional neural networks (CNNs) for acoustic scene classification (ASC).
We propose a passive filter pruning framework, where a few convolutional filters from the CNNs are eliminated to yield compressed CNNs.
The proposed method is simple, reducing computations per inference by 27% and the number of parameters by 25%, with less than a 1% drop in accuracy.
- Score: 23.661189257759535
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a method to develop low-complexity convolutional neural networks
(CNNs) for acoustic scene classification (ASC). The large size and high
computational complexity of typical CNNs are a bottleneck for their deployment
on resource-constrained devices. We propose a passive filter pruning framework,
where a few convolutional filters from the CNNs are eliminated to yield
compressed CNNs. Our hypothesis is that similar filters produce similar
responses and give redundant information, allowing such filters to be eliminated
from the network. To identify similar filters, a cosine distance based greedy
algorithm is proposed. A fine-tuning process is then performed to regain much
of the performance lost due to filter elimination. To perform efficient
fine-tuning, we analyze how the performance varies as the number of fine-tuning
training examples changes. An experimental evaluation of the proposed framework
is performed on the publicly available DCASE 2021 Task 1A baseline network
trained for ASC. The proposed method is simple, reducing computations per
inference by 27% and the number of parameters by 25%, with less than a 1% drop
in accuracy.
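To make the pruning criterion concrete, below is a minimal sketch of greedy filter selection under cosine similarity in the spirit of the abstract. The seeding choice (largest-norm filter) and the exact greedy rule are illustrative assumptions, not details given in the abstract.

```python
# A minimal sketch (not the paper's exact algorithm) of greedy filter
# selection under cosine similarity: repeatedly keep the filter that is
# least similar to the filters kept so far, then prune the rest.
import numpy as np

def select_dissimilar_filters(filters: np.ndarray, n_keep: int) -> list:
    """filters: (n_filters, k*k*c_in), one flattened conv filter per row.
    Returns indices of the n_keep filters to retain."""
    normed = filters / np.linalg.norm(filters, axis=1, keepdims=True)
    similarity = normed @ normed.T  # pairwise cosine similarity
    # Assumed seeding choice: start from the largest-norm filter.
    keep = [int(np.argmax(np.linalg.norm(filters, axis=1)))]
    while len(keep) < n_keep:
        remaining = [i for i in range(len(filters)) if i not in keep]
        # A candidate's redundancy = similarity to its closest kept filter.
        redundancy = [similarity[i, keep].max() for i in remaining]
        keep.append(remaining[int(np.argmin(redundancy))])
    return sorted(keep)

# Example: keep 48 of 64 filters (25% pruned) of a 3x3x32 conv layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 3 * 3 * 32))
kept = select_dissimilar_filters(W, n_keep=48)
```

After pruning, the fine-tuning step described in the abstract would retrain the compressed network to recover most of the lost accuracy.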
Related papers
- Structured Network Pruning by Measuring Filter-wise Interactions [6.037167142826297]
We propose SNPFI (Structured Network Pruning by measuring Filter-wise Interaction), a structured network pruning approach.
During pruning, SNPFI automatically assigns the proper sparsity based on the filter utilization strength.
We empirically demonstrate the effectiveness of SNPFI with several commonly used CNN models.
arXiv Detail & Related papers (2023-07-03T05:26:05Z)
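The summary above names SNPFI's key quantity, "filter utilization strength", without defining it. Purely as an illustration, the toy sketch below stands in a plausible proxy (mean absolute activation on a calibration batch) and raises a layer's sparsity when many of its filters are weakly utilized; SNPFI's actual metric and assignment rule differ.

```python
# Illustration only: "filter utilization strength" is not defined in this
# summary, so a stand-in proxy (mean absolute activation on a calibration
# batch) is used here; SNPFI's actual metric and assignment rule differ.
import numpy as np

def assign_layer_sparsity(activations: np.ndarray, base: float = 0.25) -> float:
    """activations: (batch, n_filters, H, W) feature maps of one conv layer."""
    utilization = np.abs(activations).mean(axis=(0, 2, 3))  # one score per filter
    # Raise sparsity for layers where many filters are weakly utilized.
    weak = float((utilization < 0.5 * utilization.mean()).mean())
    return float(np.clip(base + weak, 0.0, 0.9))

acts = np.random.default_rng(1).standard_normal((8, 64, 20, 20))
sparsity = assign_layer_sparsity(acts)  # fraction of this layer's filters to prune
```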
- Compressing audio CNNs with graph centrality based filter pruning [20.028643659869573]
Convolutional neural networks (CNNs) are commonplace in high-performing solutions to many real-world problems.
CNNs have many parameters and filters, with some having a larger impact on the performance than others.
We propose a pruning framework that eliminates filters with the highest "commonality".
arXiv Detail & Related papers (2023-05-05T09:38:05Z)
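A minimal sketch of the "commonality" idea above, assuming commonality is a filter's centrality in a pairwise cosine-similarity graph with degree centrality as the measure; the paper's graph construction and centrality choice may differ.

```python
# A minimal sketch assuming "commonality" = degree centrality in a
# cosine-similarity graph over filters; the paper's graph construction
# and centrality measure may differ.
import numpy as np

def commonality(filters: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """filters: (n_filters, k*k*c_in). An edge joins two filters whose
    cosine similarity exceeds `threshold`; a filter's commonality is its
    degree, i.e. how many other filters it resembles."""
    normed = filters / np.linalg.norm(filters, axis=1, keepdims=True)
    adjacency = (normed @ normed.T > threshold).astype(float)
    np.fill_diagonal(adjacency, 0.0)  # ignore self-similarity
    return adjacency.sum(axis=1)

# Prune the 16 most "common" (redundant) of 64 filters.
rng = np.random.default_rng(2)
W = rng.standard_normal((64, 3 * 3 * 32))
to_prune = np.argsort(commonality(W))[-16:]
```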
- Efficient CNNs via Passive Filter Pruning [23.661189257759535]
Convolutional neural networks (CNNs) have shown state-of-the-art performance in various applications.
CNNs are resource-hungry, with high computational complexity and memory requirements.
Recent efforts toward achieving computational efficiency in CNNs involve filter pruning methods.
arXiv Detail & Related papers (2023-04-05T09:19:19Z)
- Efficient Similarity-based Passive Filter Pruning for Compressing CNNs [23.661189257759535]
Convolutional neural networks (CNNs) have shown great success in various applications.
The computational complexity and memory requirements of CNNs are a bottleneck for their deployment on resource-constrained devices.
Recent efforts towards reducing the computation cost and the memory overhead of CNNs involve similarity-based passive filter pruning methods.
arXiv Detail & Related papers (2022-10-27T09:57:47Z)
- Simple Pooling Front-ends For Efficient Audio Classification [56.59107110017436]
We show that eliminating the temporal redundancy in the input audio features could be an effective approach for efficient audio classification.
We propose a family of simple pooling front-ends (SimPFs) which use simple non-parametric pooling operations to reduce the redundant information.
SimPFs can reduce the number of floating-point operations of off-the-shelf audio neural networks by more than half.
arXiv Detail & Related papers (2022-10-03T14:00:41Z)
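A minimal sketch of a SimPF-style front-end: a non-parametric mean-pool over time halves the number of spectrogram frames entering the network, which roughly halves downstream convolutional FLOPs. The pooling operator and rate here are illustrative assumptions.

```python
# A minimal sketch of a pooling front-end: non-parametric mean pooling over
# time halves the spectrogram frames entering the network, roughly halving
# downstream FLOPs. The pooling operator and rate are illustrative choices.
import numpy as np

def pool_frontend(spec: np.ndarray, rate: int = 2) -> np.ndarray:
    """spec: (n_mels, n_frames) log-mel spectrogram; mean-pool frames by `rate`."""
    n_mels, n_frames = spec.shape
    n_frames -= n_frames % rate  # drop trailing frames that don't fill a window
    return spec[:, :n_frames].reshape(n_mels, -1, rate).mean(axis=2)

x = np.random.default_rng(3).standard_normal((64, 101))
y = pool_frontend(x)
assert y.shape == (64, 50)  # 2x fewer frames for the network to process
```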
- Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance.
arXiv Detail & Related papers (2021-12-02T12:04:59Z)
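A minimal sketch of BN-based filter importance, assuming (as in BN-slimming-style methods) that the magnitude of a filter's BatchNorm scale parameter gamma indicates its importance; the paper may combine gamma with other BN statistics.

```python
# A minimal sketch assuming filter importance is read off the magnitude of
# the BatchNorm scale parameter gamma (as in BN-slimming-style methods);
# the paper may combine gamma with other BN statistics.
import numpy as np

def bn_keep_mask(gamma: np.ndarray, prune_ratio: float = 0.25) -> np.ndarray:
    """gamma: per-filter BN scale parameters of one conv layer.
    Channels that BN scales toward zero contribute little and are pruned."""
    n_prune = int(len(gamma) * prune_ratio)
    order = np.argsort(np.abs(gamma))  # ascending: least important first
    mask = np.ones(len(gamma), dtype=bool)
    mask[order[:n_prune]] = False
    return mask

gamma = np.array([0.9, 0.02, 0.5, 0.01])
print(bn_keep_mask(gamma, prune_ratio=0.5))  # [ True False  True False]
```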
- Training Compact CNNs for Image Classification using Dynamic-coded Filter Fusion [139.71852076031962]
We present a novel filter pruning method, dubbed dynamic-coded filter fusion (DCFF).
We derive compact CNNs in a computation-economical and regularization-free manner for efficient image classification.
Our DCFF derives a compact VGGNet-16 with only 72.77M FLOPs and 1.06M parameters while reaching top-1 accuracy of 93.47%.
arXiv Detail & Related papers (2021-07-14T18:07:38Z)
- Computational optimization of convolutional neural networks using separated filters architecture [69.73393478582027]
We consider a convolutional neural network transformation that reduces computational complexity and thus speeds up neural network processing.
The use of convolutional neural networks (CNNs) is the standard approach to image recognition, despite the fact that they can be computationally demanding.
arXiv Detail & Related papers (2020-02-18T17:42:13Z)
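A minimal sketch of the separated-filters idea: a k x k kernel is approximated by a rank-1 pair of 1-D filters via SVD, so one k*k-tap convolution becomes two k-tap passes (2k multiplies per pixel instead of k squared). The paper's exact factorization may differ.

```python
# A minimal sketch of the separated-filters idea: approximate a k x k kernel
# by a rank-1 pair of 1-D filters via SVD, turning one k*k-tap convolution
# into two k-tap passes; the paper's exact transformation may differ.
import numpy as np

def separate_kernel(kernel: np.ndarray):
    """kernel: (k, k) 2-D convolution kernel -> (column, row) 1-D filters."""
    u, s, vt = np.linalg.svd(kernel)
    col = u[:, 0] * np.sqrt(s[0])   # vertical k x 1 pass
    row = vt[0, :] * np.sqrt(s[0])  # horizontal 1 x k pass
    return col, row

# A Gaussian blur kernel is exactly rank-1, so the separation is lossless.
g = np.outer([1.0, 2.0, 1.0], [1.0, 2.0, 1.0]) / 16.0
col, row = separate_kernel(g)
assert np.allclose(np.outer(col, row), g)
```

For kernels of rank greater than one, keeping only the leading singular pair trades some accuracy for the computational saving.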
- Filter Grafting for Deep Neural Networks [71.39169475500324]
Filter grafting aims to improve the representation capability of Deep Neural Networks (DNNs).
We develop an entropy-based criterion to measure the information of filters and an adaptive weighting strategy for balancing the grafted information among networks.
For example, the grafted MobileNetV2 outperforms the non-grafted MobileNetV2 by about 7 percent on the CIFAR-100 dataset.
arXiv Detail & Related papers (2020-01-15T03:18:57Z)
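A minimal sketch of an entropy-style criterion in the spirit of filter grafting: bin each filter's weights into a histogram and score the filter by the histogram's entropy, so near-constant (low-information) filters are identified as grafting targets. The paper's exact criterion and its adaptive weighting strategy are not reproduced here.

```python
# A minimal sketch of an entropy-style criterion in the spirit of filter
# grafting: score each filter by the entropy of a histogram of its weights,
# so near-constant (low-information) filters become grafting targets.
import numpy as np

def filter_entropy(weights: np.ndarray, bins: int = 10) -> float:
    """weights: flattened weights of one filter; histogram entropy in nats."""
    hist, _ = np.histogram(weights, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(4)
informative = filter_entropy(rng.standard_normal(288))  # spread-out weights
degenerate = filter_entropy(np.full(288, 0.01))         # near-constant filter
assert degenerate < informative
```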
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show that a ResNet-type CNN can attain minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)