CFDP: Common Frequency Domain Pruning
- URL: http://arxiv.org/abs/2306.04147v2
- Date: Tue, 31 Oct 2023 04:47:13 GMT
- Title: CFDP: Common Frequency Domain Pruning
- Authors: Samir Khaki, Weihan Luo
- Abstract summary: We introduce a novel end-to-end pipeline for model pruning via the frequency domain.
We have achieved state-of-the-art results on CIFAR-10 with GoogLeNet, reaching an accuracy of 95.25%, a +0.2% improvement over the original model.
In addition to notable performance, models produced via CFDP exhibit robustness across a variety of configurations.
- Score: 0.3021678014343889
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the saying goes, sometimes less is more -- and when it comes to neural
networks, that couldn't be more true. Enter pruning, the art of selectively
trimming away unnecessary parts of a network to create a more streamlined,
efficient architecture. In this paper, we introduce a novel end-to-end pipeline
for model pruning via the frequency domain. This work aims to shed light on the
interpretability of intermediate model outputs and their significance beyond
the spatial domain. Our method, dubbed Common Frequency Domain Pruning (CFDP),
aims to extrapolate common frequency characteristics defined over the feature
maps to rank the individual channels of a layer based on their level of
importance in learning the representation. By harnessing the power of CFDP, we
have achieved state-of-the-art results on CIFAR-10 with GoogLeNet reaching an
accuracy of 95.25%, a +0.2% improvement over the original model. We also outperform
all benchmarks and match the original model's performance on ImageNet, using
only 55% of the trainable parameters and 60% of the FLOPs. In addition to
notable performance, models produced via CFDP exhibit robustness across a
variety of configurations, including pruning from untrained neural
architectures, and resistance to adversarial attacks. The implementation code
can be found at
https://github.com/Skhaki18/CFDP.
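To make the core idea concrete, here is a minimal, hypothetical sketch of frequency-domain channel ranking in the spirit of the abstract: score each channel of a layer by the energy of its feature maps' 2-D DCT coefficients, then keep only the top-ranked channels. The transform choice, scoring rule, and function names below are illustrative assumptions, not the authors' implementation; the actual CFDP criterion is defined in the paper and the repository above.

```python
# Hypothetical sketch of frequency-domain channel ranking in the spirit
# of CFDP; the real criterion lives in the paper and official repository.
import numpy as np
from scipy.fft import dctn

def channel_importance(feature_maps: np.ndarray) -> np.ndarray:
    """Score channels by the energy of their 2-D DCT coefficients.

    feature_maps: activations from one convolutional layer, with shape
    (batch, channels, height, width). Returns one score per channel.
    """
    b, c, _, _ = feature_maps.shape
    scores = np.zeros(c)
    for ch in range(c):
        # Transform each sample's map for this channel into the frequency
        # domain and accumulate the magnitude of its coefficients.
        coeffs = dctn(feature_maps[:, ch], axes=(-2, -1), norm="ortho")
        scores[ch] = np.abs(coeffs).sum() / b
    return scores

def prune_mask(feature_maps: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Boolean mask that keeps the highest-scoring channels."""
    scores = channel_importance(feature_maps)
    k = max(1, int(keep_ratio * scores.size))
    mask = np.zeros(scores.size, dtype=bool)
    mask[np.argsort(scores)[-k:]] = True
    return mask
```

For example, `prune_mask(acts, keep_ratio=0.5)` keeps the top half of the channels; the slimmed model would then be fine-tuned, as in typical channel-pruning pipelines.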
Related papers
- Efficient Context Integration through Factorized Pyramidal Learning for Ultra-Lightweight Semantic Segmentation [1.0499611180329804]
We propose a novel Factorized Pyramidal Learning (FPL) module to aggregate rich contextual information in an efficient manner.
We decompose the spatial pyramid into two stages, which enables simple and efficient feature fusion within the module to solve the notorious checkerboard effect.
Based on the FPL module and FIR unit, we propose an ultra-lightweight real-time network, called FPLNet, which achieves state-of-the-art accuracy-efficiency trade-off.
arXiv Detail & Related papers (2023-02-23T05:34:51Z)
- Prompt Tuning for Parameter-efficient Medical Image Segmentation [79.09285179181225]
We propose and investigate several contributions to achieve a parameter-efficient but effective adaptation for semantic segmentation on two medical imaging datasets.
We pre-train this architecture with a dedicated dense self-supervision scheme based on assignments to online-generated prototypes.
We demonstrate that the resulting neural network model is able to attenuate the gap between fully fine-tuned and parameter-efficiently adapted models.
arXiv Detail & Related papers (2022-11-16T21:55:05Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces this computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- Global Filter Networks for Image Classification [90.81352483076323]
We present a conceptually simple yet computationally efficient architecture that learns long-term spatial dependencies in the frequency domain with log-linear complexity.
Our results demonstrate that GFNet can be a very competitive alternative to transformer-style models and CNNs in efficiency, generalization ability, and robustness (a minimal sketch of the frequency-domain filtering idea appears after this list).
arXiv Detail & Related papers (2021-07-01T17:58:16Z)
- Multi-scale Attention U-Net (MsAUNet): A Modified U-Net Architecture for Scene Segmentation [1.713291434132985]
We propose a novel multi-scale attention network for scene segmentation by using contextual information from an image.
This network can map local features to their global counterparts with improved accuracy and emphasize discriminative image regions.
We have evaluated our model on two standard datasets, PascalVOC2012 and ADE20k.
arXiv Detail & Related papers (2020-09-15T08:03:41Z)
- Fully Dynamic Inference with Deep Neural Networks [19.833242253397206]
Two compact networks, called Layer-Net (L-Net) and Channel-Net (C-Net), predict on a per-instance basis which layers or filters/channels are redundant and therefore should be skipped.
On the CIFAR-10 dataset, LC-Net results in up to 11.9× fewer floating-point operations (FLOPs) and up to 3.3% higher accuracy compared to other dynamic inference methods.
On the ImageNet dataset, LC-Net achieves up to 1.4× fewer FLOPs and up to 4.6% higher Top-1 accuracy than the other methods.
arXiv Detail & Related papers (2020-07-29T23:17:48Z)
- ResNeSt: Split-Attention Networks [86.25490825631763]
We present a modularized architecture that applies channel-wise attention across different network branches to leverage their success in capturing cross-feature interactions and learning diverse representations.
Our model, named ResNeSt, outperforms EfficientNet in accuracy and latency trade-off on image classification.
arXiv Detail & Related papers (2020-04-19T20:40:31Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
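As noted in the Global Filter Networks entry above, here is a minimal sketch of a GFNet-style global filter layer in PyTorch. The class name, shapes, and initialization are illustrative assumptions rather than the authors' code; the point is the mechanism: spatial mixing via an element-wise product with a learnable filter in the frequency domain, which costs O(N log N) through the FFT.

```python
# Minimal sketch of a GFNet-style global filter layer (illustrative, not
# the authors' implementation).
import torch
import torch.nn as nn

class GlobalFilter(nn.Module):
    def __init__(self, height: int, width: int, channels: int):
        super().__init__()
        # One learnable complex filter per channel over the rFFT grid;
        # the trailing dimension of 2 stores (real, imaginary) parts.
        self.weight = nn.Parameter(
            torch.randn(height, width // 2 + 1, channels, 2) * 0.02
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, height, width, channels)
        freq = torch.fft.rfft2(x, dim=(1, 2), norm="ortho")
        # Element-wise filtering in the frequency domain mixes all spatial
        # positions at once, replacing explicit pairwise interactions.
        freq = freq * torch.view_as_complex(self.weight)
        return torch.fft.irfft2(freq, s=x.shape[1:3], dim=(1, 2), norm="ortho")
```

torch.fft.rfft2 keeps only the non-redundant half of the spectrum for real inputs, which is why the filter spans width // 2 + 1 frequency bins along the width axis.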