Neural Network Pruning for Real-time Polyp Segmentation
- URL: http://arxiv.org/abs/2306.13203v1
- Date: Thu, 22 Jun 2023 21:03:50 GMT
- Title: Neural Network Pruning for Real-time Polyp Segmentation
- Authors: Suman Sapkota, Pranav Poudel, Sudarshan Regmi, Bibek Panthi, Binod
Bhattarai
- Abstract summary: We show an application of neural network pruning in polyp segmentation.
We compute an importance score for each convolutional filter and remove the filters with the lowest scores; up to a certain pruning ratio, this does not degrade performance.
- Score: 8.08470060885395
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computer-assisted treatment has emerged as a viable application of medical
imaging, owing to the efficacy of deep learning models. Real-time inference
speed remains a key requirement for such applications to help medical
personnel. Although there is generally a trade-off between performance and
model size, impressive efforts have been made to shrink models while retaining
near-original performance. Neural network pruning has emerged as an exciting
area that aims to eliminate redundant parameters to make inference faster. In
this study, we show an application of neural network pruning in polyp
segmentation. We compute an importance score for each convolutional filter and
remove the filters with the lowest scores; up to a certain pruning ratio, this
does not degrade performance. To compute the importance score, we use the
Taylor First Order (TaylorFO) approximation of the change in network output
caused by removing a filter. Specifically, we employ gradient-normalized
backpropagation to compute the importance score. Through experiments on polyp
datasets, we validate that our approach can significantly reduce the parameter
count and FLOPs while retaining similar performance.
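To make the pruning criterion concrete, here is a minimal PyTorch sketch of TaylorFO-style filter scoring. The toy two-layer model, the loss, and the fixed pruning ratio are illustrative assumptions; the gradient-normalization step mentioned in the abstract is omitted, so this is the plain Taylor first-order score, not the authors' exact procedure.

```python
# Minimal sketch: Taylor first-order (TaylorFO) filter importance.
# The model, loss, and pruning ratio below are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),   # 1-channel mask head, as in segmentation
)

acts = {}

def save_act(name):
    def hook(module, inputs, output):
        output.retain_grad()          # keep the activation's gradient
        acts[name] = output
    return hook

model[0].register_forward_hook(save_act("conv1"))

x = torch.randn(4, 3, 64, 64)          # stand-in for a polyp image batch
target = torch.randint(0, 2, (4, 1, 64, 64)).float()
loss = nn.functional.binary_cross_entropy_with_logits(model(x), target)
loss.backward()

a = acts["conv1"]                      # activations, shape (N, C, H, W)
# TaylorFO score per filter: |activation * gradient|, summed over batch/space.
score = (a * a.grad).abs().sum(dim=(0, 2, 3))
k = int(0.25 * score.numel())          # assumed 25% pruning ratio
prune_idx = score.argsort()[:k]        # filters with the least importance
with torch.no_grad():                  # soft-prune by zeroing the filters
    model[0].weight[prune_idx] = 0
    model[0].bias[prune_idx] = 0
print("pruned filters:", prune_idx.tolist())
```

In a real pipeline the zeroed filters would be physically removed (and the next layer's input channels shrunk accordingly) to realize the parameter and FLOP savings.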
Related papers
- RL-Pruner: Structured Pruning Using Reinforcement Learning for CNN Compression and Acceleration [0.0]
We propose RL-Pruner, which uses reinforcement learning to learn the optimal pruning distribution.
RL-Pruner can automatically extract dependencies between filters in the input model and perform pruning, without requiring model-specific pruning implementations.
arXiv Detail & Related papers (2024-11-10T13:35:10Z)
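As a rough, hypothetical illustration of the RL-Pruner idea summarized above (not the paper's actual algorithm), a policy over per-layer pruning ratios can be trained with REINFORCE; the reward_of stub stands in for pruning a real CNN and measuring its accuracy.

```python
# Toy sketch: learning a per-layer pruning distribution with REINFORCE.
# `reward_of` is a hypothetical stand-in for accuracy after pruning.
import torch
from torch.distributions import Categorical

n_layers, n_choices = 4, 5              # candidate ratios 0.0, 0.1, ..., 0.4
ratios = torch.arange(n_choices) * 0.1
logits = torch.zeros(n_layers, n_choices, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

def reward_of(choice):
    # Assumption: a real system would prune with these ratios and return
    # validation accuracy; this toy reward peaks at 20% pruning per layer.
    return float(-(ratios[choice] - 0.2).abs().sum())

for step in range(300):
    dist = Categorical(logits=logits)   # one distribution per layer
    choice = dist.sample()              # sampled ratio index for each layer
    loss = -dist.log_prob(choice).sum() * reward_of(choice)  # REINFORCE
    opt.zero_grad(); loss.backward(); opt.step()

print("learned per-layer ratios:", ratios[logits.argmax(dim=1)].tolist())
```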
- Class-Aware Pruning for Efficient Neural Networks [5.918784236241883]
Pruning has been introduced to reduce the computational cost of executing deep neural networks (DNNs).
In this paper, we propose a class-aware pruning technique to compress DNNs.
Experimental results confirm that this class-aware pruning technique can significantly reduce the number of weights and FLOPs.
arXiv Detail & Related papers (2023-12-10T13:07:54Z) - Accelerating Convolutional Neural Network Pruning via Spatial Aura
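A minimal sketch of what a class-aware criterion could look like: estimate how many classes each filter matters for, and prune the filters with the narrowest class coverage. The mean-|activation| proxy and the importance threshold are assumptions, not the paper's exact score.

```python
# Toy sketch of class-aware filter pruning: keep filters that matter for
# many classes. The mean-|activation| proxy and threshold are assumptions.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, 3, padding=1)
n_classes = 10
per_class = torch.zeros(n_classes, 16)      # importance of filter f for class c

with torch.no_grad():
    for _ in range(20):                     # stand-in for a labelled loader
        x = torch.randn(8, 3, 32, 32)
        y = torch.randint(0, n_classes, (8,))
        act = conv(x).abs().mean(dim=(2, 3))   # (batch, filters)
        for c in y.unique():
            per_class[c] += act[y == c].mean(dim=0)

important = per_class > per_class.mean()    # is filter f important for class c?
coverage = important.sum(dim=0)             # classes that rely on each filter
prune_idx = coverage.argsort()[:4]          # prune the least class-relevant
print("prune filters:", prune_idx.tolist())
```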
- Accelerating Convolutional Neural Network Pruning via Spatial Aura Entropy [0.0]
Pruning is a popular technique to reduce the computational complexity and memory footprint of Convolutional Neural Network (CNN) models.
Existing methods for mutual information (MI) computation suffer from high computational cost and sensitivity to noise, leading to suboptimal pruning performance.
We propose a novel method to improve MI computation for CNN pruning, using the spatial aura entropy.
arXiv Detail & Related papers (2023-12-08T09:43:49Z)
- Pruning Convolutional Filters via Reinforcement Learning with Entropy Minimization [0.0]
We introduce a novel information-theoretic reward function which minimizes the spatial entropy of convolutional activations.
Our method shows that accuracy can be preserved without directly optimizing for it in the agent's reward function.
arXiv Detail & Related papers (2023-12-08T09:34:57Z)
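To ground the entropy-minimization reward described above, here is one plausible way to score the spatial entropy of convolutional activations with a histogram estimator; the bin count and normalization are assumptions rather than the paper's formulation.

```python
# Hedged sketch: a spatial-entropy reward term over conv activations.
# The histogram estimator and bin count are assumptions.
import torch

def spatial_entropy(act: torch.Tensor, bins: int = 16) -> torch.Tensor:
    """Shannon entropy of the activation value distribution per feature map."""
    flat = act.flatten(start_dim=2)                  # (N, C, H*W)
    lo, hi = flat.min(), flat.max()
    hist = torch.stack([
        torch.histc(fm, bins=bins, min=float(lo), max=float(hi))
        for fm in flat.reshape(-1, flat.shape[-1])   # one row per feature map
    ])
    p = hist / hist.sum(dim=1, keepdim=True).clamp_min(1)
    return -(p * (p + 1e-12).log()).sum(dim=1).mean()

act = torch.randn(4, 8, 16, 16)      # stand-in for conv activations
reward = -spatial_entropy(act)       # the RL agent is rewarded for low entropy
print(float(reward))
```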
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z)
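The "multi-threshold" part of the title suggests spiking units that emit graded spikes at several firing thresholds. Below is a small, assumed-for-illustration integrate-and-fire neuron in that spirit; the threshold values and soft-reset rule are not from the paper.

```python
# Illustrative sketch: a multi-threshold integrate-and-fire neuron.
# Threshold values and the soft-reset rule are assumptions.
import torch

class MultiThresholdIF:
    def __init__(self, thresholds=(0.5, 1.0, 2.0)):
        self.thresholds = torch.tensor(sorted(thresholds))
        self.v = None                            # membrane potential

    def step(self, x: torch.Tensor) -> torch.Tensor:
        if self.v is None:
            self.v = torch.zeros_like(x)
        self.v = self.v + x                      # integrate input current
        # Graded spike: the largest threshold crossed, 0 if none crossed.
        crossed = self.v.unsqueeze(-1) >= self.thresholds
        spike = torch.where(
            crossed.any(-1),
            self.thresholds[crossed.sum(-1).clamp(min=1) - 1],
            torch.zeros_like(self.v),
        )
        self.v = self.v - spike                  # soft reset by spike value
        return spike

neuron = MultiThresholdIF()
for t in range(3):
    print(neuron.step(torch.tensor([0.4, 0.9, 2.5])))
```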
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance.
arXiv Detail & Related papers (2021-12-02T12:04:59Z)
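The BN-based criterion above is easy to state concretely: a filter whose BatchNorm scale (gamma) is near zero contributes little to the output, so filters can be ranked by |gamma|. A minimal sketch, with the toy conv-BN block and the pruning count assumed:

```python
# Minimal sketch: rank filters by the magnitude of their BatchNorm scale.
# The toy conv-BN block and the pruning count are assumptions.
import torch
import torch.nn as nn

block = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16))
gamma = block[1].weight.detach().abs()   # BN scale (gamma), one per filter
prune_idx = gamma.argsort()[:4]          # smallest |gamma| = least important
with torch.no_grad():
    block[0].weight[prune_idx] = 0       # zero the conv filters...
    block[0].bias[prune_idx] = 0
    block[1].weight[prune_idx] = 0       # ...and their BN scale and shift
    block[1].bias[prune_idx] = 0
print("pruned:", prune_idx.tolist())
```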
- Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy [42.15969584135412]
Neural network pruning is a popular technique used to reduce the inference costs of modern networks.
We evaluate whether the use of test accuracy alone in the terminating condition is sufficient to ensure that the resulting model performs well.
We find that pruned networks effectively approximate the unpruned model; however, the prune ratio at which pruned networks achieve commensurate performance varies significantly across tasks.
arXiv Detail & Related papers (2021-03-04T13:22:16Z)
- Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning [97.28695683236981]
More gradient updates decrease the expressivity of the current value network.
We demonstrate this phenomenon on Atari and Gym benchmarks, in both offline and online RL settings.
arXiv Detail & Related papers (2020-10-27T17:55:16Z)
- SCOP: Scientific Control for Reliable Neural Network Pruning [127.20073865874636]
This paper proposes a reliable neural network pruning algorithm by setting up a scientific control.
Redundant filters can be discovered in the adversarial process of different features.
Our method can reduce 57.8% parameters and 60.2% FLOPs of ResNet-101 with only 0.01% top-1 accuracy loss on ImageNet.
arXiv Detail & Related papers (2020-10-21T03:02:01Z)
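The "scientific control" above can be pictured as comparing each filter's response on real inputs against its response on control (knockoff) inputs, and pruning filters that cannot tell the two apart. In this sketch the Gaussian-noise knockoffs and the response statistic are assumptions, not SCOP's exact construction:

```python
# Hedged sketch of the scientific-control idea: prune filters whose response
# on real data barely exceeds their response on control inputs.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, 3, padding=1)
real = torch.randn(32, 3, 32, 32)            # stand-in for real images
knockoff = torch.randn_like(real)            # assumed Gaussian control inputs

with torch.no_grad():
    r = conv(real).abs().mean(dim=(0, 2, 3))      # per-filter response, real
    k = conv(knockoff).abs().mean(dim=(0, 2, 3))  # per-filter response, control
margin = r - k                               # small margin => redundant filter
prune_idx = margin.argsort()[:4]
print("prune filters:", prune_idx.tolist())
```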
- Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
arXiv Detail & Related papers (2020-05-06T07:41:22Z)
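The last entry's "dynamically control the sparsity-inducing regularization" can be illustrated with a simple feedback rule: grow the penalty while sparsity is below target, shrink it once the target is met. The proximal-L1 toy problem and the multiplicative update are assumptions, not the paper's mechanism:

```python
# Toy sketch: steering an L1 penalty toward a target sparsity level.
# The proximal update and the 1.05/0.95 feedback rule are assumptions.
import torch
import torch.nn as nn

model = nn.Linear(64, 64)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam, target = 1e-3, 0.5                        # penalty strength, goal sparsity

for step in range(300):
    x = torch.randn(16, 64)
    loss = (model(x) - x).pow(2).mean()        # toy reconstruction task
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                      # proximal L1 (soft-threshold)
        w = model.weight
        w.copy_(w.sign() * (w.abs() - 0.1 * lam).clamp(min=0))
    sparsity = (model.weight == 0).float().mean().item()
    lam *= 1.05 if sparsity < target else 0.95 # feedback on the penalty
print(f"sparsity={sparsity:.2f}, lambda={lam:.1e}")
```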
This list is automatically generated from the titles and abstracts of the papers on this site.