Image Complexity Guided Network Compression for Biomedical Image
Segmentation
- URL: http://arxiv.org/abs/2107.02927v1
- Date: Tue, 6 Jul 2021 22:28:10 GMT
- Title: Image Complexity Guided Network Compression for Biomedical Image
Segmentation
- Authors: Suraj Mishra, Danny Z. Chen, X. Sharon Hu
- Abstract summary: We propose an image complexity-guided network compression technique for biomedical image segmentation.
We map the dataset complexity to the target network accuracy degradation caused by compression.
The mapping is used to determine the convolutional layer-wise multiplicative factor for generating a compressed network.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Compression is a standard procedure for making convolutional neural networks
(CNNs) adhere to some specific computing resource constraints. However,
searching for a compressed architecture typically involves a series of
time-consuming training/validation experiments to determine a good compromise
between network size and performance accuracy. To address this, we propose an
image complexity-guided network compression technique for biomedical image
segmentation. Given any resource constraints, our framework utilizes data
complexity and network architecture to quickly estimate a compressed model
which does not require network training. Specifically, we map the dataset
complexity to the target network accuracy degradation caused by compression.
Such mapping enables us to predict the final accuracy for different network
sizes, based on the computed dataset complexity. Thus, one may choose a
solution that meets both the network size and segmentation accuracy
requirements. Finally, the mapping is used to determine the convolutional
layer-wise multiplicative factor for generating a compressed network. We
conduct experiments using 5 datasets, employing 3 commonly-used CNN
architectures for biomedical image segmentation as representative networks. Our
proposed framework is shown to be effective for generating compressed
segmentation networks, retaining up to $\approx 95\%$ of the full-sized networks' segmentation accuracy while using, on average, $\approx 32\times$ fewer trainable weights than the full-sized networks.
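The procedure described above (map dataset complexity to predicted accuracy degradation, pick the smallest network meeting the accuracy target, then scale every convolutional layer's width) can be sketched as follows. This is a minimal illustration with a toy degradation model; the function names and the linear complexity-to-degradation mapping are assumptions for illustration, not the paper's actual fitted mapping.

```python
# Hypothetical sketch of complexity-guided compression; the linear
# degradation model below is illustrative, not the paper's fitted mapping.

def predict_accuracy(full_accuracy, complexity, multiplier):
    """Toy model: harder datasets (complexity in [0, 1]) lose more
    accuracy as channel widths are scaled down by `multiplier`."""
    degradation = complexity * (1.0 - multiplier)
    return full_accuracy * (1.0 - degradation)

def choose_multiplier(full_accuracy, complexity, target_accuracy,
                      candidates=(1.0, 0.5, 0.25, 0.125)):
    """Pick the smallest channel multiplier whose predicted accuracy
    still meets the target -- no training/validation runs needed."""
    for m in sorted(candidates):  # try the smallest networks first
        if predict_accuracy(full_accuracy, complexity, m) >= target_accuracy:
            return m
    return 1.0  # fall back to the full-sized network

def compress_layers(base_channels, multiplier):
    """Apply the multiplicative factor to each conv layer's width."""
    return [max(1, round(c * multiplier)) for c in base_channels]

mult = choose_multiplier(full_accuracy=0.90, complexity=0.3,
                         target_accuracy=0.75)
print(mult)                                   # -> 0.5
print(compress_layers([64, 128, 256, 512], mult))  # -> [32, 64, 128, 256]
```

The key property this models is that no candidate network is ever trained: the accuracy of each compressed size is predicted from the dataset complexity alone, and only the chosen network is instantiated.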
Related papers
- BRAU-Net++: U-Shaped Hybrid CNN-Transformer Network for Medical Image Segmentation [11.986549780782724]
We propose a hybrid yet effective CNN-Transformer network, named BRAU-Net++, for accurate medical image segmentation.
Specifically, BRAU-Net++ uses bi-level routing attention as the core building block to design our u-shaped encoder-decoder structure.
Our proposed approach surpasses other state-of-the-art methods including its baseline: BRAU-Net.
arXiv Detail & Related papers (2024-01-01T10:49:09Z) - Fast Conditional Network Compression Using Bayesian HyperNetworks [54.06346724244786]
We introduce a conditional compression problem and propose a fast framework for tackling it.
The problem is how to quickly compress a pretrained large neural network into optimal smaller networks given target contexts.
Our methods can quickly generate compressed networks with significantly smaller sizes than baseline methods.
arXiv Detail & Related papers (2022-05-13T00:28:35Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Towards Bi-directional Skip Connections in Encoder-Decoder Architectures
and Beyond [95.46272735589648]
We propose backward skip connections that bring decoded features back to the encoder.
Our design can be jointly adopted with forward skip connections in any encoder-decoder architecture.
We propose a novel two-phase Neural Architecture Search (NAS) algorithm, namely BiX-NAS, to search for the best multi-scale skip connections.
arXiv Detail & Related papers (2022-03-11T01:38:52Z) - Exploring Structural Sparsity in Neural Image Compression [14.106763725475469]
We propose a plug-in adaptive binary channel masking (ABCM) to judge the importance of each convolution channel and introduce sparsity during training.
During inference, the unimportant channels are pruned to obtain a slimmer network with less computation.
Experiment results show that up to 7x computation reduction and 3x acceleration can be achieved with negligible performance drop.
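The masking-then-pruning idea can be illustrated with a small sketch. Note this is a simplification: the paper learns its binary masks during training, whereas here channel importance is approximated by the L1 norm of each channel's filter, a common stand-in; all names are illustrative.

```python
# Illustrative channel pruning via a binary importance mask.
# (The paper learns its masks; here importance is just the L1 norm.)

def channel_l1_norms(conv_weights):
    """conv_weights: one flat list of filter values per output channel.
    Returns the L1 norm of each channel's filter."""
    return [sum(abs(w) for w in filt) for filt in conv_weights]

def prune_channels(conv_weights, keep_ratio=0.5):
    """Keep the `keep_ratio` fraction of channels with the largest
    L1 norm; returns (binary mask, pruned weight list)."""
    norms = channel_l1_norms(conv_weights)
    n_keep = max(1, int(len(norms) * keep_ratio))
    threshold = sorted(norms, reverse=True)[n_keep - 1]
    mask = [1 if n >= threshold else 0 for n in norms]
    pruned = [f for f, m in zip(conv_weights, mask) if m]
    return mask, pruned

weights = [[0.9, -0.8], [0.01, 0.02], [0.5, 0.4], [-0.03, 0.02]]
mask, pruned = prune_channels(weights, keep_ratio=0.5)
print(mask)  # low-norm channels are masked out -> [1, 0, 1, 0]
```

Because whole channels are removed rather than individual weights, the resulting network is genuinely slimmer at inference time, not merely sparse.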
arXiv Detail & Related papers (2022-02-09T17:46:49Z) - Leveraging Image Complexity in Macro-Level Neural Network Design for
Medical Image Segmentation [3.974175960216864]
We show that image complexity can be used as a guideline in choosing what is best for a given dataset.
For high-complexity datasets, a shallow network running on the original images may yield better segmentation results than a deep network running on downsampled images.
arXiv Detail & Related papers (2021-12-21T09:49:47Z) - PocketNet: A Smaller Neural Network for 3D Medical Image Segmentation [0.0]
We derive a new CNN architecture called PocketNet that achieves comparable segmentation results to conventional CNNs while using less than 3% of the number of parameters.
arXiv Detail & Related papers (2021-04-21T20:10:30Z) - CoTr: Efficiently Bridging CNN and Transformer for 3D Medical Image
Segmentation [95.51455777713092]
Convolutional neural networks (CNNs) have been the de facto standard for 3D medical image segmentation.
We propose a novel framework that efficiently bridges a Convolutional neural network and a Transformer (CoTr) for accurate 3D medical image segmentation.
arXiv Detail & Related papers (2021-03-04T13:34:22Z) - Pruning and Quantization for Deep Neural Network Acceleration: A Survey [2.805723049889524]
Deep neural networks have been applied in many applications, exhibiting extraordinary abilities in the field of computer vision.
Complex network architectures challenge efficient real-time deployment and require significant computation resources and energy costs.
This paper provides a survey on two types of network compression: pruning and quantization.
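Of the two compression families the survey covers, quantization is easy to show in miniature. The sketch below is a generic symmetric uniform quantizer, one of the basic schemes such surveys discuss, written from scratch for illustration rather than taken from any particular library.

```python
# Minimal symmetric uniform quantization of a weight list to signed
# n-bit integer codes (a generic textbook scheme, for illustration).

def quantize(weights, n_bits=8):
    """Map floats to signed integers in [-(2^(b-1)-1), 2^(b-1)-1]
    using a single per-tensor scale."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from integer codes."""
    return [v * scale for v in q]

w = [0.6, -1.0, 0.3, 0.0]
q, s = quantize(w)
print(q)                 # integer codes, e.g. [76, -127, 38, 0]
print(dequantize(q, s))  # close to the original weights
```

Storing 8-bit codes plus one scale instead of 32-bit floats gives roughly a 4x size reduction, which is the basic trade-off (size vs. rounding error) that quantization surveys analyze.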
arXiv Detail & Related papers (2021-01-24T08:21:04Z) - The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network
Architectures [179.66117325866585]
We investigate a design space that is usually overlooked, i.e. adjusting the channel configurations of predefined networks.
We find that this adjustment can be achieved by shrinking widened baseline networks and leads to superior performance.
Experiments are conducted on various networks and datasets for image classification, visual tracking and image restoration.
arXiv Detail & Related papers (2020-06-29T17:59:26Z) - ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
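The flavor of iterative mask discovery can be sketched with plain magnitude pruning over several rounds. This is an assumption-laden simplification: real methods (including Lottery-Ticket-style ones) retrain or rewind between rounds, which is omitted here, so the rounds below only gradually tighten the mask.

```python
# Sketch of iterative magnitude pruning: over several rounds, zero out
# an increasing fraction of the smallest-magnitude weights.
# (Real methods retrain between rounds; that step is omitted here.)

def iterative_prune(weights, target_sparsity=0.75, rounds=3):
    """Return (pruned weights, binary mask) with `target_sparsity`
    fraction of weights zeroed after the final round."""
    mask = [1] * len(weights)
    per_round = target_sparsity / rounds
    for r in range(1, rounds + 1):
        n_zero = int(len(weights) * per_round * r)
        # rank all weights by magnitude, zero the n_zero smallest
        order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
        for i in order[:n_zero]:
            mask[i] = 0
    return [w * m for w, m in zip(weights, mask)], mask

w = [0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.03, -0.5]
pruned, mask = iterative_prune(w, target_sparsity=0.75, rounds=3)
print(sum(m == 0 for m in mask))  # -> 6 of 8 weights pruned
```

The "hybrid" aspect the abstract mentions lies between this gradual schedule and single-shot pruning, which would jump straight to the final sparsity in one step.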
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.