Dynamic Resolution Network
- URL: http://arxiv.org/abs/2106.02898v1
- Date: Sat, 5 Jun 2021 13:48:33 GMT
- Title: Dynamic Resolution Network
- Authors: Mingjian Zhu, Kai Han, Enhua Wu, Qiulin Zhang, Ying Nie, Zhenzhong
Lan, Yunhe Wang
- Abstract summary: The redundancy in the input resolution of modern CNNs has not been fully investigated.
We propose a novel dynamic-resolution network (DRNet) in which the resolution is determined dynamically based on each input sample.
DRNet achieves similar performance with about a 34% computation reduction, and gains a 1.4% accuracy increase with a 10% computation reduction, compared to the original ResNet-50 on ImageNet.
- Score: 40.64164953983429
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep convolutional neural networks (CNNs) are often of sophisticated design, with numerous convolutional layers and learnable parameters, for the sake of accuracy. To alleviate the expensive cost of deploying them on mobile devices, recent works have made great efforts to excavate redundancy in pre-defined architectures. Nevertheless, the redundancy in the input resolution of modern CNNs has not been fully investigated, i.e., the resolution of the input image is fixed. In this paper, we observe that the smallest resolution needed to accurately predict a given image varies from image to image, even when using the same neural network. To this end, we propose a novel dynamic-resolution network (DRNet) in which the resolution is determined dynamically for each input sample. To achieve this, a resolution predictor with negligible computational cost is designed and optimized jointly with the desired network. In practice, the predictor learns the smallest resolution that can retain, and even exceed, the original recognition accuracy for each image. During inference, each input image is resized to its predicted resolution to minimize the overall computational burden. We then conduct extensive experiments on several benchmark networks and datasets. The results show that our DRNet can be embedded in any off-the-shelf network architecture to obtain a considerable reduction in computational complexity. For instance, DRNet achieves similar performance with about a 34% computation reduction, and gains a 1.4% accuracy increase with a 10% computation reduction, compared to the original ResNet-50 on ImageNet.
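To make the mechanism concrete, here is a minimal PyTorch sketch of DRNet-style inference. The candidate resolution set, the tiny predictor architecture, and the 64x64 glimpse size are illustrative assumptions, not the paper's exact design:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Assumed candidate resolutions; the paper's actual set may differ.
    CANDIDATE_RESOLUTIONS = [128, 160, 192, 224]

    class ResolutionPredictor(nn.Module):
        """Negligible-cost CNN that scores each candidate resolution."""
        def __init__(self, num_candidates):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(16, num_candidates)

        def forward(self, x):
            # Score resolutions from a cheap, fixed-size low-resolution glimpse.
            x = F.interpolate(x, size=(64, 64), mode='bilinear',
                              align_corners=False)
            return self.fc(self.features(x).flatten(1))

    def drnet_infer(backbone, predictor, image):
        """Resize each (C, H, W) image to its predicted resolution, then classify."""
        x = image.unsqueeze(0)
        r = CANDIDATE_RESOLUTIONS[predictor(x).argmax(dim=1).item()]
        resized = F.interpolate(x, size=(r, r), mode='bilinear',
                                align_corners=False)
        return backbone(resized)

Because the predictor only ever sees a small fixed-size glimpse, its cost stays negligible relative to the backbone, which is what makes the per-sample resolution decision worthwhile.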
Related papers
- Leveraging Image Complexity in Macro-Level Neural Network Design for Medical Image Segmentation [3.974175960216864]
We show that image complexity can be used as a guideline for choosing the network design best suited to a given dataset.
For high-complexity datasets, a shallow network running on the original images may yield better segmentation results than a deep network running on downsampled images.
arXiv Detail & Related papers (2021-12-21T09:49:47Z)
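The summary above does not say how image complexity is measured; one common proxy is compressed size per pixel. A hedged sketch assuming JPEG compressibility as the complexity estimate (the function and threshold policy are hypothetical, not the paper's definition):

    import io
    from PIL import Image

    def complexity_proxy(img: Image.Image) -> float:
        """Rough complexity estimate: bytes per pixel after JPEG compression.
        An assumption for illustration; the paper may define complexity
        differently."""
        buf = io.BytesIO()
        img.convert('RGB').save(buf, format='JPEG', quality=75)
        return buf.tell() / (img.width * img.height)

    # Guideline from the summary: for a high-complexity dataset, prefer a
    # shallower network on full-resolution images over a deep network on
    # downsampled ones.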
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network that divides the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations, while the lower-frequency part is assigned cheap operations to relieve the computational burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
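A minimal sketch of the DCT-domain split described above, using SciPy. The rectangular low-frequency mask and the 0.25 cutoff are illustrative assumptions rather than the paper's actual partitioning:

    import numpy as np
    from scipy.fft import dctn, idctn

    def split_by_frequency(image: np.ndarray, cutoff: float = 0.25):
        """Split a 2D image into low- and high-frequency parts in the DCT
        domain. The cutoff fraction is an arbitrary illustrative choice."""
        coeffs = dctn(image, norm='ortho')
        h, w = image.shape
        mask = np.zeros_like(coeffs, dtype=bool)
        mask[:int(h * cutoff), :int(w * cutoff)] = True  # low-frequency block
        low = idctn(np.where(mask, coeffs, 0.0), norm='ortho')
        high = idctn(np.where(mask, 0.0, coeffs), norm='ortho')
        # 'low' can then be routed to cheap operations and 'high' to
        # expensive ones, as the summary describes.
        return low, high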
- Enhancing sensor resolution improves CNN accuracy given the same number of parameters or FLOPS [53.10151901863263]
We show that it is almost always possible to modify a network such that it achieves higher accuracy at a higher input resolution while keeping the same number of parameters and/or FLOPS.
A preliminary empirical investigation on the MNIST, Fashion MNIST, and CIFAR10 datasets demonstrates the efficiency of the proposed approach.
arXiv Detail & Related papers (2021-03-09T06:47:01Z)
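The trade-off above rests on a basic fact: a convolution's parameter count does not depend on input resolution, while its multiply-accumulates do. A small worked example:

    def conv2d_params(c_in, c_out, k):
        """Parameter count of a conv layer: independent of input resolution."""
        return c_out * (c_in * k * k + 1)  # +1 for the bias

    def conv2d_macs(c_in, c_out, k, h_out, w_out):
        """Multiply-accumulates: scales with the output spatial size."""
        return c_out * c_in * k * k * h_out * w_out

    # Doubling the input side length quadruples the MACs but leaves the
    # parameter count untouched, so resolution can be traded against width
    # or depth at a fixed budget.
    print(conv2d_params(64, 64, 3))          # same at any resolution
    print(conv2d_macs(64, 64, 3, 56, 56))    # MACs on a 56x56 feature map
    print(conv2d_macs(64, 64, 3, 112, 112))  # 4x the MACs at 112x112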
- Glance and Focus: a Dynamic Approach to Reducing Spatial Redundancy in Image Classification [46.885260723836865]
Deep convolutional neural networks (CNNs) generally improve when fueled with high-resolution images.
Inspired by the fact that not all regions in an image are task-relevant, we propose a novel framework that performs efficient image classification.
Our framework is general and flexible, as it is compatible with most state-of-the-art lightweight CNNs.
arXiv Detail & Related papers (2020-10-11T17:55:06Z)
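A hedged sketch of a glance-then-focus loop for the framework above, assuming a confidence threshold for early exit and a centre crop as a stand-in for the learned region selection (both are placeholders, not the paper's policy):

    import torch.nn.functional as F

    def glance_and_focus(model, image, threshold=0.9, glance_size=96):
        """image: (1, 3, H, W) tensor. Classify from a cheap low-resolution
        'glance'; only if the prediction is not confident enough, 'focus' on
        a region at full detail. The centre crop is a placeholder for the
        learned, task-relevant region selection."""
        glance = F.interpolate(image, size=(glance_size, glance_size),
                               mode='bilinear', align_corners=False)
        probs = model(glance).softmax(dim=1)
        conf, pred = probs.max(dim=1)
        if conf.item() >= threshold:
            return pred                  # early exit: spatial redundancy skipped
        h, w = image.shape[-2:]
        crop = image[..., h // 4: 3 * h // 4, w // 4: 3 * w // 4]
        crop = F.interpolate(crop, size=(h, w), mode='bilinear',
                             align_corners=False)
        return model(crop).argmax(dim=1)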
- Resolution Switchable Networks for Runtime Efficient Image Recognition [46.09537029831355]
We propose a general method to train a single convolutional neural network that is capable of switching image resolutions at inference.
Networks trained with the proposed method are named Resolution Switchable Networks (RS-Nets).
arXiv Detail & Related papers (2020-07-19T02:12:59Z)
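One common recipe for a resolution-switchable network is to share convolution weights across resolutions while keeping a private BatchNorm per resolution; a minimal sketch under that assumption (RS-Nets' exact design may differ):

    import torch.nn as nn

    class SwitchableBlock(nn.Module):
        """One set of conv weights shared across resolutions, with a private
        BatchNorm per resolution, so one network serves several input sizes."""
        def __init__(self, c_in, c_out, num_resolutions):
            super().__init__()
            self.conv = nn.Conv2d(c_in, c_out, 3, padding=1, bias=False)
            self.bns = nn.ModuleList(nn.BatchNorm2d(c_out)
                                     for _ in range(num_resolutions))
            self.act = nn.ReLU(inplace=True)

        def forward(self, x, res_idx):
            # res_idx selects the BN tuned to the current input resolution,
            # since feature statistics shift as resolution changes.
            return self.act(self.bns[res_idx](self.conv(x)))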
- Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We replace the conventional ReLU with a Bounded ReLU, having found that the accuracy decline is due to activation quantization.
Our integer networks achieve performance equivalent to the corresponding floating-point networks (FPNs), but have only 1/4 of the memory cost and run 2x faster on modern GPUs.
arXiv Detail & Related papers (2020-06-21T08:23:03Z)
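A Bounded ReLU simply clips activations at a fixed upper bound, so their range is known ahead of time and can be mapped onto an integer grid. A minimal sketch; the bound of 6 and the 8-bit grid are assumptions:

    import torch
    import torch.nn as nn

    class BoundedReLU(nn.Module):
        """ReLU clipped at a fixed upper bound so the activation range is
        known in advance and can be quantized to integers."""
        def __init__(self, bound: float = 6.0):
            super().__init__()
            self.bound = bound

        def forward(self, x):
            return torch.clamp(x, min=0.0, max=self.bound)

    def quantize_activation(x, bound=6.0, bits=8):
        # Map [0, bound] onto the integer grid {0, ..., 2^bits - 1}.
        scale = (2 ** bits - 1) / bound
        return torch.round(torch.clamp(x, 0.0, bound) * scale) / scale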
- LSHR-Net: a hardware-friendly solution for high-resolution computational imaging using a mixed-weights neural network [5.475867050068397]
We propose a novel hardware-friendly solution based on mixed-weights neural networks for computational imaging.
In particular, learned binary-weight sensing patterns are tailored to the sampling device.
Our method has been validated on benchmark datasets and achieves state-of-the-art reconstruction accuracy.
arXiv Detail & Related papers (2020-04-27T20:59:51Z)
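A hedged sketch of learned binary-weight sensing, using the common straight-through-estimator recipe (binary +/-1 weights in the forward pass, full-precision gradients in the backward pass); LSHR-Net's exact scheme may differ:

    import torch
    import torch.nn as nn

    class BinarySensing(nn.Module):
        """Learned binary (+1/-1) sensing patterns: full-precision weights
        are binarized in the forward pass, with a straight-through estimator
        so gradients still update the underlying real-valued weights."""
        def __init__(self, n_pixels, n_measurements):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(n_measurements, n_pixels) * 0.01)

        def forward(self, x):  # x: (batch, n_pixels) flattened scene
            w_bin = torch.sign(self.weight)
            # Straight-through: binary values forward, identity gradient back.
            w = w_bin.detach() + self.weight - self.weight.detach()
            return x @ w.t()   # (batch, n_measurements)

Binary patterns matter here because the physical sampling device can realise +/-1 masks directly, which is what makes the design hardware-friendly.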
- Resolution Adaptive Networks for Efficient Inference [53.04907454606711]
We propose a novel Resolution Adaptive Network (RANet), inspired by the intuition that low-resolution representations are sufficient for classifying "easy" inputs.
In RANet, input images are first routed to a lightweight sub-network that efficiently extracts low-resolution representations.
High-resolution paths in the network retain the capability to recognize the "hard" samples.
arXiv Detail & Related papers (2020-03-16T16:54:36Z)
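A minimal sketch of the routing idea above: try sub-networks from cheap/low-resolution to expensive/high-resolution and stop once one is confident. The threshold is an assumption, and the real RANet also fuses features across scales rather than running stages independently:

    def adaptive_inference(subnets, resize_fns, image, threshold=0.9):
        """image: (1, C, H, W) tensor. subnets[i] consumes the output of
        resize_fns[i](image), ordered from cheapest to most expensive."""
        for subnet, resize in zip(subnets, resize_fns):
            probs = subnet(resize(image)).softmax(dim=1)
            conf, pred = probs.max(dim=1)
            if conf.item() >= threshold:
                return pred          # "easy" input: exit early, save compute
        return pred                  # "hard" input: fall through to last stage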
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantization neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in the original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
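A hedged sketch of the widening idea: project features to a higher-dimensional space with a 1x1 convolution, quantize there, then squeeze back. The widening ratio and the uniform quantizer are illustrative assumptions; the paper's formulation may differ:

    import torch
    import torch.nn as nn

    def quantize(x, bits=2):
        """Uniform quantizer over the observed range (illustrative only)."""
        lo, hi = x.min(), x.max()
        scale = (2 ** bits - 1) / (hi - lo + 1e-8)
        return torch.round((x - lo) * scale) / scale + lo

    class WidenQuantizeSqueeze(nn.Module):
        """Widen features with a 1x1 conv, quantize in the high-dimensional
        space (where information loss is easier to absorb), then squeeze
        back to the original width."""
        def __init__(self, channels, ratio=4):
            super().__init__()
            self.widen = nn.Conv2d(channels, channels * ratio, 1)
            self.squeeze = nn.Conv2d(channels * ratio, channels, 1)

        def forward(self, x):
            return self.squeeze(quantize(self.widen(x)))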
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.