CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution
- URL: http://arxiv.org/abs/2207.10345v1
- Date: Thu, 21 Jul 2022 07:50:50 GMT
- Title: CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution
- Authors: Cheeun Hong, Sungyong Baik, Heewon Kim, Seungjun Nah, Kyoung Mu Lee
- Abstract summary: We propose a novel Content-Aware Dynamic Quantization (CADyQ) method for image super-resolution (SR) networks.
CADyQ allocates optimal bits to local regions and layers adaptively based on the local contents of an input image.
The pipeline has been tested on various SR networks and evaluated on several standard benchmarks.
- Score: 55.50793823060282
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite breakthrough advances in image super-resolution (SR) with
convolutional neural networks (CNNs), SR has yet to enjoy ubiquitous
applications due to the high computational complexity of SR networks.
Quantization is one of the promising approaches to solve this problem. However,
existing methods fail to quantize SR models with a bit-width lower than 8 bits,
suffering from severe accuracy loss due to fixed bit-width quantization applied
everywhere. In this work, to achieve high average bit-reduction with less
accuracy loss, we propose a novel Content-Aware Dynamic Quantization (CADyQ)
method for SR networks that allocates optimal bits to local regions and layers
adaptively based on the local contents of an input image. To this end, a
trainable bit selector module is introduced to determine the proper bit-width
and quantization level for each layer and a given local image patch. This
module is governed by the quantization sensitivity that is estimated by using
both the average magnitude of image gradient of the patch and the standard
deviation of the input feature of the layer. The proposed quantization pipeline
has been tested on various SR networks and evaluated on several standard
benchmarks extensively. Significant reduction in computational complexity and
the elevated restoration accuracy clearly demonstrate the effectiveness of the
proposed CADyQ framework for SR. Codes are available at
https://github.com/Cheeun/CADyQ.
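The abstract describes a sensitivity measure built from the average image-gradient magnitude of a patch and the standard deviation of a layer's input feature, which then drives bit-width selection. A minimal sketch of that idea follows; the additive combination and the fixed thresholds are illustrative assumptions (in CADyQ the selector is a trainable module), so this is not the paper's exact formulation.

```python
import numpy as np

def quantization_sensitivity(patch, feature):
    """Sketch of the sensitivity estimate from the abstract: average
    gradient magnitude of the image patch plus the standard deviation
    of the layer's input feature. The additive combination here is a
    simplifying assumption, not the paper's exact formula."""
    gy, gx = np.gradient(patch.astype(np.float64))
    grad_mag = np.mean(np.sqrt(gx ** 2 + gy ** 2))
    feat_std = float(np.std(feature))
    return grad_mag + feat_std

def select_bit_width(sensitivity, thresholds=(0.5, 2.0), bits=(4, 6, 8)):
    """Hypothetical hard-threshold selector: more sensitive content
    (edges, textures) is allocated more bits. CADyQ learns this
    mapping end-to-end rather than using fixed thresholds."""
    for t, b in zip(thresholds, bits):
        if sensitivity < t:
            return b
    return bits[-1]
```

A flat patch with a constant feature yields zero sensitivity and the lowest bit-width, while textured patches with high-variance features are pushed toward 8 bits, mirroring the content-aware allocation the abstract describes.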
Related papers
- PassionSR: Post-Training Quantization with Adaptive Scale in One-Step Diffusion based Image Super-Resolution [87.89013794655207]
Diffusion-based image super-resolution (SR) models have shown superior performance at the cost of multiple denoising steps.
We propose a novel post-training quantization approach with adaptive scale in one-step diffusion (OSD) image SR, PassionSR.
Our PassionSR achieves significant advantages over recent leading low-bit quantization methods for image SR.
arXiv Detail & Related papers (2024-11-26T04:49:42Z)
- AdaBM: On-the-Fly Adaptive Bit Mapping for Image Super-Resolution [53.23803932357899]
We introduce the first on-the-fly adaptive quantization framework that accelerates the processing time from hours to seconds.
We achieve competitive performance with the previous adaptive quantization methods, while the processing time is accelerated by x2000.
arXiv Detail & Related papers (2024-04-04T08:37:27Z)
- RefQSR: Reference-based Quantization for Image Super-Resolution Networks [14.428652358882978]
Single image super-resolution aims to reconstruct a high-resolution image from its low-resolution observation.
Deep learning-based SISR models show high performance at the expense of increased computational costs.
We introduce a novel method called RefQSR that applies high-bit quantization to several representative patches and uses them as references for low-bit quantization of the rest of the patches in an image.
arXiv Detail & Related papers (2024-04-02T06:49:38Z)
- Overcoming Distribution Mismatch in Quantizing Image Super-Resolution Networks [53.23803932357899]
Quantization leads to accuracy loss in image super-resolution (SR) networks.
Existing works address this distribution mismatch problem by dynamically adapting quantization ranges during test time.
We propose a new quantization-aware training scheme that effectively Overcomes the Distribution Mismatch problem in SR networks.
arXiv Detail & Related papers (2023-07-25T08:50:01Z)
- Lightweight Stepless Super-Resolution of Remote Sensing Images via Saliency-Aware Dynamic Routing Strategy [15.587621728422414]
Deep learning algorithms have greatly improved the performance of remote sensing image (RSI) super-resolution (SR).
However, increasing network depth and parameters cause a huge burden of computing and storage.
We propose a saliency-aware dynamic routing network (SalDRN) for lightweight and stepless SR of RSIs.
arXiv Detail & Related papers (2022-10-14T07:49:03Z)
- Post-training Quantization for Neural Networks with Provable Guarantees [9.58246628652846]
We modify a post-training neural-network quantization method, GPFQ, that is based on a greedy path-following mechanism.
We prove that for quantizing a single-layer network, the relative square error essentially decays linearly in the number of weights.
arXiv Detail & Related papers (2022-01-26T18:47:38Z)
- Improving Super-Resolution Performance using Meta-Attention Layers [17.870338228921327]
Convolutional Neural Networks (CNNs) have achieved impressive results across many super-resolution (SR) and image restoration tasks.
The ill-posed nature of SR can make it difficult to accurately super-resolve an image that has undergone multiple different degradations.
We introduce meta-attention, a mechanism which allows any SR CNN to exploit the information available in relevant degradation parameters.
arXiv Detail & Related papers (2021-10-27T09:20:21Z)
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network for dividing the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations while the lower-frequency part is assigned cheap operations to relieve the computation burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
- Fully Quantized Image Super-Resolution Networks [81.75002888152159]
We propose a Fully Quantized image Super-Resolution framework (FQSR) to jointly optimize efficiency and accuracy.
We apply our quantization scheme on multiple mainstream super-resolution architectures, including SRResNet, SRGAN and EDSR.
Our FQSR with low-bit quantization achieves performance on par with the full-precision counterparts on five benchmark datasets.
arXiv Detail & Related papers (2020-11-29T03:53:49Z)
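Most of the entries above quantize weights or activations to a chosen bit-width, and the recurring trade-off is that fewer bits mean a coarser value grid and larger reconstruction error. A minimal illustration, assuming simple symmetric per-tensor uniform quantization (the surveyed papers each use their own, more sophisticated schemes):

```python
import numpy as np

def quantize_uniform(x, bits):
    """Symmetric per-tensor uniform quantization followed by
    dequantization, returning the low-bit approximation of x."""
    qmax = 2 ** (bits - 1) - 1             # e.g. 127 for 8 bits, 7 for 4 bits
    scale = np.max(np.abs(x)) / qmax       # one scale for the whole tensor
    q = np.round(x / scale).clip(-qmax, qmax)
    return q * scale

x = np.linspace(-1.0, 1.0, 9)
err8 = np.abs(quantize_uniform(x, 8) - x).max()
err4 = np.abs(quantize_uniform(x, 4) - x).max()
# err4 > err8: the 4-bit grid is coarser, so its worst-case error is larger.
```

This is why fixed sub-8-bit quantization applied everywhere hurts SR accuracy, and why the dynamic, content-aware, or reference-based schemes above spend extra bits only where the content demands them.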
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.