Image Superresolution using Scale-Recurrent Dense Network
- URL: http://arxiv.org/abs/2201.11998v1
- Date: Fri, 28 Jan 2022 09:18:43 GMT
- Title: Image Superresolution using Scale-Recurrent Dense Network
- Authors: Kuldeep Purohit, Srimanta Mandal, A. N. Rajagopalan
- Abstract summary: Recent advances in the design of convolutional neural networks (CNNs) have yielded significant improvements in the performance of image super-resolution (SR).
We propose a scale recurrent SR architecture built upon units containing a series of dense connections within a residual block, termed Residual Dense Blocks (RDBs).
Our scale recurrent design delivers competitive performance for higher scale factors while being parametrically more efficient as compared to current state-of-the-art approaches.
- Score: 30.75380029218373
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Recent advances in the design of convolutional neural networks (CNNs)
have yielded significant improvements in the performance of image super-resolution
(SR). The boost in performance can be attributed to the presence of residual or
dense connections within the intermediate layers of these networks. The
efficient combination of such connections can reduce the number of parameters
drastically while maintaining the restoration quality. In this paper, we
propose a scale recurrent SR architecture built upon units containing a series of
dense connections within a residual block, termed Residual Dense Blocks (RDBs), that
allow extraction of abundant local features from the image. Our scale recurrent
design delivers competitive performance for higher scale factors while being
parametrically more efficient as compared to current state-of-the-art
approaches. To further improve the performance of our network, we employ
multiple residual connections in intermediate layers (referred to as
Multi-Residual Dense Blocks), which improves gradient propagation in existing
layers. Recent works have discovered that conventional loss functions can guide
a network to produce results which have high PSNRs but are perceptually
inferior. We mitigate this issue by utilizing a Generative Adversarial Network
(GAN) based framework and deep feature (VGG) losses to train our network. We
experimentally demonstrate that different weighted combinations of the VGG loss
and the adversarial loss enable our network outputs to traverse along the
perception-distortion curve. The proposed networks perform favorably against
existing methods, both perceptually and objectively (PSNR-based) with fewer
parameters.
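The two components the abstract leans on most, the Residual Dense Block and the weighted perceptual/adversarial training objective, can be illustrated with short sketches. Below is a minimal, hypothetical PyTorch rendering of an RDB as described above: a chain of densely connected convolutions whose concatenated features are fused by a 1x1 convolution and wrapped in a local residual connection. The channel width, growth rate, and layer count are illustrative placeholders rather than the paper's settings, and the scale-recurrent wrapper and Multi-Residual Dense Block variant are omitted.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Sketch of a Residual Dense Block (RDB): densely connected conv layers
    whose concatenated features are fused by a 1x1 conv and added back to
    the block input (local residual connection). Hyperparameters are
    illustrative, not the paper's exact configuration."""

    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
            for i in range(num_layers)
        ])
        # local feature fusion: project concatenated features back to `channels`
        self.fuse = nn.Conv2d(channels + num_layers * growth, channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            # each layer sees the concatenation of all preceding feature maps
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))
```

For the perception-distortion trade-off, the sketch below shows the kind of weighted objective the abstract describes, assuming a frozen VGG-19 feature extractor and a standard non-saturating adversarial term; the chosen VGG layer and the weights `lambda_vgg` / `lambda_adv` are placeholders that would be swept to move outputs along the perception-distortion curve.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

class PerceptualAdversarialLoss(nn.Module):
    """Hypothetical weighted combination of a deep-feature (VGG) loss and an
    adversarial loss for training a super-resolution generator."""

    def __init__(self, lambda_vgg=1.0, lambda_adv=1e-3):
        super().__init__()
        # frozen VGG-19 features up to an intermediate conv layer
        self.vgg = vgg19(weights=VGG19_Weights.DEFAULT).features[:36].eval()
        for p in self.vgg.parameters():
            p.requires_grad = False
        self.lambda_vgg = lambda_vgg
        self.lambda_adv = lambda_adv

    def forward(self, sr, hr, disc_logits_on_sr):
        # deep-feature distance between super-resolved and ground-truth images
        vgg_loss = F.l1_loss(self.vgg(sr), self.vgg(hr))
        # generator wants the discriminator to classify SR outputs as real
        adv_loss = F.binary_cross_entropy_with_logits(
            disc_logits_on_sr, torch.ones_like(disc_logits_on_sr)
        )
        return self.lambda_vgg * vgg_loss + self.lambda_adv * adv_loss
```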
Related papers
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to the magnitude scale.
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have obtained remarkable performance in single image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z) - Deep Networks for Image and Video Super-Resolution [30.75380029218373]
Our single image super-resolution (SISR) network is built using efficient convolutional units we refer to as mixed-dense connection blocks (MDCBs).
We train two versions of our network to enhance complementary image qualities using different loss configurations.
We further employ our network for the video super-resolution task, where it learns to aggregate information from multiple frames and maintain spatio-temporal consistency.
arXiv Detail & Related papers (2022-01-28T09:15:21Z) - DDCNet: Deep Dilated Convolutional Neural Network for Dense Prediction [0.0]
A large effective receptive field (ERF) and a higher resolution of spatial features within a network are essential for providing higher-resolution dense estimates.
We present a systemic approach to design network architectures that can provide a larger receptive field while maintaining a higher spatial feature resolution.
arXiv Detail & Related papers (2021-07-09T23:15:34Z) - Asymmetric CNN for image super-resolution [102.96131810686231]
Deep convolutional neural networks (CNNs) have been widely applied for low-level vision over the past five years.
We propose an asymmetric CNN (ACNet) comprising an asymmetric block (AB), a memory enhancement block (MEB) and a high-frequency feature enhancement block (HFFEB) for image super-resolution.
Our ACNet can effectively address single image super-resolution (SISR), blind SISR, and blind SISR with unknown noise.
arXiv Detail & Related papers (2021-03-25T07:10:46Z) - Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network that divides the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations while the lower-frequency part is assigned cheap operations to relieve the computation burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
arXiv Detail & Related papers (2021-03-15T12:54:26Z) - Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides new insight into the conventional SISR algorithm and proposes a substantially different approach relying on iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z) - Deep Adaptive Inference Networks for Single Image Super-Resolution [72.7304455761067]
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward to address this issue by leveraging the adaptive inference networks for deep SISR (AdaDSR)
Our AdaDSR involves an SISR model as backbone and a lightweight adapter module which takes image features and resource constraint as input and predicts a map of local network depth.
arXiv Detail & Related papers (2020-04-08T10:08:20Z) - Multi-wavelet residual dense convolutional neural network for image denoising [2.500475462213752]
We use the short-term residual learning method to improve the performance and robustness of networks for image denoising tasks.
Here, we choose a multi-wavelet convolutional neural network (MWCNN) as the backbone and insert residual dense blocks (RDBs) in each of its layers.
Compared with other RDB-based networks, it can extract more features of the object from adjacent layers, preserve the large receptive field (RF), and boost the computing efficiency.
arXiv Detail & Related papers (2020-02-19T17:21:37Z) - Mixed-Precision Quantized Neural Network with Progressively Decreasing Bitwidth For Image Classification and Object Detection [21.48875255723581]
A mixed-precision quantized neural network with progressively decreasing bitwidth is proposed to improve the trade-off between accuracy and compression.
Experiments on typical network architectures and benchmark datasets demonstrate that the proposed method could achieve better or comparable results.
arXiv Detail & Related papers (2019-12-29T14:11:33Z)