A Dynamic Residual Self-Attention Network for Lightweight Single Image
Super-Resolution
- URL: http://arxiv.org/abs/2112.04488v1
- Date: Wed, 8 Dec 2021 06:41:21 GMT
- Title: A Dynamic Residual Self-Attention Network for Lightweight Single Image
Super-Resolution
- Authors: Karam Park, Jae Woong Soh, Nam Ik Cho
- Abstract summary: We propose a dynamic residual self-attention network (DRSAN) for lightweight single-image super-resolution (SISR).
DRSAN has dynamic residual connections based on dynamic residual attention (DRA), which adaptively changes its structure according to input statistics.
We also propose a residual self-attention (RSA) module to further boost the performance, which produces 3-dimensional attention maps without additional parameters.
- Score: 17.094665593472214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning methods have shown outstanding performance in many
applications, including single-image super-resolution (SISR). With residual
connection architecture, deeply stacked convolutional neural networks provide a
substantial performance boost for SISR, but their huge parameters and
computational loads are impractical for real-world applications. Thus,
designing lightweight models with acceptable performance is one of the major
tasks in current SISR research. The objective of lightweight network design is
to balance a computational load and reconstruction performance. Most of the
previous methods have manually designed complex and predefined fixed
structures, which generally required a large number of experiments and lacked
flexibility in the diversity of input image statistics. In this paper, we
propose a dynamic residual self-attention network (DRSAN) for lightweight SISR,
while focusing on the automated design of residual connections between building
blocks. The proposed DRSAN has dynamic residual connections based on dynamic
residual attention (DRA), which adaptively changes its structure according to
input statistics. Specifically, we propose a dynamic residual module that
explicitly models the DRA by finding the interrelation between residual paths
and input image statistics, as well as assigning proper weights to each
residual path. We also propose a residual self-attention (RSA) module to
further boost the performance, which produces 3-dimensional attention maps
without additional parameters by cooperating with residual structures. The
proposed dynamic scheme, exploiting the combination of DRA and RSA, shows an
efficient trade-off between computational complexity and network performance.
Experimental results show that the DRSAN performs better than or comparably to
existing state-of-the-art lightweight SISR models.
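The two mechanisms can be read as follows: DRA predicts one weight per residual path from input statistics, and RSA turns the accumulated residual itself into a 3-D (channel x height x width) attention map without adding parameters. A minimal numpy sketch, assuming toy shapes and a made-up statistics-based weight predictor (`dra_weights` here is illustrative, not the authors' learned module):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def dra_weights(x, n_paths):
    # Toy stand-in for dynamic residual attention (DRA): derive one
    # weight per residual path from simple input statistics (channel
    # means) and normalize with a softmax. The real predictor is a
    # small learned network conditioned on the input.
    stats = x.mean(axis=(1, 2))                  # per-channel means
    logits = np.array([(i + 1) * stats.sum() for i in range(n_paths)])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def drsan_block(x, convs):
    # Input-adaptive weighted sum over residual paths, followed by a
    # parameter-free residual self-attention (RSA) step: a sigmoid of
    # the accumulated residual reweights the features channel-, height-
    # and width-wise (a 3-D attention map) with no extra parameters.
    w = dra_weights(x, len(convs) + 1)
    feats, out = x, w[0] * x
    for i, conv in enumerate(convs):
        feats = conv(feats)
        out = out + w[i + 1] * feats
    attention = sigmoid(out - x)                 # 3-D map, no parameters
    return x + attention * out
```

The point of the sketch is only that both the path weights and the attention map depend on the input, so the effective topology of the residual connections changes per image.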
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses these challenges by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- MsDC-DEQ-Net: Deep Equilibrium Model (DEQ) with Multi-scale Dilated Convolution for Image Compressive Sensing (CS) [0.0]
Compressive sensing (CS) is a technique that enables the recovery of sparse signals using fewer measurements than traditional sampling methods.
We develop an interpretable and concise neural network model for reconstructing natural images using CS.
The model, called MsDC-DEQ-Net, exhibits competitive performance compared to state-of-the-art network-based methods.
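The deep equilibrium component named above replaces a deep layer stack with a fixed-point solve. A generic sketch of the forward pass (plain fixed-point iteration; the paper's multi-scale dilated-convolution cell and implicit-gradient training are omitted):

```python
import numpy as np

def deq_solve(f, x, z0, tol=1e-8, max_iter=1000):
    # Deep equilibrium model: instead of stacking layers, iterate
    # z <- f(z, x) until z stops changing and use that equilibrium
    # as the network output (forward pass only; training would use
    # implicit differentiation, omitted here).
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.max(np.abs(z_next - z)) < tol:
            break
        z = z_next
    return z
```

For a contraction such as f(z, x) = 0.5z + x, the iteration converges to the equilibrium z* = 2x from any starting point.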
arXiv Detail & Related papers (2024-01-05T16:25:58Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights by a small amount proportional to their magnitudes on-the-fly.
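One loose reading of the soft-shrinkage step, with an illustrative sparsity ratio and shrink factor (not the paper's schedule):

```python
import numpy as np

def soft_shrink_step(w, sparsity, shrink=0.1):
    # Iterative soft shrinkage: rather than hard-zeroing pruned
    # weights, nudge the smallest-magnitude fraction toward zero by
    # an amount proportional to their own magnitude, so "pruned"
    # weights can still recover in later iterations.
    w = w.copy()
    n_prune = int(len(w) * sparsity)
    idx = np.argsort(np.abs(w))[:n_prune]    # least important weights
    w[idx] -= shrink * w[idx]                # proportional shrink
    return w
```

Repeating this step over training gradually sparsifies the network while keeping the pruning decision soft and reversible.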
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INNs) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model.
It is shown to achieve state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z)
- Lightweight Image Super-Resolution with Hierarchical and Differentiable Neural Architecture Search [38.83764580480486]
Single Image Super-Resolution (SISR) tasks have achieved significant performance with deep neural networks.
We propose a novel differentiable Neural Architecture Search (NAS) approach on both the cell-level and network-level to search for lightweight SISR models.
arXiv Detail & Related papers (2021-05-09T13:30:16Z)
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network for dividing the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations, while the low-frequency part is assigned cheap operations to relieve the computational burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
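The DCT-domain routing above can be sketched with a hand-set threshold and an arbitrary quarter-size low-frequency corner (the paper learns the partition; the DCT here is a plain orthonormal 2-D DCT-II):

```python
import numpy as np

def dct2(a):
    # Orthonormal 2-D DCT-II built from an explicit DCT matrix.
    n = a.shape[0]
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M *= np.sqrt(2.0 / n)
    M[0] /= np.sqrt(2.0)                      # DC row normalization
    return M @ a @ M.T

def route_patch(patch, thresh=0.1):
    # Measure how much of the patch's DCT energy sits outside the
    # low-frequency corner; route high-frequency patches to the
    # expensive branch and smooth ones to the cheap branch.
    coeffs = np.abs(dct2(patch))
    k = max(1, patch.shape[0] // 4)
    low = coeffs[:k, :k].sum()
    high = coeffs.sum() - low
    return 'expensive' if high > thresh * low else 'cheap'
```

A flat patch has only a DC coefficient and takes the cheap branch; a textured patch spreads energy across frequencies and takes the expensive one.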
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
- Accurate and Lightweight Image Super-Resolution with Model-Guided Deep Unfolding Network [63.69237156340457]
We present and advocate an explainable approach toward SISR named model-guided deep unfolding network (MoG-DUN).
MoG-DUN is accurate (producing fewer aliasing artifacts), computationally efficient (with reduced model parameters), and versatile (capable of handling multiple degradations).
The superiority of the proposed MoG-DUN method to existing state-of-the-art image methods, including RCAN, SRDNF, and SRFBN, is substantiated by extensive experiments on several popular datasets and various degradation scenarios.
arXiv Detail & Related papers (2020-09-14T08:23:37Z)
- Deep Adaptive Inference Networks for Single Image Super-Resolution [72.7304455761067]
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward to address this issue by leveraging adaptive inference networks for deep SISR (AdaDSR).
Our AdaDSR involves an SISR model as the backbone and a lightweight adapter module that takes image features and the resource constraint as input and predicts a map of local network depth.
arXiv Detail & Related papers (2020-04-08T10:08:20Z)
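The depth-map idea can be sketched with a boolean per-location mask (a simplification; AdaDSR predicts a continuous depth map and uses it to skip computation rather than masking afterwards):

```python
import numpy as np

def adaptive_depth_forward(x, blocks, depth_map):
    # Depth-adaptive inference: each spatial location is refined only
    # by the first depth_map[h, w] blocks, so easy regions exit early
    # while hard regions use the full network depth.
    feats = x
    for d, block in enumerate(blocks, start=1):
        refined = block(feats)
        mask = depth_map >= d               # locations still active
        feats = np.where(mask, refined, feats)
    return feats
```

With an all-zero depth map the input passes through untouched; with the maximum depth everywhere, every block is applied at every location.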
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.