Efficient Image Super-Resolution using Vast-Receptive-Field Attention
- URL: http://arxiv.org/abs/2210.05960v1
- Date: Wed, 12 Oct 2022 07:01:00 GMT
- Title: Efficient Image Super-Resolution using Vast-Receptive-Field Attention
- Authors: Lin Zhou, Haoming Cai, Jinjin Gu, Zheyuan Li, Yingqi Liu, Xiangyu
Chen, Yu Qiao, Chao Dong
- Abstract summary: The attention mechanism plays a pivotal role in designing advanced super-resolution (SR) networks.
In this work, we design an efficient SR network by improving the attention mechanism.
We propose VapSR, the VAst-receptive-field Pixel attention network.
- Score: 49.87316814164699
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The attention mechanism plays a pivotal role in designing advanced
super-resolution (SR) networks. In this work, we design an efficient SR network
by improving the attention mechanism. We start from a simple pixel attention
module and gradually modify it to achieve better super-resolution performance
with reduced parameters. The specific approaches include: (1) increasing the
receptive field of the attention branch, (2) replacing large dense convolution
kernels with depth-wise separable convolutions, and (3) introducing pixel
normalization. These approaches paint a clear evolutionary roadmap for the
design of attention mechanisms. Based on these observations, we propose VapSR,
the VAst-receptive-field Pixel attention network. Experiments demonstrate the
superior performance of VapSR. VapSR outperforms the present lightweight
networks with even fewer parameters. The light version of VapSR uses only
21.68% and 28.18% of the parameters of IMDN and RFDN, respectively, to achieve
similar performance to those networks. The code and models are available at
https://github.com/zhoumumu/VapSR.
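The parameter saving behind point (2) of the abstract, replacing a large dense convolution kernel with a depth-wise separable one, can be sketched with a simple count. This is a minimal illustration only; the channel count (64) and kernel size (9) are assumed for the example and are not taken from the paper:

```python
def conv_params(c_in, c_out, k):
    # Parameters of a standard dense k x k convolution (bias omitted).
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    # Depth-wise k x k convolution (one filter per input channel)
    # followed by a 1x1 point-wise convolution (bias omitted).
    return c_in * k * k + c_in * c_out

# Illustrative numbers: a 64-channel layer with a large 9x9 kernel,
# as might appear in a vast-receptive-field attention branch.
dense = conv_params(64, 64, 9)         # 331776
separable = dw_separable_params(64, 64, 9)  # 9280
print(f"dense: {dense}, separable: {separable}, ratio: {separable / dense:.3f}")
```

The ratio (about 0.028 here) grows more favorable as the kernel size increases, which is why depth-wise separable convolutions pair naturally with the enlarged receptive fields the paper advocates.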
Related papers
- Swift Parameter-free Attention Network for Efficient Super-Resolution [8.365929625909509]
Single Image Super-Resolution is a crucial task in low-level computer vision.
We propose the Swift Parameter-free Attention Network (SPAN), which balances parameter count, inference speed, and image quality.
We evaluate SPAN on multiple benchmarks, showing that it outperforms existing efficient super-resolution models in terms of both image quality and inference speed.
arXiv Detail & Related papers (2023-11-21T18:30:40Z) - Incorporating Transformer Designs into Convolutions for Lightweight
Image Super-Resolution [46.32359056424278]
Large convolutional kernels have become popular in designing convolutional neural networks.
The increase in kernel size also leads to a quadratic growth in the number of parameters, resulting in heavy computation and memory requirements.
We propose a neighborhood attention (NA) module that upgrades the standard convolution with a self-attention mechanism.
Building upon the NA module, we propose a lightweight single image super-resolution (SISR) network named TCSR.
arXiv Detail & Related papers (2023-03-25T01:32:18Z) - Parameter-Free Channel Attention for Image Classification and
Super-Resolution [31.428547682263947]
The channel attention mechanism is a useful technique widely employed in deep convolutional neural networks to boost the performance for image processing tasks.
We propose a Parameter-Free Channel Attention (PFCA) module to boost the performance of popular image classification and image super-resolution networks.
Experiments on CIFAR-100, ImageNet, and DIV2K validate that our PFCA module improves the performance of ResNet on image classification and improves the performance of MSRResNet on image super-resolution tasks.
arXiv Detail & Related papers (2023-03-20T12:08:58Z) - Spatially-Adaptive Feature Modulation for Efficient Image
Super-Resolution [90.16462805389943]
We develop a spatially-adaptive feature modulation (SAFM) mechanism upon a vision transformer (ViT)-like block.
The proposed method is $3\times$ smaller than state-of-the-art efficient SR methods.
arXiv Detail & Related papers (2023-02-27T14:19:31Z) - RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have obtained remarkable performance in single image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z) - Hybrid Pixel-Unshuffled Network for Lightweight Image Super-Resolution [64.54162195322246]
Convolutional neural network (CNN) has achieved great success on image super-resolution (SR)
Most deep CNN-based SR models take massive computations to obtain high performance.
We propose a novel Hybrid Pixel-Unshuffled Network (HPUN) by introducing an efficient and effective downsampling module into the SR task.
arXiv Detail & Related papers (2022-03-16T20:10:41Z) - Attention in Attention Network for Image Super-Resolution [18.2279472158217]
We quantify and visualize the static attention mechanisms and show that not all attention modules are equally beneficial.
We propose the attention in attention network (A$^2$N) for highly accurate image SR.
Our model achieves a superior trade-off compared with state-of-the-art lightweight networks.
arXiv Detail & Related papers (2021-04-19T17:59:06Z) - Asymmetric CNN for image super-resolution [102.96131810686231]
Deep convolutional neural networks (CNNs) have been widely applied for low-level vision over the past five years.
We propose an asymmetric CNN (ACNet) comprising an asymmetric block (AB), a memory enhancement block (MEB) and a high-frequency feature enhancement block (HFFEB) for image super-resolution.
Our ACNet can effectively address single image super-resolution (SISR), blind SISR, and blind SISR with blind noise.
arXiv Detail & Related papers (2021-03-25T07:10:46Z) - Lightweight Single-Image Super-Resolution Network with Attentive
Auxiliary Feature Learning [73.75457731689858]
We develop a computation-efficient yet accurate network based on the proposed attentive auxiliary features (A$^2$F) for SISR.
Experimental results on large-scale datasets demonstrate the effectiveness of the proposed model against state-of-the-art (SOTA) SR methods.
arXiv Detail & Related papers (2020-11-13T06:01:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.