A Two-Stage Attentive Network for Single Image Super-Resolution
- URL: http://arxiv.org/abs/2104.10488v1
- Date: Wed, 21 Apr 2021 12:20:24 GMT
- Authors: Jiqing Zhang, Chengjiang Long, Yuxin Wang, Haiyin Piao, Haiyang Mei,
Xin Yang, Baocai Yin
- Abstract summary: We propose a two-stage attentive network (TSAN) for accurate SISR in a coarse-to-fine manner.
Specifically, we design a novel multi-context attentive block (MCAB) to make the network focus on more informative contextual features.
We also present an essential refined attention block (RAB) that explores useful cues in HR space for reconstructing fine-detailed HR images.
- Score: 34.450320969785935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep convolutional neural networks (CNNs) have been widely explored
in single image super-resolution (SISR) and have contributed remarkable progress.
However, most existing CNN-based SISR methods do not adequately explore
contextual information in the feature extraction stage and pay little attention
to the final high-resolution (HR) image reconstruction step, hindering
the desired SR performance. To address these two issues, in this paper we
propose a two-stage attentive network (TSAN) for accurate SISR in a
coarse-to-fine manner. Specifically, we design a novel multi-context attentive
block (MCAB) to make the network focus on more informative contextual features.
Moreover, we present an essential refined attention block (RAB) that
explores useful cues in HR space for reconstructing fine-detailed HR images.
Extensive evaluations on four benchmark datasets demonstrate the efficacy of
our proposed TSAN in terms of quantitative metrics and visual effects. Code is
available at https://github.com/Jee-King/TSAN.
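The attention blocks recurring throughout these abstracts (MCAB, RAB, AWCA, and similar) build on channel-attention gating: pool spatial features into per-channel statistics, pass them through a small gating function, and rescale the feature map. The following is a minimal, illustrative NumPy sketch of that generic squeeze-and-excitation pattern, not the paper's actual MCAB/RAB implementation; all weight values, layer sizes, and the reduction ratio are arbitrary placeholders.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Rescale each channel of feat (C, H, W) by a learned gate.

    Squeeze: global average pooling over spatial dims -> (C,)
    Excite: two linear layers (ReLU, then sigmoid) -> per-channel weights in (0, 1)
    """
    squeezed = feat.mean(axis=(1, 2))             # (C,) per-channel statistic
    hidden = np.maximum(0.0, w1 @ squeezed)       # ReLU, reduced dimension
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gate per channel
    return feat * gates[:, None, None]            # rescale the feature map

# Illustrative usage with random weights (not trained parameters).
rng = np.random.default_rng(0)
C, H, W, r = 16, 8, 8, 4                          # r is the reduction ratio
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = channel_attention(feat, w1, w2)
```

In a trained network, `w1` and `w2` would be learned, letting the model emphasize informative channels and suppress less useful ones, which is the intuition behind "focusing on more informative contextual features" in the abstract.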
Related papers
- CiaoSR: Continuous Implicit Attention-in-Attention Network for
Arbitrary-Scale Image Super-Resolution [158.2282163651066]
This paper proposes a continuous implicit attention-in-attention network, called CiaoSR.
We explicitly design an implicit attention network to learn the ensemble weights for the nearby local features.
We embed a scale-aware attention in this implicit attention network to exploit additional non-local information.
arXiv Detail & Related papers (2022-12-08T15:57:46Z)
- RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have obtained remarkable performance in single image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper presents a holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- Interpreting Super-Resolution Networks with Local Attribution Maps [24.221989130005085]
Image super-resolution (SR) techniques have been developing rapidly, benefiting from the invention of deep networks and their successive breakthroughs.
It is acknowledged that deep learning and deep neural networks are difficult to interpret.
In this paper, we perform attribution analysis of SR networks, which aims at finding the input pixels that strongly influence the SR results.
arXiv Detail & Related papers (2020-11-22T15:11:00Z)
- Lightweight Single-Image Super-Resolution Network with Attentive Auxiliary Feature Learning [73.75457731689858]
We develop a computation efficient yet accurate network based on the proposed attentive auxiliary features (A$2$F) for SISR.
Experimental results on a large-scale dataset demonstrate the effectiveness of the proposed model against state-of-the-art (SOTA) SR methods.
arXiv Detail & Related papers (2020-11-13T06:01:46Z)
- MPRNet: Multi-Path Residual Network for Lightweight Image Super Resolution [2.3576437999036473]
A novel lightweight super resolution network is proposed, which improves the SOTA performance in lightweight SR.
The proposed architecture also contains a new attention mechanism, Two-Fold Attention Module, to maximize the representation ability of the model.
arXiv Detail & Related papers (2020-11-09T17:11:15Z)
- Interpretable Detail-Fidelity Attention Network for Single Image Super-Resolution [89.1947690981471]
We propose a purposeful and interpretable detail-fidelity attention network that progressively processes smooth regions and details in a divide-and-conquer manner.
In particular, we propose Hessian filtering for interpretable feature representation, which is well suited to detail inference.
Experiments demonstrate that the proposed methods achieve superior performance over state-of-the-art methods.
arXiv Detail & Related papers (2020-09-28T08:31:23Z)
- Cross-Attention in Coupled Unmixing Nets for Unsupervised Hyperspectral Super-Resolution [79.97180849505294]
We propose a novel coupled unmixing network with a cross-attention mechanism, CUCaNet, to enhance the spatial resolution of HSI.
Experiments are conducted on three widely-used HS-MS datasets in comparison with state-of-the-art HSI-SR models.
arXiv Detail & Related papers (2020-07-10T08:08:20Z)
- Adaptive Weighted Attention Network with Camera Spectral Sensitivity Prior for Spectral Reconstruction from RGB Images [22.26917280683572]
We propose a novel adaptive weighted attention network (AWAN) for spectral reconstruction.
AWCA and PSNL modules are developed to reallocate channel-wise feature responses.
In the NTIRE 2020 Spectral Reconstruction Challenge, our entries obtain the 1st ranking on the Clean track and the 3rd place on the Real World track.
arXiv Detail & Related papers (2020-05-19T09:21:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.